# HRTransNet: Searching for Efficient High-Resolution Networks with Attention
## Overview
We develop a multi-objective, multi-branch supernet method that not only retains the multi-branch structure of HRNet, but also finds the proper locations for placing multi-head self-attention modules. Our search algorithm is optimized towards multiple objectives (e.g., latency and mIoU) and can find architectures on the Pareto frontier with an arbitrary number of branches in a single search. We further present a series of HRTransNet models, searched for the best hybrid combination of light-weight convolution layers and memory-efficient self-attention layers across branches of different resolutions, whose outputs are fused at high resolution for both efficiency and effectiveness.
*(Figure: HRTNet search space and searchable modules.)*
Highlights:
* **1**: We design a novel search framework that incorporates a multi-branch search space for high-resolution representation with genetic-based multi-objective optimization.
* **2**: We present a series of HRTransNet models that combine a light-weight convolution module, which reduces computation cost while preserving high-resolution information, with a memory-efficient self-attention module that captures long-range dependencies.
* **3**: HRTNet achieves fast inference speed with low FLOPs and few parameters while maintaining competitive accuracy.
## Results
*(Figures: HRTNet search results, searched HRTNet models, and HRTNet results on Cityscapes.)*
## Prerequisites
- Ubuntu 16.04
- Python 3.7
- CUDA 10.2 (lower versions may work but were not tested)
- NVIDIA GPU (>= 11 GB memory) + cuDNN v7.3
This repository has been tested on an RTX 2080 Ti. Configurations (e.g., batch size, image patch size) may need to be changed on other platforms.
## Installation
* Clone this repo:
```bash
git clone https://github.com/HRTNet/HRTNet.git
cd HRTNet
```
* Install dependencies:
```bash
bash install.sh
```
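If `install.sh` does not match your environment, a manual setup along the lines below may work. This is only a sketch (the `hrtnet` environment name is an assumption); the authoritative dependency list lives in `install.sh`.

```bash
# Assumed manual setup mirroring the stated prerequisites
# (Python 3.7, CUDA 10.2); consult install.sh for exact version pins.
conda create -n hrtnet python=3.7 -y
conda activate hrtnet
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch -y
```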
## Usage
### 0. Prepare the dataset
* Download [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3) and [gtFine_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=1) from the Cityscapes website.
* Prepare the `*_labelTrainIds.png` annotations using [createTrainIdLabelImgs.py](https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/createTrainIdLabelImgs.py) from cityscapesScripts.
* Put the [image list files](tools/datasets/cityscapes/) into the directory where you saved the dataset. A sketch of the full preparation follows this list.
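For concreteness, here is a minimal preparation sketch. It assumes the standard Cityscapes archive layout and that `$YOUR_DATA_PATH` is your dataset root; `cityscapesscripts` and its `CITYSCAPES_DATASET` variable are the official Cityscapes tooling, but adjust the paths to your setup.

```bash
# Extract both archives into the dataset root.
unzip leftImg8bit_trainvaltest.zip -d $YOUR_DATA_PATH
unzip gtFine_trainvaltest.zip -d $YOUR_DATA_PATH

# Generate the *_labelTrainIds.png annotations with the official scripts.
pip install cityscapesscripts
CITYSCAPES_DATASET=$YOUR_DATA_PATH python -m cityscapesscripts.preparation.createTrainIdLabelImgs

# Copy the image list files shipped with this repo next to the data.
cp tools/datasets/cityscapes/* $YOUR_DATA_PATH
```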
### 1. Train from scratch
* `cd HRTNet/train`
* Link the dataset path via `ln -s $YOUR_DATA_PATH ../DATASET`
* Create the output directory via `mkdir ../OUTPUT`
* Train from scratch
```bash
export DETECTRON2_DATASETS="$YOUR_DATA_PATH"
NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py --world_size $NGPUS --seed 12367 --config ../configs/cityscapes/semantic.yaml
```
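With fewer GPUs, the same launch line should scale down by changing `NGPUS`. This is a sketch; you may also need to adjust the batch size in the config, per the note in Prerequisites.

```bash
# Example: launch on 2 GPUs instead of 8; adjust the batch size in
# ../configs/cityscapes/semantic.yaml if memory or accuracy demands it.
NGPUS=2
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py --world_size $NGPUS --seed 12367 --config ../configs/cityscapes/semantic.yaml
```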
### 2. Evaluation
We provide trained models and logs, which can be downloaded from [Google Drive](https://drive.google.com/drive/folders/10jR2H5JwuJq9UuPGWutyw50MHESBZZ3w?usp=sharing).
```bash
cd train
```
* Download the pretrained weights from [Google Drive](https://drive.google.com/drive/folders/10jR2H5JwuJq9UuPGWutyw50MHESBZZ3w?usp=sharing).
* Set `config.model_path = $YOUR_MODEL_PATH` in `semantic.yaml`.
* Set `config.json_file = $HRTNet_MODEL` in `semantic.yaml` (a sketch of both settings follows the evaluation command below).
* Start the evaluation process:
```bash
CUDA_VISIBLE_DEVICES=0 python test.py
```
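For reference, the two settings above might look like the following inside `semantic.yaml`. This is a hypothetical excerpt, so match the actual key names and nesting in `../configs/cityscapes/semantic.yaml`; the paths are placeholders.

```yaml
# Hypothetical excerpt of configs/cityscapes/semantic.yaml; the real file
# may nest these keys differently.
model_path: /path/to/checkpoint.pth   # $YOUR_MODEL_PATH: downloaded weights
json_file: /path/to/model.json        # $HRTNet_MODEL: searched architecture description
```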