# AnyLoc_raw

**Repository Path**: wangerniu/any-loc_raw

## Basic Information

- **Project Name**: AnyLoc_raw
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: BSD-3-Clause
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-01-11
- **Last Updated**: 2024-01-11

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# AnyLoc: Towards Universal Visual Place Recognition

[![License: BSD-3](https://img.shields.io/badge/License-BSD--3-yellow.svg?style=flat-square)](https://opensource.org/license/BSD-3-clause/)
[![stars](https://img.shields.io/github/stars/AnyLoc/AnyLoc?style=social)](https://github.com/AnyLoc/AnyLoc/stargazers)
[![arXiv](https://img.shields.io/badge/arXiv-2308.00688-b31b1b.svg)](https://arxiv.org/abs/2308.00688)
[![githubio](https://img.shields.io/badge/-anyloc.github.io-blue?logo=Github&color=grey)](https://anyloc.github.io/)
[![github](https://img.shields.io/badge/GitHub-Anyloc%2FAnyloc-blue?logo=Github)](https://github.com/AnyLoc/AnyLoc)
[![YouTube](https://img.shields.io/badge/YouTube-FF0000?style=flat&logo=youtube&logoColor=white)](https://youtu.be/ITo8rMInatk)
[![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20HF%20Space-AnyLoc-blue)](https://huggingface.co/spaces/TheProjectsGuy/AnyLoc)
[![Open In Colab: Global Descriptors](https://img.shields.io/badge/IIITH--OneDrive-Global%20Descriptors-blue?logo=googlecolab&label=&labelColor=grey)](https://colab.research.google.com/github/AnyLoc/AnyLoc/blob/main/demo/anyloc_vlad_generate_colab.ipynb)
[![Open In Colab: Cluster visualizations](https://img.shields.io/badge/IIITH--OneDrive-Cluster%20Visualizations-blue?logo=googlecolab&label=&labelColor=grey)](https://colab.research.google.com/github/AnyLoc/AnyLoc/blob/main/demo/images_vlad_clusters.ipynb)
[![Public Release on IIITH-OneDrive](https://img.shields.io/badge/IIITH--OneDrive-Public%20Material-%23D83B01?logo=microsoftonedrive&logoColor=%230078D4&label=&labelColor=grey)][public-release-link]
[![Hugging Face Paper](https://img.shields.io/badge/%F0%9F%A4%97-HF--Paper-blue)](https://huggingface.co/papers/2308.00688)

[public-release-link]: https://iiitaphyd-my.sharepoint.com/:f:/g/personal/robotics_iiit_ac_in/EtpBLzBFfqdHljqQMnm6xdoBzW-4KFLXieXDVN4vPg84Lg?e=BP6ZW1

## Table of contents

- [AnyLoc: Towards Universal Visual Place Recognition](#anyloc-towards-universal-visual-place-recognition)
  - [Table of contents](#table-of-contents)
  - [Contents](#contents)
    - [Included Repositories](#included-repositories)
    - [Included Datasets](#included-datasets)
  - [PapersWithCode Badges](#paperswithcode-badges)
  - [Getting Started](#getting-started)
    - [Using the SOTA: AnyLoc-VLAD-DINOv2](#using-the-sota-anyloc-vlad-dinov2)
    - [Using the APIs](#using-the-apis)
      - [DINOv2](#dinov2)
      - [VLAD](#vlad)
      - [DINOv1](#dinov1)
  - [Validating the Results](#validating-the-results)
    - [NVIDIA NGC Singularity Container Setup](#nvidia-ngc-singularity-container-setup)
    - [Dataset Setup](#dataset-setup)
  - [References](#references)
    - [Cite Our Work](#cite-our-work)

## Contents

The contents of this repository are as follows:

| S. No. | Item | Description |
| :---: | :--- | :----- |
| 1 | [demo](./demo/) | Contains standalone demo scripts (Quick start, Jupyter Notebook, and Gradio app) to run our `AnyLoc-VLAD-DINOv2` method. Also contains guides for APIs. This folder is self-contained (doesn't use anything outside it). |
| 2 | [scripts](./scripts/) | Contains all scripts for development. Use the `-h` option for argument information. |
| 3 | [configs.py](./configs.py) | Global configurations for the repository |
| 4 | [utilities](./utilities.py) | Utility Classes & Functions (includes DINOv2 hooks & VLAD) |
| 5 | [conda-environment.yml](./conda-environment.yml) | The conda environment (it could fail to install OpenAI CLIP as it includes a `git+` URL). We suggest you use the [setup_conda.sh](./setup_conda.sh) script. |
| 6 | [requirements.txt](./requirements.txt) | Requirements file for pip virtual environment. Probably out of date. |
| 7 | [custom_datasets](./custom_datasets/) | Custom dataloader implementations for VPR. |
| 8 | [examples](./examples/) | Miscellaneous example scripts |
| 9 | [MixVPR](./MixVPR/) | Minimal MixVPR inference code |
| 10 | [clip_wrapper.py](./clip_wrapper.py) | A wrapper around two CLIP implementations (OpenAI and OpenCLIP). |
| 11 | [models_mae.py](./models_mae.py) | MAE implementation |
| 12 | [dino_extractor.py](./dino_extractor.py) | DINO (v1) feature extractor |
| 13 | [CONTRIBUTING.md](./CONTRIBUTING.md) | Note for contributors |
| 14 | [paper_utils](./paper_utils/) | Paper scripts (formatting for figures, etc.) |

### Included Repositories

Includes the following repositories (currently not submodules) as subfolders.

| Directory | Link | Cloned On | Description |
| :--- | :---- | :---- | :-----|
| [dvgl-benchmark](./dvgl-benchmark/) | [gmberton/deep-visual-geo-localization-benchmark](https://github.com/gmberton/deep-visual-geo-localization-benchmark) | 2023-02-12 | For benchmarking |
| [datasets-vg](./datasets-vg/) | [gmberton/datasets_vg](https://github.com/gmberton/datasets_vg) | 2023-02-13 | For dataset download and formatting |
| [CosPlace](./CosPlace/) | [gmberton/CosPlace](https://github.com/gmberton/CosPlace) | 2023-03-20 | Baseline Comparisons |

### Included Datasets

We release all the benchmarking datasets in our [public release][public-release-link].

1. Download the `.tar.gz` files from [here][public-release-link] > `Datasets-All` (for the datasets you want to use).
2. Unzip them using `tar -xvzf ./NAME.tar.gz`. Each archive extracts into a directory named `NAME`.
    - If you're using our benchmarking [scripts](./scripts/), the directory where you store the datasets is what you pass as `--prog.data-vg-dir` (in most scripts). A minimal extraction sketch is shown after this list.
    - See [Dataset Setup](#dataset-setup) for detailed information (including how the data directory structure should look after unzipping).
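The following bash sketch illustrates the two steps above. It is only an example: the download location (`~/Downloads`) and the target data directory (`~/vpr-datasets`) are arbitrary choices for this illustration, and the archive names are a subset of those listed in the acknowledgements below.

```bash
# Minimal sketch (paths are examples): archives downloaded to ~/Downloads,
# datasets extracted under ~/vpr-datasets.
mkdir -p ~/vpr-datasets

# Each archive extracts into a directory with the same name as the archive.
for f in ~/Downloads/baidu_datasets.tar.gz ~/Downloads/gardens.tar.gz ~/Downloads/17places.tar.gz; do
    tar -xvzf "$f" -C ~/vpr-datasets
done

# Most benchmarking scripts then take this parent directory as:
#   --prog.data-vg-dir ~/vpr-datasets
```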
We thank the following sources for the rich datasets:

1. Baidu Autonomous Driving Business Unit for the Baidu Mall dataset present in `baidu_datasets.tar.gz`
2. Queensland University of Technology for the Gardens Point dataset present in `gardens.tar.gz`
3. York University for the 17 Places dataset present in `17places.tar.gz`
4. Tokyo Institute of Technology, INRIA, and CTU Prague for the Pitts-30k dataset present in `pitts30k.tar.gz`
5. Queensland University of Technology for the St. Lucia dataset present in `st_lucia.tar.gz`
6. University of Oxford for the Oxford dataset present in `Oxford_Robotcar.tar.gz`
7. AirLab at CMU for the Hawkins dataset present in `hawkins_long_corridor.tar.gz`, the Laurel Caverns dataset present in `laurel_caverns.tar.gz`, and the Nardo Air dataset present in `test_40_midref_rot0.tar.gz` (not rotated) and `test_40_midref_rot90.tar.gz` (rotated).
8. Fraunhofer FKIE and TU Munich for the VP-Air dataset present in `VPAir.tar.gz`
9. Ifremer and University of Toulon for the Mid-Atlantic Ridge dataset present in `eiffel.tar.gz`

Most of the contents of the zipped folders are from the original sources. We generate the ground truth for some of the datasets as `.npy` files; see [this issue](https://github.com/AnyLoc/AnyLoc/issues/8#issuecomment-1712450557) for more information. The copyright of each dataset is held by the original sources.

## PapersWithCode Badges

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-17-places)](https://paperswithcode.com/sota/visual-place-recognition-on-17-places?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-baidu-mall)](https://paperswithcode.com/sota/visual-place-recognition-on-baidu-mall?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-gardens-point)](https://paperswithcode.com/sota/visual-place-recognition-on-gardens-point?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-hawkins)](https://paperswithcode.com/sota/visual-place-recognition-on-hawkins?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-laurel-caverns)](https://paperswithcode.com/sota/visual-place-recognition-on-laurel-caverns?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-mid-atlantic)](https://paperswithcode.com/sota/visual-place-recognition-on-mid-atlantic?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-nardo-air)](https://paperswithcode.com/sota/visual-place-recognition-on-nardo-air?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-nardo-air-r)](https://paperswithcode.com/sota/visual-place-recognition-on-nardo-air-r?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-oxford-robotcar-4)](https://paperswithcode.com/sota/visual-place-recognition-on-oxford-robotcar-4?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-vp-air)](https://paperswithcode.com/sota/visual-place-recognition-on-vp-air?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-st-lucia)](https://paperswithcode.com/sota/visual-place-recognition-on-st-lucia?p=anyloc-towards-universal-visual-place)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/anyloc-towards-universal-visual-place/visual-place-recognition-on-pittsburgh-30k)](https://paperswithcode.com/sota/visual-place-recognition-on-pittsburgh-30k?p=anyloc-towards-universal-visual-place)

## Getting Started

> **Tip**: You can explore the [HuggingFace Space](https://huggingface.co/spaces/TheProjectsGuy/AnyLoc) and the Colab notebooks (no GPU needed).

Clone this repository

```bash
git clone https://github.com/AnyLoc/AnyLoc.git
cd AnyLoc
```

Set up the conda environment (you can also use `mamba` instead of `conda`; the script will automatically detect it)

```bash
conda create -n anyloc python=3.9
conda activate anyloc
bash ./setup_conda.sh
# If you want to install the developer tools as well
bash ./setup_conda.sh --dev
```

The setup takes about 11 GB of disk space.

You can also use an existing conda environment, say `vl-vpr`, by doing

```bash
bash ./setup_conda.sh vl-vpr
```

Note the following:

- All our public release files can be found in our [public release][public-release-link].
- If the conda environment setup is taking time, you could just unzip `conda-env.tar.gz` (GB) into your `~/anaconda3/envs` folder (but compatibility is not guaranteed); a sketch of this is shown after these notes.
- The `./scripts` folder is for validating our results and seeing the main scripts. Most applications are in the `./demo` folder. See the list of [demos](./demo/) before running anything.
- If you're running something in the `./scripts` folder, run it with your working directory (`pwd`) set to this (repository) folder. For example, python scripts are run as `python ./scripts/
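Regarding the note above on the pre-built environment archive, here is a minimal sketch. It assumes `conda-env.tar.gz` has been downloaded from the public release into the current directory and that conda is installed under `~/anaconda3` (adjust both paths to your setup).

```bash
# Minimal sketch, assuming conda-env.tar.gz (from the public release) is in the
# current directory and conda lives at ~/anaconda3. Compatibility is not guaranteed.
mkdir -p ~/anaconda3/envs
tar -xvzf ./conda-env.tar.gz -C ~/anaconda3/envs

# List environments to confirm the extracted environment shows up.
conda env list
```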