diff --git a/3d-reconstruction/ngp-nerf/pytorch/README.md b/3d-reconstruction/ngp-nerf/pytorch/README.md
index 009fba6701a123e7e6492bd811f1336caeb6344d..1b088c17177f6e5c1cc15184cde7dbe5ae169971 100644
--- a/3d-reconstruction/ngp-nerf/pytorch/README.md
+++ b/3d-reconstruction/ngp-nerf/pytorch/README.md
@@ -1,21 +1,22 @@
-# NGP-NeRF
+# HashNeRF
+
 ## Model description
 
-A PyTorch implementation of the NeRF part (grid encoder, density grid ray sampler) in instant-ngp, as described in Instant Neural Graphics Primitives with a Multiresolution Hash Encoding.
+A PyTorch implementation of the hash-encoding NeRF part (grid encoder, density grid ray sampler) of instant-ngp, as described in Instant Neural Graphics Primitives with a Multiresolution Hash Encoding.
 
-## Step 1: Installing
+## Step 1: Installation
 
 ```bash
 pip3 install -r requirements.txt
 ```
 
-## Step 2: Prepare dataset
+## Step 2: Preparing datasets
 
 We use the same data format as instant-ngp, [fox](https://github.com/NVlabs/instant-ngp/tree/master/data/nerf/fox) and blender dataset [nerf_synthetic](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1).Please download and put them under `./data`.
 
 For custom dataset, you should:
 
 1. take a video / many photos from different views
-2. put the video under a path like ./data/custom/video.mp4 or the images under
-./data/custom/images/*.jpg.
+2. put the video under a path like ./data/custom/video.mp4 or the images under ./data/custom/images/*.jpg.
 3. call the preprocess code: (should install ffmpeg and colmap first! refer to the file for more options)
+
 ```bash
 python3 scripts/colmap2nerf.py --video ./data/custom/video.mp4 --run_colmap # if use video
 python3 scripts/colmap2nerf.py --images ./data/custom/images/ --run_colmap # if use images
@@ -58,25 +59,7 @@ python3 main_nerf.py data/custom_data --workspace trial_nerf -O
 |----------------------|------------------------------------------|-------------|----------|------------|-------------|-------------------------|-----------|
 | 0.0652               | SDK V2.2,bs:1,1x,fp16                    | 10          | 11.9     | 82         | 0.903       | 28.1                    | 1         |
 
-
-
 ## Reference
 
-**Q**: How to choose the network backbone?
-
-**A**: The `-O` flag which uses pytorch's native mixed precision is suitable for most cases. I don't find very significant improvement for `--tcnn` and `--ff`, and they require extra building. Also, some new features may only be available for the default `-O` mode.
-
-**Q**: CUDA Out Of Memory for my dataset.
-
-**A**: You could try to turn off `--preload` which loads all images in to GPU for acceleration (if use `-O`, change it to `--fp16 --cuda_ray`). Another solution is to manually set `downscale` in `NeRFDataset` to lower the image resolution.
-
-**Q**: How to adjust `bound` and `scale`?
-
-**A**: You could start with a large `bound` (e.g., 16) or a small `scale` (e.g., 0.3) to make sure the object falls into the bounding box. The GUI mode can be used to interactively shrink the `bound` to find the suitable value. Uncommenting [this line](https://github.com/ashawkey/torch-ngp/blob/main/nerf/provider.py#L219) will visualize the camera poses, and some good examples can be found in [this issue](https://github.com/ashawkey/torch-ngp/issues/59).
-
-**Q**: Noisy novel views for realistic datasets.
-
-**A**: You could try setting `bg_radius` to a large value, e.g., 32. It trains an extra environment map to model the background in realistic photos. A larger `bound` will also help.
-
-
-More information ref: https://github.com/ashawkey/torch-ngp
\ No newline at end of file
+- [torch-ngp](https://github.com/ashawkey/torch-ngp)
+- [DearPyGui](https://github.com/hoffstadt/DearPyGui)
\ No newline at end of file
diff --git a/README.md b/README.md
index c914bbb27c5a6e406536558347c76681e8e2a541..1080ac14196245c7661393df606712de3defc57d 100644
--- a/README.md
+++ b/README.md
@@ -491,7 +491,7 @@ DeepSparkHub甄选上百个应用算法和模型,覆盖AI和通用计算各领
 
 模型名称 | 框架 | 数据集
 -------- | ------ | ----
-[NGP-NeRF](3d-reconstruction/ngp-nerf/pytorch/README.md) | PyTorch | fox
+[HashNeRF](3d-reconstruction/ngp-nerf/pytorch/README.md) | PyTorch | fox
 -------
 
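For context on the multiresolution hash encoding that the renamed README refers to, here is a minimal sketch of the spatial hash from the Instant-NGP paper (coordinates XOR-ed after multiplication by large primes, then reduced modulo the table size). This is hypothetical illustrative code written for this note, not code from the patched repository; `hash_index`, `PRIMES`, and `T` are names chosen here.

```python
# Hypothetical sketch of the Instant-NGP spatial hash (Mueller et al. 2022),
# not code from this repository. Each integer grid-corner coordinate is mapped
# to a slot in a hash table of size T by XOR-ing coordinate * prime products.

PRIMES = (1, 2654435761, 805459861)  # per-dimension primes from the paper


def hash_index(coords, table_size):
    """Map an integer 3D grid coordinate to a hash-table slot in [0, table_size)."""
    h = 0
    for c, p in zip(coords, PRIMES):
        h ^= c * p
    return h % table_size


T = 2 ** 19  # a typical per-level table size (2^19 entries)
print(hash_index((12, 34, 56), T))
print(hash_index((13, 34, 56), T))
```

Because the table is much smaller than the number of fine-level grid corners, distinct corners can collide; the paper relies on the subsequent MLP to resolve such collisions rather than on the hash being perfect.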