# FlashVDM

Unleashing Vecset Diffusion Model for Fast Shape Generation within 1 Second (ICCV 2025 Highlight)

⚡️⚡️ Try it now with **[Hunyuan3D-2](https://github.com/Tencent/Hunyuan3D-2)** for super-fast, high-quality shape generation within 1 second on an RTX 4090.

https://github.com/user-attachments/assets/a2cbc5b8-be22-49d7-b1c3-7aa2b20ba460
## News

- **[2025-07-29]**: Released ULIP and Uni3D evaluation [code](evaluation/).
- **[2025-06-27]**: FlashVDM was accepted to ICCV 2025 as a highlight paper.
- **[2025-03-19]**: FlashVDM was released and integrated into [Hunyuan3D-2](https://github.com/Tencent/Hunyuan3D-2).

## What is FlashVDM?

FlashVDM is a general framework for accelerating shape-generation Vecset Diffusion Models (VDMs) such as [Hunyuan3D-2](https://github.com/Tencent/Hunyuan3D-2), [Michelangelo](https://github.com/NeuralCarver/Michelangelo), [CraftsMan3D](https://github.com/wyysf-98/CraftsMan3D), [CLAY](https://github.com/CLAY-3D/OpenCLAY), [TripoSG](https://arxiv.org/abs/2502.06608), and [Dora](https://github.com/Seed3D/Dora). It features two techniques, covering both VAE and DiT acceleration:

1. ***Lightning Vecset Decoder***, which drastically lowers decoding FLOPs without any loss in decoding quality, achieving over **45x speedup**.
2. ***Progressive Flow Distillation***, which enables flexible diffusion sampling with as few as **5 inference steps** at comparable quality (see the conceptual sketch at the end of this README).

#### Officially Supported Models

- [Hunyuan3D-2](https://github.com/Tencent-Hunyuan/Hunyuan3D-2?tab=readme-ov-file#-models-zoo)
- [Hunyuan3D-2mini](https://github.com/Tencent-Hunyuan/Hunyuan3D-2?tab=readme-ov-file#-models-zoo)
- [Hunyuan3D-2mv](https://github.com/Tencent-Hunyuan/Hunyuan3D-2?tab=readme-ov-file#-models-zoo)
- [Hunyuan3D-2.1](https://github.com/Tencent-Hunyuan/Hunyuan3D-2.1)
- [Hunyuan3D-Omni](https://github.com/Tencent-Hunyuan/Hunyuan3D-Omni)

#### Community Supported Models

- [HoloPart](https://github.com/VAST-AI-Research/HoloPart): Generative 3D Part Amodal Segmentation
- [TripoSG](https://github.com/VAST-AI-Research/TripoSG): High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models
- [DetailGen3D](https://github.com/VAST-AI-Research/DetailGen3D): Generative 3D Geometry Enhancement via Data-Dependent Flow
- [CraftsMan3D](https://github.com/wyysf-98/CraftsMan3D): High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner
- [Step1X-3D](https://github.com/stepfun-ai/Step1X-3D): Towards High-Fidelity and Controllable Generation of Textured 3D Assets
- [PartPacker](https://github.com/NVlabs/PartPacker): Efficient Part-level 3D Object Generation via Dual Volume Packing

## How to Use?

Visit **[Hunyuan3D-2](https://github.com/Tencent/Hunyuan3D-2)** to access the integration of FlashVDM with Hunyuan3D-2. Relative to the standard pipeline, you switch to the turbo checkpoint, enable FlashVDM, and cut the number of inference steps:

```diff
 from hy3dgen.rembg import BackgroundRemover
 from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

 pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
     'tencent/Hunyuan3D-2',
-    subfolder='hunyuan3d-dit-v2-0',
+    subfolder='hunyuan3d-dit-v2-0-turbo',
     use_safetensors=True,
 )
+pipeline.enable_flashvdm()
 pipeline(
     image=image,
-    num_inference_steps=50,
+    num_inference_steps=5,
 )[0]
```

A fuller end-to-end sketch follows the citation below.

## Citation

If you find this repository helpful, please cite our report:

```bibtex
@misc{lai2025unleashingvecsetdiffusionmodel,
      title={Unleashing Vecset Diffusion Model for Fast Shape Generation},
      author={Zeqiang Lai and Yunfei Zhao and Zibo Zhao and Haolin Liu and Fuyun Wang and Huiwen Shi and Xianghui Yang and Qingxiang Lin and Jingwei Huang and Yuhong Liu and Jie Jiang and Chunchao Guo and Xiangyu Yue},
      year={2025},
      eprint={2503.16302},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.16302},
}
```
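## Few-Step Sampling, Conceptually

Progressive Flow Distillation trains the model so that the probability-flow ODE can be integrated in very few steps. As a rough illustration of why that matters, a flow-matching sampler is just Euler integration of a learned velocity field. The sketch below is generic and illustrative, not FlashVDM's actual sampler; `velocity_model`, `euler_flow_sampler`, and `latent_shape` are all hypothetical names.

```python
import torch


@torch.no_grad()
def euler_flow_sampler(velocity_model, latent_shape, num_steps=5, device="cuda"):
    """Generic few-step Euler sampler for a flow-matching model.

    `velocity_model(x, t)` is a hypothetical callable predicting the
    velocity field v(x, t). This only illustrates the sampling loop
    whose step count distillation shrinks; it is not FlashVDM code.
    """
    x = torch.randn(latent_shape, device=device)   # start from noise at t = 0
    timesteps = torch.linspace(0.0, 1.0, num_steps + 1, device=device)
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        v = velocity_model(x, t_cur)               # predicted velocity at (x, t)
        x = x + (t_next - t_cur) * v               # one Euler step along the ODE
    return x                                       # approximate sample at t = 1
```

With an undistilled flow model, 5 Euler steps would normally degrade quality noticeably; the distillation objective is what makes such a small step count usable.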
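## End-to-End Usage Sketch

For completeness, here is a minimal end-to-end sketch built around the turbo pipeline from the diff above. It assumes the Hunyuan3D-2 API: the `BackgroundRemover` call pattern, the PIL input, and the trimesh-style `export` follow that repository's examples, and the input path is a hypothetical placeholder.

```python
from PIL import Image

from hy3dgen.rembg import BackgroundRemover
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

# Load the turbo (FlashVDM-distilled) weights instead of the base DiT.
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
    'tencent/Hunyuan3D-2',
    subfolder='hunyuan3d-dit-v2-0-turbo',
    use_safetensors=True,
)
pipeline.enable_flashvdm()  # enable the accelerated (Lightning) decoding path

image = Image.open('assets/demo.png')  # hypothetical input path
if image.mode == 'RGB':
    # Hunyuan3D-2 expects a foreground-masked (RGBA) image.
    image = BackgroundRemover()(image)

# 5 steps suffice with the distilled turbo model.
mesh = pipeline(image=image, num_inference_steps=5)[0]
mesh.export('demo_flashvdm.glb')  # the pipeline returns meshes in Hunyuan3D-2's examples
```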