# preallocate-cuda-memory

**Repository Path**: guyi2000/preallocate-cuda-memory

## Basic Information

- **Project Name**: preallocate-cuda-memory
- **Description**: A tool that preallocates GPU memory for PyTorch, useful when competing with others for shared computational resources.
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-06-02
- **Last Updated**: 2024-06-02

## Categories & Tags

**Categories**: Uncategorized

**Tags**: AI, PyTorch, cuda-memory-allocation

## README

# Preallocate CUDA memory for PyTorch

This tool preallocates GPU memory for PyTorch. It is useful when you need to reserve memory on a shared machine before other processes claim it.

You can run it directly from the command line:

```bash
python -m preallocate_cuda_memory
```

Or you can use it in a Python script:

```python
import preallocate_cuda_memory as pc

mc = pc.MemoryController(0)  # 0 is the GPU index
mc.occupy_all_available_memory()
mc.free_memory()
```

If you run into any issues, please feel free to contact the author by raising an issue on GitHub.
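The package's internals are not shown here, but the underlying idea — querying the device's free memory and allocating tensors until it is consumed — can be sketched roughly as follows. Note that `occupy_all_available_memory` and `split_into_chunks` below are illustrative names for this sketch, not the library's actual API:

```python
# Hedged sketch of GPU-memory preallocation with PyTorch.
# This is an illustrative reimplementation, not the preallocate_cuda_memory source.

def split_into_chunks(total_bytes: int, chunk_bytes: int) -> list[int]:
    """Split a byte budget into allocation-sized chunks (pure helper)."""
    full, rem = divmod(total_bytes, chunk_bytes)
    return [chunk_bytes] * full + ([rem] if rem else [])

def occupy_all_available_memory(device_index: int = 0, chunk_mb: int = 256) -> list:
    """Allocate uint8 tensors until the device's free memory is (mostly) consumed."""
    import torch  # imported lazily so the pure helper above works without a GPU

    device = torch.device(f"cuda:{device_index}")
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    held = []
    for size in split_into_chunks(free_bytes, chunk_mb * 1024 * 1024):
        try:
            held.append(torch.empty(size, dtype=torch.uint8, device=device))
        except torch.cuda.OutOfMemoryError:
            break  # another process grabbed memory first; keep what we have
    return held  # keep a reference to these tensors so the memory stays reserved
```

Allocating in moderate chunks rather than one giant tensor makes the loop resilient to races with other processes: if a single allocation fails, the memory already held is kept rather than lost. Dropping all references to the returned tensors (and calling `torch.cuda.empty_cache()`) would release the memory again, which is presumably what `free_memory()` does.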