[Architecture](./docs/source/design.md#architecture) | [Workflow](./docs/source/design.md#workflows) | [Documentation](https://intel.github.io/neural-compressor)
---
Intel® Neural Compressor aims to provide popular model compression techniques such as Static Quantization, Dynamic Quantization, SmoothQuant, Weight-Only Quantization, Quantization-Aware Training, Mixed Precision, etc.
* Support advanced quantization of Large Language Models (LLMs) and Vision-Language Models (VLMs) such as LLaMA, Qwen, DeepSeek, Flux, and FramePack across diverse quantization techniques and low-precision data types through integration with [AutoRound](https://github.com/intel/auto-round) (a minimal sketch follows this list).
* Support a wide range of Intel hardware such as [Intel Gaudi AI Accelerators](https://www.intel.com/content/www/us/en/products/details/processors/ai-accelerators/gaudi.html), [Intel Core Ultra Processors](https://www.intel.com/content/www/us/en/products/details/processors/core-ultra.html), [Intel Xeon Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html),
[Intel Xeon CPU Max Series](https://www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html), [Intel Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/overview.html), and [Intel Data Center GPU Max Series](https://www.intel.com/content/www/us/en/products/overview.html) with extensive testing;
support AMD CPU, ARM CPU, and NVIDIA GPU with limited testing.
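As a minimal sketch of the AutoRound integration, the snippet below quantizes a small LLM to 4-bit weights through Intel Neural Compressor's transformers-like API. The model name and the exact `AutoRoundConfig` fields (`bits`, `group_size`) are illustrative and may vary across versions; treat this as a sketch rather than canonical usage.
```python
# Sketch: 4-bit weight-only quantization via the AutoRound integration.
# Assumes the transformers-like API; config fields are illustrative.
from neural_compressor.transformers import AutoModelForCausalLM, AutoRoundConfig

quantization_config = AutoRoundConfig(bits=4, group_size=128)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # any Hugging Face causal LM; a small model for illustration
    quantization_config=quantization_config,
)
```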
## What's New
* [2026/03] FP8 quantization support for [Keras/JAX](./docs/source/JAX.md) (experimental)
* [2026/03] FP8 KV cache/Attention static quantization with [AutoRound](./docs/source/PT_AutoRound.md) (experimental)
* [2025/12] [NVFP4 quantization](./docs/source/PT_NVFP4Quant.md) experimental support
* [2025/10] [MXFP8 / MXFP4 quantization](./docs/source/PT_MXQuant.md) experimental support
* [2025/09] FP8 dynamic quantization, including Linear, FusedMoE on Intel Gaudi AI Accelerators
* [2025/05] FP8 static quantization of DeepSeek V3/R1 model on Intel Gaudi AI Accelerators
* [2025/03] VLM quantization in transformers-like API on Intel CPU/GPU
## Installation
Choose the framework dependencies to install based on your deployment environment.
### Install Framework for PyTorch Backend (on-demand)
Intel Neural Compressor supports PyTorch on CPU, GPU, and HPU. Please install the corresponding PyTorch version based on your hardware environment.
* [Install intel_extension_for_pytorch for CPU](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)
* [Install intel_extension_for_pytorch for Intel GPU](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)
* [Use Docker Image with torch installed for HPU](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#bare-metal-fresh-os-single-click)
**Note**: There is a version mapping between Intel Neural Compressor and the Gaudi Software Stack; please refer to this [table](./docs/source/gaudi_version_map.md) and make sure to use a matched combination.
* [Install torch for other platforms](https://pytorch.org/get-started/locally)
### Install Neural Compressor from PyPI
```bash
# Framework extension API + PyTorch dependency
pip install neural-compressor-pt
# Framework extension API + TensorFlow dependency
pip install neural-compressor-tf
# Framework extension API + JAX dependency, available since v3.8
pip install neural-compressor-jax
```
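After installation, a quick sanity check (a minimal sketch) is to import the package and print its version:
```python
# Sanity check: confirm the package imports and report its version.
import neural_compressor

print(neural_compressor.__version__)
```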
**Note**: Additional installation methods can be found in the [Installation Guide](./docs/source/installation_guide.md). Check out our [FAQ](./docs/source/faq.md) for more details.
## Getting Started
After successfully installing these packages, try your first quantization program. **The following example demonstrates FP8 quantization**, which is supported by the Intel Gaudi2 AI Accelerator.
To try it on Intel Gaudi2, a docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built).
Run a container with an interactive shell ([more info](https://docs.habana.ai/en/latest/Installation_Guide/Additional_Installation/Docker_Installation.html#docker-installation)):
```bash
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.24.0/ubuntu24.04/habanalabs/pytorch-installer-2.10.0:latest
```
> Note: Since Habana software 1.21.0, `PT_HPU_LAZY_MODE=0` has been the default setting. However, most low-precision functions (such as `convert_from_uint4`) do not support it, so we recommend setting `PT_HPU_LAZY_MODE=1` for compatibility.
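For example, the variable can be set from Python before any PyTorch/Habana modules are imported (or, equivalently, exported in the shell):
```python
# Enable lazy mode for compatibility with low-precision ops.
# Must be set before importing torch / Habana frameworks.
import os

os.environ["PT_HPU_LAZY_MODE"] = "1"
```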
Run the example:
```python
from neural_compressor.torch.quantization import (
    FP8Config,
    prepare,
    convert,
)
import torch
import torchvision.models as models

model = models.resnet18()

# Configure FP8 quantization with the E4M3 data type.
qconfig = FP8Config(fp8_config="E4M3")

# Insert observers to collect calibration statistics.
model = prepare(model, qconfig)

# User-defined calibration; below is a dummy calibration on random data.
model(torch.randn(1, 3, 224, 224).to("hpu"))

# Convert the calibrated model to FP8.
model = convert(model)

output = model(torch.randn(1, 3, 224, 224).to("hpu")).to("cpu")
print(output.shape)
```
See the [FP8 quantization doc](./docs/source/PT_FP8Quant.md) for more details.
**The following example demonstrates loading a weight-only quantized large language model** on the Intel Gaudi2 AI Accelerator.
```python
import torch

from neural_compressor.torch.quantization import load

model_name = "TheBloke/Llama-2-7B-GPTQ"

# Load the GPTQ checkpoint from the Hugging Face Hub onto the HPU.
model = load(
    model_name_or_path=model_name,
    format="huggingface",
    device="hpu",
    torch_dtype=torch.bfloat16,
)
```
**Note:** On the first load, Intel Neural Compressor converts the model from the auto-gptq format to the HPU format and saves `hpu_model.safetensors` to the local cache directory for subsequent loads, so the first load may take a while.
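Once loaded, the model behaves like a regular `transformers` causal LM. A minimal usage sketch, assuming the `transformers` library is installed and `model_name`/`model` come from the snippet above:
```python
# Run a short generation with the loaded HPU model (illustrative prompt).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("What is a large language model?", return_tensors="pt").to("hpu")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```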
## Documentation
See the full documentation at [intel.github.io/neural-compressor](https://intel.github.io/neural-compressor).
## Selected Publications/Events
* arXiv: [Faster Inference of LLMs using FP8 on the Intel Gaudi](https://arxiv.org/abs/2503.09975) (Mar 2025)
* PyTorch landscape: [PyTorch general optimizations](https://landscape.pytorch.org/) (Mar 2025)
* Blog on SqueezeBits: [[Intel Gaudi] #4. FP8 Quantization](https://blog.squeezebits.com/intel-gaudi-4-fp8-quantization--40269) (Jan 2025)
> **Note**:
> View [Full Publication List](https://github.com/intel/neural-compressor/blob/master/docs/source/publication_list.md).
## Additional Content
* [Contribution Guidelines](./docs/source/CONTRIBUTING.md)
* [Legal Information](./docs/source/legal_information.md)
* [Security Policy](SECURITY.md)
## Communication
- [GitHub Issues](https://github.com/intel/neural-compressor/issues): mainly for bug reports, feature requests, questions, etc.
- [Email](mailto:inc.maintainers@intel.com): reach out by email with research ideas on model compression techniques or to discuss collaborations.