# DeepSeek-OCR

**Repository Path**: mirrors/DeepSeek-OCR

## Basic Information

- **Project Name**: DeepSeek-OCR
- **Description**: DeepSeek-OCR is a new approach that uses the visual modality to compress long text contexts
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: main
- **Homepage**: https://www.oschina.net/p/deepseek-ocr
- **GVP Project**: No

## Statistics

- **Stars**: 4
- **Forks**: 3
- **Created**: 2025-10-21
- **Last Updated**: 2025-11-10

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README
📥 Model Download | 📄 Paper Link | 📄 Arxiv Paper Link
DeepSeek-OCR: Contexts Optical Compression
Explore the boundaries of visual-text compression.
## Release

- [2025/10/23] 🚀🚀🚀 DeepSeek-OCR is now officially supported in upstream [vLLM](https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-OCR.html#installing-vllm). Thanks to the [vLLM](https://github.com/vllm-project/vllm) team for their help.
- [2025/10/20] 🚀🚀🚀 We release DeepSeek-OCR, a model to investigate the role of vision encoders from an LLM-centric viewpoint.

## Contents

- [Install](#install)
- [vLLM Inference](#vllm-inference)
- [Transformers Inference](#transformers-inference)

## Install

> Our environment is cuda11.8 + torch2.6.0.

1. Clone this repository and navigate to the DeepSeek-OCR folder

```bash
git clone https://github.com/deepseek-ai/DeepSeek-OCR.git
```

2. Conda

```Shell
conda create -n deepseek-ocr python=3.12.9 -y
conda activate deepseek-ocr
```

3. Packages

- Download the vllm-0.8.5 [whl](https://github.com/vllm-project/vllm/releases/tag/v0.8.5)

```Shell
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
pip install vllm-0.8.5+cu118-cp38-abi3-manylinux1_x86_64.whl
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
```

**Note:** if you want the vLLM and Transformers code to run in the same environment, you can safely ignore an installation error such as: `vllm 0.8.5+cu118 requires transformers>=4.51.1`.

## vLLM Inference

> **Note:** change `INPUT_PATH`/`OUTPUT_PATH` and the other settings in `DeepSeek-OCR-master/DeepSeek-OCR-vllm/config.py` (a hypothetical sketch of these settings appears at the end of this section).

```Shell
cd DeepSeek-OCR-master/DeepSeek-OCR-vllm
```

1. Image: streaming output

```Shell
python run_dpsk_ocr_image.py
```

2. PDF: concurrent processing, ~2500 tokens/s (on an A100-40G)

```Shell
python run_dpsk_ocr_pdf.py
```

3. Batch evaluation for benchmarks

```Shell
python run_dpsk_ocr_eval_batch.py
```

**[2025/10/23] Using the upstream [vLLM](https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-OCR.html#installing-vllm) version:**

```shell
uv venv
source .venv/bin/activate
# Until the v0.11.1 release, you need to install vLLM from the nightly build
uv pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```

```python
from vllm import LLM, SamplingParams
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor
from PIL import Image

# Create model instance
llm = LLM(
    model="deepseek-ai/DeepSeek-OCR",
    enable_prefix_caching=False,
    mm_processor_cache_gb=0,
    logits_processors=[NGramPerReqLogitsProcessor]
)

# Prepare batched input with your image files
image_1 = Image.open("path/to/your/image_1.png").convert("RGB")
image_2 = Image.open("path/to/your/image_2.png").convert("RGB")

# NOTE: the rest of this example was truncated in the source; the lines below are
# a minimal reconstruction of a typical vLLM multimodal generate call.
prompt = "<image>\nFree OCR. "  # example OCR prompt; <image> marks where the image is inserted

model_input = [
    {"prompt": prompt, "multi_modal_data": {"image": image_1}},
    {"prompt": prompt, "multi_modal_data": {"image": image_2}},
]

sampling_params = SamplingParams(
    temperature=0.0,
    max_tokens=8192,
    extra_args=dict(ngram_size=30, window_size=90),  # illustrative n-gram repetition-blocker settings
    skip_special_tokens=False,
)

# Run batched OCR and print the decoded text for each image
outputs = llm.generate(model_input, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```
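The `config.py` file referenced in the vLLM Inference section above is not reproduced in this README. As a rough orientation, a minimal sketch of the kind of settings it exposes might look like the following; only `INPUT_PATH` and `OUTPUT_PATH` are actually named in this README, so every other field here is an assumption and the real file in `DeepSeek-OCR-master/DeepSeek-OCR-vllm/` should be treated as authoritative.

```python
# Hypothetical sketch of DeepSeek-OCR-vllm/config.py.
# Only INPUT_PATH and OUTPUT_PATH are mentioned in this README;
# the remaining names and values are assumptions for illustration.
MODEL_PATH = "deepseek-ai/DeepSeek-OCR"      # local checkpoint path or Hugging Face model id
INPUT_PATH = "/path/to/your/images_or_pdfs"  # file or directory to run OCR on
OUTPUT_PATH = "/path/to/save/results"        # where the recognized text/markdown is written
PROMPT = "<image>\nFree OCR. "               # task prompt passed to the model
```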