# MiniCPM-V
**Repository Path**: ai-aigc/MiniCPM-V
## Basic Information
- **Project Name**: MiniCPM-V
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
## README
| Model | Size | Avg Score ↑ | Total Inference Time ↓ | GPU Mem ↓ |
|:---|:---:|:---:|:---:|:---:|
| Qwen2.5-VL-7B-Instruct | 8.3B | 71.6 | 3h | 60G |
| GLM-4.1V-9B-Thinking | 10.3B | 73.6 | 2.63h | 32G |
| MiniCPM-V 4.5 | 8.7B | 73.5 | 0.26h | 28G |
Both OpenCompass and Video-MME are evaluated with 8×A100 GPUs; the Video-MME inference time does not include video frame extraction.
### Typical Examples
Click to view more examples.
We deployed MiniCPM-V 4.5 on an iPad (M4) with the [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS) and recorded the following screen captures. The videos have not been edited in any way.
## MiniCPM-o 2.6
MiniCPM-o 2.6 is the latest and most capable model in the MiniCPM-o series. It is built on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B, with 8B parameters in total, and is trained and run in an end-to-end fashion. Compared with MiniCPM-V 2.6, it delivers significantly better performance and adds new capabilities for real-time speech conversation and multimodal live streaming. Key features of MiniCPM-o 2.6 include:
- 🔥 **Leading visual capability.**
MiniCPM-o 2.6 achieves an average score of 70.2 on OpenCompass, a comprehensive evaluation over 8 popular multimodal benchmarks. **With only 8B parameters, it surpasses widely used proprietary models such as GPT-4o-202405, Gemini 1.5 Pro, and Claude 3.5 Sonnet in single-image understanding.** It also **outperforms GPT-4V and Claude 3.5 Sonnet** in multi-image and video understanding, and shows strong in-context learning ability.
- 🎙 **State-of-the-art speech capability.**
MiniCPM-o 2.6 **supports bilingual (Chinese and English) real-time speech conversation with configurable voices**. It **outperforms GPT-4o-realtime on speech understanding tasks** such as ASR and STT, and shows **the best speech generation performance among open-source models** in both semantic and acoustic evaluations of spoken conversation. It also supports advanced features such as emotion/speed/style control, voice cloning, and role play.
- 🎬 **Strong multimodal live-streaming capability.**
As a new feature, MiniCPM-o 2.6 can **accept continuous video and audio streams and interact with users through real-time speech**. On StreamingBench, a comprehensive benchmark covering real-time video understanding, omni-source (video and audio) understanding, and multimodal contextual understanding, MiniCPM-o 2.6 achieves the best result in the open-source community and **surpasses GPT-4o-202408 and Claude 3.5 Sonnet**.
- 💪 **Strong OCR capability and more.**
Further improving on the visual understanding strengths of MiniCPM-V 2.6, MiniCPM-o 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves the **best result on OCRBench among models under 25B parameters, surpassing proprietary models such as GPT-4o-202405**. Based on the latest [RLHF-V](https://rlhf-v.github.io/), [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/), and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it exhibits **trustworthy multimodal behavior**, outperforming GPT-4o and Claude 3.5 on MMHal-Bench, and supports **more than 30 languages**, including English, Chinese, German, French, Italian, and Korean.
- 🚀 **Superior efficiency.**
In addition to its user-friendly size, MiniCPM-o 2.6 offers **state-of-the-art visual token density** (i.e., the number of pixels encoded into each visual token). It **needs only 640 tokens to process a 1.8-million-pixel image, which is 75% fewer than most models**. This directly improves inference speed, first-token latency, memory usage, and power consumption, enabling efficient **real-time multimodal streaming interaction** on end devices such as iPads.
- 💫 **Easy to use.**
MiniCPM-o 2.6 can be used in several ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) for efficient CPU inference on local devices, (2) quantized models in [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) formats, in 16 sizes, (3) [vLLM](#efficient-inference-with-llamacpp-ollama-and-vllm) for high-throughput, memory-efficient inference, (4) fine-tuning on new domains and tasks with [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) a quick local WebUI demo with [Gradio](#local-webui-demo), and (6) an online [demo](https://minicpm-omni-webdemo-us.modelbest.cn/) hosted on our server.
**Model architecture.**
- **End-to-end omni-modal architecture.** Encoders and decoders for the different modalities are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge. The model is trained fully end-to-end with only the CE loss.
- **Omni-modal live-streaming mechanism.** (1) We turn the offline encoders/decoders of the different modalities into online modules suited to **streaming inputs and outputs**. (2) We design a **time-division multiplexing mechanism** for omni-modal stream processing in the LLM backbone, which splits the parallel streams of the different modalities and recombines them into a sequence of periodic time slices (a toy sketch follows this list).
- **Configurable speech modeling design.** We devise a multimodal system prompt that combines a traditional text system prompt with **a speech system prompt that specifies the model's voice**. The voice can be flexibly controlled at inference time with text or speech samples, enabling end-to-end voice cloning and voice creation.
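As a rough illustration of the time-division multiplexing idea only (not the model's internal implementation), the sketch below slices parallel video and audio streams into 1-second units and interleaves them into a single ordered sequence, mirroring the "1 frame + 1 s of audio per unit" packing used in the streaming example later in this README. The function name and types are ours, introduced purely for illustration.

```python
# Toy illustration of time-division multiplexing of parallel modality streams.
from typing import Any, List, Tuple

def time_division_multiplex(video_frames: List[Any],
                            audio_chunks: List[Any]) -> List[Tuple[str, Any]]:
    """Interleave per-second video frames and audio chunks into one sequence of time slices."""
    sequence = []
    for t in range(max(len(video_frames), len(audio_chunks))):
        if t < len(video_frames):
            sequence.append(("image", video_frames[t]))   # the frame sampled for second t
        if t < len(audio_chunks):
            sequence.append(("audio", audio_chunks[t]))   # the 1-second audio chunk for second t
    return sequence
```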
### Evaluation
Click to view detailed evaluation results on visual understanding.
**Image Understanding**
| Model | Size | Token Density+ | OpenCompass | OCRBench | MathVista mini | ChartQA | MMVet | MMStar | MME | MMB1.1 test | AI2D | MMMU val | HallusionBench | TextVQA val | DocVQA test | MathVerse mini | MathVision | MMHal Score |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | | | | | | | | | | | | | | |
| GPT-4o-20240513 | - | 1088 | 69.9 | 736 | 61.3 | 85.7 | 69.1 | 63.9 | 2328.7 | 82.2 | 84.6 | 69.2 | 55.0 | - | 92.8 | 50.2 | 30.4 | 3.6 |
| Claude3.5-Sonnet | - | 750 | 67.9 | 788 | 61.6 | 90.8 | 66.0 | 62.2 | 1920.0 | 78.5 | 80.2 | 65.9 | 49.9 | - | 95.2 | - | - | 3.4 |
| Gemini 1.5 Pro | - | - | 64.4 | 754 | 57.7 | 81.3 | 64.0 | 59.1 | 2110.6 | 73.9 | 79.1 | 60.6 | 45.6 | 73.5 | 86.5 | - | 19.2 | - |
| GPT-4o-mini-20240718 | - | 1088 | 64.1 | 785 | 52.4 | - | 66.9 | 54.8 | 2003.4 | 76.0 | 77.8 | 60.0 | 46.1 | - | - | - | - | 3.3 |
| **Open Source** | | | | | | | | | | | | | | | | | | |
| Cambrian-34B | 34B | 1820 | 58.3 | 591 | 50.3 | 75.6 | 53.2 | 54.2 | 2049.9 | 77.8 | 79.5 | 50.4 | 41.6 | 76.7 | 75.5 | - | - | - |
| GLM-4V-9B | 13B | 784 | 59.1 | 776 | 51.1 | - | 58.0 | 54.8 | 2018.8 | 67.9 | 71.2 | 46.9 | 45.0 | - | - | - | - | - |
| Pixtral-12B | 12B | 256 | 61.0 | 685 | 56.9 | 81.8 | 58.5 | 54.5 | - | 72.7 | 79.0 | 51.1 | 47.0 | 75.7 | 90.7 | - | - | - |
| DeepSeek-VL2-27B (4B) | 27B | 672 | 66.4 | 809 | 63.9 | 86.0 | 60.0 | 61.9 | 2253.0 | 81.2 | 83.8 | 54.0 | 45.3 | 84.2 | 93.3 | - | - | 3.0 |
| Qwen2-VL-7B | 8B | 784 | 67.1 | 866 | 58.2 | 83.0 | 62.0 | 60.7 | 2326.0 | 81.8 | 83.0 | 54.1 | 50.6 | 84.3 | 94.5 | 31.9 | 16.3 | 3.2 |
| LLaVA-OneVision-72B | 72B | 182 | 68.1 | 741 | 67.5 | 83.7 | 60.6 | 65.8 | 2261.0 | 85.0 | 85.6 | 56.8 | 49.0 | 80.5 | 91.3 | 39.1 | - | 3.5 |
| InternVL2.5-8B | 8B | 706 | 68.3 | 822 | 64.4 | 84.8 | 62.8 | 62.8 | 2344.0 | 83.6 | 84.5 | 56.0 | 50.1 | 79.1 | 93.0 | 39.5 | 19.7 | 3.4 |
| MiniCPM-V 2.6 | 8B | 2822 | 65.2 | 852* | 60.6 | 79.4 | 60.0 | 57.5 | 2348.4* | 78.0 | 82.1 | 49.8* | 48.1* | 80.1 | 90.8 | 25.7 | 18.3 | 3.6 |
| MiniCPM-o 2.6 | 8B | 2822 | 70.2 | 897* | 71.9* | 86.9* | 67.5 | 64.0 | 2372.0* | 80.5 | 85.8 | 50.4* | 51.9 | 82.0 | 93.5 | 41.4* | 23.1* | 3.8 |
* We evaluate these benchmarks with chain-of-thought prompting; for MME, CoT is applied only to the Cognition set.
+ Token Density: the number of pixels encoded into each visual token at maximum resolution, i.e., pixels at maximum resolution / number of visual tokens.
Note: the Token Density of proprietary models is estimated from their API pricing policy.
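For example, using the numbers stated earlier in this README (a 1344x1344 image, about 1.8 million pixels, encoded into 640 visual tokens), the Token Density of MiniCPM-o 2.6 works out as follows:

```python
# Token Density = pixels at maximum resolution / number of visual tokens
max_pixels = 1344 * 1344   # about 1.8 million pixels at maximum supported resolution
visual_tokens = 640        # visual tokens MiniCPM-o 2.6 uses for an image of this size
print(round(max_pixels / visual_tokens))  # 2822, matching the table entry above
```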
**Multi-image and Video Understanding**
| Model | Size | BLINK val | Mantis Eval | MIRB | Video-MME (wo / w subs) |
|:---|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | |
| GPT-4o-20240513 | - | 68 | - | - | 71.9/77.2 |
| GPT4V | - | 54.6 | 62.7 | 53.1 | 59.9/63.3 |
| **Open-source** | | | | | |
| LLaVA-NeXT-Interleave 14B | 14B | 52.6 | 66.4 | 30.2 | - |
| LLaVA-OneVision-72B | 72B | 55.4 | 77.6 | - | 66.2/69.5 |
| MANTIS 8B | 8B | 49.1 | 59.5 | 34.8 | - |
| Qwen2-VL-7B | 8B | 53.2 | 69.6* | 67.6* | 63.3/69.0 |
| InternVL2.5-8B | 8B | 54.8 | 67.7 | 52.5 | 64.2/66.9 |
| MiniCPM-V 2.6 | 8B | 53 | 69.1 | 53.8 | 60.9/63.6 |
| MiniCPM-o 2.6 | 8B | 56.7 | 71.9 | 58.6 | 63.9/67.9 |
* Results evaluated with the officially released open-source model weights.
Click to view detailed evaluation results on speech understanding and generation.
**Speech Understanding**
| Model | Size | AISHELL-1 (ASR zh, CER↓) | Fleurs zh (ASR zh, CER↓) | WenetSpeech test-net (ASR zh, CER↓) | LibriSpeech test-clean (ASR en, WER↓) | GigaSpeech (ASR en, WER↓) | TED-LIUM (ASR en, WER↓) | CoVoST en2zh (AST, BLEU↑) | CoVoST zh2en (AST, BLEU↑) | MELD emotion (ACC↑) |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | | | | | | |
| GPT-4o-Realtime | - | 7.3* | 5.4* | 28.9* | 2.6* | 12.9* | 4.8* | 37.1* | 15.7* | 33.2* |
| Gemini 1.5 Pro | - | 4.5* | 5.9* | 14.3* | 2.9* | 10.6* | 3.0* | 47.3* | 22.6* | 48.4* |
| **Open-Source** | | | | | | | | | | |
| Qwen2-Audio-7B | 8B | - | 7.5 | - | 1.6 | - | - | 45.2 | 24.4 | 55.3 |
| Qwen2-Audio-7B-Instruct | 8B | 2.6* | 6.9* | 10.3* | 3.1* | 9.7* | 5.9* | 39.5* | 22.9* | 17.4* |
| GLM-4-Voice-Base | 9B | 2.5 | - | - | 2.8 | - | - | - | - | - |
| MiniCPM-o 2.6 | 8B | 1.6 | 4.4 | 6.9 | 1.7 | 8.7 | 3.0 | 48.2 | 27.2 | 52.4 |
* Results evaluated with the officially released open-source model weights.
**Speech Generation**
| Model | Size | Speech Llama Q. (ACC↑) | Speech Web Q. (ACC↑) | Speech Trivia QA (ACC↑) | Speech AlpacaEval (G-Eval, 10-point↑) | AudioArena Semantic ELO↑ | AudioArena Acoustic ELO↑ | AudioArena Overall ELO↑ | AudioArena UTMOS↑ | AudioArena ASR-WER↓ |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | | | | | | |
| GPT-4o-Realtime | - | 71.7 | 51.6 | 69.7 | 7.4 | 1157 | 1203 | 1200 | 4.2 | 2.3 |
| **Open-Source** | | | | | | | | | | |
| GLM-4-Voice | 9B | 50.0 | 32.0 | 36.4 | 5.1 | 999 | 1147 | 1035 | 4.1 | 11.7 |
| Llama-Omni | 8B | 45.3 | 22.9 | 10.7 | 3.9 | 960 | 878 | 897 | 3.2 | 24.3 |
| VITA-1.5 | 8B | 46.7 | 28.1 | 23.3 | 2.0 | - | - | - | - | - |
| Moshi | 7B | 43.7 | 23.8 | 16.7 | 2.4 | 871 | 808 | 875 | 2.8 | 8.2 |
| Mini-Omni | 1B | 22.0 | 12.8 | 6.9 | 2.5 | 926 | 803 | 865 | 3.4 | 10.0 |
| MiniCPM-o 2.6 | 8B | 61.0 | 40.0 | 40.2 | 5.1 | 1088 | 1163 | 1131 | 4.2 | 9.8 |
All results are based on AudioEvals.
**End-to-end Voice Cloning (TTS)**
| Model | Seed-TTS test-zh (SIMO↑) | Seed-TTS test-en (SIMO↑) |
|:---|:---:|:---:|
| F5-TTS | 76 | 67 |
| CosyVoice | 75 | 64 |
| FireRedTTS | 63 | 46 |
| MiniCPM-o 2.6 | 57 | 47 |
Click to view detailed evaluation results on multimodal live streaming.
**Multimodal Live Streaming**: results on StreamingBench
| Model | Size | Real-Time Video Understanding | Omni-Source Understanding | Contextual Understanding | Overall |
|:---|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | |
| Gemini 1.5 Pro | - | 77.4 | 67.8 | 51.1 | 70.3 |
| GPT-4o-202408 | - | 74.5 | 51.0 | 48.0 | 64.1 |
| Claude-3.5-Sonnet | - | 74.0 | 41.4 | 37.8 | 59.7 |
| **Open-source** | | | | | |
| VILA-1.5 | 8B | 61.5 | 37.5 | 26.7 | 49.5 |
| LongVA | 7B | 63.1 | 35.9 | 30.2 | 50.7 |
| LLaVA-Next-Video-34B | 34B | 69.8 | 41.7 | 34.3 | 56.7 |
| Qwen2-VL-7B | 8B | 71.2 | 40.7 | 33.1 | 57.0 |
| InternVL2-8B | 8B | 70.1 | 42.7 | 34.1 | 57.0 |
| VITA-1.5 | 8B | 70.9 | 40.8 | 35.8 | 57.4 |
| LLaVA-OneVision-7B | 8B | 74.3 | 40.8 | 31.0 | 58.4 |
| InternLM-XC2.5-OL-7B | 8B | 75.4 | 46.2 | 33.6 | 60.8 |
| MiniCPM-V 2.6 | 8B | 72.4 | 40.2 | 33.4 | 57.7 |
| MiniCPM-o 2.6 | 8B | 79.9 | 53.4 | 38.5 | 66.0 |
### Typical Examples
The following are on-device demos of MiniCPM-o 2.6 on an iPad Pro and examples from the web demo:
## Legacy Models
| Model | Introduction and Guidance |
|:----------------------|:-------------------:|
| MiniCPM-V 4.0 | [Document](./docs/minicpm_v4_zh.md) |
| MiniCPM-V 2.6 | [Document](./docs/minicpm_v2dot6_zh.md) |
| MiniCPM-Llama3-V 2.5 | [Document](./docs/minicpm_llama3_v2dot5.md) |
| MiniCPM-V 2.0 | [Document](./docs/minicpm_v2.md) |
| MiniCPM-V 1.0 | [Document](./docs/minicpm_v1.md) |
| OmniLMM-12B | [Document](./omnilmm.md) |
## Chat with Our Demo on Gradio 🤗
We provide online and local demos powered by Hugging Face Gradio, currently the most popular model deployment framework, which supports streaming outputs, progress bars, and other commonly used features.
### Online Demo
Feel free to try our online demos: [MiniCPM-V 2.6](http://120.92.209.146:8887/) | [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2).
### Local WebUI Demo
You can easily build your own local WebUI demo with the following commands. Please refer to the [documentation](https://modelbest.feishu.cn/wiki/RnjjwnUT7idMSdklQcacd2ktnyN) for more detailed deployment instructions.
**Real-time streaming video/voice call demo:**
1. Launch the model server:
```shell
pip install -r requirements_o2.6.txt
python web_demos/minicpm-o_2.6/model_server.py
```
Please make sure `transformers==4.44.2` is installed; other versions may currently have compatibility issues that we are working on.
If you are using an older version of PyTorch, you may encounter the error `"weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'`. In that case, add `self.minicpmo_model.tts.float()` during model initialization.
2. Launch the web server:
```shell
# Make sure Node and PNPM are installed.
sudo apt-get update
sudo apt-get install nodejs npm
npm install -g pnpm
cd web_demos/minicpm-o_2.6/web_server
# Create a self-signed certificate for HTTPS; HTTPS is required for the browser to grant camera and microphone permissions.
bash ./make_ssl_cert.sh # output key.pem and cert.pem
pnpm install # install requirements
pnpm run dev # start server
```
Open `https://localhost:8088/` in your browser to try the real-time streaming video/voice call.
**Image-text chatbot demo:**
```shell
pip install -r requirements_o2.6.txt
python web_demos/minicpm-o_2.6/chatbot_web_demo_o2.6.py
```
Open `http://localhost:8000/` in your browser to try the image-text chatbot.
## Inference
### Model Zoo
| Model | Device | Memory | Description | Download |
|:--------------|:-:|:----------:|:-------------------|:---------------:|
| MiniCPM-V 4.5 | GPU | 18 GB | Strong end-side single-image, multi-image, and video understanding. | [🤗 Hugging Face](https://huggingface.co/openbmb/MiniCPM-V-4_5) / [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5) |
| MiniCPM-V 4.5 gguf | CPU | 8 GB | The gguf version, with lower memory usage and faster inference. | [🤗 Hugging Face](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf) / [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-gguf) |
| MiniCPM-V 4.5 int4 | GPU | 9 GB | The int4 quantized version, with lower GPU memory usage. | [🤗 Hugging Face](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4) / [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-int4) |
| MiniCPM-V 4.5 AWQ | GPU | 9 GB | The AWQ int4 quantized version, with lower GPU memory usage. | [🤗 Hugging Face](https://huggingface.co/openbmb/MiniCPM-V-4_5-AWQ) / [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-AWQ) |
| MiniCPM-o 2.6 | GPU | 18 GB | The latest version, with GPT-4o-level vision, speech, and multimodal live-streaming capabilities on end devices. | [🤗 Hugging Face](https://huggingface.co/openbmb/MiniCPM-o-2_6) / [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6) |
| MiniCPM-o 2.6 gguf | CPU | 8 GB | The gguf version, with lower memory usage and faster inference. | [🤗 Hugging Face](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) / [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6-gguf) |
| MiniCPM-o 2.6 int4 | GPU | 9 GB | The int4 quantized version, with lower GPU memory usage. | [🤗 Hugging Face](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) / [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6-int4) |
More [legacy models](#legacy-models)
### Multi-turn Conversation
If you want to enable long-thinking mode, pass `enable_thinking=True` to the `chat` function.
```shell
pip install -r requirements_o2.6.txt
```
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
torch.manual_seed(100)
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6
image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB')
enable_thinking=False # If `enable_thinking=True`, the long-thinking mode is enabled.
# First round chat
question = "What is the landform in the picture?"
msgs = [{'role': 'user', 'content': [image, question]}]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer,
enable_thinking=enable_thinking
)
print(answer)
# Second round chat, pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": [answer]})
msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]})
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
You can get an inference result like the following:
```shell
# round1
The landform in the picture is karst topography. Karst landscapes are characterized by distinctive, jagged limestone hills or mountains with steep, irregular peaks and deep valleys—exactly what you see here. These unique formations result from the dissolution of soluble rocks like limestone over millions of years through water erosion.
This scene closely resembles the famous karst landscape of Guilin and Yangshuo in China’s Guangxi Province. The area features dramatic, pointed limestone peaks rising dramatically above serene rivers and lush green forests, creating a breathtaking and iconic natural beauty that attracts millions of visitors each year for its picturesque views.
# round2
When traveling to a karst landscape like this, here are some important tips:
1. Wear comfortable shoes: The terrain can be uneven and hilly.
2. Bring water and snacks for energy during hikes or boat rides.
3. Protect yourself from the sun with sunscreen, hats, and sunglasses—especially since you’ll likely spend time outdoors exploring scenic spots.
4. Respect local customs and nature regulations by not littering or disturbing wildlife.
By following these guidelines, you'll have a safe and enjoyable trip while appreciating the stunning natural beauty of places such as Guilin’s karst mountains.
```
#### Chat with Multiple Images
Click to view the Python example of multi-image input with MiniCPM-V 4.5.
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True)
image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
msgs = [{'role': 'user', 'content': [image1, image2, question]}]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
#### In-context Few-shot Learning
Click to view the Python example of few-shot in-context learning with MiniCPM-V 4.5.
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True)
question = "production date"
image1 = Image.open('example1.jpg').convert('RGB')
answer1 = "2023.08.04"
image2 = Image.open('example2.jpg').convert('RGB')
answer2 = "2007.04.24"
image_test = Image.open('test.jpg').convert('RGB')
msgs = [
{'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
{'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
{'role': 'user', 'content': [image_test, question]}
]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
#### Chat with Video
Click to view the Python example of 3D-Resampler video inference with MiniCPM-V 4.5.
```python
## The 3d-resampler compresses multiple frames into 64 tokens by introducing temporal_ids.
# To achieve this, you need to organize your video data into two corresponding sequences:
# frames: List[Image]
# temporal_ids: List[List[Int]].
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
from decord import VideoReader, cpu # pip install decord
from scipy.spatial import cKDTree
import numpy as np
import math
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6
MAX_NUM_FRAMES=180 # Indicates the maximum number of frames received after the videos are packed. The actual maximum number of valid frames is MAX_NUM_FRAMES * MAX_NUM_PACKING.
MAX_NUM_PACKING=3 # indicates the maximum packing number of video frames. valid range: 1-6
TIME_SCALE = 0.1
def map_to_nearest_scale(values, scale):
    tree = cKDTree(np.asarray(scale)[:, None])
    _, indices = tree.query(np.asarray(values)[:, None])
    return np.asarray(scale)[indices]

def group_array(arr, size):
    return [arr[i:i+size] for i in range(0, len(arr), size)]

def encode_video(video_path, choose_fps=3, force_packing=None):
    def uniform_sample(l, n):
        gap = len(l) / n
        idxs = [int(i * gap + gap / 2) for i in range(n)]
        return [l[i] for i in idxs]

    vr = VideoReader(video_path, ctx=cpu(0))
    fps = vr.get_avg_fps()
    video_duration = len(vr) / fps

    if choose_fps * int(video_duration) <= MAX_NUM_FRAMES:
        packing_nums = 1
        choose_frames = round(min(choose_fps, round(fps)) * min(MAX_NUM_FRAMES, video_duration))
    else:
        packing_nums = math.ceil(video_duration * choose_fps / MAX_NUM_FRAMES)
        if packing_nums <= MAX_NUM_PACKING:
            choose_frames = round(video_duration * choose_fps)
        else:
            choose_frames = round(MAX_NUM_FRAMES * MAX_NUM_PACKING)
            packing_nums = MAX_NUM_PACKING

    frame_idx = [i for i in range(0, len(vr))]
    frame_idx = np.array(uniform_sample(frame_idx, choose_frames))

    if force_packing:
        packing_nums = min(force_packing, MAX_NUM_PACKING)

    print(video_path, ' duration:', video_duration)
    print(f'get video frames={len(frame_idx)}, packing_nums={packing_nums}')

    frames = vr.get_batch(frame_idx).asnumpy()

    frame_idx_ts = frame_idx / fps
    scale = np.arange(0, video_duration, TIME_SCALE)

    frame_ts_id = map_to_nearest_scale(frame_idx_ts, scale) / TIME_SCALE
    frame_ts_id = frame_ts_id.astype(np.int32)

    assert len(frames) == len(frame_ts_id)

    frames = [Image.fromarray(v.astype('uint8')).convert('RGB') for v in frames]
    frame_ts_id_group = group_array(frame_ts_id, packing_nums)

    return frames, frame_ts_id_group
video_path="video_test.mp4"
fps = 5 # fps for video
force_packing = None # You can set force_packing to ensure that 3D packing is forcibly enabled; otherwise, encode_video will dynamically set the packing quantity based on the duration.
frames, frame_ts_id_group = encode_video(video_path, fps, force_packing=force_packing)
question = "Describe the video"
msgs = [
{'role': 'user', 'content': frames + [question]},
]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer,
use_image_id=False,
max_slice_nums=1,
temporal_ids=frame_ts_id_group
)
print(answer)
```
#### Speech Conversation
Initialize the model:
```python
import torch
import librosa
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)
model.init_tts()
model.tts.float()
```
##### Mimick
Click to view the Python example of end-to-end speech understanding and generation with MiniCPM-o 2.6.
- The `Mimick` task reflects the model's end-to-end speech modeling ability. The model takes audio as input, outputs the ASR transcription, and then reconstructs the original audio with high similarity. The higher the similarity between the reconstructed and the original audio, the stronger the model's foundational end-to-end speech modeling ability.
```python
mimick_prompt = "Please repeat each user's speech, including voice style and speech content."
audio_input, _ = librosa.load('xxx.wav', sr=16000, mono=True)
msgs = [{'role': 'user', 'content': [mimick_prompt,audio_input]}]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
temperature=0.3,
generate_audio=True,
output_audio_path='output.wav', # save the tts result to output_audio_path
)
```
##### Speech Conversation with Configurable Voices
Click to view the Python example of configuring the conversation voice of MiniCPM-o 2.6.
```python
ref_audio, _ = librosa.load('./assets/voice_01.wav', sr=16000, mono=True) # load the reference audio
# Audio RolePlay: # With this mode, model will role-play the character based on the audio prompt.
sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_roleplay', language='en')
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
# Audio Assistant: # With this mode, model will speak with the voice in ref_audio as an AI assistant.
# sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_assistant', language='en')
# user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # Try to ask something!
```
```python
msgs = [sys_prompt, user_question]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result.wav',
)
# round two
msgs.append({'role': 'assistant', 'content': res})  # list.append returns None, so append to msgs directly instead of reassigning
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
msgs.append(user_question)
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_round_2.wav',
)
print(res)
```
##### More Speech Tasks
Click to view the Python examples of more speech tasks with MiniCPM-o 2.6.
```python
'''
Audio Understanding Task Prompt:
Speech:
ASR with ZH(same as AST en2zh): 请仔细听这段音频片段,并将其内容逐字记录。
ASR with EN(same as AST zh2en): Please listen to the audio snippet carefully and transcribe the content.
Speaker Analysis: Based on the speaker's content, speculate on their gender, condition, age range, and health status.
General Audio:
Audio Caption: Summarize the main content of the audio.
Sound Scene Tagging: Utilize one keyword to convey the audio's content or the associated scene.
'''
task_prompt = "\n"
audio_input, _ = librosa.load('xxx.wav', sr=16000, mono=True)
msgs = [{'role': 'user', 'content': [task_prompt,audio_input]}]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result.wav',
)
print(res)
```
```python
'''
Speech Generation Task Prompt:
Human Instruction-to-Speech: see https://voxinstruct.github.io/VoxInstruct/
Example:
# 在新闻中,一个年轻男性兴致勃勃地说:“祝福亲爱的祖国母亲美丽富强!”他用低音调和低音量,慢慢地说出了这句话。
# Delighting in a surprised tone, an adult male with low pitch and low volume comments:"One even gave my little dog a biscuit" This dialogue takes place at a leisurely pace, delivering a sense of excitement and surprise in the context.
Voice Cloning or Voice Creation: With this mode, model will act like a TTS model.
'''
# Human Instruction-to-Speech:
task_prompt = '' #Try to make some Human Instruction-to-Speech prompt
msgs = [{'role': 'user', 'content': [task_prompt]}] # you can try to use the same audio question
# Voice Cloning mode: With this mode, model will act like a TTS model.
# sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='voice_cloning', language='en')
# text_prompt = f"Please read the text below."
# user_question = {'role': 'user', 'content': [text_prompt, "content that you want to read"]} # using same voice in sys_prompt to read the text. (Voice Cloning)
# user_question = {'role': 'user', 'content': [text_prompt, librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # using same voice in sys_prompt to read 'xxx.wav'. (Voice Creation)
# msgs = [sys_prompt, user_question]  # uncomment (together with the lines above) for Voice Cloning / Voice Creation; otherwise keep the Instruction-to-Speech msgs defined earlier
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result.wav',
)
```
#### Multimodal Live Streaming
Click to view the Python example of multimodal live streaming with MiniCPM-o 2.6.
```python
import math
import numpy as np
from PIL import Image
from moviepy.editor import VideoFileClip
import tempfile
import librosa
import soundfile as sf
import torch
from transformers import AutoModel, AutoTokenizer
def get_video_chunk_content(video_path, flatten=True):
    video = VideoFileClip(video_path)
    print('video_duration:', video.duration)

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as temp_audio_file:
        temp_audio_file_path = temp_audio_file.name
        video.audio.write_audiofile(temp_audio_file_path, codec="pcm_s16le", fps=16000)
        audio_np, sr = librosa.load(temp_audio_file_path, sr=16000, mono=True)
    num_units = math.ceil(video.duration)

    # 1 frame + 1s audio chunk per unit
    contents = []
    for i in range(num_units):
        frame = video.get_frame(i+1)
        image = Image.fromarray((frame).astype(np.uint8))
        audio = audio_np[sr*i:sr*(i+1)]
        if flatten:
            contents.extend(["", image, audio])
        else:
            contents.append(["", image, audio])
    return contents
model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)
model.init_tts()
# If you are using an older version of PyTorch, you might encounter this issue "weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16', Please convert the TTS to float32 type.
# model.tts.float()
# https://huggingface.co/openbmb/MiniCPM-o-2_6/blob/main/assets/Skiing.mp4
video_path="assets/Skiing.mp4"
sys_msg = model.get_sys_prompt(mode='omni', language='en')
# if use voice clone prompt, please set ref_audio
# ref_audio_path = '/path/to/ref_audio'
# ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True)
# sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode='omni', language='en')
contents = get_video_chunk_content(video_path)
msg = {"role":"user", "content": contents}
msgs = [sys_msg, msg]
# please set generate_audio=True and output_audio_path to save the tts result
generate_audio = True
output_audio_path = 'output.wav'
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
temperature=0.5,
max_new_tokens=4096,
omni_input=True, # please set omni_input=True when omni inference
use_tts_template=True,
generate_audio=generate_audio,
output_audio_path=output_audio_path,
max_slice_nums=1,
use_image_id=False,
return_dict=True
)
print(res)
```
Click to view the multimodal streaming inference setting.
Note: streaming inference has a slight performance degradation because the audio encoding is not global.
```python
# a new conversation need reset session first, it will reset the kv-cache
model.reset_session()
contents = get_video_chunk_content(video_path, flatten=False)
session_id = '123'
generate_audio = True
# 1. prefill system prompt
res = model.streaming_prefill(
session_id=session_id,
msgs=[sys_msg],
tokenizer=tokenizer
)
# 2. prefill video/audio chunks
for content in contents:
    msgs = [{"role": "user", "content": content}]
    res = model.streaming_prefill(
        session_id=session_id,
        msgs=msgs,
        tokenizer=tokenizer
    )

# 3. generate
res = model.streaming_generate(
    session_id=session_id,
    tokenizer=tokenizer,
    temperature=0.5,
    generate_audio=generate_audio
)

audios = []
text = ""

if generate_audio:
    for r in res:
        audio_wav = r.audio_wav
        sampling_rate = r.sampling_rate
        txt = r.text

        audios.append(audio_wav)
        text += txt

    res = np.concatenate(audios)
    sf.write("output.wav", res, samplerate=sampling_rate)
    print("text:", text)
    print("audio saved to output.wav")
else:
    for r in res:
        text += r['text']
    print("text:", text)
```
### Inference on Multiple GPUs
You can run MiniCPM-Llama3-V 2.5 by distributing the model's layers across multiple low-memory GPUs (12 GB or 16 GB). Please refer to this [tutorial](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md) for detailed instructions on how to load the model and run inference with multiple low-memory GPUs.
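As a minimal sketch (our assumption, not the tutorial's exact recipe), Accelerate-style automatic device mapping can also spread the layers over the visible GPUs; the vision encoder may still need manual placement, so follow the tutorial above if this does not fit your cards.

```python
# Minimal sketch: let Hugging Face Accelerate split layers across available GPUs.
# Requires `pip install accelerate`; see docs/inference_on_multiple_gpus.md for the recommended setup.
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-Llama3-V-2_5',
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map='auto',   # distributes layers over all visible GPUs
)
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
```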
### Inference on Mac
Click to view an example of running MiniCPM-Llama3-V 2.5 / MiniCPM-V 2.0 with MPS on a Mac (Apple silicon or AMD GPUs).
```python
# test.py: needs more than 16 GB of memory to run.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True, low_cpu_mem_usage=True)
model = model.to(device='mps')
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
model.eval()
image = Image.open('./assets/hk_OCR.jpg').convert('RGB')
question = 'Where is this photo taken?'
msgs = [{'role': 'user', 'content': question}]
answer, context, _ = model.chat(
image=image,
msgs=msgs,
context=None,
tokenizer=tokenizer,
sampling=True
)
print(answer)
```
Run:
```shell
PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
```
### Efficient Inference with llama.cpp, Ollama, and vLLM
For llama.cpp usage, please refer to [our fork of llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpmv-main/examples/llava/README-minicpmv2.6.md), which supports smooth inference at 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
For Ollama usage, please refer to [our fork of Ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md), which supports smooth inference at 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
Click to view how to run with vLLM. vLLM now officially supports MiniCPM-o 2.6, MiniCPM-V 2.6, MiniCPM-Llama3-V 2.5, and MiniCPM-V 2.0.
1. Install vLLM (>= 0.7.1):
```shell
pip install vllm
```
2. Run the example code (note: if you load the model from a local path, please make sure the model code is updated to the latest version on Hugging Face); a minimal offline sketch is also shown below the list:
* [Image-text example](https://docs.vllm.ai/en/latest/getting_started/examples/vision_language.html)
* [Audio example](https://docs.vllm.ai/en/latest/getting_started/examples/audio_language.html)
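Below is a minimal offline-inference sketch for MiniCPM-V 2.6 with vLLM. The image placeholder `(<image>./</image>)` and the prompt construction follow the pattern used in vLLM's vision-language example; treat these details as assumptions and refer to the linked examples for the authoritative version.

```python
from PIL import Image
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

MODEL_NAME = 'openbmb/MiniCPM-V-2_6'

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
llm = LLM(model=MODEL_NAME, trust_remote_code=True, max_model_len=4096)

image = Image.open('example.jpg').convert('RGB')
question = 'What is in this image?'

# MiniCPM-V expects an image placeholder inside the chat-template prompt.
messages = [{'role': 'user', 'content': f'(<image>./</image>)\n{question}'}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    {'prompt': prompt, 'multi_modal_data': {'image': image}},
    sampling_params=sampling_params,
)
print(outputs[0].outputs[0].text)
```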
## Fine-tuning
### Simple Fine-tuning
We support simple fine-tuning of MiniCPM-V 4.0, MiniCPM-o 2.6, MiniCPM-V 2.6, MiniCPM-Llama3-V 2.5, and MiniCPM-V 2.0 with the Hugging Face Transformers library.
[Reference document](./finetune/readme.md)
### With Align-Anything
We support fine-tuning the MiniCPM-o series with [Align-Anything](https://github.com/PKU-Alignment/align-anything), a framework developed by a team from Peking University, covering both DPO and SFT on the vision and audio modalities. Align-Anything is a highly scalable framework for aligning omni-modal large models, with open-sourced [datasets, models, and evaluations](https://huggingface.co/datasets/PKU-Alignment/align-anything). It supports 30+ open benchmarks, 40+ models, and algorithms including SFT, SimPO, and RLHF, and provides 30+ ready-to-run scripts, making it friendly for beginners.
Best practice: [MiniCPM-o 2.6](https://github.com/PKU-Alignment/align-anything/tree/main/scripts).
### With LLaMA-Factory
We support fine-tuning MiniCPM-o 2.6 and MiniCPM-V 2.6 with LLaMA-Factory. LLaMA-Factory provides a flexible, no-code solution for fine-tuning (LoRA/Full/QLoRA) more than 200 large language models, with training, inference, and evaluation available through the built-in web UI LLaMABoard. It supports multiple training methods such as SFT/PPO/DPO/KTO, as well as advanced algorithms such as GaLore/BAdam/LLaMA-Pro/PiSSA/LongLoRA.
Best practice: [MiniCPM-V 4.0 | MiniCPM-o 2.6 | MiniCPM-V 2.6](./docs/llamafactory_train_and_infer.md).
### With the SWIFT Framework
We support fine-tuning the MiniCPM-V series with the SWIFT framework. SWIFT supports training, inference, evaluation, and deployment of nearly 200 large language models and multimodal large models. It supports both the lightweight training solutions provided by PEFT and a complete Adapters library with the latest training techniques such as NEFTune, LoRA+, and LLaMA-Pro.
Reference documents: [MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md), [MiniCPM-V 2.6](https://github.com/modelscope/ms-swift/issues/1613).
## MiniCPM-V & o Cookbook
Explore our curated [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook), which provides comprehensive, ready-to-use solutions for the MiniCPM-V and MiniCPM-o model series. It empowers developers to quickly build multimodal AI applications that integrate vision, speech, and live-streaming capabilities. Key features include:
**Easy-to-use documentation**
Our detailed [documentation site](https://minicpm-o.readthedocs.io/en/latest/index.html) presents every solution in a clear, well-organized manner.
**Broad user support**
We support a wide range of users, from individuals to enterprises and researchers.
* **Individual users**: with [Ollama](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_ollama.md) and [Llama.cpp](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_llamacpp.md), run model inference easily with minimal setup.
* **Enterprise users**: achieve high-throughput, scalable, high-performance deployment with [vLLM](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_vllm.md) and [SGLang](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-v4_sglang.md).
* **Researchers**: conduct flexible model development and cutting-edge experiments with advanced frameworks including [Transformers](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_full.md), [LLaMA-Factory](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_llamafactory.md), [SWIFT](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/swift.md), and [Align-anything](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/align_anything.md).
**Diverse deployment scenarios**
Our ecosystem provides optimal solutions for various hardware environments and deployment needs.
* **Web Demo**: quickly launch an interactive multimodal AI web demo with [FastAPI](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/README.md).
* **Quantized deployment**: maximize efficiency and minimize resource consumption with [GGUF](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-v4_gguf_quantize.md) and [BNB](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/bnb/minicpm-v4_bnb_quantize.md) quantization.
* **Edge devices**: bring powerful AI experiences to [iPhone and iPad](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/ios_demo/ios.md), supporting offline and privacy-sensitive applications.
## More Projects Based on MiniCPM-V & MiniCPM-o
- [text-extract-api](https://github.com/CatchTheTornado/text-extract-api): A local document extraction and parsing API using OCR and Ollama models, supporting PDF, Word, and PPTX.
- [comfyui_LLM_party](https://github.com/heshengtao/comfyui_LLM_party): An LLM agent framework based on ComfyUI for building and integrating LLM workflows.
- [Ollama-OCR](https://github.com/imanoop7/Ollama-OCR): An OCR tool that extracts text from images and PDFs using vision-language models served through Ollama.
- [comfyui-mixlab-nodes](https://github.com/MixLabPro/comfyui-mixlab-nodes): A collection of multi-purpose ComfyUI nodes, supporting one-click workflow-to-app conversion, speech recognition and synthesis, 3D, and more.
- [OpenAvatarChat](https://github.com/HumanAIGC-Engineering/OpenAvatarChat): A modular, open-source interactive digital-human conversation system that runs entirely on a single PC.
- [pensieve](https://github.com/arkohut/pensieve): A fully local, privacy-preserving passive screen-recording tool that automatically captures and indexes screenshots, searchable through a web interface.
- [paperless-gpt](https://github.com/icereed/paperless-gpt): AI-driven document automation and OCR for paperless-ngx, powered by LLMs and vision models.
- [Neuro](https://github.com/kimjammer/Neuro): A recreation of Neuro-Sama that runs entirely on local models on consumer hardware.
## FAQs
Click to view the [FAQs](./docs/faqs.md).
## Limitations
In our experiments we found that MiniCPM-o 2.6 has some notable limitations that require further research and improvement:
- **Unstable speech output.** Speech generation can be affected by background noise and meaningless sounds, leading to unstable behavior.
- **Repeated responses.** The model tends to repeat the same answer when it receives consecutive similar user requests.
- **High web demo latency.** Users may experience high latency when using the web demo deployed on remote servers. We recommend local deployment for a lower-latency experience.
## License
* The code in this repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM-V/blob/main/LICENSE) license.
* To help us better understand and support the community, we would greatly appreciate it if you could fill out a short registration [questionnaire](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g).
## Statement
As multimodal large models, the MiniCPM-o/V series (including OmniLMM) generate content by learning from large amounts of multimodal data, but they cannot comprehend or express personal opinions or value judgments. Nothing generated by these models represents the views or positions of the model developers.
Users are therefore responsible for evaluating and verifying content generated with these models. We will not be held liable for any problems arising from the use of these open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems caused by the models being misled, misused, disseminated, or exploited inappropriately.
## Institutions
This project is jointly developed by the following institutions:
- [THUNLP (NLP Lab, Tsinghua University)](https://nlp.csai.tsinghua.edu.cn/)
- [ModelBest](https://modelbest.cn/)
## 🌟 Star History
## Key Techniques and Other Multimodal Projects
👏 Welcome to explore the key techniques behind MiniCPM-o/V and more of our multimodal projects!
[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLPR](https://github.com/OpenBMB/RLPR) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
## Citation
If you find our model/code/paper helpful, please consider giving us a ⭐ and a citation 📝. Thanks!
```bib
@article{yao2024minicpm,
  title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
  author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
  journal={arXiv preprint arXiv:2408.01800},
  year={2024}
}
```