[![arXiv](https://img.shields.io/badge/arXiv-2405.18358-b31b1b.svg)](https://arxiv.org/abs/2405.18358) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
# [**MMCTAgent**](https://arxiv.org/abs/2405.18358)
Multi-Modal Critical Thinking Agent Framework for Complex Visual Reasoning

🎥 Demo Video • 📄 Research Paper • 🚀 Quick Start

![Demo GIF](docs/multimedia/gif/Demo_MMCT.gif)

**▶️ [Watch Demo Video](https://youtu.be/Lxt1b_U-a68)**
## Overview

MMCTAgent is a state-of-the-art multi-modal AI framework that brings human-like critical thinking to visual reasoning tasks. It combines advanced planning, self-critique, and tool-based reasoning to deliver superior performance in complex image and video understanding applications.

### Why MMCTAgent?

- **🧠 Self-Reflection Framework**: MMCTAgent emulates human critical thinking: it iteratively analyzes multi-modal information, decomposes complex queries, plans strategies, and dynamically evolves its reasoning. Designed as a research framework, it integrates critical-thinking elements such as verification of final answers and self-reflection through a novel approach that defines a vision-based critic and identifies task-specific evaluation criteria, thereby enhancing its decision-making abilities.
- **🔬 Querying over Multimodal Collections**: Its modular design lets you plug in the right audio and visual extraction and processing tools, combined with multimodal LLMs, to ingest and query large collections of videos and images.
- **🚀 Easy Integration**: The same modular design allows easy integration into existing workflows and the addition of domain-specific tools, facilitating adoption across domains that require advanced visual reasoning.

*Video Pipeline – Main Architecture*

## **Key Features**

### **Critical Thinking Architecture**

MMCTAgent is inspired by human cognitive processes and integrates a structured reasoning loop:

- **Planner**: Generates an initial response using relevant tools for visual or multi-modal input.
- **Critic**: Evaluates the Planner's response and provides feedback to improve accuracy and decision-making.

A minimal conceptual sketch of this Planner–Critic loop is shown after the agent descriptions below.

---

### **Modular Agents**

MMCTAgent includes two specialized agents:
#### ImageAgent

[![](docs/multimedia/image-agent.png)](https://arxiv.org/abs/2405.18358)

A reasoning engine tailored for static image understanding. It supports a configurable set of tools via the `ImageQnaTools` enum:

- `object_detection` – Detects objects in an image.
- `ocr` – Extracts embedded text content.
- `recog` – Recognizes scenes, faces, or objects.
- `vit` – Applies a vision LLM for high-level visual reasoning.

> The Critic can be toggled via the `use_critic_agent` flag.
#### VideoAgent

Optimized for deep video understanding.

**Video Question Answering**

[![](docs/multimedia/video-agent.png)](https://arxiv.org/abs/2405.18358)

Applies a fixed toolchain orchestrated by the Planner:

- `GET_VIDEO_SUMMARY` – Retrieves the most relevant video for the query, along with its summary.
- `GET_OBJECT_COLLECTION` – Retrieves the most relevant video for the query, along with its detected objects.
- `GET_CONTEXT` – Extracts transcript, visual summary chunks, and object-collection information relevant to the query.
- `GET_RELEVANT_FRAMES` – Provides semantically similar keyframes related to the query, based on CLIP embeddings.
- `QUERY_FRAME` – Queries specific video keyframes to extract detailed information and provide additional visual context to the Planner.

> The Critic agent helps validate and refine answers, improving reasoning depth.

For more details, refer to the full research article: **[MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex Visual Reasoning](https://arxiv.org/abs/2405.18358)**, published on **arXiv** – [arxiv.org/abs/2405.18358](https://arxiv.org/abs/2405.18358)
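The sketch below illustrates the Planner–Critic loop described above. It is a minimal conceptual illustration, not the repository's implementation: the `plan`, `critique`, and `max_rounds` names are hypothetical, and the real agents call multi-modal tools rather than plain functions.

```python
from typing import Callable

def planner_critic_loop(
    plan: Callable[[str, str], str],       # hypothetical: produces an answer from query + feedback
    critique: Callable[[str, str], str],   # hypothetical: returns feedback, or "" if the answer is accepted
    query: str,
    max_rounds: int = 3,
) -> str:
    """Conceptual Planner-Critic loop: plan, critique, refine until accepted."""
    feedback, answer = "", ""
    for _ in range(max_rounds):
        answer = plan(query, feedback)      # Planner: initial or refined response
        feedback = critique(query, answer)  # Critic: evaluate and suggest improvements
        if not feedback:                    # accepted -> stop refining
            break
    return answer

# Toy usage with stub planner/critic functions
if __name__ == "__main__":
    rounds = iter(["a cat", "a cat on a red sofa"])
    print(planner_critic_loop(
        plan=lambda q, fb: next(rounds),
        critique=lambda q, a: "" if "sofa" in a else "Mention the furniture.",
        query="What is in the image?",
    ))
```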
---

## **Table of Contents**

- [Getting Started](#getting-started)
- [Provider System](#provider-system)
- [Configuration](#configuration)
- [Project Structure](#project-structure)
- [Contributing](#contributing)
- [Citation](#citation)
- [License](#license)
- [Support](#support)

---

## **Getting Started**

### **Installation**

1. **Clone the Repository**

   ```bash
   git clone https://github.com/microsoft/MMCTAgent.git
   cd MMCTAgent
   ```

2. **System Dependencies**

   Install FFmpeg.

   **Linux/Ubuntu:**
   ```bash
   sudo apt-get update
   sudo apt-get install ffmpeg libsm6 libxext6 -y
   ```

   **Windows:**
   - Download FFmpeg from [ffmpeg.org](https://ffmpeg.org/download.html)
   - Add the `bin` folder to your system PATH

3. **Python Environment Setup**

   **Option A: Using Conda (Recommended)**
   ```bash
   conda create -n mmct-agent python=3.11
   conda activate mmct-agent
   ```

   **Option B: Using venv**
   ```bash
   python -m venv mmct-agent

   # Linux/Mac
   source mmct-agent/bin/activate

   # Windows
   mmct-agent\Scripts\activate.bat
   ```

4. **Install Dependencies**

   Choose the installation option based on your needs:

   **Option A: Image Pipeline**
   ```bash
   pip install --upgrade pip
   pip install ".[image-agent]"
   ```

   **Option B: Video Pipeline**
   ```bash
   pip install --upgrade pip
   pip install ".[video-agent]"
   ```

   **Option C: All Features (Image + Video + MCP Server)**
   ```bash
   pip install --upgrade pip
   pip install ".[all]"
   ```

5. **Quick Start Examples**

#### Image Analysis with MMCTAgent

```python
from mmct.image_pipeline import ImageAgent, ImageQnaTools
from mmct.providers.azure import AzureLLMProvider
from mmct.config.providers import ImageAgentProviderConfig
from azure.identity import DefaultAzureCredential, AzureCliCredential, ChainedTokenCredential
import asyncio

credentials = ChainedTokenCredential(AzureCliCredential(), DefaultAzureCredential())  # Or directly use api_key

# Initializing the provider
provider = ImageAgentProviderConfig(
    llm_provider=AzureLLMProvider(
        endpoint="",
        deployment_name="",
        model_name="",
        api_version="api_version",
        credentials=credentials,
    )
)

# Initialize the Image Agent with desired tools
image_agent = ImageAgent(
    query="What objects are visible in this image and what text can you read?",
    image_path="path/to/your/image.jpg",
    tools=[ImageQnaTools.object_detection, ImageQnaTools.ocr, ImageQnaTools.vit],
    use_critic_agent=True,  # Enable critical thinking
    stream=False,
    provider=provider,
)

# Run the analysis
response = asyncio.run(image_agent())
print(f"Analysis Result: {response.response}")
```
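As a quick variant of the example above (same constructor, different arguments, not an additional API), the Critic can be disabled for a faster single planner pass. The snippet assumes the `provider` object configured in the Quick Start above.

```python
# Single-pass OCR-style query without the Critic (reuses `provider` from above)
fast_agent = ImageAgent(
    query="What text can you read in this image?",
    image_path="path/to/your/image.jpg",
    tools=[ImageQnaTools.ocr],
    use_critic_agent=False,  # skip the critique/refine step
    stream=False,
    provider=provider,
)
print(asyncio.run(fast_agent()).response)
```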
#### Video Analysis with VideoAgent

First, ingest a video through the MMCT Video Ingestion Pipeline:

```python
from mmct.video_pipeline import IngestionPipeline, Languages
from mmct.config.providers import IngestionProviderConfig
from mmct.providers.azure import (
    AzureLLMProvider,
    AzureEmbeddingProvider,
    AISearchChapterProvider,
    AISearchKeyframesProvider,
    AISearchObjectCollectionProvider,
    AzureStorageProvider,
    WhisperTranscriptionProvider,
)
from mmct.providers.local import ClipImageEmbeddingProvider
from mmct.video_pipeline.utils.helper import get_file_hash
from azure.identity import DefaultAzureCredential, AzureCliCredential, ChainedTokenCredential

credentials = ChainedTokenCredential(AzureCliCredential(), DefaultAzureCredential())

# Initializing the provider
provider = IngestionProviderConfig(
    llm_provider=AzureLLMProvider(
        endpoint="https://.openai.azure.com/",
        deployment_name="",
        model_name="",
        api_version="",
        credentials=credentials,
    ),
    embedding_provider=AzureEmbeddingProvider(
        endpoint="https://.openai.azure.com/",
        deployment_name="",
        api_version="",
        credentials=credentials,
    ),
    image_embedding_provider=ClipImageEmbeddingProvider(),
    vectordb_chapter=AISearchChapterProvider(
        endpoint="https://.search.windows.net",
        index_name="",
        credentials=credentials,
    ),
    vectordb_keyframes=AISearchKeyframesProvider(
        endpoint="https://.search.windows.net",
        index_name="",
        credentials=credentials,
    ),
    vectordb_object_registry=AISearchObjectCollectionProvider(
        endpoint="https://.search.windows.net",
        index_name="",
        credentials=credentials,
    ),
    storage_provider=AzureStorageProvider(
        storage_account_name="",
        keyframe_container_name="",
        credentials=credentials,
    ),
    transcription_provider=WhisperTranscriptionProvider(
        endpoint="https://.openai.azure.com/",
        api_version="",
        deployment_name="",
        credentials=credentials,
    ),
)

video_path = "path-of-your-video"
video_id = await get_file_hash(video_path)

ingestion = IngestionPipeline(
    video_path=video_path,
    video_id=video_id,
    language=Languages.ENGLISH_INDIA,
    provider=provider,
)

# Run the ingestion pipeline (top-level await assumes an async context such as a notebook)
await ingestion.run()
```
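The ingestion snippet uses top-level `await`, which works in a notebook or another already-async context. In a plain Python script, a small wrapper along these lines can drive it (the `main` function name is only illustrative):

```python
import asyncio

async def main() -> None:
    # Reuses video_path, provider, Languages, get_file_hash, and IngestionPipeline from above
    video_id = await get_file_hash(video_path)
    ingestion = IngestionPipeline(
        video_path=video_path,
        video_id=video_id,
        language=Languages.ENGLISH_INDIA,
        provider=provider,
    )
    await ingestion.run()

asyncio.run(main())
```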
Then, perform Q&A through MMCT's Video Agent:

```python
from mmct.video_pipeline import VideoAgent
from mmct.config.providers import VideoAgentProviderConfig
from mmct.providers.azure import (
    AzureLLMProvider,
    AzureEmbeddingProvider,
    AISearchChapterProvider,
    AISearchKeyframesProvider,
    AISearchObjectCollectionProvider,
    AzureStorageProvider,
)
from mmct.providers.local import ClipImageEmbeddingProvider
from azure.identity import DefaultAzureCredential, AzureCliCredential, ChainedTokenCredential
import asyncio

credentials = ChainedTokenCredential(AzureCliCredential(), DefaultAzureCredential())

# Initializing the provider
provider = VideoAgentProviderConfig(
    llm_provider=AzureLLMProvider(
        endpoint="https://.openai.azure.com/",
        deployment_name="",
        model_name="",
        api_version="",
        credentials=credentials,
    ),
    embedding_provider=AzureEmbeddingProvider(
        endpoint="https://.openai.azure.com/",
        deployment_name="",
        api_version="",
        credentials=credentials,
    ),
    image_embedding_provider=ClipImageEmbeddingProvider(),
    vectordb_chapter=AISearchChapterProvider(
        endpoint="https://.search.windows.net",
        index_name="",
        credentials=credentials,
    ),
    vectordb_keyframes=AISearchKeyframesProvider(
        endpoint="https://.search.windows.net",
        index_name="",
        credentials=credentials,
    ),
    vectordb_object_registry=AISearchObjectCollectionProvider(
        endpoint="https://.search.windows.net",
        index_name="",
        credentials=credentials,
    ),
    storage_provider=AzureStorageProvider(
        storage_account_name="",
        keyframe_container_name="",
        credentials=credentials,
    ),
)

# Configure the Video Agent
video_agent = VideoAgent(
    query="input-query",
    video_id=None,          # Optional: specify a video ID
    url=None,               # Optional: URL used to filter the search results
    use_critic_agent=True,  # Enable the critic agent
    stream=False,           # Stream the response
    cache=False,            # Optional: enable caching
    provider=provider,
)

# Execute video analysis (top-level await assumes an async context such as a notebook)
response = await video_agent()
print(f"Video Analysis: {response}")
```

For more comprehensive examples, see the [`examples/`](examples/) directory.

## **Provider System**

### **Multi-Cloud & Vendor-Agnostic Architecture**

MMCTAgent features a **modular provider system** that lets you switch between cloud providers and AI services without changing your application code, making the framework **vendor-agnostic** and suitable for a range of deployment scenarios.

#### **Supported Providers**

| Service Type | Supported Providers | Use Cases |
|--------------|--------------------|-----------|
| **LLM** | Azure OpenAI, OpenAI, **+ Custom** | Text generation, chat completion |
| **Search** | Azure AI Search, FAISS | Document search and retrieval |
| **Transcription** | Azure Speech Services, OpenAI Whisper | Audio-to-text conversion |
| **Storage** | Azure Blob Storage, Local Storage | File storage and management |

> **Note**: All provider types support custom implementations. See the [Custom LLM Provider Example](examples/image_agent.ipynb) (Anthropic Claude) or read the [Providers Guide](mmct/providers/README.md) for implementation details.

For detailed configuration instructions, see our [Provider Configuration Guide](mmct/providers/README.md).

---

## **Configuration**

### System Requirements for CLIP embeddings ([openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32))

Minimum (development / small-scale):
- CPU: 4-core modern i5/i7, ~8 GB RAM
- Disk: ~500 MB for the cached model plus image/text data
- GPU: none (works, but slow)

Recommended (decent speed / batching):
- CPU: 8+ cores, 16 GB RAM
- GPU: NVIDIA with ≥ 4–6 GB VRAM (e.g. RTX 2060/3060)
- PyTorch + CUDA installed, with mixed-precision support

High-throughput (fast, large batches):
- CPU: 16+ cores, 32+ GB RAM
- GPU: 8–16 GB+ VRAM, fast memory bandwidth (e.g. RTX 3090, A100)
- Use float16 / bfloat16, efficient batching, and parallel preprocessing (see the sketch below)
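To illustrate the float16 recommendation above, here is a minimal sketch of loading the same CLIP checkpoint in half precision with the Hugging Face `transformers` library and embedding images in batches. This is a generic example under those assumptions, not MMCTAgent's `ClipImageEmbeddingProvider`.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # fp16 only pays off on GPU

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32", torch_dtype=dtype).to(device)
model.eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths: list[str], batch_size: int = 32) -> torch.Tensor:
    """Embed images in batches; returns an (N, 512) tensor of CLIP image features."""
    features = []
    for i in range(0, len(paths), batch_size):
        images = [Image.open(p).convert("RGB") for p in paths[i : i + batch_size]]
        inputs = processor(images=images, return_tensors="pt").to(device)
        with torch.no_grad():
            feats = model.get_image_features(pixel_values=inputs["pixel_values"].to(dtype))
        features.append(feats.float().cpu())
    return torch.cat(features)

# Example: embeddings = embed_images(["frame_001.jpg", "frame_002.jpg"])
```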
---

## **Project Structure**

Below is the project structure highlighting the key entry-point scripts for running the three main pipelines: `Image QNA`, `Video Ingestion`, and `Video Agent`.

```sh
MMCTAgent
├── infra
│   └── INFRA_DEPLOYMENT_GUIDE.md          # Guide for deploying the Azure infrastructure
├── app                                    # FastAPI application over the MMCT pipelines
├── mcp_server
│   ├── main.py                            # Run main.py to start the MCP server
│   ├── client.py                          # MCP client used to connect to the MCP server
│   ├── notebooks/                         # Examples of using the MCP server from different agentic frameworks
│   └── README.md                          # Guide for the MCP server
├── mmct
│   ├── ...
│   ├── image_pipeline
│   │   ├── agents
│   │   │   └── image_agent.py             # Entry point for the MMCT Image agentic workflow
│   │   └── README.md                      # Guide for the Image Pipeline
│   └── video_pipeline
│       ├── agents
│       │   └── video_agent.py             # Entry point for the MMCT Video agentic workflow
│       ├── core
│       │   └── ingestion
│       │       └── ingestion_pipeline.py  # Entry point for the Video Ingestion workflow
│       └── README.md                      # Guide for the Video Pipeline
├── pyproject.toml                         # Project configuration and dependencies
└── README.md
```

## **Contributing**

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact opencode@microsoft.com with any additional questions or comments.

> *Note:* This project is under active research and continuous development. Contributions are encouraged, but the codebase may evolve as the project matures.

## **Citation**

If you find MMCTAgent useful in your research, please cite our paper:

```bibtex
@article{kumar2024mmctagent,
  title={MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex Visual Reasoning},
  author={Kumar, Somnath and Gadhia, Yash and Ganu, Tanuja and Nambi, Akshay},
  conference={NeurIPS OWA-2024},
  year={2024},
  url={https://www.microsoft.com/en-us/research/publication/mmctagent-multi-modal-critical-thinking-agent-framework-for-complex-visual-reasoning}
}
```

## **License**

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## **Support**

- [Documentation](docs/)
- [Report Issues](https://github.com/microsoft/MMCTAgent/issues)
- [Discussions](https://github.com/microsoft/MMCTAgent/discussions)

---