Welcome to MetaChain! MetaChain is a **Fully-Automated** and highly **Self-Developing** framework that enables users to create and deploy LLM agents through **Natural Language Alone**.
## ✨ Key Features
* Top Performer on the GAIA Benchmark
MetaChain ranks **#1** among open-source methods, delivering performance comparable to **OpenAI's Deep Research**.
* Agentic-RAG with Native Self-Managing Vector Database
MetaChain is equipped with a native self-managing vector database and outperforms industry-leading solutions such as **LangChain**.
* Agent and Workflow Creation with Ease
MetaChain leverages natural language to effortlessly build ready-to-use **tools**, **agents** and **workflows** - no coding required.
* Universal LLM Support
MetaChain seamlessly integrates with **a wide range** of LLMs (e.g., OpenAI, Anthropic, DeepSeek, vLLM, Grok, Hugging Face, ...)
* Flexible Interaction
Benefit from support for both **function-calling** and **ReAct** interaction modes.
* Dynamic, Extensible, Lightweight
MetaChain is your **Personal AI Assistant**, designed to be dynamic, extensible, customizable, and lightweight.
Unlock the Future of LLM Agents. Try 🔥MetaChain🔥 Now!
Quick Overview of MetaChain.
## 🔥 News
[2025, Feb 10]: We've released MetaChain, including the framework, evaluation code, and CLI mode! Check our paper for more details.
## Table of Contents
* ✨ Features
* 🔥 News
* ⚡ Quick Start
  * Installation
  * API Keys Setup
  * Start with CLI Mode
* How to Use MetaChain
  * 1. `user mode` (SOTA Open Deep Research)
  * 2. `agent editor` (Agent Creation without Workflow)
  * 3. `workflow editor` (Agent Creation with Workflow)
* ⚙️ Todo List
* 🔬 How To Reproduce the Results in the Paper
* Documentation
* Join the Community
* Acknowledgements
* Cite
## ⚡ Quick Start
### Installation
#### MetaChain Installation
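If you prefer an isolated Python environment, you can create one before installing (a suggested setup, assuming conda is available; the Python version below is an assumption, so check the project's requirements):
```bash
# Optional: create an isolated environment first (any virtualenv tool works equally well)
conda create -n metachain python=3.10 -y
conda activate metachain
```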
```bash
git clone https://github.com/HKUDS/MetaChain.git
cd MetaChain
pip install -e .
```
#### Docker Installation
We use Docker to containerize the agent-interactive environment, so please install [Docker](https://www.docker.com/) first, then pull the pre-built image with the following command.
```bash
docker pull tjbtech1/metachain:latest
```
### API Keys Setup
Create an environment variable file based on `.env.template` and set the API keys for the LLMs you want to use. Not every API key is required; set only the ones you need.
```bash
# Required: your own GitHub token
GITHUB_AI_TOKEN=
# Optional API Keys
OPENAI_API_KEY=
DEEPSEEK_API_KEY=
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
HUGGINGFACE_API_KEY=
GROQ_API_KEY=
XAI_API_KEY=
```
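For example, a minimal setup might look like this (a sketch assuming the framework reads a `.env` file in the project root that mirrors `.env.template`):
```bash
# Copy the template and fill in only the keys you need
cp .env.template .env
# Then edit .env, e.g. set GITHUB_AI_TOKEN plus the key for your chosen LLM provider
```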
### Start with CLI Mode
Run the following commands to start CLI mode (or use the shell script: `cd path/to/MetaChain && sh playground/cli/metachain_cli.sh`). `COMPLETION_MODEL` is the name of the LLM you want to use. Note that we use [LiteLLM](https://github.com/BerriAI/litellm) as the LLM wrapper, so set `COMPLETION_MODEL` according to the LiteLLM documentation.
```bash
current_dir=$(dirname "$(readlink -f "$0")")
cd "$current_dir"
cd ../..
export DOCKER_WORKPLACE_NAME=workplace
export EVAL_MODE=True
export BASE_IMAGES=tjbtech1/metachain:latest
export COMPLETION_MODEL=claude-3-5-sonnet-20241022
export DEBUG=False # Set to True to see detailed messages of the agents' actions
export MC_MODE=True # Set to True to suppress retry messages from the LLM connection
export AI_USER=tjb-tech # Your GitHub username
export FN_CALL=True
export ADD_USER=False
export NON_FN_CALL=False
port=12345 # The port of the agent-interactive environment
python playground/cli/metachain_cli.py --container_name quick_start --model ${COMPLETION_MODEL} --test_pull_name quick_start_pull --debug --port ${port} --git_clone
```
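Because the wrapper is LiteLLM, switching providers is just a matter of changing `COMPLETION_MODEL` to a LiteLLM-style model string and making sure the matching API key from the previous step is set. A few illustrative values (check the LiteLLM docs for the exact names available in your version):
```bash
# Pick ONE of the following, depending on your provider and API key
export COMPLETION_MODEL=claude-3-5-sonnet-20241022   # Anthropic (as in the script above)
export COMPLETION_MODEL=gpt-4o                       # OpenAI
export COMPLETION_MODEL=deepseek/deepseek-chat       # DeepSeek
export COMPLETION_MODEL=gemini/gemini-1.5-pro        # Google Gemini
```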
After the CLI mode is started, you can see the start page of MetaChain:
Start Page of MetaChain.
### Tips
#### Import browser cookies into the browser environment
You can import your browser cookies into the browser environment so that the agent can better access certain websites. For more details, please refer to the [cookies](./metachain/environment/cookie_json/README.md) folder.
#### Add your own API keys for third-party Tool Platforms
If you want to create tools from third-party tool platforms such as RapidAPI, you should subscribe to the tools on the platform and add your own API keys by running [process_tool_docs.py](./process_tool_docs.py).
```bash
python process_tool_docs.py
```
More features coming soon! A **Web GUI interface** is under development.
## How to Use MetaChain
### 1. `user mode` (SOTA Open Deep Research)
MetaChain ships with an out-of-the-box multi-agent system, which you can use by choosing `user mode` on the start page. This multi-agent system is a general AI assistant with the same functionality as **OpenAI's Deep Research** and comparable performance on the [GAIA](https://gaia-benchmark-leaderboard.hf.space/) benchmark.
- **High Performance**: Matches Deep Research while using Claude 3.5 rather than OpenAI's o3 model.
- **Model Flexibility**: Compatible with any LLM (including DeepSeek-R1, Grok, Gemini, etc.)
- **Cost-Effective**: An open-source alternative to Deep Research's $200/month subscription.
- **User-Friendly**: Easy-to-deploy CLI interface for seamless interaction.
- **File Support**: Handles file uploads for enhanced data interaction.
The typical interaction flow in `user mode`:
1. Input your request.
2. The agent will give you the response.
3. (Optional) Use `@` to mention the agent you want to use; for example, `@Upload_files` will help you upload files.
4. Select the files you want to use; they will then be uploaded successfully.
### 2. `agent editor` (Agent Creation without Workflow)
The most distinctive feature of MetaChain is its natural language customization capability. Unlike other agent frameworks, MetaChain allows you to create tools, agents, and workflows using natural language alone. Simply choose `agent editor` or `workflow editor` mode to start your journey of building agents through conversations.
You can use the `agent editor` as follows:
1. Input what kind of agent you want to create.
2. Automated agent profiling.
3. Output the agent profiles.
4. Create the desired tools.
5. (Optional) Input what you want to accomplish with the agent.
6. Create the desired agent(s) and go to the next step.
### 3. `workflow editor` (Agent Creation with Workflow)
You can also create agent workflows from a natural language description with the `workflow editor` mode, as follows. (Tip: this mode does not currently support tool creation.)
1. Input what kind of workflow you want to create.
2. Automated workflow profiling.
3. Output the workflow profiles.
4. (Optional) Input what you want to accomplish with the workflow.
5. Create the desired workflow(s) and go to the next step.
## ⚙️ Todo List
MetaChain is continuously evolving! Here's what's coming:
- **More Benchmarks**: Expanding evaluations to **SWE-bench**, **WebArena**, and more
- 🖥️ **GUI Agent**: Supporting *Computer-Use* agents with GUI interaction
- 🔧 **Tool Platforms**: Integration with more platforms like **Composio**
- 🏗️ **Code Sandboxes**: Supporting additional environments like **E2B**
- 🎨 **Web Interface**: Developing a comprehensive GUI for better user experience
Have ideas or suggestions? Feel free to open an issue! Stay tuned for more exciting updates!
## 🔬 How To Reproduce the Results in the Paper
### GAIA Benchmark
For the GAIA benchmark, you can run the following command to start inference.
```bash
cd path/to/MetaChain && sh evaluation/gaia/scripts/run_infer.sh
```
For the evaluation, you can run the following command.
```bash
cd path/to/MetaChain && python evaluation/gaia/get_score.py
```
### Agentic-RAG
For the Agentic-RAG task, follow these steps to run inference.
Step 1. Go to [this page](https://huggingface.co/datasets/yixuantt/MultiHopRAG) and download the dataset, saving it to your data path.
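For example, you can fetch the dataset with the Hugging Face CLI (a sketch; `your/datapath` is a placeholder for wherever you want the evaluation script to read the data from):
```bash
pip install -U huggingface_hub  # provides the huggingface-cli tool
# Download the MultiHopRAG dataset to a local directory of your choice
huggingface-cli download yixuantt/MultiHopRAG --repo-type dataset --local-dir your/datapath
```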
Step 2. Run the following command to start inference.
```bash
cd path/to/MetaChain && sh evaluation/multihoprag/scripts/run_rag.sh
```
Step 3. The result will be saved in `evaluation/multihoprag/result.json`.
## Documentation
More detailed documentation is coming soon, and we will keep the [Documentation](https://metachain-ai.github.io/docs) page updated.
## Join the Community
We want to build a community for MetaChain, and we welcome everyone to join us. You can join our community by:
- [Join our Slack workspace](https://join.slack.com/t/metachain-workspace/shared_invite/zt-2zibtmutw-v7xOJObBf9jE2w3x7nctFQ) - Here we talk about research, architecture, and future development.
- [Join our Discord server](https://discord.gg/z68KRvwB) - This is a community-run server for general discussion, questions, and feedback.
- [Read or post GitHub Issues](https://github.com/HKUDS/MetaChain/issues) - Check out the issues we're working on, or add your own ideas.
## Acknowledgements
Rome wasn't built in a day. MetaChain stands on the shoulders of giants, and we are deeply grateful for the outstanding work that came before us. Our framework architecture draws inspiration from [OpenAI Swarm](https://github.com/openai/swarm), while our user mode's three-agent design benefits from [Magentic-one](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one)'s insights. We've also learned from [OpenHands](https://github.com/All-Hands-AI/OpenHands) for documentation structure and many other excellent projects for agent-environment interaction design, among others. We express our sincere gratitude and respect to all these pioneering works that have been instrumental in shaping MetaChain.
## Cite
```tex
@misc{metachain,
      title={{MetaChain: A Fully-Automated and Zero-Code Framework for LLM Agents}},
      author={Jiabin Tang and Tianyu Fan and Chao Huang},
      year={2025},
      eprint={2502.05957},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2502.05957},
}
```