# Auto-Deep-Research

Auto-Deep-Research:
Your Fully-Automated and Cost-Effective Personal AI Assistant

Welcome to Auto-Deep-Research! Auto-Deep-Research is an open-source and cost-efficient alternative to OpenAI's Deep Research, built on the [AutoAgent](https://github.com/HKUDS/AutoAgent) framework.

## ✨ Key Features

- 🏆 **High Performance**: Ranks **#1** among open-source methods, delivering performance comparable to **OpenAI's Deep Research**.
- 🌐 **Universal LLM Support**: Seamlessly integrates with a **wide range** of LLMs (e.g., OpenAI, Anthropic, DeepSeek, vLLM, Grok, Hugging Face, ...).
- 🔀 **Flexible Interaction**: Supports both **function-calling** and **non-function-calling** LLMs.
- 💰 **Cost-Efficient**: An open-source alternative to Deep Research's $200/month subscription, using your own pay-as-you-go LLM API keys.
- 📁 **File Support**: Handles file uploads for enhanced data interaction.
- 🚀 **One-Click Launch**: Get started instantly with the simple `auto deep-research` command. **Zero configuration** needed: a truly out-of-the-box experience.

Own your own personal assistant at a much lower cost. Try 🔥Auto-Deep-Research🔥 now!

## 🔥 News
## 📑 Table of Contents

* ✨ Key Features
* 🔥 News
* 🧐 Why Release Auto-Deep-Research?
* ⚡ Quick Start
  * Installation
  * API Keys Setup
  * Start Auto-Deep-Research
* ☑️ Todo List
* 📖 Documentation
* 🤝 Join the Community
* 🙏 Acknowledgements
* 🌟 Cite

## 🧐 Why Release Auto-Deep-Research?

A week after releasing AutoAgent (previously known as MetaChain), we observed three compelling reasons to introduce Auto-Deep-Research:

1. **Community Interest**
   We noticed significant community interest in our Deep Research alternative functionality. In response, we streamlined the codebase, removing components unrelated to Deep Research to create a more focused tool.

2. **Framework Extensibility**
   Auto-Deep-Research is the first ready-to-use product built on AutoAgent, demonstrating how quickly and easily you can create powerful agent apps with our framework.

3. **Community-Driven Improvements**
   We've incorporated valuable community feedback from the first week, introducing features like one-click launch and enhanced LLM compatibility to make the tool more accessible and versatile.

Auto-Deep-Research reflects our commitment both to the community's needs and to demonstrating AutoAgent's potential as a foundation for building practical AI applications.

## ⚡ Quick Start

### Installation

#### Auto-Deep-Research Installation

```bash
conda create -n auto_deep_research python=3.10
conda activate auto_deep_research
git clone https://github.com/HKUDS/Auto-Deep-Research.git
cd Auto-Deep-Research
pip install -e .
```

#### Docker Installation

We use Docker to containerize the agent-interactive environment, so please install [Docker](https://www.docker.com/) first. You don't need to pull the pre-built image manually: Auto-Deep-Research **automatically pulls the pre-built image that matches your machine's architecture**.

### API Keys Setup

Create an environment variable file modeled on `.env.template`, and set the API keys for the LLMs you want to use. Not every LLM API key is required; set only the ones you need.

### Start Auto-Deep-Research

#### Command Options

Run `auto deep-research` to start Auto-Deep-Research. The command's configuration options are listed below:

- `--container_name`: Name of the Docker container (default: `deepresearch`)
- `--port`: Port for the container (default: `12346`)
- `COMPLETION_MODEL`: The LLM model to use; follow the [LiteLLM](https://github.com/BerriAI/litellm) naming convention for model names (default: `claude-3-5-sonnet-20241022`)
- `DEBUG`: Enable debug mode for detailed logs (default: `False`)
- `API_BASE_URL`: Base URL for the LLM provider (default: `None`)
- `FN_CALL`: Enable function calling (default: `None`). Most of the time you can ignore this option, since the default is already set based on the model name.
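Putting the setup steps together, a minimal end-to-end launch might look like the sketch below. This assumes you are inside the cloned repository; the API key value, container name, and port are placeholders for illustration, not required settings:

```shell
# Sketch: create a .env from the template and add only the key you need
cp .env.template .env
echo 'OPENAI_API_KEY=your_openai_api_key' >> .env

# Launch with an explicit model, debug logging, and a custom container/port
COMPLETION_MODEL=gpt-4o DEBUG=True auto deep-research --container_name my_research --port 12347
```

Options passed as environment variables (`COMPLETION_MODEL`, `DEBUG`, `API_BASE_URL`, `FN_CALL`) are set inline before the command, while `--container_name` and `--port` are regular CLI flags.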
#### Different LLM Providers

We will show you how easy it is to start Auto-Deep-Research with different LLM providers.

##### Anthropic

* Set the `ANTHROPIC_API_KEY` in the `.env` file:

  ```bash
  ANTHROPIC_API_KEY=your_anthropic_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  auto deep-research # default model is claude-3-5-sonnet-20241022
  ```

##### OpenAI

* Set the `OPENAI_API_KEY` in the `.env` file:

  ```bash
  OPENAI_API_KEY=your_openai_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  COMPLETION_MODEL=gpt-4o auto deep-research
  ```

##### Mistral

* Set the `MISTRAL_API_KEY` in the `.env` file:

  ```bash
  MISTRAL_API_KEY=your_mistral_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  COMPLETION_MODEL=mistral/mistral-large-2407 auto deep-research
  ```

##### Gemini - Google AI Studio

* Set the `GEMINI_API_KEY` in the `.env` file:

  ```bash
  GEMINI_API_KEY=your_gemini_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  COMPLETION_MODEL=gemini/gemini-2.0-flash auto deep-research
  ```

##### Huggingface

* Set the `HUGGINGFACE_API_KEY` in the `.env` file:

  ```bash
  HUGGINGFACE_API_KEY=your_huggingface_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  COMPLETION_MODEL=huggingface/meta-llama/Llama-3.3-70B-Instruct auto deep-research
  ```

##### Groq

* Set the `GROQ_API_KEY` in the `.env` file:

  ```bash
  GROQ_API_KEY=your_groq_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  COMPLETION_MODEL=groq/deepseek-r1-distill-llama-70b auto deep-research
  ```

##### OpenAI-Compatible Endpoints (e.g., Grok)

* Set the `OPENAI_API_KEY` in the `.env` file:

  ```bash
  OPENAI_API_KEY=your_api_key_for_openai_compatible_endpoints
  ```

* Run the following command to start Auto-Deep-Research:
  ```bash
  COMPLETION_MODEL=openai/grok-2-latest API_BASE_URL=https://api.x.ai/v1 auto deep-research
  ```

##### OpenRouter (e.g., DeepSeek-R1)

For now, we recommend using OpenRouter as the LLM provider for DeepSeek-R1, because the official DeepSeek-R1 API cannot yet be used efficiently.

* Set the `OPENROUTER_API_KEY` in the `.env` file:

  ```bash
  OPENROUTER_API_KEY=your_openrouter_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  COMPLETION_MODEL=openrouter/deepseek/deepseek-r1 auto deep-research
  ```

##### DeepSeek

* Set the `DEEPSEEK_API_KEY` in the `.env` file:

  ```bash
  DEEPSEEK_API_KEY=your_deepseek_api_key
  ```

* Run the following command to start Auto-Deep-Research:

  ```bash
  COMPLETION_MODEL=deepseek/deepseek-chat auto deep-research
  ```

### Tips

#### Import browser cookies into the browser environment

You can import your browser cookies into the agent's browser environment so that the agent can better access certain websites. For more details, please refer to the [cookies](./metachain/environment/cookie_json/README.md) folder.

More features coming soon! 🚀 A **Web GUI interface** is under development.

## ☑️ Todo List

Auto-Deep-Research is continuously evolving! Here's what's coming:

- 🖥️ **GUI Agent**: Supporting *Computer-Use* agents with GUI interaction
- 🏗️ **Code Sandboxes**: Supporting additional environments like **E2B**
- 🎨 **Web Interface**: Developing a comprehensive GUI for a better user experience

Have ideas or suggestions? Feel free to open an issue! Stay tuned for more exciting updates! 🚀

## 📖 Documentation

More detailed documentation is coming soon 🚀; we will post updates on the [Documentation](https://metachain-ai.github.io/docs) page.

## 🤝 Join the Community

If you find Auto-Deep-Research helpful, you can join our community by:

- [Join our Slack workspace](https://join.slack.com/t/metachain-workspace/shared_invite/zt-2zibtmutw-v7xOJObBf9jE2w3x7nctFQ) - Here we talk about research, architecture, and future development.
- [Join our Discord server](https://discord.gg/z68KRvwB) - This is a community-run server for general discussion, questions, and feedback.
- [Read or post GitHub Issues](https://github.com/HKUDS/Auto-Deep-Research/issues) - Check out the issues we're working on, or add your own ideas.

## 🙏 Acknowledgements

Rome wasn't built in a day. Auto-Deep-Research is built on the [AutoAgent](https://github.com/HKUDS/AutoAgent) framework. We extend our sincere gratitude to all the pioneering works that have shaped AutoAgent, including OpenAI Swarm for framework architecture inspiration, Magentic-One for the three-agent design insights, OpenHands for documentation structure, and many other excellent projects that contributed to agent-environment interaction design. Your innovations have been instrumental in making both AutoAgent and Auto-Deep-Research possible.

## 🌟 Cite

```tex
@misc{AutoAgent,
  title={{AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents}},
  author={Jiabin Tang and Tianyu Fan and Chao Huang},
  year={2025},
  eprint={2502.05957},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2502.05957},
}
```