# DeepSearcher

DeepSearcher combines reasoning LLMs (OpenAI o1, o3-mini, DeepSeek, Grok 3, etc.) and vector databases (Milvus, Zilliz Cloud, etc.) to perform search, evaluation, and reasoning over private data, producing highly accurate answers and comprehensive reports. This project is suitable for enterprise knowledge management, intelligent Q&A systems, and information retrieval scenarios.

![Architecture](./assets/pic/deep-searcher-arch.png)

## 🚀 Features

- **Private Data Search**: Maximizes the use of enterprise internal data while ensuring data security. When necessary, it can integrate online content for more accurate answers.
- **Vector Database Management**: Supports Milvus and other vector databases, allowing data partitioning for efficient retrieval.
- **Flexible Embedding Options**: Compatible with multiple embedding models so you can choose the best fit.
- **Multiple LLM Support**: Supports DeepSeek, OpenAI, and other large models for intelligent Q&A and content generation.
- **Document Loader**: Supports local file loading, with web crawling capabilities under development.

---

## 🎉 Demo

![demo](./assets/pic/demo.gif)

## 📖 Quick Start

### Installation

Install DeepSearcher using pip:

```bash
# Clone the repository
git clone https://github.com/zilliztech/deep-searcher.git

# MAKE SURE the python version is greater than or equal to 3.10
# Recommended: create a Python virtual environment
cd deep-searcher
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -e .
```

Prepare your `OPENAI_API_KEY` in your environment variables. If you change the LLM in the configuration, make sure to prepare the corresponding API key.

### Quick start demo

```python
from deepsearcher.configuration import Configuration, init_config
from deepsearcher.online_query import query

# Customize your config here; see the Configuration Details section below for more options.
config = Configuration()
config.set_provider_config("llm", "OpenAI", {"model": "gpt-4o-mini"})
init_config(config=config)

# Load your local data
from deepsearcher.offline_loading import load_from_local_files
load_from_local_files(paths_or_directory=your_local_path)

# (Optional) Load from web crawling (`FIRECRAWL_API_KEY` env variable required)
from deepsearcher.offline_loading import load_from_website
load_from_website(urls=website_url)

# Query
result = query("Write a report about xxx.")  # Your question here
```

### Configuration Details

#### LLM Configuration
config.set_provider_config("llm", "(LLMName)", "(Arguments dict)")

The "LLMName" can be one of the following: ["DeepSeek", "OpenAI", "Grok", "SiliconFlow", "TogetherAI", "Gemini"]

The "Arguments dict" is a dictionary that contains the necessary arguments for the LLM class.

**Example (OpenAI)**

Make sure you have set your OpenAI API key in the `OPENAI_API_KEY` environment variable.

```python
config.set_provider_config("llm", "OpenAI", {"model": "gpt-4o"})
```

More details about OpenAI models: https://platform.openai.com/docs/models

**Example (DeepSeek from the official API)**

Make sure you have set your DeepSeek API key in the `DEEPSEEK_API_KEY` environment variable.

```python
config.set_provider_config("llm", "DeepSeek", {"model": "deepseek-chat"})
```

More details about DeepSeek: https://api-docs.deepseek.com/

**Example (DeepSeek from SiliconFlow)**

Make sure you have set your SiliconFlow API key in the `SILICONFLOW_API_KEY` environment variable.

```python
config.set_provider_config("llm", "SiliconFlow", {"model": "deepseek-ai/DeepSeek-V3"})
```

More details about SiliconFlow: https://docs.siliconflow.cn/quickstart

**Example (DeepSeek from TogetherAI)**

Make sure you have set your Together AI API key in the `TOGETHER_API_KEY` environment variable.

```python
config.set_provider_config("llm", "TogetherAI", {"model": "deepseek-ai/DeepSeek-V3"})
```

You need to install the `together` package before running: `pip install together`. More details about TogetherAI: https://www.together.ai/

**Example (Grok)**

Make sure you have set your xAI API key in the `XAI_API_KEY` environment variable.

```python
config.set_provider_config("llm", "Grok", {"model": "grok-2-latest"})
```

More details about Grok: https://docs.x.ai/docs/overview#featured-models

**Example (Google Gemini)**

Make sure you have set your Gemini API key in the `GEMINI_API_KEY` environment variable.

```python
config.set_provider_config("llm", "Gemini", {"model": "gemini-2.0-flash"})
```

You need to install the `google-genai` package before running: `pip install google-genai`. More details about Gemini: https://ai.google.dev/gemini-api/docs
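Putting the pieces together, here is a minimal end-to-end sketch based on the Quick Start section above. It assumes the `DEEPSEEK_API_KEY` environment variable is already set; swap in any provider/model pair from the examples above.

```python
import os

from deepsearcher.configuration import Configuration, init_config
from deepsearcher.online_query import query

# Sketch only: pick any provider/model pair shown above.
assert os.environ.get("DEEPSEEK_API_KEY"), "export DEEPSEEK_API_KEY first"

config = Configuration()
config.set_provider_config("llm", "DeepSeek", {"model": "deepseek-chat"})
init_config(config=config)

# The configured LLM is used for reasoning over whatever data you load later.
result = query("Write a report about xxx.")  # your question here
```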

#### Embedding Model Configuration

```python
config.set_provider_config("embedding", "(EmbeddingModelName)", "(Arguments dict)")
```

The "EmbeddingModelName" can be one of the following: ["MilvusEmbedding", "OpenAIEmbedding", "VoyageEmbedding"]

The "Arguments dict" is a dictionary that contains the necessary arguments for the embedding model class.

**Example (Pymilvus built-in embedding model)**

```python
config.set_provider_config("embedding", "MilvusEmbedding", {"model": "BAAI/bge-base-en-v1.5"})
```

More details about Pymilvus: https://milvus.io/docs/embeddings.md

**Example (OpenAI embedding)**

Make sure you have set your OpenAI API key in the `OPENAI_API_KEY` environment variable.

```python
config.set_provider_config("embedding", "OpenAIEmbedding", {"model": "text-embedding-3-small"})
```

More details about OpenAI embedding models: https://platform.openai.com/docs/guides/embeddings/use-cases

**Example (VoyageAI embedding)**

Make sure you have set your Voyage API key in the `VOYAGE_API_KEY` environment variable.

```python
config.set_provider_config("embedding", "VoyageEmbedding", {"model": "voyage-3"})
```

You need to install the `voyageai` package before running: `pip install voyageai`. More details about VoyageAI: https://docs.voyageai.com/embeddings/

**Example (Amazon Bedrock embedding)**

Make sure you have set your AWS credentials in the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.

```python
config.set_provider_config("embedding", "BedrockEmbedding", {"model": "amazon.titan-embed-text-v2:0"})
```

You need to install the `boto3` package before running: `pip install boto3`. More details about Amazon Bedrock: https://docs.aws.amazon.com/bedrock/
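The embedding provider takes effect when documents are loaded and indexed. Below is a minimal sketch, following the Quick Start pattern, that pairs the built-in Pymilvus embedding model with an LLM before ingesting local files (the directory path is a placeholder).

```python
from deepsearcher.configuration import Configuration, init_config
from deepsearcher.offline_loading import load_from_local_files

config = Configuration()
# Built-in Pymilvus embedding model; no extra API key needed.
config.set_provider_config("embedding", "MilvusEmbedding", {"model": "BAAI/bge-base-en-v1.5"})
config.set_provider_config("llm", "OpenAI", {"model": "gpt-4o-mini"})
init_config(config=config)

# Documents loaded after init_config are embedded with the model chosen above.
load_from_local_files(paths_or_directory="/path/to/your/docs")  # placeholder path
```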

#### Vector Database Configuration

```python
config.set_provider_config("vector_db", "(VectorDBName)", "(Arguments dict)")
```

The "VectorDBName" can be one of the following: ["Milvus"] (Under development)

The "Arguments dict" is a dictionary that contains the necessary arguments for the Vector Database class.

**Example (Milvus)**

```python
config.set_provider_config("vector_db", "Milvus", {"uri": "./milvus.db", "token": ""})
```

More details about Milvus config: https://milvus.io/docs
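For reference, a short sketch contrasting the local-file setup above with a remote deployment. The local form is the one shown in the example; the remote values are placeholders and assume the provider accepts a standard Milvus/Zilliz Cloud endpoint URI and token.

```python
from deepsearcher.configuration import Configuration, init_config

config = Configuration()

# Local Milvus Lite file (as in the example above): data is stored in ./milvus.db.
config.set_provider_config("vector_db", "Milvus", {"uri": "./milvus.db", "token": ""})

# Hypothetical remote deployment (placeholders), assuming a standard
# Milvus/Zilliz Cloud endpoint URI and API token are accepted:
# config.set_provider_config(
#     "vector_db",
#     "Milvus",
#     {"uri": "https://<your-cluster-endpoint>", "token": "<your-api-key>"},
# )

init_config(config=config)
```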

#### File Loader Configuration

```python
config.set_provider_config("file_loader", "(FileLoaderName)", "(Arguments dict)")
```

The "FileLoaderName" can be one of the following: ["PDFLoader", "TextLoader", "UnstructuredLoader"]

The "Arguments dict" is a dictionary that contains the necessary arguments for the File Loader class.

**Example (Unstructured)**

Make sure you have set your Unstructured API key and API URL in the `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL` environment variables.

```python
config.set_provider_config("file_loader", "UnstructuredLoader", {})
```

Currently supported file types: ["pdf"] (support for more types is under development)

You need to install the `unstructured-ingest` package before running: `pip install unstructured-ingest`. More details about Unstructured: https://docs.unstructured.io/ingestion/overview
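As an illustration, the sketch below selects a file loader and then ingests local documents with `load_from_local_files` from the Quick Start. It assumes `PDFLoader` takes no required arguments (the source only shows `UnstructuredLoader` with an empty dict), and the directory path is a placeholder.

```python
from deepsearcher.configuration import Configuration, init_config
from deepsearcher.offline_loading import load_from_local_files

config = Configuration()
# Assumption: PDFLoader needs no constructor arguments here.
config.set_provider_config("file_loader", "PDFLoader", {})
init_config(config=config)

# Ingest every supported file under the given directory (placeholder path).
load_from_local_files(paths_or_directory="/path/to/your/docs")
```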

#### Web Crawler Configuration

```python
config.set_provider_config("web_crawler", "(WebCrawlerName)", "(Arguments dict)")
```

The "WebCrawlerName" can be one of the following: ["FireCrawlCrawler", "Crawl4AICrawler", "JinaCrawler"]

The "Arguments dict" is a dictionary that contains the necessary arguments for the Web Crawler class.

**Example (FireCrawl)**

Make sure you have set your FireCrawl API key in the `FIRECRAWL_API_KEY` environment variable.

```python
config.set_provider_config("web_crawler", "FireCrawlCrawler", {})
```

More details about FireCrawl: https://docs.firecrawl.dev/introduction

**Example (Crawl4AI)**

Make sure you have run `crawl4ai-setup` in your environment.

```python
config.set_provider_config("web_crawler", "Crawl4AICrawler", {})
```

You need to install the `crawl4ai` package before running: `pip install crawl4ai`. More details about Crawl4AI: https://docs.crawl4ai.com/

**Example (Jina Reader)**

Make sure you have set your Jina Reader API key in the `JINA_API_TOKEN` environment variable.

```python
config.set_provider_config("web_crawler", "JinaCrawler", {})
```

More details about Jina Reader: https://jina.ai/reader/
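Like the file loaders, the crawler is exercised through `load_from_website` from the Quick Start. A minimal sketch (the URL is a placeholder and `FIRECRAWL_API_KEY` is assumed to be exported):

```python
from deepsearcher.configuration import Configuration, init_config
from deepsearcher.offline_loading import load_from_website

config = Configuration()
# FireCrawl-based crawling; requires FIRECRAWL_API_KEY in the environment.
config.set_provider_config("web_crawler", "FireCrawlCrawler", {})
init_config(config=config)

# Crawl the page (placeholder URL) and index its content into the vector database.
load_from_website(urls="https://example.com/article")
```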

### Python CLI Mode

#### Load

```shell
deepsearcher --load "your_local_path_or_url"
```

Example loading from a local file:

```shell
deepsearcher --load "/path/to/your/local/file.pdf"
```

Example loading from a URL (*set `FIRECRAWL_API_KEY` in your environment variables; see [FireCrawl](https://docs.firecrawl.dev/introduction) for more details*):

```shell
deepsearcher --load "https://www.wikiwand.com/en/articles/DeepSeek"
```

#### Query

```shell
deepsearcher --query "Write a report about xxx."
```

More help information:

```shell
deepsearcher --help
```

### Deployment

#### Configure modules

You can configure all arguments by modifying [config.yaml](./config.yaml) to set up your system with default modules. For example, set your `OPENAI_API_KEY` in the `llm` section of the YAML file.

#### Start service

The main script runs a FastAPI service at the default address `localhost:8000`.

```shell
$ python main.py
```

#### Access via browser

Open http://localhost:8000/docs in your browser to access the web service. Click the "Try it out" button to fill in the parameters and interact with the API directly.

---

## ❓ Q&A

**Q1**: OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like GPTCache/paraphrase-albert-small-v2 is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

**A1**: This is mainly caused by abnormal access to Hugging Face, which may be a network or permission problem. You can try the following two methods:

1. If there is a network problem, set up a proxy by adding the following environment variable:

```bash
export HF_ENDPOINT=https://hf-mirror.com
```

2. If there is a permission problem, set up a personal token by adding the following environment variable:

```bash
export HUGGING_FACE_HUB_TOKEN=xxxx
```

---

**Q2**: DeepSearcher doesn't run in a Jupyter notebook.

**A2**: Install `nest_asyncio` and then put this code block at the top of your Jupyter notebook:

```bash
pip install nest_asyncio
```

```python
import nest_asyncio
nest_asyncio.apply()
```

---

## 🔧 Module Support

### 🔹 Embedding Models

- [Open-source embedding models](https://milvus.io/docs/embeddings.md)
- [OpenAI](https://platform.openai.com/docs/guides/embeddings/use-cases) (`OPENAI_API_KEY` env variable required)
- [VoyageAI](https://docs.voyageai.com/embeddings/) (`VOYAGE_API_KEY` env variable required)
- [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/) (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` env variables required)

### 🔹 LLM Support

- [OpenAI](https://platform.openai.com/docs/models) (`OPENAI_API_KEY` env variable required)
- [DeepSeek](https://api-docs.deepseek.com/) (`DEEPSEEK_API_KEY` env variable required)
- [Grok 3](https://x.ai/blog/grok-3) (Coming soon!) (`XAI_API_KEY` env variable required)
- [SiliconFlow Inference Service](https://docs.siliconflow.cn/en/userguide/introduction) (`SILICONFLOW_API_KEY` env variable required)
- [TogetherAI Inference Service](https://docs.together.ai/docs/introduction) (`TOGETHER_API_KEY` env variable required)
- [Google Gemini](https://ai.google.dev/gemini-api/docs) (`GEMINI_API_KEY` env variable required)
- [SambaNova Cloud Inference Service](https://docs.together.ai/docs/introduction) (`SAMBANOVA_API_KEY` env variable required)

### 🔹 Document Loader

- Local File
  - PDF (with txt/md) loader
  - [Unstructured](https://unstructured.io/) (under development) (`UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_URL` env variables required)
- Web Crawler
  - [FireCrawl](https://docs.firecrawl.dev/introduction) (`FIRECRAWL_API_KEY` env variable required)
  - [Jina Reader](https://jina.ai/reader/) (`JINA_API_TOKEN` env variable required)
  - [Crawl4AI](https://docs.crawl4ai.com/) (you should run the command `crawl4ai-setup` the first time)

### 🔹 Vector Database Support

- [Milvus](https://milvus.io/) (the same as [Zilliz](https://www.zilliz.com/))

---

## 📌 Future Plans

- Enhance web crawling functionality
- Support more vector databases (e.g., FAISS...)
- Add support for additional large models
- Provide a RESTful API interface (**DONE**)

We welcome contributions! Star & Fork the project and help us build a more powerful DeepSearcher! 🎯