# Translation API

A high-quality translation service powered by OpenAI-compatible LLM APIs.

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

## Features

- 🚀 **OpenAI-Compatible**: Works with any OpenAI-compatible API (SiliconFlow, OpenAI, Azure, etc.)
- 🌐 **Multi-Language**: Supports translation between 50+ languages
- ⚡ **Batch Translation**: Translate up to 10 texts in a single API call
- 🛡️ **Context Protection**: Automatic token estimation to prevent exceeding LLM context limits
- 📖 **Auto-Generated Docs**: Interactive API documentation with Swagger UI
- 🔧 **Easy Configuration**: Simple environment-based configuration

## Quick Start

### 1. Install Dependencies

```bash
# Clone the repository
git clone <repository-url>
cd translation-api

# Create a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # Linux/Mac
# or
venv\Scripts\activate     # Windows

# Install dependencies
pip install -r requirements.txt
```

### 2. Configure Environment

```bash
# Copy the environment variable template
cp .env.example .env

# Edit the .env file and configure:
# - LLM_API_URL: API endpoint (optional, defaults to SiliconFlow)
# - LLM_API_KEY: your API key from your provider
```

**Example `.env`:**

```bash
# Option 1: SiliconFlow (default)
LLM_API_URL=https://api.siliconflow.cn/v1/chat/completions
LLM_API_KEY=your-siliconflow-api-key

# Option 2: OpenAI
# LLM_API_URL=https://api.openai.com/v1/chat/completions
# LLM_API_KEY=your-openai-api-key
```

### 3. Start the Service
```bash
# Method 1: direct execution
python main.py

# Method 2: uvicorn with hot reload
uvicorn main:app --host 0.0.0.0 --port 9001 --reload
```

The service starts on port 9001 by default.

**Access points:**

- API endpoint: http://localhost:9001
- Interactive docs: http://localhost:9001/docs (Swagger UI)
- Alternative docs: http://localhost:9001/redoc (ReDoc)

## API Endpoints

### Single Text Translation

**POST** `/translate`

Translate a single text from one language to another.

#### Request Parameters

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| text | string | Yes | Text to translate (1-1000 characters) |
| target_language | string | Yes | Target language (e.g., "Chinese", "English", "日本語") |
| source_language | string | No | Source language (auto-detected if not provided) |

#### Example Request

```bash
curl -X POST "http://localhost:9001/translate" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Hello, world!",
    "target_language": "中文"
  }'
```

#### Example Response

```json
{
  "translated_text": "你好,世界!",
  "source_language": "English",
  "target_language": "中文"
}
```

### Batch Translation

**POST** `/translate/batch`

Translate multiple texts (1-10 items) in a single API request.
#### Request Parameters

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| texts | array | Yes | Array of texts (1-10 items, max 1000 characters each) |
| target_language | string | Yes | Target language |
| source_language | string | No | Source language (auto-detected if not provided) |

#### Example Request

```bash
curl -X POST "http://localhost:9001/translate/batch" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": ["Hello world", "How are you?", "Good morning"],
    "target_language": "中文",
    "source_language": "English"
  }'
```

#### Example Response

```json
{
  "results": [
    {
      "original_text": "Hello world",
      "translated_text": "你好,世界",
      "success": true,
      "error": null
    },
    {
      "original_text": "How are you?",
      "translated_text": "你好吗?",
      "success": true,
      "error": null
    },
    {
      "original_text": "Good morning",
      "translated_text": "早上好",
      "success": true,
      "error": null
    }
  ],
  "source_language": "English",
  "target_language": "中文",
  "success_count": 3,
  "failed_count": 0
}
```

### Health Check

**GET** `/health`

Check service status and API configuration.

```bash
curl "http://localhost:9001/health"
```

**Response:**

```json
{
  "status": "healthy",
  "api_key_configured": true
}
```

## Supported Languages

The API supports most languages.
Use natural-language names as shown below:

| Language | Native Name |
|----------|-------------|
| Chinese | 中文 |
| English | English |
| Japanese | 日本語 |
| Korean | 한국어 |
| French | Français |
| German | Deutsch |
| Spanish | Español |
| Russian | Русский |
| Arabic | العربية |
| Portuguese | Português |
| Italian | Italiano |

...and many more.

## API Providers

This service works with any OpenAI-compatible API:

### SiliconFlow (Default)

- **Website:** https://siliconflow.cn/
- **Endpoint:** `https://api.siliconflow.cn/v1/chat/completions`
- **Models:** DeepSeek-V3.2, Qwen, GLM, and more
- **Pricing:** Very competitive rates

### OpenAI

- **Website:** https://platform.openai.com/
- **Endpoint:** `https://api.openai.com/v1/chat/completions`
- **Models:** GPT-4, GPT-3.5-turbo

### Azure OpenAI

- **Endpoint:** `https://{your-resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version=2024-02-01`

## Limits

| Limit | Single Translation | Batch Translation |
|-------|-------------------|-------------------|
| Single text length | ≤ 1000 characters | ≤ 1000 characters per item |
| Batch size | N/A | 1-10 items |
| Total batch characters | N/A | ≤ 5000 characters |

## Testing

### Run All Tests

```bash
# Ensure the service is running
python main.py

# Run the test suite
python test_api.py
```

**Expected output:** All 16 tests should pass ✅

### Hongloumeng Batch Translation Test

Test batch translation with long classical Chinese text:

```bash
# Requires hongloumeng.txt in the project directory
# Default: test 5 segments
python test_batch_hongloumeng.py

# Test 10 segments (the maximum)
python test_batch_hongloumeng.py 10
```

Results are saved as `hongloumeng_translation_YYYYMMDD_HHMMSS.md`.
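The batch limits above can also be enforced client-side before sending a request, avoiding a round trip that is guaranteed to fail. A sketch under the documented limits — the `validate_batch` helper is illustrative and not part of the service:

```python
MAX_ITEMS = 10          # 1-10 items per batch
MAX_ITEM_CHARS = 1000   # per-item character limit
MAX_TOTAL_CHARS = 5000  # total characters across the whole batch

def validate_batch(texts: list[str]) -> None:
    """Raise ValueError if `texts` violates the documented batch limits."""
    if not 1 <= len(texts) <= MAX_ITEMS:
        raise ValueError(f"batch size must be 1-{MAX_ITEMS} items")
    for i, t in enumerate(texts):
        if not 1 <= len(t) <= MAX_ITEM_CHARS:
            raise ValueError(f"item {i} must be 1-{MAX_ITEM_CHARS} characters")
    if sum(len(t) for t in texts) > MAX_TOTAL_CHARS:
        raise ValueError(f"batch exceeds {MAX_TOTAL_CHARS} total characters")

# The example batch from the docs passes without raising
validate_batch(["Hello world", "How are you?", "Good morning"])
```

Note that the total-character limit can reject a batch even when every individual item is within its per-item limit (e.g., six 1000-character texts).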
## Project Structure

```
translation-api/
├── main.py                     # Main FastAPI application
├── test_api.py                 # API test suite
├── test_batch_hongloumeng.py   # Batch translation test
├── requirements.txt            # Python dependencies
├── .env.example                # Environment variables template
├── .gitignore                  # Git ignore rules
├── LICENSE                     # MIT License
└── README.md                   # This file
```

## Configuration

### Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `LLM_API_KEY` | Yes | - | Your LLM API key |
| `LLM_API_URL` | No | `https://api.siliconflow.cn/v1/chat/completions` | API endpoint URL |

### Switching API Providers

To switch providers, simply update your `.env` file:

```bash
# For SiliconFlow
LLM_API_URL=https://api.siliconflow.cn/v1/chat/completions
LLM_API_KEY=sk-your-key

# For OpenAI
LLM_API_URL=https://api.openai.com/v1/chat/completions
LLM_API_KEY=sk-your-key
```

No code changes required!

## Security Notes

- 🔒 Keep your API key secure and never commit it to version control
- 📝 The `.env` file is already in `.gitignore`
- ⚠️ Rotate API keys regularly
- 🌐 Use HTTPS in production

## License

This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Acknowledgments

- Built with [FastAPI](https://fastapi.tiangolo.com/)
- Powered by OpenAI-compatible LLM APIs