# bert-as-service

**Repository Path**: deeplearningrepos/bert-as-service

## Basic Information

- **Project Name**: bert-as-service
- **Description**: Mapping a variable-length sentence to a fixed-length vector using BERT model
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-03-30
- **Last Updated**: 2021-08-31

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README
► Jina 101: First Thing to Learn About Jina

► From BERT-as-Service to X-as-Service: learn how to use Jina to extract feature vectors using any deep learning representation
Using the BERT model as a sentence encoding service, i.e. mapping a variable-length sentence to a fixed-length vector.
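The variable-to-fixed-length mapping comes from pooling BERT's per-token hidden states into a single vector (the server's default strategy averages them). Below is a minimal NumPy sketch of that idea; the random matrices are hypothetical stand-ins for real BERT token embeddings, not output of the actual model:

```python
import numpy as np

def mean_pool(token_vectors: np.ndarray) -> np.ndarray:
    """Collapse a [seq_len, hidden] matrix of per-token vectors
    into a single fixed-length [hidden] sentence vector by averaging."""
    return token_vectors.mean(axis=0)

# Hypothetical stand-ins for per-token hidden states (hidden size 768,
# as in BERT-Base). The two "sentences" differ in length.
rng = np.random.default_rng(0)
short_sent = rng.normal(size=(5, 768))   # 5 tokens
long_sent = rng.normal(size=(42, 768))   # 42 tokens

# Both map to vectors of the same fixed length, regardless of input length.
assert mean_pool(short_sent).shape == (768,)
assert mean_pool(long_sent).shape == (768,)
```

In bert-as-service itself, the client side hides this detail: `BertClient().encode(list_of_sentences)` returns one such pooled, fixed-length vector per sentence.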
Highlights • What is it • Install • Getting Started • API • Tutorials • FAQ • Benchmark • Blog
| Model | Details |
|---|---|
| BERT-Base, Uncased | 12-layer, 768-hidden, 12-heads, 110M parameters |
| BERT-Large, Uncased | 24-layer, 1024-hidden, 16-heads, 340M parameters |
| BERT-Base, Cased | 12-layer, 768-hidden, 12-heads, 110M parameters |
| BERT-Large, Cased | 24-layer, 1024-hidden, 16-heads, 340M parameters |
| BERT-Base, Multilingual Cased (New) | 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters |
| BERT-Base, Multilingual Cased (Old) | 102 languages, 12-layer, 768-hidden, 12-heads, 110M parameters |
| BERT-Base, Chinese | Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M parameters |