# Yi
## Introduction

The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). The first public release contains two bilingual (English/Chinese) base models with parameter sizes of 6B and 34B. Both are trained with a 4K sequence length, which can be extended to 32K at inference time.

## News

- 🎯 **2023/11/02**: The base models of `Yi-6B` and `Yi-34B` are released.

## Model Performance

| Model         |   MMLU   |  CMMLU   |  C-Eval  |  GAOKAO  |   BBH    | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
|               |  5-shot  |  5-shot  |  5-shot  |  0-shot  | 3-shot@1 |           -            |           -           |      -      |
| LLaMA2-34B    |   62.6   |    -     |    -     |    -     |   44.1   |          69.9          |         68.0          |    26.0     |
| LLaMA2-70B    |   68.9   |   53.3   |    -     |   49.8   |   51.2   |          71.9          |         69.4          |    36.8     |
| Baichuan2-13B |   59.2   |   62.0   |   58.1   |   54.3   |   48.8   |          64.3          |         62.4          |    23.0     |
| Qwen-14B      |   66.3   |   71.0   |   72.1   |   62.5   |   53.4   |          73.3          |         72.5          |  **39.8**   |
| Skywork-13B   |   62.1   |   61.8   |   60.6   |   68.1   |   41.7   |          72.4          |         61.4          |    24.9     |
| InternLM-20B  |   62.1   |   59.0   |   58.8   |   45.5   |   52.5   |          78.3          |           -           |    30.4     |
| Aquila-34B    |   67.8   |   71.4   |   63.1   |    -     |    -     |           -            |           -           |      -      |
| Falcon-180B   |   70.4   |   58.0   |   57.8   |   59.0   |   54.0   |          77.3          |         68.8          |    34.0     |
| Yi-6B         |   63.2   |   75.5   |   72.0   |   72.2   |   42.8   |          72.3          |         68.7          |    19.8     |
| **Yi-34B**    | **76.3** | **83.7** | **81.4** | **82.8** | **54.3** |        **80.1**        |       **76.4**        |    37.1     |

While benchmarking open-source models, we have observed a disparity between the results produced by our pipeline and those reported in public sources (e.g. OpenCompass). A closer investigation of this difference shows that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially leading to significant variations in the outcomes. Our prompts and post-processing strategy remain consistent with the original benchmarks, and greedy decoding is used during evaluation, without any post-processing of the generated content. For scores that were not reported by the original authors (including scores reported under different settings), we attempted to obtain results with our own pipeline.

To evaluate the model's capabilities extensively, we adopted the methodology outlined in Llama 2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common-sense reasoning, and SQuAD, QuAC, and BoolQ to evaluate reading comprehension. Of these, only CSQA was tested with a 7-shot setup; all other tests were run in a 0-shot configuration. Additionally, we grouped GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the "Math & Code" category. Due to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score is derived by averaging the scores on the remaining tasks. Since the scores on these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated.

## Usage

Feel free to [create an issue](https://github.com/01-ai/Yi/issues/new) if you encounter any problems when using the **Yi** series models.

### 1. Prepare development environment

The best way to try out the **Yi** series models is through Docker with GPUs. We provide Docker images to help you get started. Note that the `latest` tag always points to the latest code in the `main` branch. To test a stable version, replace it with a specific [tag](https://github.com/01-ai/Yi/tags).

If you prefer a local development environment, first create a virtual environment and clone this repo, then install the dependencies with `pip install -r requirements.txt`. For the best performance, we recommend that you also install the latest version (`>=2.3.3`) of [flash-attention](https://github.com/Dao-AILab/flash-attention#installation-and-features).

### 2. Download the model (optional)

By default, the model weights and tokenizer will be downloaded automatically from [HuggingFace](https://huggingface.co/01-ai) in the next step. You can also download them manually from the following places:

- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- Mirror site (remember to extract the content with `tar`):
  - [Yi-6B.tar](https://storage.lingyiwanwu.com/yi/models/Yi-6B.tar)
  - [Yi-34B.tar](https://storage.lingyiwanwu.com/yi/models/Yi-34B.tar)
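If you prefer to script the HuggingFace download instead of relying on the automatic fetch in the next step, the `huggingface_hub` package provides `snapshot_download`. This is a minimal sketch that is not part of the original instructions; it assumes `huggingface_hub` is installed, and `/path/to/model` is a placeholder destination:

```python
from huggingface_hub import snapshot_download

# Download all files of the Yi-6B repository into a local directory
# (use "01-ai/Yi-34B" for the larger model).
snapshot_download(repo_id="01-ai/Yi-6B", local_dir="/path/to/model")
```

The resulting directory can then be passed to the demo script below via the `--model` argument.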
### 3. Examples

#### 3.1 Try out the base model

```bash
python demo/text_generation.py
```

To reuse the model downloaded in the previous step, provide the extra `--model` argument:

```bash
python demo/text_generation.py --model /path/to/model
```

Or if you'd like to get your hands dirty:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model across the available GPUs with an automatically chosen dtype.
model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B", device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", trust_remote_code=True)

# Tokenize the prompt and generate up to 256 new tokens.
inputs = tokenizer(
    "There's a place where time stands still. A place of breathtaking wonder, but also",
    return_tensors="pt",
)
outputs = model.generate(inputs.input_ids.cuda(), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
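To watch the output token by token instead of waiting for the full completion, the `TextStreamer` utility in `transformers` can be passed to `generate`. A minimal sketch, continuing from the snippet above (it reuses `model`, `tokenizer`, and `inputs`):

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated,
# skipping the echoed prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs.input_ids.cuda(), max_new_tokens=256, streamer=streamer)
```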
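If `Yi-34B` does not fit into your GPU memory at full precision, one common workaround (not covered in this README) is to load the weights with 4-bit quantization via `bitsandbytes`. A hedged sketch, assuming `bitsandbytes` and a recent `transformers` are installed; expect some quality loss compared to full precision:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the weights to 4 bits on load to cut GPU memory usage.
quant_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B",
    device_map="auto",
    quantization_config=quant_config,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", trust_remote_code=True)
```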