# mlabonne

𝕏 Follow me on X • 🤗 Hugging Face • 💻 Blog • 📙 LLM Engineer's Handbook


Hi, I'm a Machine Learning Scientist, Author, Blogger, and LLM Developer.

## 💼 Projects

* [**The LLM Course**](https://github.com/mlabonne/llm-course): A popular curated list of resources to get into LLMs (>48k ⭐).
* [**LLM Engineer's Handbook**](https://github.com/PacktPublishing/LLM-Engineers-Handbook): My book about LLM engineering, covering fine-tuning, RAG, data, evaluation, deployment, and more.
* [**Hands-on GNNs**](https://github.com/PacktPublishing/Hands-On-Graph-Neural-Networks-Using-Python): My book about designing and implementing graph neural networks in practice.
* [**LLM Datasets**](https://github.com/mlabonne/llm-datasets): Curated list of high-quality datasets for LLM fine-tuning.
* [**LLM Tools**](https://github.com/mlabonne/llm-course?tab=readme-ov-file#tools): Automate LLM pipelines with Colab notebooks like [LLM AutoEval](https://github.com/mlabonne/llm-autoeval), [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), [LazyAxolotl](https://colab.research.google.com/drive/1TsDKNo2riwVmU55gjuBgB1AXVtRRfRHW?usp=sharing), and [AutoQuant](https://colab.research.google.com/drive/1b6nqC7UZVt8bx4MksX7s656GXPM-eWw4?usp=sharing).

## 🤗 New Models

* [**Liquid Foundation Models**](https://www.liquid.ai/liquid-foundation-models): My work at Liquid AI is to post-train our own pre-trained LLMs with a custom architecture. [[Playground]](https://playground.liquid.ai/chat)
* [**Abliterated models**](https://huggingface.co/collections/mlabonne/abliteration-66bf9a0f9f88f7346cb9462f): Collection of abliterated models with refusals removed. [[Article]](https://huggingface.co/blog/mlabonne/abliteration)
* [**NeuralDaredevil-8B**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated): Uncensored model with the highest MMLU score in the 8B category.
## 💀 Old Models

* [**AlphaMonarch-7B**](https://huggingface.co/mlabonne/AlphaMonarch-7B): Top performer in reasoning and conversational ability across a variety of benchmarks. [[Demo]](https://huggingface.co/spaces/mlabonne/AlphaMonarch-7B-GGUF-Chat)
* [**NeuralBeagle14-7B**](https://huggingface.co/mlabonne/NeuralBeagle14-7B): The most powerful 7B model at the time (rank 10 on the *entire* Open LLM Leaderboard). [[Demo]](https://huggingface.co/spaces/mlabonne/NeuralBeagle14-7B-GGUF-Chat)
* [**Phixtral**](https://huggingface.co/mlabonne/phixtral-2x2_8): Novel Mixture of Experts architecture built from phi-2 models. [[Demo]](https://huggingface.co/spaces/mlabonne/phixtral-chat)
* [**Beyonder-4x7B-v3**](https://huggingface.co/mlabonne/Beyonder-4x7B-v3): Mixture of Experts combining four excellent fine-tuned Mistral-7B models. [[Demo]](https://huggingface.co/spaces/mlabonne/Beyonder-4x7B-v3-GGUF-Chat)
* [**NeuralHermes**](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B): A DPO fine-tuned version of OpenHermes (extremely cost-efficient). [[Demo]](https://huggingface.co/spaces/mlabonne/NeuralHermes-2.5-Mistral-7B-GGUF-Chat)