# FlashMLA

**Repository Path**: deep-spark/FlashMLA

## Basic Information

- **Project Name**: FlashMLA
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: iluvatar_flashmla
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 3
- **Forks**: 0
- **Created**: 2025-02-27
- **Last Updated**: 2025-07-07

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# FlashMLA on Iluvatar CoreX

This is an implementation of FlashMLA based on the Iluvatar CoreX Toolkit and Iluvatar CoreX chips.

## Quick start

### Install

```bash
bash clean_flashmla.sh
bash build_flashmla.sh
bash install_flashmla.sh
```

### Benchmark

```bash
python tests/test_flash_mla.py
```

### Usage

```python
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
```

A self-contained sketch that expands this snippet with concrete tensor shapes is given at the end of this README.

## Requirements

- Iluvatar CoreX GPUs
- Iluvatar CoreX Toolkit
- PyTorch 2.0 and above

## Acknowledgement

FlashMLA is inspired by the [FlashAttention 2&3](https://github.com/dao-AILab/flash-attention/) and [CUTLASS](https://github.com/nvidia/cutlass) projects.

## Citation

```bibtex
@misc{flashmla2025,
      title={FlashMLA: Efficient MLA decoding kernels},
      author={Jiashi Li},
      year={2025},
      publisher = {GitHub},
      howpublished = {\url{https://github.com/deepseek-ai/FlashMLA}},
}
```
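
## Extended usage example

The snippet below is a minimal, self-contained sketch of the Usage section above. The tensor layouts follow the upstream FlashMLA tests (query of shape `[batch, s_q, h_q, d]`, paged KV cache of shape `[num_blocks, block_size, h_kv, d]` with `d = 576` and `dv = 512`); the concrete sizes, dtypes, and the `"cuda"` device string are illustrative assumptions, not a documented interface of this repository, and may need adjusting for a given Iluvatar CoreX setup.

```python
# Hedged sketch: shapes and dtypes are assumptions based on the upstream
# FlashMLA tests (tests/test_flash_mla.py), not a specification of this port.
import torch
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

b, s_q, h_q, h_kv = 4, 1, 128, 1          # batch, query length, query/KV heads
d, dv = 576, 512                          # total head dim and value head dim (assumed)
block_size, max_seqlen = 64, 4096
num_blocks_per_seq = max_seqlen // block_size

device, dtype = "cuda", torch.bfloat16    # device string assumed for CoreX's torch plugin

# Per-sequence KV-cache lengths and the paged block table (both int32).
cache_seqlens = torch.full((b,), max_seqlen, dtype=torch.int32, device=device)
block_table = torch.arange(
    b * num_blocks_per_seq, dtype=torch.int32, device=device
).view(b, num_blocks_per_seq)

# Dummy query and paged KV cache for a single layer.
q = torch.randn(b, s_q, h_q, d, dtype=dtype, device=device)
kvcache = torch.randn(
    b * num_blocks_per_seq, block_size, h_kv, d, dtype=dtype, device=device
)

# Scheduler metadata is computed once and reused across layers.
tile_scheduler_metadata, num_splits = get_mla_metadata(
    cache_seqlens, s_q * h_q // h_kv, h_kv
)

o, lse = flash_mla_with_kvcache(
    q, kvcache, block_table, cache_seqlens, dv,
    tile_scheduler_metadata, num_splits, causal=True,
)
print(o.shape, lse.shape)  # per upstream tests: (b, s_q, h_q, dv) and (b, h_q, s_q)
```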