# P4AI: A P4-based Data Plane AI Framework

Programmable network devices, such as SmartNICs (e.g. AMD Alveo SmartNICs), DPUs (e.g. the AMD Pensando DPU), and programmable switches (e.g. Intel Tofino), are gaining popularity because they enable a faster pace of innovation and greater customisability for the end-user. Vendors have embraced this trend by supporting languages such as P4 (e.g. Pensando and Tofino are P4-programmable) and eBPF (e.g. eBPF offload to Netronome NFPs), which has inspired network engineers to test and implement many interesting new use-cases in the network data plane, such as traffic classification, in-network consensus, in-network DNS, and more.

Modern machine learning methods, especially deep neural networks (DNNs), have had a profound impact on many application domains, most notably computer vision and natural language processing. As expected, there is a long tail of applications that can benefit from modern DNNs, and networking applications are no exception. There is growing interest in bringing more intelligence to networking applications, and hence a strong demand to enable DNN inference in the network data plane, often referred to as Data Plane AI.
However, to enable Data Plane AI, the DNN inference accelerator must be able to handle the challenging high data-rate environment of the network data plane. Furthermore, programmable network devices have either limited or unsuitable compute resources for executing DNNs in the data plane (extraction of features and/or execution of the DNN model). Hence, end-users often fall back to executing DNN-based inference in the control plane, which is not suitable for low-latency per-packet operations at high line-rates.

The goal of this project is to showcase that Data Plane AI is possible using existing tools and technologies, which has led to the development of P4AI. P4AI is a framework for rapidly prototyping DNN-powered SmartNIC solutions using an automated code-generation flow that stitches various technologies together into a high-performance implementation on [AMD Alveo™ Adaptable Accelerator Cards](https://www.amd.com/en/products/accelerators/alveo.html). Figure 1 below captures the end-to-end P4AI framework flow.
Figure 1: The programmer flow enabled by the P4AI framework.
The programmer flow can be distilled into the following steps:

1. We recommend using [Brevitas](https://github.com/Xilinx/brevitas) to train Quantized Neural Networks (QNNs) and [FINN](https://github.com/Xilinx/finn) to compile these QNNs into high-performance FPGA implementations. Users can refer to the following [finn-examples tutorial](https://github.com/Xilinx/finn-examples/tree/feature/ddos-anomaly-detector/build/ddos-anomaly-detector-mlp) that showcases a Network Intrusion Detection System (NIDS) use-case designed using the Brevitas and FINN frameworks.
2. Write a `config.json` file that guides the P4AI compiler towards generating the desired hardware implementation. The JSON configuration documentation can be found [here](configs/README.md) and an example `config.json` is provided [here](configs/smartnic-ddos-detector.json).
3. Provide the `config.json` and the path to the FINN model as inputs to the P4AI compiler, which will generate two P4 template files, RTL (SystemVerilog) modules, and various scripts that are needed in later stages. The command to invoke the P4AI compiler is detailed in the subsection ["Generating OpenNIC Plugin"](#generating-opennic-plugin) below.
4. The programmer is expected to populate the two generated P4 template files with the desired packet-processing logic, including DNN feature extraction and DNN-powered decision-making. A guide is provided [here](P4Guide.md).
5. The generated output directory can now be passed as a standalone plugin for compilation using the AMD [OpenNIC Project](https://github.com/Xilinx/open-nic) toolflow. The P4 programs will be compiled to FPGA implementations using the AMD [Vitis Networking P4](https://www.xilinx.com/products/intellectual-property/ef-di-vitisnetp4.html) design environment.
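To make step 2 concrete, a `config.json` might look roughly like the fragment below. Note that every key name here is an illustrative assumption, not the actual P4AI schema; the authoritative documentation is in [configs/README.md](configs/README.md) and the shipped [example config](configs/smartnic-ddos-detector.json).

```json
{
  "_note": "Hypothetical sketch only -- the real schema is documented in configs/README.md",
  "plugin_name": "my-dnn-plugin",
  "features": ["ip.protocol", "tcp.src_port", "tcp.dst_port", "ip.total_len"],
  "input_bitwidth": 32,
  "output_bitwidth": 8
}
```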
The scripts generated in step 3 above encapsulate all of the required steps during compilation, and the programmer can simply follow the [recommended build flow](https://github.com/Xilinx/open-nic-shell?tab=readme-ov-file#how-to-build) of the OpenNIC Shell. The output bitstream can then be used to program the FPGA into a SmartNIC card that can be interfaced using the [OpenNIC driver](https://github.com/Xilinx/open-nic-driver) via PCIe. The system architecture of the generated OpenNIC plugin is shown in Figure 2 below.
Figure 2: FPGA system architecture of the generated P4AI OpenNIC plugin. (click to enlarge)
## Getting Started

### Python/Poetry Setup

The P4AI compiler is written in Python 3. To ensure portability and reproducibility, [poetry](https://python-poetry.org/) is used to package this project. Please [install](https://python-poetry.org/docs/#installation) `poetry` on your machine to get started. One way to install poetry is to run:

```sh
curl -sSL https://install.python-poetry.org | POETRY_HOME=<installation-path> python3 -
```

Note: Replace `<installation-path>` with your desired installation path.

Once you have installed `poetry`, you can install this package, including any third-party dependencies, by running `poetry install` on your local machine.

### Generating OpenNIC Plugin

As described in step 3 above, the command to invoke the P4AI compiler is as follows:

```sh
poetry run gen-onic-plugin [--no-p4] \
    --out-dir <out-dir> \
    --path-to-finn <finn-build-dir> \
    --config <config.json>
```

- `--no-p4`: Optional boolean flag. Set this flag to disable the generation of templated P4 programs, which prevents overwriting existing P4 program files in the target output directories.
- `<out-dir>`: Path to an output directory inside which all the templated P4, RTL, and script files will be generated.
- `<finn-build-dir>`: Path to the output build directory produced by a successful FINN build of your QNN model.
- `<config.json>`: Path to the `config.json` file that provides the configuration parameters required by the P4AI compiler. See the [config.json documentation](configs/README.md) and the [example config.json](configs/smartnic-ddos-detector.json) for more information.

### Examples / Tutorials

A complete reference design, a [SmartNIC DDoS Detector](examples/smartnic-ddos-detector), is included. This directory contains everything needed to build an OpenNIC plugin (except the FINN model artifacts) as outlined in step 5. The design has been compiled and validated on an Alveo U55C card, and it can process up to 250M packets/second with every packet traversing the inference pipeline.
The quantized neural network (2-bit weights, 2-bit activations) was trained on a small subset of the [CIC-IDS2017](https://www.unb.ca/cic/datasets/ids-2017.html) dataset. It achieves an F1 score of around 86% on a held-out test set. A Jupyter notebook tutorial series that details how a per-packet inference QNN model can be trained and compiled for FPGA implementation using Brevitas and FINN, respectively, can be found [here](https://github.com/Xilinx/finn-examples/tree/feature/ddos-anomaly-detector/build/ddos-anomaly-detector-mlp). All recipes, models, and data are open-sourced for reproducibility.

To build the SmartNIC DDoS Detector example, first train and compile a model with FINN as described in the tutorial above. Feature selection in the tutorial differs slightly from that in the provided example design. Ensure that you either:

- Train the QNN using the feature set defined in [configs/smartnic-ddos-detector.json](configs/smartnic-ddos-detector.json), or
- Update/create your own config file to match the features used to train your model. Note that if you choose this approach, you will also have to modify the [P4 programs](examples/smartnic-ddos-detector/p4) to match the feature extraction logic.

Finally, generate the RTL files by running the following command from the project root directory:

```sh
poetry run gen-onic-plugin \
    --out-dir examples/smartnic-ddos-detector \
    --path-to-finn <finn-build-dir> \
    --config configs/smartnic-ddos-detector.json \
    --no-p4
```

## Citation

Details about the preprocessed dataset, modeling, and implementation can be found in [our demo paper](https://dl.acm.org/doi/abs/10.1145/3630047.3630191).
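The reference model uses 2-bit weights and activations. As a rough illustration of what that precision means, and not the actual Brevitas quantization code path, a symmetric 2-bit quantizer maps each real value onto one of four representable levels:

```python
import numpy as np

def quantize_2bit(x, scale):
    """Symmetric 2-bit quantization: round x/scale to an integer in [-2, 1].

    Generic illustration of 2-bit quantization, not the exact scheme
    Brevitas applies when training this model.
    """
    q = np.clip(np.round(x / scale), -2, 1)  # only 4 representable levels
    return q * scale  # dequantized value

x = np.array([-1.3, -0.4, 0.05, 0.9])
print(quantize_2bit(x, scale=0.5))  # -> [-1.  -0.5  0.   0.5]
```

Squeezing the model into such low-precision levels is what lets FINN unroll it into a compact, fully pipelined FPGA datapath that keeps up with per-packet line rates.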
You can cite our work using the following bibtex snippet:

```
@inproceedings{siddhartha2023enabling,
  title={Enabling DNN Inference in the Network Data Plane},
  author={Siddhartha and Tan, Justin and Bansal, Rajesh and Chee Cheun, Huang and Tokusashi, Yuta and Yew Kwan, Chong and Javaid, Haris and Baldi, Mario},
  booktitle={Proceedings of the 6th on European P4 Workshop},
  pages={65--68},
  year={2023}
}
```

If you find this project helpful, please consider giving it a star! Your support is greatly appreciated.

## Maintainers / Contributors

Current maintainers:

- Sid ([email](mailto:sid.s@amd.com))

Contributors:

- Sid ([email](mailto:sid.s@amd.com), [github](https://github.com/sids-amd))
- Justin Tan (intern Jul-Dec 2023)
- Yan Ming Pang (intern Jan-Jul 2024)

We welcome contributions to P4AI. If you would like to contribute, please read [CONTRIBUTING.md](CONTRIBUTING.md), and do not hesitate to reach out to any of our maintainers if you have any questions.

---

Copyright © 2021-2025 Advanced Micro Devices, Inc.