- Group number: AOHW25_1026
- Project name: Speak, Friend, and Enter: A NPU Backend for Neuromorphic Computing
- Participant: Palladino Vittorio
- Supervisor: Prof. Davide Conficconi
This project implements a backend for neuromorphic (spiking) workloads targeting the AMD Ryzen™ AI NPU (and optionally GPU) using MLIR / AIE tools and the SNNtorch framework.
- Understand the MLIR flow for the AIE/NPU.
- Define and implement basic neuromorphic primitives (spiking neurons, synaptic updates, encoders/decoders).
- Integrate SNNtorch with a backend that compiles to Ryzen AI NPU (and supports heterogeneous GPU+NPU runs).
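As an illustration of the first primitive, a leaky integrate-and-fire (LIF) neuron can be sketched in a few lines of plain Python. This is the generic textbook formulation (with illustrative `beta` and `threshold` values and a soft reset), not the exact kernel shipped in this repository:

```python
# Minimal leaky integrate-and-fire (LIF) neuron step, textbook-style:
#   U[t+1] = beta * U[t] + I[t]   (leaky membrane integration)
#   a spike fires when U crosses the threshold, then the membrane is reset.
def lif_step(u, i_in, beta=0.9, threshold=1.0):
    """One timestep: returns (new membrane potential, spike flag)."""
    u = beta * u + i_in          # leak + integrate input current
    if u >= threshold:           # fire when threshold is crossed
        return u - threshold, 1  # soft reset (subtract threshold)
    return u, 0

# Drive the neuron with a constant input current and collect spikes.
u, spikes = 0.0, []
for _ in range(10):
    u, s = lif_step(u, 0.4)
    spikes.append(s)
print(spikes)  # periodic spiking once the membrane charges up
```

A vectorized version of this update rule is what the AIE kernels in this project compute per neuron and per timestep.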
- SNNtorch — PyTorch-based SNN framework. https://snntorch.readthedocs.io/
- AIE MLIR IRON API — Python wrapper to describe and compile AIE arrays.
- Mini-PC with AMD Ryzen™ 9 7949HS (example target hardware).
- A Linux machine with NPU support (BIOS updated to enable the NPU).
- Python 3.10+ (or compatible with SNNtorch and your MLIR tools).
- `git`, `make`, and a working C++ toolchain (g++/clang) for the testbench.
See: Getting Started for AMD Ryzen™ AI on Linux: https://github.com/Xilinx/mlir-aie#getting-started-for-amd-ryzen-ai-on-linux
Below are the commands needed to install the environment, taken from the mlir-aie repository linked above.
Make sure your laptop or mini-PC has the latest BIOS that enables the NPU.
If starting from Ubuntu 24.04, you may need to update the Linux kernel to 6.11+ by installing the Hardware Enablement (HWE) stack:

```bash
sudo apt update
sudo apt install --install-recommends linux-generic-hwe-24.04
sudo reboot
```
Turn off Secure Boot (allows unsigned drivers to be installed):
BIOS → Security → Secure Boot → Disable
- Execute the scripted build process. The script installs package dependencies, builds the xdna-driver and xrt packages, and installs them; these steps require `sudo` access:

  ```bash
  bash ./utils/build_drivers.sh
  ```
- Reboot as directed after the script exits:

  ```bash
  sudo reboot
  ```
- Check that the NPU is working by verifying the device appears with `xrt-smi`:

  ```bash
  source /opt/xilinx/xrt/setup.sh
  xrt-smi examine
  ```

  At the bottom of the output you should see:

  ```
  Devices present
  BDF             :  Name
  ------------------------------------
  [0000:66:00.1]  :  NPU Strix
  ```
- Install the following packages needed for MLIR-AIE:

  ```bash
  # Python versions 3.10, 3.12 and 3.13 are currently supported by our wheels
  sudo apt install \
      build-essential clang clang-14 lld lld-14 cmake ninja-build python3-venv python3-pip
  ```
- Clone the mlir-aie repository:

  ```bash
  git clone https://github.com/Xilinx/mlir-aie.git
  cd mlir-aie
  git checkout bd3b0c899ce536e66efa37718cc0f9d2a77d10e
  ```
- Set up a virtual environment:

  ```bash
  python3 -m venv ironenv
  source ironenv/bin/activate
  python3 -m pip install --upgrade pip
  ```
- Install the IRON library, mlir-aie and llvm-aie compilers from wheels, plus dependencies.

  For release v1.0:

  ```bash
  # Install IRON library and mlir-aie from a wheel
  python3 -m pip install mlir_aie -f https://github.com/Xilinx/mlir-aie/releases/expanded_assets/v1.0

  # Install Peano from a llvm-aie wheel
  python3 -m pip install https://github.com/Xilinx/llvm-aie/releases/download/nightly/llvm_aie-19.0.0.2025041501+b2a279c1-py3-none-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl

  # Install basic Python requirements (still needed for release v1.0,
  # but no longer needed for the latest wheels)
  python3 -m pip install -r python/requirements.txt

  # Install MLIR Python Extras
  HOST_MLIR_PYTHON_PACKAGE_PREFIX=aie python3 -m pip install -r python/requirements_extras.txt
  ```

  For the daily latest:

  ```bash
  # Install IRON library and mlir-aie from a wheel
  python3 -m pip install mlir_aie -f https://github.com/Xilinx/mlir-aie/releases/expanded_assets/latest-wheels-2

  # Install Peano from a llvm-aie wheel
  python3 -m pip install llvm-aie -f https://github.com/Xilinx/llvm-aie/releases/expanded_assets/nightly

  # Install MLIR Python Extras
  HOST_MLIR_PYTHON_PACKAGE_PREFIX=aie python3 -m pip install -r python/requirements_extras.txt
  ```
- Set up the environment:

  ```bash
  source utils/env_setup.sh
  ```
- Go inside the delivery folder of the OpenHW project:

  ```bash
  cd OpenHW_deliver/
  ```

  From the example folder (the one containing the Makefile):

  ```bash
  # compile the design
  make

  # run the C++ testbench
  make run
  ```
A documented Jupyter notebook is included in the repository. It uses a Python wrapper (a small library) to call the NPU kernel directly from the notebook.
Open the provided notebook and follow the cells to build, compile, and run examples using the MLIR-AIE flow.
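When checking the NPU kernel results from the notebook, it helps to compare them against a golden model computed on the CPU. Below is a minimal pure-Python reference for one dense LIF layer (input spikes → weighted currents → per-neuron leaky integration and thresholding); the layer sizes, `beta`/`threshold` values, and the soft-reset choice are illustrative assumptions, not the project's exact kernel:

```python
# Golden-model reference of one dense LIF layer, for comparing against
# accelerated (NPU) kernel outputs. Pure Python, no dependencies.
def dense_lif_layer(weights, spike_train, beta=0.9, threshold=1.0):
    """weights: n_out x n_in matrix (list of lists);
    spike_train: list of timesteps, each a list of n_in binary spikes.
    Returns the output spike train (list of timesteps, n_out spikes each)."""
    n_out = len(weights)
    u = [0.0] * n_out                        # membrane potentials
    out = []
    for spikes_in in spike_train:
        step = []
        for j in range(n_out):
            # Synaptic current: weighted sum of incoming spikes.
            i_in = sum(w * s for w, s in zip(weights[j], spikes_in))
            u[j] = beta * u[j] + i_in        # leak + integrate
            if u[j] >= threshold:            # fire and soft-reset
                u[j] -= threshold
                step.append(1)
            else:
                step.append(0)
        out.append(step)
    return out

# Tiny example: 2 output neurons, 3 inputs, 4 timesteps of input spikes.
W = [[0.5, 0.0, 0.6],
     [0.2, 0.3, 0.1]]
spikes_in = [[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]]
out = dense_lif_layer(W, spikes_in)
```

In a notebook cell, the same input spike train can be fed to both this reference and the accelerated kernel, and the two output trains compared element by element.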
```
/ (repo root)
├─ OpenHWDelivery
│  ├─ Makefile
│  ├─ denselayer.py              # design for the feedforward network
│  ├─ singlecore.py              # design for the single AIE core
│  ├─ multicore.py               # design for the multi AIE core
│  ├─ lif_kernel_denselayer.cc   # kernel implementation of the feedforward neural network layer
│  ├─ lif_kernel_singlecore.cc   # implementation of the single-core kernel (vectorized and scalar)
│  ├─ lif_kernel_multicore.cc    # implementation of the multi-core kernel (vectorized and scalar)
│  ├─ test.cpp                   # C++ testbench
│  ├─ ...
│  │    The following files are taken from the mlir-aie repo; their only role is to
│  │    correctly set up the utilities and libraries of the AIE cores.
│  ├─ cxxopts.hpp                # command-line option parsing (header-only library)
│  ├─ test_utils.cpp
│  ├─ test_utils.h
│  ├─ makefile-common
│  └─ ...
└─ README.md
```
- Ensure the BIOS and kernel drivers expose the NPU on your platform.
- Match PyTorch / SNNtorch versions with your Python version.
- If `make` fails, inspect the Makefile and the required paths in `mlir-aie`; dependencies or environment variables may be missing.
- Student: Palladino Vittorio — [email protected]
- Supervisors:
  - Conficconi Davide — [email protected]
  - Sorrentino Giuseppe — [email protected]