NXP has submitted a new open-source Linux kernel driver for their Neutron neural processing unit, designed for edge AI acceleration on select i.MX95 SoCs.
The Linux kernel continues to see more open-source drivers emerge to support various AI accelerators / NPUs. The newest open-source driver breaking cover today comes from NXP and enables their Neutron neural processing unit.
The NXP Neutron NPU is designed to accelerate machine learning for edge AI applications. This Neutron NPU is found in select NXP SoCs such as the i.MX95. The Neutron NPU is made up of a RISC-V core running proprietary firmware, one or more Neutron cores, dedicated fast memory, and a DMA engine for handling data transfers.
Accompanying this proposed Neutron accel driver are an open-source user-space library and a custom LiteRT delegate for running LiteRT (formerly TensorFlow Lite) workloads on the NPU of capable NXP SoCs.
Those interested in this open-source NXP Neutron NPU driver, now undergoing code review for possible inclusion in a future version of the Linux kernel, can see today's patch series.


The Neutron NPU represents NXP's latest push into edge AI acceleration, targeting applications where low power consumption and real-time inference are critical. The architecture combines a RISC-V control core with specialized neural processing cores, allowing for efficient execution of common machine learning workloads without the need for a full GPU or a discrete AI accelerator card.
Key specifications of the Neutron NPU include:
- RISC-V core for control and coordination
- Multiple Neutron processing cores for parallel computation
- Dedicated high-speed memory for AI workloads
- DMA engine for efficient data movement
- Support for LiteRT/TensorFlow Lite models
This driver submission follows a growing trend of hardware vendors contributing open-source support for AI accelerators to the mainline Linux kernel. Similar efforts have been seen from companies like Google, Intel, and Qualcomm, all working to improve Linux's AI/ML capabilities.
The inclusion of a user-space library and LiteRT delegate alongside the kernel driver demonstrates NXP's commitment to providing a complete software stack. This approach allows developers to target the Neutron NPU using familiar tools and frameworks, potentially reducing the learning curve and accelerating adoption.
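To make the delegate approach concrete, here is a toy Python sketch of the general pattern a LiteRT delegate follows: the delegate claims the subset of graph operations it can accelerate, and the interpreter falls back to the CPU for everything else. All names here (`NeutronDelegate`, the op names) are illustrative stand-ins, not NXP's actual API.

```python
class NeutronDelegate:
    """Stand-in for an NPU delegate: declares which ops it can run.
    The real delegate would compile supported subgraphs for the NPU."""
    SUPPORTED = {"conv2d", "matmul"}

    def can_run(self, op):
        return op in self.SUPPORTED

    def run(self, op):
        # Pretend to execute the op on the NPU.
        return f"{op}@npu"


class Interpreter:
    """Minimal interpreter: dispatches each op to the first delegate
    that claims it, otherwise executes it on the CPU."""
    def __init__(self, graph, delegates=()):
        self.graph = graph
        self.delegates = list(delegates)

    def invoke(self):
        placement = []
        for op in self.graph:
            target = next((d for d in self.delegates if d.can_run(op)), None)
            placement.append(target.run(op) if target else f"{op}@cpu")
        return placement


graph = ["conv2d", "relu", "matmul", "softmax"]
print(Interpreter(graph, [NeutronDelegate()]).invoke())
# → ['conv2d@npu', 'relu@cpu', 'matmul@npu', 'softmax@cpu']
```

The key design point is the partitioning: compute-heavy layers land on the accelerator while unsupported ops stay on the CPU, which is why a model can run end-to-end even when the NPU only covers part of the operator set.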
For developers working on edge AI applications, this driver could provide a significant performance boost for compatible NXP hardware. The ability to offload neural network inference to dedicated hardware can reduce power consumption and improve response times compared to running the same workloads on a general-purpose CPU.
As the driver undergoes code review, the Linux community will evaluate its design, implementation quality, and integration with existing kernel subsystems. If accepted, it would join other AI accelerator drivers in providing Linux users with more hardware acceleration options for machine learning workloads.
The patch series is currently available for review, allowing interested parties to examine the implementation details and provide feedback before any potential inclusion in a future kernel release.
