wiki:catalina/npu

Catalina i.MX95 NPU

The GW9200 and other i.MX95 Catalina SBCs include the following built-in NPU for machine learning (ML):

  • eIQ® (Edge Intelligence) Neutron N3-1024S NPU (Neural Processing Unit)
    • Approximately 2 TOPS
    • Designed for high-efficiency, low-latency inference
    • Software stack from NXP (use the NXP Yocto software image)
    • Benchmark results can be 2-3x those of the i.MX8M Plus NPU, even at a similar TOPS rating
    • Support for modern AI/ML workloads
      • CNN, MLP, RNN, LSTM, TCN, and more
    • Award-winning eIQ Development Environment
      • Enables TensorFlow, PyTorch, Caffe, ONNX, etc. (see the inference sketch after this list)

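On the NXP Yocto image, the eIQ stack exposes the Neutron NPU to TensorFlow Lite through a delegate. The following is a minimal sketch, assuming a tflite_runtime Python environment, a quantized model file, and a delegate library at /usr/lib/libneutron_delegate.so; the delegate name, path, and model are assumptions and vary by BSP release, so check NXP's i.MX Machine Learning User's Guide for the exact values on your image.

{{{#!python
# Minimal sketch: run a quantized TFLite model on the NPU via a TFLite delegate.
# The delegate path and model file below are placeholders/assumptions.
import numpy as np
import tflite_runtime.interpreter as tflite

DELEGATE_PATH = "/usr/lib/libneutron_delegate.so"   # assumed delegate library path
MODEL_PATH = "mobilenet_v1_1.0_224_quant.tflite"    # example quantized model

# Try to load the NPU delegate; fall back to CPU inference if it is unavailable.
try:
    delegates = [tflite.load_delegate(DELEGATE_PATH)]
except (OSError, ValueError):
    delegates = []  # CPU fallback

interpreter = tflite.Interpreter(model_path=MODEL_PATH,
                                 experimental_delegates=delegates)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input tensor just to exercise the inference path.
interpreter.set_tensor(inp["index"],
                       np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print("output shape:", interpreter.get_tensor(out["index"]).shape)
}}}

If the delegate fails to load, the same script still runs on the CPU, which makes it a simple way to compare NPU vs. CPU inference latency on the board.
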
For more demanding NPU / TPU workloads, consider adding a Mini-PCIe or M.2 accelerator card to one of the slots on the GW9200.

Cards:

NXP provides extensive resources for learning more about the NPU and how to use it; see the links below.

See Also
