Arteris IP created the new technologies in FlexNoC 4 AI based on lessons learned from working with some of the world’s leading AI and deep neural network (DNN) SoC design teams. Arteris IP customers developing AI chips include autonomous driving pioneer Mobileye, which recently licensed Arteris IP FlexNoC and Ncore interconnect IP for its next-generation EyeQ systems, as well as Movidius, Cambricon, Intellifusion, Enflame, Iluvatar CoreX, Canaan Creative, and four other companies that have not been publicly announced.
New capabilities in FlexNoC 4 and the new AI Package include:
- Automated topology generation for mesh, ring and torus networks – FlexNoC 4 AI enables SoC architects not only to generate AI topologies automatically but also to edit the generated topologies and optimize each individual network router, if desired (a simple mesh-generation sketch appears after this list).
- Multicast – FlexNoC 4 AI intelligent multicast optimizes the use of on-chip and off-chip bandwidth by replicating data as close to the network targets as possible, enabling more efficient updates of DNN weights, image maps and other multicast data (see the multicast sketch after this list).
- Source synchronous communications – Helps avoid clock tree synthesis, physical placement, and timing closure problems when spanning long distances on AI chips, which can be larger than 400 mm².
- VC-Link™ virtual channels – Allows sharing of long physical links in congested areas of the die while maintaining quality-of-service (QoS).
- HBM2 and multichannel memory support – Integrates with HBM2 and other multichannel memory controllers using 8- or 16-channel interleaving (see the address-interleaving sketch after this list).
- Up to 2048-bit wide data support – Includes non-power-of-two data widths and integrated rate adaptation.
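
To make the automated topology generation concrete, the sketch below builds the router connectivity of a simple 2D mesh, the kind of regular structure FlexNoC 4 AI can generate before an architect hand-tunes individual routers. This is an illustrative toy, not Arteris tooling; the `Router` class and naming scheme are hypothetical.

```python
# Illustrative sketch only: a toy 2D-mesh router connectivity generator,
# not Arteris tooling. The Router class and router names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Router:
    name: str
    links: list = field(default_factory=list)  # names of neighboring routers

def generate_mesh(rows: int, cols: int) -> dict:
    """Place a router at each grid point and wire it to its
    north/south/east/west neighbors (the classic mesh topology)."""
    routers = {(r, c): Router(f"R{r}_{c}") for r in range(rows) for c in range(cols)}
    for (r, c), router in routers.items():
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor = routers.get((r + dr, c + dc))
            if neighbor:
                router.links.append(neighbor.name)
    return routers

# Example: a 4x4 mesh; corner routers get 2 links, edge routers 3, interior 4.
mesh = generate_mesh(4, 4)
print(mesh[(0, 0)].links)  # ['R1_0', 'R0_1']
```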
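
The multicast sketch below illustrates the bandwidth argument: replicating a packet at the branch router nearest the targets traverses far fewer links than sending a separate copy from the source to each target. The topology, hop counts and target names are made up for illustration and do not describe the FlexNoC implementation.

```python
# Illustrative sketch only, not the FlexNoC implementation: compares link
# traversals when a source sends one copy per target (unicast) versus
# forwarding a single copy along the shared path and replicating it at the
# branch router nearest the targets (multicast). All numbers are assumptions.
shared_hops = 6                               # hops from source to the branch router
targets = {"npu0": 1, "npu1": 1, "npu2": 2}   # hops from branch router to each target

unicast_traversals = sum(shared_hops + h for h in targets.values())
multicast_traversals = shared_hops + sum(targets.values())

print(f"unicast:   {unicast_traversals} link traversals")   # 22
print(f"multicast: {multicast_traversals} link traversals")  # 10
```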
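
Finally, the address-interleaving sketch below shows the general idea behind multichannel interleaving: consecutive blocks of the address space are striped round-robin across the memory channels so that a large tensor access spreads over all channels in parallel. The 256-byte granule and the `channel_for_address` helper are assumptions for illustration, not HBM2 controller or FlexNoC specifics.

```python
# Illustrative sketch only: striping addresses across HBM2-style channels.
# The 256-byte granule and function name are assumptions, not product details.
def channel_for_address(addr: int, num_channels: int = 8, granule: int = 256) -> int:
    """Map a byte address to a channel by striding granule-sized blocks
    round-robin across num_channels."""
    return (addr // granule) % num_channels

# Consecutive 256-byte blocks land on successive channels, so a long burst
# is served by all 8 (or 16) channels concurrently.
for addr in range(0, 8 * 256, 256):
    print(hex(addr), "-> channel", channel_for_address(addr))
```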