
The Technology Behind Intel’s Neural Processing Units and Their Impact on AI

Published January 10, 2025

Michael Langan of Intel spoke about developments within the company's neural processing unit (NPU) team and the evolution of its architecture. With the rapid growth of AI technology, interest in this field is at an all-time high. Conversations around AI often revolve around new large language models, funding for AI start-ups, or emerging applications.

However, beneath the visible excitement around AI lies a crucial layer of hardware needed to support these advances. This is where NPUs come into play. Also referred to as AI accelerators, these specialized processors are designed to optimize and accelerate the computations required by AI models, whose architectures are loosely inspired by how the human brain processes information.

Speaking at the annual Midas conference in November 2024, Langan shared insights into Intel's NPU IP team, which he leads. Langan, who has spent 14 years at Intel, highlighted that the work done within the NPU team is essential for a range of client devices, including laptops and desktops. He noted that the NPU market generates about $30 billion in annual revenue for Intel, with AI being a critical component of this landscape.

Intel's NPU Development in Ireland

The global NPU IP team at Intel consists of around 500 members, with roots in the Irish start-up Movidius, which Intel acquired in 2016. Langan explained that the surge in interest in neural processing began in 2012, particularly with the rise of convolutional neural networks, which became popular for tasks such as image recognition.

A significant turning point came in 2017 when Google published a paper titled 'Attention Is All You Need', introducing the transformer architecture. Langan remarked that this development transformed the field overnight, forming the basis for technologies like ChatGPT and other large language models. Intel's design strategies have since focused on accelerating workloads driven by these architectures.

At Intel, the team encompasses a wide range of roles and expertise, from traditional hardware design in Verilog RTL to extensive software and compiler development. Langan emphasized the importance of optimized AI compilers, with a substantial portion of the team based in Ireland. Although the Irish contingent makes up only a small share of the wider organization, its contributions are significant to the overall Intel mission.

Challenges in NPU Development

The accelerating pace of advancements in NPU technology presents an ongoing challenge. As major companies like Microsoft, Dell, and HP release new applications, the Intel team often finds itself adapting to the evolving market demands. Langan shared that previously, the trend was for Intel’s teams to proactively pitch new features; now, it is customers who approach them with requests for application-specific functionality.

Additionally, the current shortage of specialized talent for NPU development poses another hurdle. Langan highlighted the continual search for individuals with expertise in deep learning hardware, software, and AI compilers. To address this challenge, Intel established internship programs with local universities more than a decade ago, which have successfully built a pipeline of talented candidates. Langan believes Ireland possesses an abundance of skilled talent that is recognized globally.

Looking to future developments, Langan acknowledged a keen interest in what architectural advances will follow transformer models. He pointed to architectures such as Mamba and Hymba, which are being positioned as potential successors to transformers and which aim to enhance training efficiency, reduce power consumption, and boost performance. The team remains attentive to these developments to ensure that Intel's hardware can adapt appropriately.
