Qualcomm's six-year-old AI100 accelerators have scored their first major deployment with Saudi Arabia's Humain, which has taken delivery of 1,024 systems despite the chip's aging architecture and limited memory capacity.
Qualcomm has finally secured its first major deployment of the AI100 accelerator, with Saudi Arabia's newly formed Humain outfit taking delivery of 1,024 systems. The announcement, made by Humain's CEO, marks a milestone for Qualcomm's AI ambitions, though it comes with an important caveat: the AI100 debuted in 2019 and is showing its age in today's rapidly evolving AI landscape.

(Image credit: Humain/Qualcomm)
The AI100's journey to this deployment has been a long one. First unveiled in 2019, the chip was designed primarily for power-efficient inference. It became available as a drop-in card in mid-2023, but its architecture is now roughly six years old, a vintage that presents several challenges in the current AI accelerator market.
One of the AI100's most significant limitations is memory capacity. The Ultra variant offers only 128 GB, which restricts the size of the models it can effectively run. According to reports, the chip tops out at models of around 32 billion parameters. In 2026 terms that is modest: contemporary reasoning models can be tens of times that size, and frontier large language models such as GPT-4 are estimated to exceed a trillion parameters.
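A back-of-the-envelope calculation shows why 128 GB translates into a cap of roughly this size. The figures below are an illustrative sketch, not Qualcomm's published sizing: a dense model's weights alone need about (parameters × bytes per parameter) of memory, before any KV cache or activations are counted.

```python
# Rough sizing sketch (illustrative assumption, not vendor-published data):
# memory needed just to hold a dense model's weights, in GB.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint: N billion params * bytes each = N*bytes GB."""
    return params_billion * bytes_per_param

for params in (7, 32, 70):
    fp16 = weight_memory_gb(params, 2)  # 16-bit weights
    int8 = weight_memory_gb(params, 1)  # 8-bit quantized weights
    print(f"{params:>3}B params: ~{fp16:.0f} GB (FP16), ~{int8:.0f} GB (INT8)")
```

At FP16, a 32-billion-parameter model needs around 64 GB for weights alone, leaving the rest of a 128 GB card for the KV cache, activations, and runtime overhead, which is consistent with the reported ceiling.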
Despite these limitations, Humain's decision to deploy 1,024 AI100 systems suggests there are still use cases where the chip's capabilities are sufficient. The Saudi outfit announced partnerships with Nvidia, AMD, and Qualcomm in May 2025, indicating a multi-vendor approach to building out its AI infrastructure. Alongside the Qualcomm deal, Humain earmarked 18,000 of Nvidia's GB300 Grace Blackwell accelerators and 500 MW worth of compute capacity from AMD.
The timing of this deployment is particularly interesting given the current state of the AI hardware market. Latest-generation AI chips are in extremely short supply, with backorders stretching for months or even years. Major players like OpenAI and Oracle are consuming vast quantities of available silicon, making it difficult for newer entrants to secure cutting-edge hardware. This supply constraint may explain why Humain opted for Qualcomm's older but readily available AI100 systems.
Humain's first announced AI datacenter customer is Adobe, which suggests the Qualcomm AI100 accelerators may be well-suited for certain types of workloads. Basic image-fill and generation tasks, which don't require the massive parameter counts of cutting-edge reasoning models, could be ideal candidates for the AI100's capabilities. This would allow Humain to offer a range of services at different price points and performance levels.
For Qualcomm, this deployment represents both a validation of its AI strategy and a reminder of how quickly the AI hardware landscape evolves. The company has already announced its next-generation AI200 chip for late 2026 and the AI250 for 2027. These newer chips will need to address the AI100's limitations while competing in an increasingly crowded market.
The AI accelerator market has come to be dominated by Nvidia in recent years, with AMD making steady inroads. Qualcomm's entry into this space with the AI100 was ambitious, but the six-year gap between announcement and major deployment highlights the challenges of competing in this fast-moving industry. The company's ability to secure the Humain deal, despite the chip's age, demonstrates that there remains demand for diverse AI hardware, particularly when supply constraints affect the latest offerings.
This deployment also raises questions about the lifecycle of AI hardware and how quickly cutting-edge technology becomes legacy. In traditional computing, a six-year-old chip might still be considered relatively modern. In the AI space, where model sizes and computational requirements are growing exponentially, six years represents multiple generations of advancement.
For Humain, the decision to deploy 1,024 AI100 systems represents a pragmatic approach to building out AI infrastructure. By combining older but capable hardware with the latest offerings from Nvidia and AMD, the company can offer a range of services while managing costs and navigating supply constraints. This strategy may prove particularly valuable as the AI market continues to evolve and new use cases emerge that don't necessarily require the most advanced hardware.
As Qualcomm looks to the future with its AI200 and AI250 chips, the success of the AI100 deployment with Humain will likely inform the company's strategy. The challenge will be to deliver chips that not only match the capabilities of competitors but also address the specific needs of emerging AI applications and the practical constraints of datacenter deployment.
The AI hardware market continues to evolve rapidly, with new players entering and existing ones expanding their offerings. Qualcomm's AI100 deployment with Humain represents an interesting case study in how older technology can still find relevance in a market obsessed with the latest and greatest. As AI applications continue to diversify, there may be increasing demand for a range of hardware options, from cutting-edge to proven and reliable.
