Qualcomm said the AI200 and AI250 chips were optimized for AI inference — the process of running already-trained AI models to generate responses — rather than the training phase that requires more computing power and is dominated by Nvidia’s powerful GPUs.
Shares in Qualcomm were up as much as 20 percent in Monday trading on Wall Street.
The San Diego-based company is best known for smartphone and personal computer processors and is venturing into new territory with the release.
Qualcomm said the AI200, slated for commercial release in 2026, offers 768 gigabytes of memory per card and uses direct liquid cooling to manage heat in densely packed server racks that each consume up to 160 kilowatts.
The company said the AI250, expected in 2027, would offer ten times the effective memory bandwidth of current products on the market while consuming less power.

Nvidia, the poster child for the generative AI boom, currently dominates the AI chip market, driven by its H100 and newer H200 GPU accelerators used to train and run AI models in massive purpose-built data centers.

AMD, the second-largest player, has been gaining ground, positioning itself as a challenger to Nvidia’s dominance.
Both companies have recently entered into arrangements with OpenAI, the creator of ChatGPT, as the industry seeks to ramp up the massive infrastructure needed to support the AI frenzy.
This has raised fears that Wall Street could be in the throes of an AI stock bubble reminiscent of the internet boom and crash of the late 1990s and early 2000s.
Still, companies are largely rewarded for their connection to the AI bonanza. AMD’s shares rose 35 percent on the day it announced its deal with OpenAI, the world’s most valuable private company at a $500 billion valuation.