DeepSeek, a Chinese startup, is making waves in the U.S.-led AI ecosystem, denting Nvidia’s market capitalization and opening new opportunities for smaller AI firms. Many in the industry see the emergence of DeepSeek’s open-source model as a “massive” opportunity rather than a threat, with leaders eager to pivot away from proprietary solutions.
Andrew Feldman, CEO of Cerebras Systems, an AI chip startup, noted that developers are increasingly keen to replace OpenAI’s costly and closed models with alternatives like DeepSeek’s R1. This shift has led to one of Cerebras’ largest spikes in demand for its cloud-based services, highlighting a growing trend toward open-source models in the AI space. “R1 shows a single company will not dominate that market growth—hardware and software moats do not exist for open-source models,” Feldman stated.
DeepSeek says its R1 reasoning model competes with top American technologies while costing less to operate, though some industry observers remain skeptical of those claims. Feldman argued that falling prices in the AI market, as in the PC and internet eras before it, could stimulate global adoption, putting AI on a significant growth trajectory.
DeepSeek Accelerating Adoption of Inference Technologies
DeepSeek’s impact extends beyond software; it may also accelerate the adoption of new chip technologies by shifting the AI cycle’s emphasis from training to inference. Inference is the application of a trained model to make predictions or decisions on new data, in contrast to model training, which is far more resource-intensive.
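The training/inference asymmetry described above can be sketched in a few lines. This is a deliberately tiny, illustrative example (not DeepSeek’s code or any real chip workload): training loops over the data many times to fit a weight, while inference is a single cheap arithmetic step — which is why inference can run on less powerful, task-specific hardware.

```python
# Minimal sketch of the training-vs-inference cost gap.
# Fits y = w*x by gradient descent; all names here are illustrative.

def train(data, epochs=1000, lr=0.01):
    """The compute-intensive phase: many passes over the data."""
    w = 0.0
    for _ in range(epochs):
        # mean gradient of squared error (w*x - y)^2 with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def infer(w, x):
    """The cheap phase: apply the trained weight to one new input."""
    return w * x

data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # samples of y = 2x
w = train(data)                         # thousands of arithmetic ops
print(round(infer(w, 10), 2))           # one multiply -> 20.0
```

Real models replace the single weight with billions of parameters, but the shape of the cost holds: training repeats the expensive loop, inference executes the forward pass once per query.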
Phelix Lee, an equity analyst at Morningstar, explained that while Nvidia dominates the GPU market for AI training, there is growing potential in the inference segment, where competitors can offer higher efficiency at lower costs. “AI training is compute-intensive, but inference can utilize less powerful chips designed for specific tasks,” Lee noted.
Several AI chip startups have reported increased demand for inference chips as clients begin to adopt and integrate DeepSeek’s open-source model. Sid Sheth, CEO of d-Matrix, commented, “DeepSeek has demonstrated that smaller open models can match or exceed the capabilities of larger proprietary models at a fraction of the cost,” catalyzing a shift toward inference.
Robert Wachen, co-founder of AI chipmaker Etched, echoed this sentiment, revealing that many companies have reached out since DeepSeek’s reasoning models were released. “Companies are shifting their spending from training clusters to inference clusters,” he said, emphasizing the growing demand for inference-time computing.
The Broader Impacts on AI Adoption
Analysts agree that DeepSeek’s advancements signal a positive shift for the AI inference and chip industries. A report from Bain & Company noted that DeepSeek’s engineering innovations could reduce inference costs while improving training efficiency; in a bullish scenario, those efficiency gains would drive broader AI adoption.
This aligns with Jevons Paradox, which posits that cost reductions in technology can lead to increased demand. Financial services firm Wedbush anticipates that the growing use of AI across enterprises and retail sectors will continue to drive demand.
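The Jevons Paradox claim is easy to check with arithmetic. The numbers below are hypothetical, chosen only to illustrate the mechanism (they are not figures from Bain or Wedbush): when demand is elastic enough, a price cut increases total spending rather than shrinking it.

```python
# Illustrative Jevons Paradox arithmetic with made-up numbers:
# a 10x drop in cost per query paired with a 20x jump in usage
# yields *more* total spend, not less.

def total_spend(cost_per_query, queries):
    return cost_per_query * queries

before = total_spend(cost_per_query=0.010, queries=1_000_000)    # $10,000
after = total_spend(cost_per_query=0.001, queries=20_000_000)    # $20,000

print(after > before)  # True: cheaper inference, larger total market
```

The paradox holds whenever usage grows by a larger factor than the cost falls; if usage only grew 5x in this example, total spend would halve instead.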
Sunny Madra, COO at Groq, a company focused on AI inference chips, suggested that smaller players will have more room to thrive as global AI demand rises. As demand grows, Madra said, Nvidia cannot supply enough chips to everyone, which “creates opportunities for aggressive market penetration for us.”