Meta’s AI chief: DeepSeek proves AI progress isn’t about chips

Yann LeCun, Meta’s chief AI scientist, said the market reaction to DeepSeek was “woefully unjustified,” arguing that the Chinese startup's meteoric rise was powered by open source research, not its hardware.

DeepSeek took the world by storm this week with a series of AI models trained on hardware far less powerful than that used by Western AI developers like OpenAI. The revelation spooked investors, triggering a mass sell-off of tech stocks amid fears that companies like Nvidia may be massively overvalued.

LeCun described the sell-off as a “major misunderstanding” and said it was DeepSeek’s innovative approach to improving training efficiency that powered its success.

“DeepSeek has profited from open research and open source, e.g. PyTorch and Llama from Meta,” LeCun said in a LinkedIn post. “To people who see the performance of DeepSeek and think: ‘China is surpassing the US in AI.’ You are reading this wrong. The correct reading is: ‘Open source models are surpassing proprietary ones.’”

LeCun, a Turing Award winner and vocal advocate for open source AI, argues that progress lies in improving architectures — such as his JEPA concept for self-supervised learning — rather than relying on more powerful hardware.

“[DeepSeek] came up with new ideas and built them on top of other people's work,” LeCun said. “Because their work is published and open source, everyone can profit from it. That is the power of open research and open source.”

Following an appearance at the World Economic Forum in Davos, LeCun contended that open source AI is the key to building diverse AI systems and tools, adding: “It is how we will get to a point where AI tools can understand, reflect and benefit people from all languages and cultures in the world.”

The market crash came after the team behind DeepSeek claimed its open source DeepSeek-V3 model was developed for just $6 million; see Capacity's in-depth breakdown of DeepSeek’s development costs.

Investors were spooked because the likes of Microsoft, OpenAI, Oracle, xAI, and Google have spent billions of dollars building out their AI infrastructure and snapping up high-end hardware, only to be outperformed by a Chinese startup that supposedly spent comparative pocket change.

In a later Threads post, LeCun said the stock sell-off stemmed from a “major misunderstanding” about AI infrastructure investments.

“Much of those billions are going into infrastructure for inference, not training,” LeCun said. “Running AI assistant services for billions of people requires a lot of compute. Once you put video understanding, reasoning, large-scale memory, and other capabilities in AI systems, inference costs are going to increase.

“The only real question is whether users will be willing to pay enough (directly or not) to justify the capex and opex.”

