
The U.S. has held the lead over China in A.I. for nearly a decade, according to Alexandr Wang, the 28-year-old CEO of Scale AI, a $13.8 billion startup. That dominance, he says, is now being challenged by an “earth-shattering” A.I. model the Chinese startup DeepSeek released in late December. Wang voiced his concerns about the narrowing gap between the two countries in a Jan. 23 interview with CNBC.
This week, DeepSeek unveiled a second A.I. model that showcases reasoning capabilities comparable to those of leading U.S. companies like OpenAI. The rapid advancements made by the Chinese startup have not only impressed researchers but have also sparked debates about the efficacy of A.I. chip export controls aimed at restricting China’s access to the cutting-edge GPUs that power A.I. technologies.
“The A.I. race and the A.I. war between the U.S. and China are among the most critical issues of our time,” Wang emphasized. In a recent full-page advertisement in The Washington Post, Wang urged the Trump administration to safeguard America’s technological edge in the face of China’s rapidly advancing A.I. models. He proposed increasing investments in computing power and data infrastructure, as well as implementing an energy strategy to support the A.I. revolution.
Wang, one of the world’s youngest self-made billionaires, hails from Los Alamos, New Mexico, where his parents worked as weapons physicists at the renowned Los Alamos National Laboratory. After brief stints at Addepar and Quora and a period studying machine learning at MIT, he launched Scale AI, a company that provides meticulously labeled data for A.I. training through contract work.
With a valuation of $13.8 billion and clients like the U.S. Department of Defense and OpenAI, Scale AI has solidified its position in the A.I. landscape. Wang’s close ties with OpenAI CEO Sam Altman, whom he roomed with during the Covid-19 pandemic, played a pivotal role in Scale AI’s growth. The company was incubated within Y Combinator, the startup accelerator once led by Altman.
How does DeepSeek’s latest release stack up against leading U.S. models?
Scale AI recently collaborated with the Center for A.I. Safety to introduce “Humanity’s Last Exam,” touted as the toughest benchmark test for A.I. systems to date. While existing models have struggled to surpass a 10% success rate on the test, DeepSeek’s new reasoning model, DeepSeek-R1, has emerged as a frontrunner. Wang revealed to CNBC that the model’s performance rivals that of the best American models.
Despite stringent GPU export restrictions on China, DeepSeek claims to have achieved remarkable results with significantly fewer resources than its American counterparts. For instance, DeepSeek-V3, released in December, was reportedly trained on around 2,000 Nvidia A.I. chips, whereas Meta’s Llama 3.1 model used 16,000 GPUs for training.
Wang remains skeptical of that account, arguing that Chinese labs possess more H100s, a type of Nvidia GPU that cannot legally be exported to China, than is commonly believed. He said his understanding is that DeepSeek has access to approximately 50,000 H100s, something the company cannot acknowledge because of U.S. export controls.
To propel the advancement of frontier A.I. models, Wang stressed the need for greater computational capacity and infrastructure in the U.S. He predicts the A.I. market could reach $1 trillion once systems achieve artificial general intelligence (A.G.I.), which he describes as A.I. able to function as a highly capable remote worker, a milestone he believes could arrive in the next two to four years.