Artificial intelligence (A.I.) is not new territory for Eric Schmidt, the former CEO of Google. Over the years, he has invested in numerous A.I. startups, including Stability AI, Inflection AI, and Mistral AI. Now, however, Schmidt is taking a different approach: launching a $10 million venture dedicated to advancing research on the safety challenges posed by the technology.
The funds will establish an A.I. safety science program at Schmidt Sciences, a nonprofit founded by Schmidt and his wife, Wendy. The program, led by Michael Belinsky, aims to prioritize the scientific research underpinning A.I. safety rather than focusing solely on cataloguing risks. “That’s the kind of work we want to do—academic research to understand why some things are inherently unsafe,” Belinsky explained.
More than two dozen researchers have already been selected to receive grants of up to $500,000 from the program. Beyond financial support, participants will also have access to computational resources and A.I. models. The program is designed to evolve alongside the industry’s rapid advances. “We want to address the challenges of current A.I. systems, not outdated models like GPT-2,” Belinsky emphasized.
Prominent researchers, including Yoshua Bengio and Zico Kolter, are among the program’s initial grantees. Bengio will focus on developing risk mitigation technology for A.I. systems, while Kolter will explore phenomena such as adversarial transfer. Another recipient, Daniel Kang, plans to investigate whether A.I. agents can execute cybersecurity attacks, highlighting the potential risks of increasingly capable systems.
Despite the enthusiasm for A.I. in Silicon Valley, there are concerns that safety considerations are being overshadowed. Schmidt Sciences’ new program seeks to address this by removing barriers that hinder A.I. safety research. By fostering collaboration between academia and industry, researchers like Kang hope that major A.I. companies will fold safety research findings into their development processes.
As the A.I. landscape continues to evolve, Kang emphasizes the importance of open communication and transparent reporting in testing A.I. models. He stresses the need for responsible practices from major labs to ensure the safe and ethical development of A.I. technology.
Eric Schmidt’s $10 million commitment to A.I. safety underscores the importance of prioritizing research into the challenges and risks that accompany this transformative technology.