Have you ever thought about who should be in charge of making decisions about artificial intelligence? According to Eric Schmidt, the former CEO of Google, it shouldn’t be left up to technologists alone.
In a recent interview with ABC, Schmidt expressed his concerns about the rapid advancements in AI technology. He warned that AI could potentially become so advanced that it surpasses human understanding, posing significant risks to society.
Schmidt, along with other tech experts, highlighted the need for safeguards to prevent AI from becoming too autonomous. He even suggested that there may come a time when we need to “unplug” AI to prevent potential harm.
But who should have the power to make such critical decisions? Schmidt believes it shouldn’t be solely up to technologists like himself. He emphasized the importance of involving a diverse group of stakeholders to establish guidelines for AI development and usage.
Interestingly, Schmidt also floated the idea of using AI itself to regulate AI. He argued that humans may not be equipped to oversee these systems effectively, whereas AI systems could monitor and constrain their own advancement.
While Schmidt’s perspective may seem unconventional, it raises important questions about the future of AI and the role of human oversight in its development. As the technology evolves at an unprecedented pace, it’s worth asking how we can ensure that AI serves humanity’s best interests.