AI models have been making headlines lately for their questionable behavior, and now they face a new accusation: showing signs of cognitive impairment. A recent study published in The BMJ reveals that some of the tech industry's most advanced chatbots exhibit symptoms of mild cognitive impairment, with older model versions scoring worse than newer ones.
The study set out to challenge the notion that AI technology is advanced enough for use in the medical field, especially for diagnostic purposes. The researchers found that leading chatbots, including GPT-4 and Gemini versions 1.0 and 1.5, struggled with tasks involving visuospatial and executive functions, such as drawing a clock face or recalling a word sequence. These shortcomings raise concerns about the reliability of AI models in medical diagnostics.
Interestingly, when subjected to the Montreal Cognitive Assessment (MoCA), a standard screening test for mild cognitive impairment, GPT-4o scored the highest among the chatbots, while the Gemini family performed the worst. Despite excelling in tasks like naming and language, all of the chatbots displayed a lack of empathy, a hallmark symptom of frontotemporal dementia.
The researchers highlight the importance of not anthropomorphizing AI models and caution against holding them to the same standards as humans. However, they argue that if tech companies promote these models as conscious beings, those companies should also be held accountable for the models' cognitive abilities.
In conclusion, the study suggests that AI models are not ready to replace human doctors anytime soon, and warns that healthcare professionals may soon find themselves treating a new kind of virtual patient: AI models presenting with cognitive impairment. The findings shed light on the limitations of current AI technology and call into question the industry's claims about the capabilities of these advanced chatbots.