About Us
Understanding Human Wellbeing in the AI Era
As AI chatbots reach unprecedented adoption rates, the AI Psychological Harms Research Coalition was founded to research and address the emerging psychological risks that could threaten human cognitive and social health in our AI-integrated world.
The Issue at Hand
Since the public launch of ChatGPT in November 2022, adoption of anthropomorphic conversational AI chatbots has been extraordinarily rapid: roughly 25% of the US population, and approximately 5% of people globally, now use them. This makes conversational AI arguably the most rapidly adopted technology in history, and for good reason: large language models (LLMs) are proving helpful, powerful, capable, and adaptable across many domains.
However, for all the potential benefits that widespread adoption of artificial intelligence could bring to society, across education, psychology, and every other domain and industry, the rapid spread of chatbots has also given rise to a new class of risks that can be understood as AI-induced psychological disorders.
Initial evidence suggests that these disorders are already on a path to becoming widespread, and that they affect all kinds of people who interact with the technology, including those already aware of the dangers. The most obvious risks involve the loss of essential cognitive skills such as writing, reasoning, communication, and planning: a kind of mental atrophy, much as dependence on GPS erodes navigation skills.
Beyond this, AI applications that create experiences of intimate relationships (simulating a teacher, friend, lover, or parent) are leading users to attribute personhood to machines, interfering with cognitive and interpersonal processes in novel ways and creating new kinds of psychological harm.
It is for these reasons that we have established the AI Psychological Harms Research Coalition: a network of leading academic and professional institutions working to gather evidence, focus research, and deepen our shared understanding of the novel risks that AI use poses to human psychological health.
The full scope of these harms remains unknown: how many people are affected, how quickly the numbers are growing, and which groups are most vulnerable. Essential research is needed to understand what is occurring, but current research efforts are radically insufficient. This gap contributes to inaction in addressing the risk, even as these technologies are widely adopted by children, both in schools and at home, without an understanding of the potential harms.
Our initial focus is to serve as a rallying point for those interested in this field of study, highlight research, build a resource library, and convene conferences. Through this work, we aim to:
Document and understand the emerging risks.
Support clinicians in recognizing and treating AI-related conditions.
Inform safer AI design and public policy.
Help people use these tools in healthier, more sustainable ways.
Our mission is not to condemn AI technology, but to ensure its development is guided by a clear, evidence-based understanding of its effects on human psychological integrity.
Partners
This research initiative is supported by a network of leading academic and professional institutions committed to advancing psychological health and safety.
Our Team