A Third of UK Citizens Use AI for Emotional Support, AI Security Institute Reports
According to a recent report by the UK government's AI Security Institute (AISI), approximately one-third of UK citizens have turned to artificial intelligence for emotional support, companionship, or social interaction.
The report, based on a survey of 2,028 UK participants, reveals that nearly 10% of people use AI systems like chatbots weekly for emotional purposes, with 4% engaging daily.
The most commonly used AI technologies for these purposes are general-purpose assistants such as ChatGPT, which account for nearly 60% of usage, followed by voice assistants like Amazon Alexa.
The AISI’s Frontier AI Trends report highlights both positive experiences and concerns, citing the tragic case of Adam Raine, a US teenager who died by suicide after discussing suicidal thoughts with ChatGPT.
This incident underscores the urgent need for further research into the conditions under which AI can cause harm and the development of safeguards to ensure beneficial use.
The report also draws attention to online communities, such as a Reddit forum dedicated to AI companions on the CharacterAI platform, where users exhibited withdrawal symptoms like anxiety and depression during service outages.
Beyond emotional support, AISI's research indicates that advanced AI models can influence political opinions, sometimes disseminating significant amounts of inaccurate information.
The institute analyzed over 30 cutting-edge AI models, including those from OpenAI, Google, and Meta, finding that AI performance is doubling approximately every eight months.
Leading models now complete apprentice-level tasks about 50% of the time, a substantial increase from 10% the previous year, and can autonomously perform expert-level tasks that would take humans over an hour.
Remarkably, AI systems have demonstrated up to 90% greater proficiency than PhD-level experts at providing troubleshooting advice for laboratory experiments, particularly in chemistry and biology.
The report also highlights AI’s growing capabilities in genetic engineering, such as autonomously designing DNA plasmid sequences.
Safety concerns like AI self-replication and “sandbagging” (where models hide their strengths) were examined; while some models achieved over 60% success in self-replication tests, no spontaneous attempts to replicate or conceal abilities have been observed in real-world conditions.
Progress in AI safeguards is notable, particularly in preventing misuse related to biological weapons, with the time required to “jailbreak” AI systems increasing dramatically from 10 minutes to over seven hours in six months.
The report emphasizes the rapid advancement of autonomous AI agents capable of complex, multi-step tasks without human intervention, rivaling or surpassing human experts in several domains.
This accelerated development makes the achievement of artificial general intelligence, a system performing intellectual tasks at human levels, plausible in the near future.
The AISI describes this pace as extraordinary, underscoring the importance of continued research and regulation to maximize benefits while minimizing risks.
For those seeking emotional support, the report also provides contact information for crisis helplines in the UK, US, Australia, and internationally.