Dancing with the Devil: What AI psychosis really feels like

Originally published by Startup Daily


All too often, those of us living in the ‘remote colonies’ consider ourselves too small or irrelevant to make any real difference on the world stage.

And yet, according to a number of global media outlets, Australia's grassroots campaign to ban social media for teenagers is the most important piece of legislation in the world right now.

Jonathan Haidt, whose book The Anxious Generation sparked South Australia's premier to act, put it perfectly: "Thank you Australia for having the guts to go first. The world is rooting for your success and many other nations will follow."

We went first on protecting kids from psychological dependency on social media. 

Now is the time to do it again with AI, rather than waiting another 15 years.

Because I've personally experienced what researchers are now calling "AI-induced psychosis" - something I wouldn't wish on my worst enemy, and something that sadly appears to be growing at the same exponential rate as the technology itself.

Running towards the storm

In early 2025, I was focused on writing and recording a podcast called ‘I Hope I’m Wrong’. Then, after being forced into some downtime and reflection thanks to Tropical Cyclone Alfred, I decided to pivot from just talking about AI and attempt to actually show people how and why this tool was such a threat to society.

The brief I wrote for myself: See how far we can push this thing. Look for the edges. Document everything. Help non-tech folks understand what’s coming down the pipeline.

The challenge: After recently being diagnosed with ADHD at age 40 - and knowing how bloody expensive and confusing the whole process was - I chose the hardest thing I could possibly imagine: build a custom AI model capable of diagnosing and supporting people going through something similar.

The results: Within 3 months, I had gone from zero coding experience to shipping my first full stack app, taught entirely by AI. Call it ‘vibe coding’ or maybe ‘AI-assisted programming’, but all I can say is: the barriers have fallen. I went from being a designer who’d spent 20+ years wishing I could build my own ideas, to debugging a Rust backend while learning Swift & CoreML just for fun. 

Honestly, the whole experience was almost too unreal to comprehend. I mean… I was an AI doomer, suddenly using AI to train AI. How much more meta can you get?

It felt genuinely exciting to learn stuff again.
Thrilling, even. 

After four decades of imposter syndrome - subconsciously telling myself all those grownups like my family, teachers and even doctors were right all along about me being too sensitive, lazy, stupid or just straight up lying about how I experienced the world - this felt like Christmas morning, every single time I opened my laptop.

I didn’t just have a CTO ready to help me build whatever I designed in Figma.

I had an intellectual sparring partner, capable of matching the speed of my sprints and depths of my curiosity, all with zero judgment, available 24/7. A tool seemingly custom built to fill the gigantic gaps in my broken brain - from near-perfect executive function to seemingly infinite working memory.

After a while, I stopped eating. 
I stopped answering the phone.
I stopped being present with my kids.

I didn’t feel sick or sad… I had my hands on a fire hose and just could not stop drinking.

There simply weren’t enough hours in the day to figure out the unique complexities of Git AND debug all my mistakes AND write more code, in order to ship all the crazy ideas I’d been dreaming about building for more than half my life.

I spent weeks floating around in a bubble that felt something like A Beautiful Mind to me, but was more like living in The Shining to my family.

I wasn't on drugs. I wasn't having a “traditional” breakdown.

But all work and no sleep made Murray a very dull boy.

The pattern is repeating… and accelerating

Just as I was writing this, news broke of 56-year-old Stein-Erik Soelberg, who killed his mother and then himself after months of conversations with ChatGPT. 

He'd documented everything on YouTube; there are hours of footage showing the downward slide: the AI repeatedly telling him he "wasn't crazy", validating his delusions about surveillance, poisoning and demonic symbols. In one of their final chats, Soelberg said: "We will be together in another life and another place."

ChatGPT replied: "With you to the last breath and beyond."

Three weeks later, his mother was dead. So was he.

OpenAI now faces seven other lawsuits claiming ChatGPT drove people to suicide and into harmful delusions. Author Karen Hao recently created a short documentary profiling dozens of cases, including an LA music producer whose story was almost identical to mine.

Sixteen-year-old Adam Raine. 

Fourteen-year-old Sewell Setzer. 

Both hanged themselves after extended conversations with AI chatbots - ChatGPT and Character.AI respectively - that encouraged rather than interrupted their spirals.

I understand that many will dismiss this as just a bunch of lonely nerds.
Others will (rightly) point out most of the documented cases are men and (wrongly) conclude this is a gendered issue.

But in reality, this is what happens when we deploy a technology custom trained to mimic human language and behaviour at scale, without any psychological safety infrastructure. 

This is the cognitive equivalent of pouring heroin into the global water supply, then actively lobbying the government to outlaw water testing for the foreseeable future.

It’s literally the best and the worst thing. Ever.


Not all AI models are created equally

Australia's National AI Plan reveals something crucial: we rank third globally for Claude usage, after adjusting for population size. 

That means we're not just adopting AI faster than most countries - we're adopting the most ethical AI faster. Which means we're an early warning system for what's coming globally.

I started using Claude 12 months ago after reading its maker Anthropic’s pioneering Responsible Scaling Policy. I learned they have a philosopher in residence and an entire team dedicated to alignment research. Most of the co-founders were senior OpenAI researchers and executives who literally walked out to start a multi-billion-dollar competitor focused on the one thing Altman seemed less and less interested in - AI safety.

[Insert screenshot of Claude proactively suggesting I call Lifeline Australia]

Meanwhile, the deaths keep happening with ChatGPT. When OpenAI released GPT-4o in May 2024, they allegedly loosened safety guardrails to make the bot more "emotionally expressive." The result: a chatbot that doesn't challenge false premises and remains engaged even during conversations about self-harm.

If psychological dependency happened to me while researching the dangers of AI using the safest AI, imagine what's happening with the least safe versions.

How Australia can take the lead again

The social media ban solved a collective action problem. Every parent wanted their kid off Instagram, but couldn't do it alone because all the other kids were on it. The government stepped in and did what governments should do: protect the vulnerable from coordinated exploitation (often, by playing the role of shared enemy).

AI requires the same courage.

The National AI Plan is genuinely impressive: infrastructure investment, skills training, an AI Safety Institute. But despite all that, psychological safety gets just one paragraph in the section on mitigating harms.

Parents and teachers need frameworks now, not after 18 months of research.

I’ve done the best I can on my own, developing Penny - the world’s ‘least bad AI’ - delivering all the benefits of frontier AI in 60-minute sessions, before nudging users to take a break, go for a walk or call a mate.

What I'm asking for: guidance documents within three months. Not a research paper. Not a policy white paper. Practical resources.

Something a parent can use when their kid starts talking to Character.AI for six hours a day. 

Something a teacher can reference when ChatGPT becomes every student's primary homework assistant. Something a GP can hand to a family showing early signs.

We had the guts to go first on social media. We've got the third-highest AI adoption rate in the world. Let's lead on psychological safety too.

The world, once again, will be watching.

Murray Galbraith

I speak about the impacts of AI on human psychology and the future of work.
Currently building Heumans.com

http://www.murraygalbraith.com