Is AI Anxiety a Psychosocial Hazard? What Australian Employers Need to Know
Despite travelling and speaking about this for a few years now, I must admit to feeling slightly surprised (or perhaps even disappointed) by just how many Aussie managers and leaders are still asking the wrong questions about AI.
Will AI mean X to Y?
Yes.
Can AI really do that?
Of course it can, without even breaking a sweat.
And yet almost nobody seems ready to ask about the other side to all this.
You know… That quiet dread spreading through your workforce?
The one where 75% of your employees believe their jobs will be automated? The one Spring Health just called "one of the biggest workplace stressors of 2026"?
That's not just a morale problem.
Under Australian law, it may be a psychosocial hazard you're already required to manage.
The law that changed everything
(while nobody was paying attention)
In April 2023, amendments to Australia's Work Health and Safety Regulations came into effect requiring every ‘Person Conducting a Business or Undertaking’ (PCBU) - in practice, just about every employer in the country - to proactively identify and manage psychosocial hazards in the workplace.
This wasn't a suggestion.
It wasn't a guideline.
It was a regulation change backed by penalties.
Then in November 2024, the Commonwealth approved a new Code of Practice for Managing Psychosocial Hazards at Work, expanding the list to 17 common hazards.
Three of the new additions are particularly relevant to what's happening right now in workplaces across the country: job insecurity, fatigue, and intrusive surveillance.
Sound familiar?
Because those are just some of the anxieties AI is generating in your teams today.
What counts as a psychosocial hazard?
Safe Work Australia defines a psychosocial hazard as anything that could cause psychological harm - specifically, “anything in the design or management of work that increases the risk of psychological or physical injury”.
The Code of Practice identifies hazards including high job demands, low job control, poor support, poor organisational change management and (crucially) job insecurity and low role clarity.
Now think about what AI adoption looks like in most organisations right now:
Job insecurity. Your employees are reading the same headlines you are. McKinsey says 30% of hours worked could be automated by 2030. EY reports 75% of workers fear job displacement. Whether or not you plan to cut roles, the perception of threat is a psychosocial hazard in itself.
Low role clarity. "We're introducing AI tools" without clear communication about what this means for individual roles creates exactly the kind of ambiguity the Code identifies as harmful.
Poor organisational change management. Rolling out AI tools without adequate consultation, training, or support isn't just bad management; it's a psychosocial risk the regulations specifically require you to control.
High job demands combined with low job control. Employees expected to learn new AI tools while maintaining existing workloads, with no say in how the technology is implemented? That's a textbook psychosocial hazard combination.
This isn't theoretical.
The penalties are real.
The WHS Act establishes three categories of offence. A Category 3 offence - failing to comply with a health and safety duty - applies simply for not having systems in place to manage psychosocial risks.
To be clear: Nobody needs to actually get hurt.
You just need to not have done the work.
Category 1 and 2 offences apply when that failure exposes people to the risk of death or serious injury, with penalties scaling accordingly. In late 2023, a Victorian employer was fined close to $380,000 for failing to adequately identify or assess psychosocial risks.
And just in case that didn’t get your attention: the WHS Act now includes an indexation mechanism that increases penalties annually in line with CPI. Every year you delay addressing this, the potential fine goes up.
For officers - directors, board members, senior executives - there's a separate due diligence obligation. You are personally required to keep up-to-date knowledge of WHS matters relevant to your operations. If AI is transforming your industry and you haven't considered its psychosocial impact on your workforce, that's a due diligence gap.
Almost every business is making the same mistake
Nearly five years after beginning this work, here's the pattern I still see every single day: organisations treating AI adoption as a technology project and employee wellbeing as an HR project, with nobody connecting the two.
Your IT team runs the AI rollout.
Your people & culture team runs the engagement survey.
Neither is looking at AI-driven workplace stress as a WHS compliance issue.
But the regulations don't care about your org chart. They require a systematic approach: identify the hazards, assess the risks, implement controls, review effectiveness. The same framework you'd use for any workplace safety issue.
The problem is that most psychosocial risk assessments were designed before AI became a daily workplace reality. They measure bullying, harassment, workload, and role conflict.
They don't explicitly ask:
"Do your employees believe AI will make their role redundant within the next two years?"
or
"Has the introduction of AI tools changed your sense of job security?"
But in case I haven’t already made this painfully clear: if you're not asking those questions, you're not identifying the hazard.
And if you're not identifying it, you can't manage it.
Which means you're not meeting your obligations under the regulations.
What a compliant response actually looks like
The Code of Practice sets out a four-step risk management process.
Applied specifically to AI-driven psychosocial risk, it looks something like this:
1. Identify the hazards
Audit your workforce for AI-related psychosocial risks. This means going beyond standard engagement surveys (they're pointless now anyway - engagement is a lagging metric). Ask specifically about job security concerns related to AI, clarity about how AI will affect individual roles, adequacy of training and support for new tools, and whether employees feel consulted about AI-related changes.
2. Assess the risks
Evaluate how these hazards interact. Job insecurity combined with low role clarity combined with poor change management isn't three separate problems; it's a compounding risk that the regulations specifically require you to consider.
3. Implement controls
This is where most organisations jump straight to a lunch-and-learn about ChatGPT and call it done. The hierarchy of controls says otherwise. Start with work design: Can you redesign roles to be clear about where AI fits and where humans remain essential? Then move to organisational controls: psychological safety frameworks, transparent communication about AI strategy, genuine consultation with affected workers. Training should be the last resort in the hierarchy, not the first.
4. Review and monitor
Psychosocial hazards aren't static. AI capabilities are changing quarterly. A risk assessment conducted six months ago may be completely out of date. The regulations require ongoing review, not a one-off exercise.
Great, another thing we have to worry about.
Is there any good news?
The opportunity hiding inside the obligation
I've already talked about the pattern I see play out in workplaces and boardrooms every day.
But here's another pattern, one that should give hope to directors and leaders at every level:
The firms that get this right aren’t just ticking the box on compliance.
They are the ones genuinely taking the lead and reaping the extraordinary benefits of AI adoption.
Why?
Because the same things these regulations require - consultation, clear communication, genuine support, psychological safety, just generally caring about your people - are exactly the things that make technology transitions succeed.
The compliance obligation and the business case are the same thing.
Even better? The research clearly backs this up.
A study published in Safety Science just last year found that jurisdictions where Australia's psychosocial WHS regulations were implemented showed significant increases in Psychosocial Safety Climate and reductions in psychological distress. The law isn't just punitive. It works.
Organisations that build genuinely psychologically safe environments - where people are encouraged to speak up and say things like "I don't understand this tool" or "I'm worried about my role" - don't just avoid fines.
They build the adaptive capacity to navigate whatever comes next.
Where to start
If you're reading this thinking "we haven't done any of this," you're not alone. Most organisations haven't.
But the regulations don't have a grace period for good intentions.
Here are three things you can do this month:
1. Conduct an AI-specific psychosocial risk assessment
Add questions about AI-related job security, role clarity, and change management to your existing WHS hazard identification process. If you don't have one, that's a bigger problem.
2. Brief your board or leadership team
Officers have a personal due diligence obligation. Make sure they understand that AI-driven psychosocial risk is a WHS compliance issue, not just an HR engagement issue.
3. Connect your AI adoption strategy to your WHS framework
These shouldn't be separate workstreams.
Every major AI implementation decision should include a psychosocial risk assessment.
And if you need help connecting the dots between AI transformation, psychological safety, and compliance - that's exactly what I do. I've spent the past year building AI tools for neurodivergent minds while navigating this intersection firsthand. I know what AI-driven workplace stress looks like from the inside, and I know what the science and the law say about managing it.