Ausmed's Building Workforce Capability with AI event in Perth brought together aged care, disability, and community care leaders for a candid, high-energy conversation about one of the most significant shifts facing the care sector. Hosted by Ausmed CEO Will Egan and featuring a panel of experienced executives from across Western Australia, the discussion went beyond the hype to explore practical realities, governance challenges, and the enduring importance of human judgment.
Here are our top 10 takeaways from the day.
1. AI is a Tool, Not a Decision-Maker
The most consistent message from every panellist was unambiguous: AI can inform, support, and accelerate, but it must never replace human judgment in care decisions. Whether discussing clinical assessments, policy compliance, or workforce management, the panel drew a firm line between AI as a decision-support tool and AI as a decision-maker.
"We will never place a client because a computer told us to. Clinical decisions will always be people decisions."
Kim Adamson, COO, Mosaic Community Care
David Cox of Curtin Heritage Living reinforced the point. For his organisation, AI is about getting more information to make better decisions, not about outsourcing the decision itself. This framing matters enormously as providers consider where and how to introduce AI into their operations.
2. Start Where It's Low-Risk: Back Office Before the Frontline
The panel's advice for organisations beginning their AI journey was consistent: start in the back office, where processes are data-rich, risks are lower, and the business case is easier to prove.
"High process, high data is a very logical place to begin, either for efficiency or to let the models do the analysis for you."
Dan Norgard, Chief Customer and Innovation Officer, Juniper Aged Care
David Cox shared a concrete example: Curtin Heritage Living's AI-powered invoice processing saves significant staff time and has introduced a layer of managerial oversight that did not previously exist. Dan added that AI can surface insights from compliance and operational data that teams simply don't have time to analyse manually, including trends that could flag risks before a commission visit. The message: build confidence and capability in lower-stakes environments before moving closer to the person receiving care.
3. Accountability Sits with the Individual
With AI hallucinations, embedded bias, and model drift all real risks, the panel emphasised that professional accountability cannot be outsourced to a system. If a staff member follows an AI recommendation that turns out to be wrong, responsibility rests with that person.
"When we all became really worried about cybersecurity, we trained everyone in cyber. We need to do the same thing for AI."
Kim Adamson, COO, Mosaic Community Care
Will Egan added important context: under current legal frameworks, the companies that build large language models generally cannot be held liable for errors in their models' outputs. This places even greater importance on organisations equipping their people with the skills to question, verify, and critically engage with AI outputs rather than simply accept them.
4. Governance Is an Ongoing Conversation, Not a One-Size-Fits-All Answer
Governance of AI was a recurring theme throughout the session, but the panel was honest: there is no universal framework. The right approach depends on what an organisation is using AI for, what data it involves, which staff are interacting with it, and what risks it introduces.
"Start with your risk appetite. What are you prepared to accept? Get a policy, get a procedure, get some behaviour around it, and accept there will be failure, because that is where the innovation comes from."
Dan Norgard, Chief Customer and Innovation Officer, Juniper Aged Care
Juniper engaged specialist support to work through AI governance with their board. David Cox highlighted privacy as Curtin Heritage Living's primary lens, particularly around AI-enabled sensors in residents' rooms. Kim Adamson described Mosaic's early-stage approach: proof-of-concept pilots with careful oversight before anything goes to scale. The common thread across all three was to treat AI as a capability to be governed, not just a technology to be deployed.
5. Co-Design Your AI Journey with Your Workforce
Trust in AI tools does not happen by announcement. It has to be built over time, through transparency, genuine involvement, and a willingness to hear concerns, especially from frontline staff who may feel most uncertain about what this technology means for their role.
"You cannot push these things and you cannot start with the solution. You have to start with the problem. That is the only way you will get buy-in."
David Cox, Managing Director, Curtin Heritage Living
Curtin Heritage Living spent a full year engaging staff before introducing AI-enabled sensors in residential rooms. Rather than presenting the technology first, the team started by discussing the problem: falls, false alarms, resident safety. Over 90% of families ultimately consented to the technology, and previously sceptical staff became advocates. Kristy Harper from iLA described a similar approach, building human-in-the-loop validation steps directly into their AI-powered care planning tool so staff retain oversight at every stage of the process.
6. Data Strategy Comes Before AI Strategy
AI is only as good as the data it is built on. Several panellists noted that organisations without a robust data foundation will struggle to get reliable or useful outputs from AI, no matter how sophisticated the tool.
"AI is only as good as your data. If you are thinking about where to start strategically, investing in your data is a great place to begin."
Kristy Harper, Director of Program Delivery, iLA
While iLA does not yet have AI explicitly in its strategic plan, it has invested deliberately in building a strong data strategy as a prerequisite. Dan Norgard echoed this, noting that Juniper is surveying its workforce to understand where AI can genuinely create value before selecting which use cases to pursue. The lesson: do not lead with AI as a goal. Lead with the problems you are trying to solve, and make sure your data is in shape to support the solutions.
7. Think of AI as a Capability, Not Just a Technology
Dan Norgard made a distinction that resonated strongly across the room: AI should be thought of as a capability, not simply a technology to be handed to the IT team. That reframe changes who needs to be involved in implementation.
"Be really kind to your IT people. Do not rock up and say you are going to solve all my problems. Talk with your learning and development people, your organisational development folk, your strategy people. This is a shared piece to work through."
Dan Norgard, Chief Customer and Innovation Officer, Juniper Aged Care
If AI is a capability like leadership or communication, then building it requires structured learning, practice, feedback, and time, not just access to a tool. It also means the responsibility for AI uplift sits across the whole organisation, not with a single function.
8. Measuring AI Success Is Harder Than It Looks
Several panellists were candid about the difficulty of measuring AI's impact, particularly in care environments where the relationship between technology and outcomes is complex.
"Sometimes it is really hard to pin something that is so different to what you were using before and say, objectively, this is better."
David Cox, Managing Director, Curtin Heritage Living
Kim Adamson shared that Mosaic's early proof-of-concept is already showing approximately 38 hours saved per client onboarding, a meaningful efficiency gain in a sector with thin margins. But David Cox described a more nuanced challenge: Curtin Heritage Living's AI sensor network now captures significantly more falls data than before, which has actually made their star rating appear worse, even though more complete reporting reflects better care. Be prepared to work with qualitative and subjective measures alongside traditional KPIs, at least in the early stages, and make sure your measurement framework accounts for what the technology is actually changing.
9. Keep Asking: What Does This Mean for the Person?
One of the most thought-provoking moments of the session came from a question about whether AI, in optimising for what is measurable, might quietly deprioritise what truly matters to the person receiving care.
"Having the principles and then going back and checking yourself as you implement is how we ensure efficiency does not come at the cost of the person."
Dan Norgard, Chief Customer and Innovation Officer, Juniper Aged Care
Dan's response reflected the depth of Juniper's approach. The organisation has developed ethical decision-making guidelines anchored in a single core question: what does this mean for the person? David Cox offered an equally important counterpoint: Curtin Heritage Living's AI sensors have actually increased privacy for residents, who can now lock their doors rather than being checked on manually through the night. Implemented thoughtfully, AI can enhance human dignity rather than erode it.
10. The Future Workforce Needs New Skills, Including How to Think Critically About AI
The final question of the day, posed by a nursing academic from Edith Cowan University, struck at one of the sector's biggest emerging challenges: how do we ensure that new graduates and early-career workers develop genuine clinical reasoning rather than learning to lean on AI as a shortcut?
"AI feels like a perfect tool right now for people with a certain level of experience because they know how to ask the right questions. But how do we support new graduates to develop clinical reasoning when we are also taking away some of the entry-level experiences they used to get?"
Kristy Harper, Director of Program Delivery, iLA
Will Egan pointed to simulation tools as part of the answer. AI-powered simulations can expose workers to variable, high-consequence scenarios in a safe environment, building the kind of repeated experiential learning that develops genuine competence. Robbie Russo, Ausmed's COO, reinforced that this sits at the heart of Ausmed's direction: moving online learning from passive consumption to active, real-world application that changes behaviour and ultimately improves care.
The Building Workforce Capability with AI event made one thing clear: the care sector is not standing still. Leaders across aged care, disability, and community care are actively experimenting, learning, and building the governance frameworks needed to adopt AI responsibly. The path forward is not about choosing between technology and humanity. It is about using both well, starting with the problem, keeping the person at the centre, and building a workforce with the capability and confidence to use AI with genuine critical judgment.