Rostering coordinators at a national aged care provider were spending hours every week matching carers to clients — qualifications, compliance rules, award rates, travel distances, fatigue levels. They’d looked at scheduling software. None of it could handle the complexity.
What they wanted didn’t exist: a system intelligent enough to decide autonomously, and self-aware enough to stop and ask when it wasn’t sure. Not full automation. Not a dumb tool. Something in between.
A bad roster isn’t an inconvenience. It’s a vulnerable person not getting the carer they need, or a carer driving 90 minutes between appointments because the system didn’t account for geography. It’s a compliance breach that nobody catches until it’s too late. It’s burnout — carers working back-to-back heavy shifts because the spreadsheet didn’t track fatigue.
The coordinators carrying all of this were good at their jobs. That was the problem — the entire operation depended on their judgment, their memory, and their willingness to spend their days inside a spreadsheet instead of supporting their teams.
We started with the question, not a proposal — weeks of internal experiments before we went back to the client with a hypothesis we believed in enough to back with our own time. Build it on our risk. If it works, they pay.
The system we built does what a good coordinator does: checks qualifications, monitors fatigue, matches carers to clients based on history and compatibility, optimises routes, and enforces compliance automatically. But the key design decision was what it doesn’t do — when it’s not confident, it stops and escalates to a human instead of guessing.
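The escalate-when-unsure behaviour can be sketched in a few lines. This is a hypothetical illustration, not the shipped implementation: the `Assignment` shape, the single confidence score, and the threshold value are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

# Hypothetical cutoff; in practice this would be tuned per deployment.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Assignment:
    carer: str
    client: str
    confidence: float  # aggregate 0.0-1.0 score from matching and compliance checks

def decide(assignment: Assignment) -> str:
    """Auto-approve only when the system is confident; otherwise escalate."""
    if assignment.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-approve"
    # Below threshold: never guess. Queue the decision for a human coordinator.
    return "escalate-to-coordinator"

print(decide(Assignment("A. Ngo", "Client 12", 0.93)))   # auto-approve
print(decide(Assignment("B. Rossi", "Client 07", 0.61)))  # escalate-to-coordinator
```

The design choice worth noting is that the low-confidence branch returns a distinct outcome rather than a best guess, so uncertainty is surfaced instead of hidden.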
The first version proved the concept but missed the domain nuance. It could roster. It could check compliance. It could optimise routes. But it didn’t understand how coordinators think about carer-client relationships — the soft signals, the personality matching, the unwritten rules that experienced humans carry in their heads.
The COO’s words: “@#$%ing amazing. But not fit for purpose.”
We listened, rebuilt, and came back two weeks later. Not a patch — a rebuild informed by everything we’d learned watching the team react to version one. The second version thought like a coordinator. That version was “almost perfect.”
Coordinators who lived in spreadsheets now approve rosters in minutes. Compliance issues get caught before a shift is assigned, not after. Carers get matched to the clients they work best with. And the system knows when to ask for help — which turned out to be the feature that mattered most.
The client didn’t just sign off. They acquired the product outright — Wayfinder, built November 2024 to May 2025, acquired June 2025. The best validation of any system is when the client wants to own it.
“The first deliverable was @#$%ing amazing. My team and I could not believe that in the space of a month Agent Labs had been able to produce this. But it was not fully fit for purpose. They circled back two weeks later taking our recommendations on board, and it was almost perfect.”
Chief Operations Officer, National Aged Care Provider
- A 90% reduction in rostering time
- 95%+ accuracy in carer-to-client matching
- Award and regulatory compliance checks fully automated
- Fatigue and wellbeing monitoring built into every roster decision
- Human escalation by design: the system asks before it makes a mistake
A national aged care provider needed a rostering system smart enough to make decisions — and honest enough to know when it couldn’t. We built it on our own risk, rebuilt it after the first delivery, and delivered something so right they bought it outright.