A volume problem with an honesty problem sitting inside it.
Job searching in Melbourne's data and AI market in 2026 is a volume problem. Two hundred-plus new listings a day across multiple boards. A hundred-plus applicants per role. Match quality is often hard to judge from the description alone. The default workflow — manual review plus a one-size-fits-all CV — wastes time on bad matches and undersells the good ones.
Underneath the volume problem is an honesty problem. A single LLM asked "is this job a good fit for Harry?" will find reasons to say yes. Language models, by default, are cooperative. The moments in a job search where you most need a cold second opinion are the moments an LLM will cheerfully supply the hottest first opinion.
I wanted a system that read every relevant listing daily, scored them with honesty rather than optimistic enthusiasm, produced tailored application materials only for the genuinely promising ones, learned from my apply/skip decisions over time, and cost less than a coffee per day to operate. That last constraint shaped the design more than any other.
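Those constraints imply a particular shape of daily loop: score everything cheaply, draft materials only past a fit threshold, and stop when the day's budget is gone. A minimal sketch of that loop follows; the names (`Listing`, `APPLY_THRESHOLD`, `DAILY_BUDGET_USD`) and the per-call cost are illustrative assumptions, not the actual system's values.

```python
from dataclasses import dataclass

# Hypothetical parameters for illustration only.
APPLY_THRESHOLD = 0.7   # minimum fit score worth drafting materials for
DAILY_BUDGET_USD = 5.0  # "less than a coffee per day"


@dataclass
class Listing:
    title: str
    score: float = 0.0  # honest fit score in [0, 1], filled in by the scorer


def run_daily(listings, score_fn, cost_per_score=0.01):
    """Score each listing until the daily budget is exhausted.

    Returns the shortlist of listings at or above the apply threshold,
    plus the total spend. `score_fn` stands in for the LLM scoring call.
    """
    spent = 0.0
    shortlist = []
    for job in listings:
        if spent + cost_per_score > DAILY_BUDGET_USD:
            break  # budget cap shapes the design: stop, don't overspend
        job.score = score_fn(job)
        spent += cost_per_score
        if job.score >= APPLY_THRESHOLD:
            shortlist.append(job)
    return shortlist, spent
```

The budget check sits before the scoring call rather than after it, so the cap is a hard ceiling; everything downstream (tailored materials, learning from apply/skip decisions) only ever sees the shortlist.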