AI and the Workplace: Legal Obligations for Australian Startups Using AI to Hire or Manage Staff

Australian startups are adopting AI tools at pace. Automated CV screening, algorithmic scheduling, AI-driven performance analytics, chatbot-based candidate assessments — these are no longer experimental technologies. They are standard features in the HR platforms that early-stage companies subscribe to from day one.

What most founders don’t realise is that deploying these tools creates legal obligations that cut across multiple areas of Australian law. The fact that a third-party vendor built the algorithm doesn’t insulate your company from liability. If the tool makes a decision that affects a worker or a job applicant, you are the one who answers for it.

Here’s what you need to know.

Anti-Discrimination Law: You Own the Algorithm’s Decisions

The most immediate legal risk of using AI in hiring and employment decisions is discrimination. Under Commonwealth legislation — including the Sex Discrimination Act 1984, the Racial Discrimination Act 1975, the Age Discrimination Act 2004, and the Disability Discrimination Act 1992 — it is unlawful to treat a person less favourably on the basis of a protected attribute. State and territory laws, such as the Anti-Discrimination Act 1977 (NSW) and the Equal Opportunity Act 2010 (Vic), impose overlapping obligations.

These laws apply to recruitment decisions. Critically, discriminatory intent is not required. If an AI screening tool disproportionately excludes candidates of a particular age, gender, or ethnic background, the employer may be liable even if no human ever intended that outcome.

This is the concept of indirect discrimination — a condition, requirement, or practice that appears neutral but has a disproportionate adverse impact on people with a protected attribute. An AI recruitment tool trained on historical hiring data will learn the patterns embedded in that data. If your company (or your industry) has historically hired predominantly young men, the algorithm may learn to favour candidates who match that profile and systematically disadvantage women, older applicants, or people with disabilities.

The Amazon example is instructive. In 2018, Amazon scrapped an internal AI recruiting tool after discovering it was penalising CVs that included the word “women’s” and favouring male-coded language — a direct consequence of training the system on a decade of predominantly male applicant data. Amazon never deployed the tool in live hiring, but the incident demonstrates how easily algorithmic bias can emerge.

The practical point for startups: If you use an AI tool to screen, rank, or assess candidates, you cannot rely on the vendor’s assurance that the tool is “unbiased.” You need to understand what data the tool was trained on, how it makes decisions, and whether it has been tested for adverse impact against protected attributes. If you can’t answer those questions, you’re exposed.
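One way to start testing a screening tool for adverse impact is the "four-fifths rule" heuristic: flag any group whose selection rate falls below 80% of the best-performing group's rate. Note this rule comes from US regulatory guidance and is not an Australian legal standard, but it is a common first-pass check. The sketch below is illustrative only; the group labels and counts are invented for the example.

```python
# Minimal adverse-impact check using the four-fifths rule heuristic.
# All group names and counts are illustrative, not real data.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total). Returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times the
    highest group's rate, mapped to their impact ratio."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Illustrative screening outcomes: (candidates advanced, candidates screened)
screened = {
    "group_a": (45, 100),  # 45% advanced
    "group_b": (30, 100),  # 30% advanced -> ratio 0.30/0.45 ≈ 0.67, flagged
}
flags = adverse_impact(screened)
```

A check like this only surfaces disparities in outcomes; it says nothing about why they arise, so a flagged result should trigger a human review of the tool and the underlying data, not an automated fix.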

The Fair Work Act: Algorithmic Management and Adverse Action

The Fair Work Act 2009 (Cth) contains broad general protections against “adverse action” — which includes dismissing an employee, injuring them in their employment, or altering their position to their detriment. It is unlawful to take adverse action against a person because of a workplace right, because they have made a complaint, or because of a protected attribute.

Section 361 imposes a reverse onus of proof: if an employee establishes that adverse action was taken and that they had a protected attribute or exercised a workplace right, the employer bears the burden of proving that the attribute or right was not the reason for the action.

Now consider how this interacts with algorithmic decision-making. If an employee is performance-managed, demoted, or terminated based on metrics generated by an AI system — a system whose internal logic neither the employer nor the employee can fully explain — the employer’s ability to discharge that burden of proof is significantly compromised.

The Fair Work Act also enshrines the principle of “a fair go all round” in unfair dismissal proceedings under section 381. If a dismissal is based on opaque algorithmic outputs that the employee cannot meaningfully contest, and the employer cannot meaningfully explain, a tribunal may find that procedural fairness was not afforded.

The practical point for startups: If you use AI tools for performance management, rostering, or workforce decisions, ensure that a human being reviews and takes responsibility for any decision that materially affects an employee’s employment. “The algorithm recommended it” is not a defence.

The NSW Digital Work Systems Act: A New WHS Obligation

On 12 February 2026, the NSW Parliament passed the Work Health and Safety Amendment (Digital Work Systems) Act 2026, making New South Wales the first Australian jurisdiction to enact specific workplace health and safety legislation targeting AI and algorithmic systems.

The Act amends the Work Health and Safety Act 2011 (NSW) in two key ways.

First, it introduces a new primary duty under section 19(3)(c1) requiring a person conducting a business or undertaking (PCBU) to ensure, so far as is reasonably practicable, that workers’ health and safety is not put at risk by the use of digital work systems. A “digital work system” is defined broadly as “an algorithm, artificial intelligence, automation or online platform.”

Second, it introduces a specific duty under section 21A for PCBUs that use digital work systems to allocate work. This duty requires the PCBU to consider whether the system creates or contributes to risks including excessive workloads, unreasonable performance metrics, excessive monitoring or surveillance, and unlawful discriminatory practices.

The Act has not yet commenced — its operative provisions await proclamation, and SafeWork NSW must publish guidelines before the union entry provisions take effect. But the direction of travel is clear, and Safe Work Australia has been tasked with considering whether the national model WHS laws should be amended to address the same issues. Multi-state employers should be preparing now.

The practical point for startups: If you operate in NSW (or employ anyone who works from NSW), audit the digital systems you use to allocate, monitor, or manage work. Automated rostering tools, productivity tracking software, and algorithmic task assignment platforms all fall within the Act’s definition of “digital work systems.”

Privacy Act Reforms: Transparency About Automated Decision-Making

The Privacy and Other Legislation Amendment Act 2024 (Cth) introduced amendments to the Privacy Act 1988 that will take effect on 10 December 2026. These amendments require entities regulated under the Privacy Act to update their privacy policies to disclose how they use “computer programs” — which includes AI and algorithmic systems — to make decisions that “significantly affect” individuals’ rights or interests using personal information.

The disclosure obligations are specific. If your startup uses AI in ways that substantially influence decisions about job applicants or employees — screening CVs, ranking candidates, determining performance ratings, or informing termination decisions — your privacy policy will need to explain what personal information is used, which decisions are made “predominantly” or “solely” by automated systems, and what review mechanisms are available.

This was a direct response to the Robodebt scandal, where automated systems made consequential decisions about individuals without transparency or adequate review. The amendments apply to any APP entity — which includes most private sector organisations with annual turnover exceeding $3 million, as well as some smaller businesses, such as those that trade in personal information or provide health services.

The practical point for startups: Review your privacy policy now. If you use any AI tool that processes personal information to make or inform decisions about people — whether job applicants, employees, or customers — you will need to disclose this by December 2026.

Workplace Surveillance: Existing Obligations Still Apply

Even before the recent reforms, Australian employers have been subject to workplace surveillance legislation. The Workplace Surveillance Act 2005 (NSW) requires employers to give at least 14 days’ written notice before commencing surveillance of employees, including computer surveillance. Similar obligations exist under the Surveillance Devices Act 1999 (Vic) and equivalent legislation in other jurisdictions.

AI-powered monitoring tools — keystroke loggers, screen capture software, productivity scoring algorithms — are surveillance. If you deploy them without complying with the applicable notification requirements, you’re in breach of existing law, regardless of the new reforms.

The Privacy Act’s new statutory tort for serious invasion of privacy, which commenced on 10 June 2025, adds another layer. The employee records exemption under the Privacy Act, which historically gave employers wide latitude in handling employee information, does not protect against a serious invasion of privacy. AI-driven surveillance that extends into an employee’s private activities may attract liability even where the employer had a legitimate business purpose.

What Startups Should Do Now

The legal framework governing AI in the workplace is evolving rapidly, but the obligations are already real. If you’re using AI tools in your hiring or people management processes — and if you’re a startup using modern HR software, you almost certainly are — here’s where to start:

  1. Audit your tools. Identify every AI or algorithmic system that touches hiring, performance management, rostering, or employee monitoring. Understand what data they use and how they make decisions.

  2. Test for bias. Ask your vendors whether their tools have been tested for adverse impact against protected attributes. If they can’t answer, consider whether you should be using the tool.

  3. Keep humans in the loop. Ensure that no material employment decision — hiring, discipline, termination — is made solely by an algorithm. A human must review, understand, and take responsibility for the decision.

  4. Update your privacy policy. Begin preparing for the December 2026 ADM disclosure requirements now.

  5. Review surveillance compliance. If you use any form of employee monitoring, confirm that you have complied with the notice requirements under the applicable state legislation.

  6. Watch the WHS space. Even if you don’t operate in NSW, the Digital Work Systems Act signals where national regulation is heading. Getting ahead of it is cheaper than catching up.

If you’re building a startup and need help navigating the legal obligations around AI in the workplace, get in touch. This is an area where getting it right early is significantly cheaper than fixing it later.
