If you’re building an AI-powered product in Australia — or using AI tools in your startup’s operations — you might be wondering what the rules are. The honest answer: it’s complicated, it’s changing, and there’s no single “AI law” to point to. But that doesn’t mean you’re operating in a legal vacuum. Existing laws already apply to what you’re doing with AI, and new developments are shaping how those laws will be interpreted and enforced.
Here’s what founders need to know in early 2026.
Australia’s Approach: Existing Laws, Not a Standalone AI Act
In December 2025, the Australian Government released its National AI Plan — the long-awaited roadmap for how Australia intends to manage AI development and adoption. The headline for founders is this: Australia is not introducing a standalone AI Act. At least not yet.
This was a deliberate pivot. In September 2024, the government had proposed ten “mandatory guardrails” for high-risk AI, which would have required developers to create risk management plans, test systems before and after deployment, establish complaints mechanisms, and open records to third-party assessment. That proposal attracted significant pushback from industry, and the Productivity Commission recommended in August 2025 that economy-wide AI regulation be paused until an audit of existing law could be completed.
The result is a lighter-touch approach. Instead of new AI-specific legislation, the government will rely on “strong existing, largely technology-neutral legal frameworks” — consumer protection, privacy, discrimination, workplace safety, and intellectual property law — supplemented by voluntary guidance and a new $30 million AI Safety Institute that began operating in early 2026.
For startups, this is broadly good news. It means less compliance burden than a standalone AI act would have imposed, and more room to build without navigating a brand-new regulatory regime. But the flip side is that the laws that already apply to your business apply equally when you use AI — and regulators are paying attention.
The Laws That Apply Right Now
Privacy Act 1988
The Privacy Act is the most immediately relevant piece of legislation for AI startups. If your product collects, stores, or processes personal information — and most do — the Australian Privacy Principles (APPs) apply regardless of whether a human or an algorithm is making decisions.
Practically, this means:
- Transparency: You need to inform users when AI systems process their personal information. If your product uses AI to make recommendations, assess creditworthiness, or analyse behaviour, your privacy policy should say so.
- Consent and purpose limitation: Personal data collected for one purpose can’t be repurposed for AI training without appropriate consent or a relevant exception.
- Data minimisation: Collect only what you need. Building a massive training dataset from user data “just in case” is a compliance risk.
This matters even more because of the 2024 Privacy Act amendments, which take effect in December 2026. These reforms will introduce stronger consent requirements and, critically, potential rights to explanation for high-impact automated decisions. If your AI system is making decisions that significantly affect people’s rights or interests, you’ll need to be ready to explain how and why.
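What “being ready to explain” looks like will depend on your product, but a useful starting point is to log a structured record for every significant automated decision so you can reconstruct how and why it was made. A minimal sketch in Python, where the `DecisionRecord` fields are illustrative rather than anything prescribed by the Act:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """Illustrative audit record for one automated decision."""
    subject_id: str        # who the decision affects
    decision: str          # e.g. "credit_limit_reduced"
    model_version: str     # which model/config produced it
    key_factors: dict      # inputs that most influenced the outcome
    human_reviewed: bool   # whether a person checked it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record to a local JSONL audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    subject_id="user-123",
    decision="credit_limit_reduced",
    model_version="risk-model-2026.01",
    key_factors={"repayment_history": "3 missed payments"},
    human_reviewed=False,
))
```

Even a simple log like this gives you something concrete to point to when a user, or a regulator, asks why a decision went the way it did.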
Penalties for serious breaches can reach the greater of $50 million, three times the benefit obtained, or 30% of adjusted turnover. These are not theoretical numbers — the OAIC is actively enforcing.
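To make that formula concrete, the cap is simply the largest of the three figures. A trivial sketch (illustrative only; the actual penalty in any case is determined by a court):

```python
def max_civil_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Maximum penalty cap for a serious breach: the greatest of $50m,
    3x the benefit obtained, or 30% of adjusted turnover."""
    return max(50_000_000, 3 * benefit_obtained, 0.30 * adjusted_turnover)

# e.g. a $10m benefit and $300m adjusted turnover caps out at $90m (30% of turnover)
print(max_civil_penalty(10_000_000, 300_000_000))  # 90000000.0
```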
Australian Consumer Law
The Australian Consumer Law (ACL) prohibits misleading or deceptive conduct. That prohibition extends to AI-generated content, AI-driven recommendations, and AI-powered pricing.
If your chatbot gives advice that’s wrong, your AI-generated product descriptions are inaccurate, or your recommendation engine steers customers toward products based on undisclosed commercial arrangements rather than genuine suitability — you have an ACL problem. The ACCC doesn’t care whether a human wrote the misleading claim or a language model generated it. The conduct is what matters, not the mechanism.
For startups using generative AI in customer-facing contexts, this means implementing quality controls, accuracy monitoring, and appropriate disclaimers. “The AI said it” is not a defence.
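What those quality controls look like in practice will vary, but the core idea is simple: verify factual claims before an AI-drafted reply reaches a customer, attach a disclaimer, and route anything you cannot verify to a human. A minimal sketch, where the verification step is a placeholder you would implement against your own source-of-truth systems:

```python
DISCLAIMER = "This response was generated with AI and may contain errors."

def review_response(draft: str, claims_verified: bool) -> dict:
    """Decide whether an AI-drafted customer reply can be sent as-is.

    claims_verified: the result of checking factual claims (pricing, stock,
    product specs) against your own systems before sending.
    """
    if not claims_verified:
        # Don't let unverified claims reach the customer.
        return {"action": "escalate_to_human", "draft": draft}
    return {"action": "send", "reply": f"{draft}\n\n{DISCLAIMER}"}

print(review_response("The Pro plan includes 5 seats.", claims_verified=False))
# {'action': 'escalate_to_human', 'draft': 'The Pro plan includes 5 seats.'}
```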
Anti-Discrimination Laws
Australia’s federal anti-discrimination framework — the Sex Discrimination Act 1984, Racial Discrimination Act 1975, Disability Discrimination Act 1992, and Age Discrimination Act 2004 — applies to AI-driven decisions just as it does to human ones.
This is particularly relevant for startups building AI tools in hiring, lending, insurance, or any domain where decisions affect people based on characteristics that correlate with protected attributes. Algorithmic bias isn’t just an ethical problem — it’s a legal one. If your AI system produces discriminatory outcomes, you can’t point to the training data and shrug.
The practical requirement is bias testing. Audit your models for disparate impact across protected groups, document what you find, and fix what needs fixing. Maintaining human oversight for high-impact decisions is also strongly recommended.
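As a starting point, disparate-impact testing can be as simple as comparing favourable-outcome rates across groups. A minimal sketch using pandas, assuming you keep a log of decisions alongside the relevant attribute; the 0.8 threshold in the comment is the common “four-fifths” rule of thumb, not a figure from Australian legislation:

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Ratio of each group's favourable-outcome rate to the best-performing
    group's rate. Values well below ~0.8 warrant investigation."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

decisions = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "approved": [1, 1, 1, 0, 0, 0],
})
print(disparate_impact(decisions, "age_band", "approved"))
# 18-34: 1.0, 35-54: 0.5, 55+: 0.0  -> the older groups need a closer look
```

A real audit will need more than raw rates (sample sizes, intersectional groups, legitimate explanatory factors), but a check like this is enough to surface obvious problems and to document that you looked.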
Copyright and Intellectual Property
Copyright is the area where AI regulation is most actively evolving in Australia. In October 2025, the government announced it would not introduce a text and data mining (TDM) exception to the Copyright Act 1968 — meaning using copyrighted material to train AI models without permission or compensation remains legally risky.
For AI startups, this has direct implications:
- Training data provenance matters. If you’re training models on scraped web content, you need to understand the copyright position of that content. “Everyone else does it” is not a legal strategy. (A simple way to start recording provenance is sketched just after this list.)
- AI-generated outputs may not attract copyright protection. Australian law requires a human author for copyright to subsist. Purely AI-generated content may fall into a legal grey zone.
- Contracts need to address AI. If you’re building AI products for clients, your agreements should be explicit about who owns AI-generated outputs, what data the AI was trained on, and what indemnities apply.
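One lightweight way to act on the first point is to keep a provenance register: for every dataset you train on, record where it came from, under what terms, and whether it has been cleared for training. A minimal sketch, with illustrative field names and entries:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Illustrative provenance entry for one training data source."""
    name: str                   # e.g. "support-tickets-2025"
    source: str                 # URL or internal system it came from
    licence: str                # e.g. "CC-BY-4.0", "client contract", "unknown"
    collected_on: str           # ISO date
    cleared_for_training: bool  # has legal/commercial sign-off been given?

registry = [
    DatasetRecord("support-tickets-2025", "internal CRM export",
                  "customer terms of service", "2025-11-02", True),
    DatasetRecord("scraped-blog-posts", "https://example.com",
                  "unknown", "2025-09-14", False),  # do not train on this yet
]
unclear = [d.name for d in registry if not d.cleared_for_training]
print("Needs review before training:", unclear)
```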
The government is consulting on further copyright reform, but for now the safest position is to treat third-party copyrighted material as off-limits for training unless you have a clear licence or exception.
The Voluntary AI Safety Standard and AI6
While there’s no mandatory AI law, the government has published detailed voluntary guidance that Australian startups should be aware of.
The Voluntary AI Safety Standard (VAISS), released in 2024, sets out 10 guardrails covering accountability, risk management, data governance, testing, human oversight, transparency, contestability, and record-keeping. In October 2025, the National AI Centre updated this into the Guidance for AI Adoption (AI6) — six essential practices that integrate the VAISS guardrails into a more practical framework.
These standards are voluntary. But “voluntary” doesn’t mean “irrelevant.” They represent the government’s view of best practice, and they’re likely to influence how regulators interpret existing legal obligations. An AI startup that can demonstrate alignment with AI6 and VAISS is in a much stronger position — both with regulators and with enterprise customers who increasingly require governance frameworks from their AI vendors.
The 10 VAISS guardrails, in summary, cover: organisational accountability, risk management across the AI lifecycle, data governance, testing and evaluation before deployment, human oversight for high-risk decisions, transparency with users, contestability of AI decisions, supply chain transparency, privacy and security, and record-keeping. If you’re building AI products, these are worth reading in full.
The AI Safety Institute
The new Australian AI Safety Institute (AISI), funded with $30 million and housed within the Department of Industry, Science and Resources, is the government’s mechanism for keeping pace with AI developments. Its role is to monitor emerging AI risks, advise on regulatory gaps, test frontier AI systems, and recommend where existing laws need updating.
For startups, the AISI matters because it’s the entity most likely to drive future regulatory changes. If the Institute identifies a gap in how existing laws address a specific AI risk — say, in healthcare AI or autonomous decision-making — expect targeted reforms to follow. The government has been explicit that it “will not hesitate to intervene” if voluntary approaches prove insufficient.
What Founders Should Do Now
You don’t need to hire a compliance team or freeze your AI roadmap. But you should take some practical steps:
- Audit your data practices. Understand what personal information your AI systems collect and process, and make sure your privacy policies are accurate and current. Start preparing for the December 2026 Privacy Act changes.
- Check your training data. Know where your training data comes from. If you’re using third-party content, make sure you have appropriate licences or permissions. Document your data provenance.
- Test for bias. If your AI makes decisions about people — hiring, lending, recommendations, pricing — audit for discriminatory outcomes. Document your testing methodology and results.
- Be transparent. Tell users when they’re interacting with AI. Disclose AI involvement in decision-making. This is good practice now and increasingly likely to be a legal requirement.
- Read the VAISS and AI6. They’re practical, well-structured, and free. Aligning your AI governance with these frameworks is low-cost insurance against future mandatory requirements.
- Build in human oversight. For high-stakes AI decisions, maintain meaningful human review. “Human in the loop” isn’t just a buzzword — it’s a regulatory expectation that’s only going to strengthen. A minimal sketch of what this can look like follows this list.
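On that last point, “human in the loop” can start as something very simple: a gate that holds high-impact or low-confidence automated decisions for human approval rather than actioning them automatically. A minimal sketch, with illustrative decision types and an illustrative confidence threshold:

```python
HIGH_IMPACT = {"deny_loan", "reject_application", "cancel_account"}

def decide(model_decision: str, confidence: float) -> dict:
    """Route high-impact or low-confidence decisions to a human reviewer
    instead of actioning them automatically. Thresholds are illustrative."""
    if model_decision in HIGH_IMPACT or confidence < 0.9:
        return {"status": "pending_human_review", "proposed": model_decision}
    return {"status": "auto_actioned", "decision": model_decision}

print(decide("deny_loan", confidence=0.97))
# {'status': 'pending_human_review', 'proposed': 'deny_loan'}
```

The important part is not the code but the record it creates: you can show that a person, not just a model, stood behind the decisions that mattered.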
The Bottom Line
Australia’s AI regulatory environment is deliberately gradualist. There’s no comprehensive AI act, no mandatory registration, and no blanket prohibition on any AI technology. The government is betting that existing laws, voluntary standards, and a well-resourced safety institute can manage the risks while the technology and the evidence base mature.
For startup founders, this creates a window of opportunity — but not a free pass. The laws that protect consumers, prevent discrimination, safeguard privacy, and enforce intellectual property rights apply to AI just as they apply to everything else your business does. The startups that build compliance into their AI products from the beginning — rather than retrofitting it when regulation catches up — will be the ones best positioned as the framework evolves.
If you’re building AI products and want advice on how to structure your compliance approach, get in touch.