On 16 April 2026, Chief Justice Debra Mortimer of the Federal Court of Australia issued the Use of Generative Artificial Intelligence Practice Note (GPN-AI) — the Court’s first comprehensive statement on how generative AI tools like ChatGPT, Copilot, and Claude can and cannot be used in Federal Court proceedings. For startups that are building AI into their workflows, or that might one day find themselves in litigation, this practice note creates obligations that extend well beyond the legal profession.
This article explains what GPN-AI says, who it applies to, and what Australian startup founders should do about it.
What GPN-AI Actually Requires
The practice note is built around three core expectations:
- Basic understanding. Anyone using generative AI in connection with Court proceedings must have a basic understanding of the technology’s capabilities, limitations, and risks, including its tendency to produce inaccurate or entirely fictitious material.
- No compromise to justice. The use of AI must not undermine the proper administration of justice or public confidence in the legal system.
- Disclosure on demand. If the Court requires it, a person must disclose whether and how generative AI was used in preparing materials for a proceeding.
The practice note does not ban AI. The Court expressly “embraces” the use of technology and acknowledges that AI can increase efficiency, reduce costs, and improve access to justice. But it draws a hard line at accuracy: the presentation of false or inaccurate information to the Court is described as “unacceptable” regardless of whether a human or an AI tool generated it.
Who Must Comply
GPN-AI applies to every person who appears before or files documents with the Federal Court. That includes:
- Legally represented parties
- Self-represented litigants
- Witnesses and expert witnesses
- Third parties subject to subpoenas or document production orders
If your startup is ever a party to Federal Court proceedings, these obligations apply directly to your company and its employees, not just to your lawyers. That holds whether the matter is a contractual dispute, an IP infringement claim, an employment matter, or an ASIC enforcement action.
The Disclosure Obligations
The practice note creates two tiers of disclosure obligations:
Pleadings, submissions, and documents
For documents filed with the Court (pleadings, written submissions, chronologies, lists of documents), the person responsible for preparing the document must be able to confirm that:
- Facts stated can actually be proved
- Legal authorities cited genuinely exist and support the stated proposition
- Evidence cited is real and likely admissible
- Chronologies are accurate
If AI was used in preparing those documents, the responsible person must have verified all of this themselves. The AI does not carry responsibility — the person does.
Evidence and expert reports
Stricter requirements apply to affidavits, witness statements, and expert reports. Where AI tools have been used to summarise or analyse information that a witness relies on, to create images, videos or sound recordings presented to the Court, or in any other manner that might affect admissibility, that use must be disclosed.
The disclosure must appear at the start of the document and state, as concisely as possible, where in the document AI has been used and how.
Expert witnesses remain subject to their overriding duty to assist the Court impartially. Their reports must reflect their own reasoning and opinions — not outputs generated by an AI tool.
The Confidentiality Problem
For startups, the most commercially significant section of GPN-AI may be its guidance on confidential information. The practice note warns that when confidential information is entered into a generative AI tool, it may become available to other people. Users may not know where that information is stored, how it is used, or who will have access to it.
The Court identifies several categories of information that must not be entered into AI tools in ways inconsistent with the obligations attaching to them:
- Information subject to court confidentiality or suppression orders
- Legally privileged material
- Information subject to implied undertakings (the Harman obligation)
- Material produced under subpoena
- Information that is otherwise private or confidential
For startups involved in litigation, this creates a practical risk. If your team uses AI tools to prepare for discovery, review documents, or draft correspondence about a dispute, and confidential or privileged information is entered into a public or semi-public AI tool, you may have waived privilege or breached court orders — even if that was not your intention.
The practice note draws a distinction between open or public AI tools and closed or enterprise systems. The risks of inadvertent disclosure may be lower for tools operating in controlled environments. But using a closed system does not automatically solve the problem — outputs from that tool could still breach confidentiality obligations if not handled carefully.
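To make that distinction concrete, one way a startup can reduce the risk of inadvertent disclosure is a simple policy gate that blocks sensitive material from ever reaching a public tool. The sketch below is a minimal illustration under our own assumptions: GPN-AI does not prescribe any particular technical control, and the category names and function here are hypothetical.

```python
from enum import Enum

class Sensitivity(Enum):
    # Categories adapted from GPN-AI's list of protected information;
    # the enum itself is our own illustration.
    PUBLIC = "public"
    CONFIDENTIAL = "confidential"
    PRIVILEGED = "privileged"
    SUPPRESSION_ORDER = "suppression_order"
    SUBPOENAED = "subpoenaed"

# Hypothetical policy: only material classified as public may be sent
# to an open or public AI tool; everything else stays in-house.
BLOCKED_FOR_PUBLIC_TOOLS = {
    Sensitivity.CONFIDENTIAL,
    Sensitivity.PRIVILEGED,
    Sensitivity.SUPPRESSION_ORDER,
    Sensitivity.SUBPOENAED,
}

def check_ai_submission(text: str, sensitivity: Sensitivity,
                        tool_is_public: bool) -> str:
    """Return the text unchanged if policy permits the chosen tool, else raise."""
    if tool_is_public and sensitivity in BLOCKED_FOR_PUBLIC_TOOLS:
        raise PermissionError(
            f"blocked: {sensitivity.value} material must not be entered "
            "into a public generative AI tool"
        )
    return text
```

The point of the design is that the classification decision happens before any text leaves the controlled environment, which is where the practice note locates the risk.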
Consequences of Non-Compliance
The practice note is clear that consequences will follow for those who use AI inconsistently with its requirements. Chief Justice Mortimer identified two categories:
- Adverse costs orders — the Court can order that a party pay the other side’s costs where AI misuse has wasted the Court’s time or frustrated the just resolution of proceedings.
- Professional obligation issues — for lawyers, non-compliance may raise questions of fitness to practise. The Court flags that existing duties of candour, accuracy, and not misleading the Court apply with full force regardless of whether AI was involved.
Australia has already seen at least 73 identified cases in which AI-generated hallucinations appeared in court documents. In 2025, a Victorian lawyer who filed false AI-generated citations was stripped of his ability to practise as a principal lawyer. The Federal Court’s practice note makes clear that similar consequences now have a formal framework behind them.
What This Means for Startups
If you are using AI in compliance or legal functions
Many startups use AI tools for contract review, regulatory research, compliance monitoring, or document generation. GPN-AI does not directly regulate those internal uses — but if any of that work product ends up in Federal Court proceedings, the practice note’s verification and disclosure obligations apply. The lesson: treat AI outputs as drafts that require human verification, not as finished products.
If you are building AI tools
Startups building legal technology, compliance automation, or AI-assisted document review should pay attention to how the Courts are framing acceptable use. GPN-AI’s distinction between open and closed AI environments, and its emphasis on data handling transparency, signals the kind of assurances that enterprise customers (including law firms and corporate legal departments) will demand. Building audit trails, keeping data within controlled environments, and enabling users to demonstrate verification will become table stakes.
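By way of illustration, an audit trail can start as something very small: an append-only record of each AI interaction, hashed so the log proves what was produced without duplicating sensitive content. This is a minimal sketch under our own assumptions; the field names, file format, and function are hypothetical rather than anything a court has specified.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # append-only JSON Lines file (hypothetical)

def log_ai_use(tool: str, prompt: str, output: str, user: str,
               verified_by: str | None = None) -> dict:
    """Append one AI-usage record. Content is hashed so the log stays small
    and does not itself become a copy of confidential material."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "verified_by": verified_by,  # stays None until a human signs off
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Hashing rather than storing the raw prompt and output keeps the log from becoming a second copy of sensitive material, while still letting a user later demonstrate that a given output was (or was not) AI-generated and whether anyone verified it.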
If you might end up in litigation
Every startup is one dispute away from the Federal Court. Whether it is a co-founder disagreement, an investor claim, an IP dispute, or a regulatory action, these rules will apply to you and your team — not just your lawyers. The time to establish sensible AI governance policies is before a dispute materialises, not after your counterparty’s lawyers ask the Court to order disclosure of your AI usage.
Practical Steps for Founders
- Implement an AI usage policy that covers how your team uses AI tools in connection with legal, compliance, and regulatory matters. The policy should distinguish between internal use (lower risk) and any use that could generate materials relied on in proceedings (higher risk).
- Use enterprise or closed AI tools for any work involving confidential business information, privileged communications, or sensitive commercial data. Public AI tools should not be used for legally sensitive material.
- Maintain verification workflows. Any AI-generated output that could be relied on in legal proceedings, whether a contract summary, a compliance analysis, or a chronology of events, should be verified by a qualified human before it is treated as authoritative (a minimal sketch of such a gate follows this list).
- Brief your team. Ensure that anyone who might produce documents relevant to litigation understands that AI use may need to be disclosed and that they should keep records of how AI tools were used in document preparation.
- Talk to your lawyers. If your startup is already in, or anticipates, Federal Court proceedings, discuss with your legal team how GPN-AI applies to your specific situation, particularly around discovery obligations and confidential information.
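Steps 3 and 4 can be combined in code: keep every AI-generated work product flagged as a draft until a named human confirms checks that mirror what GPN-AI expects for filed documents. The sketch below is illustrative only; the class, field names, and check list are our own assumptions, not anything the practice note prescribes.

```python
from dataclasses import dataclass, field

# Checks mirror what the practice note expects the responsible person
# to confirm for filed documents; the names here are our own.
REQUIRED_CHECKS = frozenset(
    {"facts_provable", "authorities_exist", "evidence_real", "chronology_accurate"}
)

@dataclass
class AIDraft:
    """An AI-generated work product that stays a draft until verified."""
    content: str
    source_tool: str
    checks_done: set[str] = field(default_factory=set)
    verifier: str | None = None  # stays None until a named human signs off

    def sign_off(self, verifier: str, checks: set[str]) -> None:
        """Record verification; refuse if any required check is missing."""
        missing = REQUIRED_CHECKS - checks
        if missing:
            raise ValueError(f"cannot sign off; unverified checks: {sorted(missing)}")
        self.checks_done = set(checks)
        self.verifier = verifier

    @property
    def is_authoritative(self) -> bool:
        return self.verifier is not None
```

A draft built this way reports is_authoritative as False until sign_off succeeds, which makes the rule that AI outputs are drafts, not finished products, enforceable in tooling rather than aspirational.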
The Broader Trend
GPN-AI is part of a broader movement across Australian courts. The NSW Supreme Court issued its own generative AI practice note (SC Gen 23) in 2024. The Victorian Law Reform Commission tabled a comprehensive report on AI in courts and tribunals in February 2026. And in November 2025, the High Court’s Chief Justice Gageler said that judges had become “human filters” for AI-generated content and that the situation had reached an “unsustainable phase.”
For startups, the signal is consistent: AI use is not being banned, but it is being brought within existing accountability frameworks. The companies that build responsible AI usage into their operations now — before a court or regulator forces the issue — will be better positioned than those scrambling to comply after the fact.
Viridian Lawyers advises Australian startups and technology companies on regulatory compliance, governance, and dispute resolution. If you need guidance on how your startup’s AI usage interacts with court obligations, get in touch.