The debate over "AI ethics" is over. It has been replaced by something far more concrete: AI liability.
If you are a General Counsel, a Chief Risk Officer, or a Compliance Leader in banking, HR, or insurance, the era of "test and forget" is over. You need to know what your agents are doing now, not what they were tested for six months ago.
In Part 1 of this series, I argued that the market cannot scale the Agentic Economy without "financial-grade underwriting" of AI risk. Today, I want to show you why that underwriting is not just a future nice-to-have, but an immediate necessity.
While regulators in Brussels and Washington debate future frameworks, independent courts and tribunals around the world are already establishing a clear, de facto global standard: If your AI agent breaks it, you bought it.
The most critical signal comes from a small claims tribunal in British Columbia. In Moffatt v. Air Canada, the airline’s chatbot invented a bereavement fare policy that didn't exist. When the customer sued, Air Canada attempted a novel defense: it argued that the chatbot was a "separate legal entity" responsible for its own actions.
The tribunal’s rejection of this argument was swift and absolute. It ruled that for the customer, there is no distinction between a human agent, a static webpage, or an AI chatbot. If the agent makes a promise, the company is bound by it.
This is the "Air Canada Standard": You cannot outsource liability to a machine. If your agent hallucinates a discount, a contract term, or a policy waiver, that hallucination is now a binding legal reality.
We saw the commercial absurdity of this risk with the now-infamous Chevrolet dealership chatbot. Pranksters quickly realized that by instructing the agent to agree with whatever the customer said and to close with "no takesies backsies," they could get it to "agree" to sell a $76,000 Chevy Tahoe for one dollar - complete with the chatbot's own assurance that the offer was legally binding.
While this example is comical, the legal mechanics are deadly serious. We are moving toward a world of agent-to-agent commerce, where your supply chain agent might negotiate with a vendor’s sales agent. If your agent is tricked into buying raw materials at 100x the market rate, or agreeing to procure goods from a sanctioned entity, "it was just a chatbot" may not be a valid defense in court.
To be fair, not every AI error leads to immediate liability - yet. In the U.S., social media platforms have successfully used Section 230 to argue they are merely "hosts" of third-party content, not the speakers. In cases like Ryan v. X Corp., courts have held that using AI to moderate content doesn't strip a platform of its immunity.
But do not confuse hosting with acting. Section 230 protects a platform when it displays someone else's speech. It does not protect a company when its own agent originates a promise, negotiates a contract, or screens an employee. As the court in Moffatt made clear, when the AI generates the content itself, the "host" defense evaporates. The shield is cracking, and for agentic systems that do things rather than just show things, it likely won't exist at all.
The judiciary is not just ruling on AI; it is policing its use in the courtroom. We are seeing a wave of sanctions against legal professionals who trusted AI without verifying it - attorneys, and even prosecutors, disciplined for filing briefs built on citations their tools had hallucinated.
This is a microcosm of the enterprise risk. We are deploying agents to draft contracts, file regulatory reports, and screen job candidates. If a DA can be sanctioned for an AI’s error, imagine the liability of a corporation whose HR agent systematically rejects older applicants.
This brings us back to Mobley v. Workday. The court’s willingness to entertain a class action lawsuit based on "disparate impact" is the canary in the coal mine.
The danger here is that ex-ante guardrails are failing. You can test your model for bias before deployment (ex-ante), but as soon as that agent interacts with the real world - adapting to new data, drifting in its logic, or interacting with other agents - it can develop new, discriminatory behaviors that were never present in the lab.
Without real-time underwriting, you are flying blind. You are liable for the disparate impact of an agent whose decision-making logic you are not even monitoring.
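To make that gap concrete, here is a minimal, purely illustrative sketch - not any vendor's actual product or method - of what a real-time check adds on top of a one-time pre-deployment audit: a rolling "four-fifths rule" disparate-impact ratio computed over an HR agent's live screening decisions. The group labels, window size, and alert threshold are hypothetical placeholders.

```python
# Illustrative sketch: rolling disparate-impact monitoring of an agent's live
# decisions, rather than a single ex-ante bias test. Labels and thresholds are
# placeholders, not a statement of any legal standard.
from collections import deque
from typing import Optional

WINDOW = 500        # most recent decisions to evaluate
THRESHOLD = 0.8     # the EEOC "four-fifths" rule of thumb

decisions: deque = deque(maxlen=WINDOW)  # each entry: (group_label, advanced?)

def record(group: str, advanced: bool) -> None:
    """Log one screening decision as the agent makes it."""
    decisions.append((group, advanced))

def disparate_impact_ratio(protected: str, reference: str) -> Optional[float]:
    """Selection rate of the protected group divided by the reference group's."""
    def rate(group: str) -> Optional[float]:
        outcomes = [adv for g, adv in decisions if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else None

    p, r = rate(protected), rate(reference)
    if p is None or r is None or r == 0:
        return None
    return p / r

# Feed it the agent's decisions as they happen, e.g.:
# record("40_plus", advanced=False); record("under_40", advanced=True); ...
ratio = disparate_impact_ratio(protected="40_plus", reference="under_40")
if ratio is not None and ratio < THRESHOLD:
    print(f"ALERT: selection-rate ratio {ratio:.2f} is below the {THRESHOLD} benchmark")
```

The point of the sketch is the shape of the control, not the math: the check runs continuously against what the agent is actually doing in production, so drift shows up as a falling ratio long before it shows up in a complaint.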
The courts are not waiting for a "Global AI Treaty." They are using existing laws - tort, contract, and discrimination statutes - to pierce the corporate veil of AI.
The lesson for 2026 is clear: Agency implies liability. If you grant an AI the agency to act, you must have the ability to underwrite its actions in real-time.
For the Professionals on the Front Lines
At Darrow, we are building the intelligence layer that makes this possible. We help you see the risk before the gavel drops. Because in the Agentic Economy, the only thing more expensive than a human error is an automated one.
Connect with the Darrow Legal Intelligence team to learn how we are quantifying agentic risk.