3 Ways Technology Amplifies Human Judgment in Law

Artificial Lawyer

In this piece for Artificial Lawyer, Darrow's VP of Litigation Partnerships Etia Rottman Frand argues that the most important thing to understand about AI in legal practice is not what it can do — it's what it cannot and should not do. Structured around three interconnected arguments, the piece makes the case that technology's role is to amplify human judgment in law, not replace it.

The first argument concerns litigation as a force for accountability. Litigation is one of society's strongest mechanisms for holding institutions responsible for harm. The challenge is that evidence of harmful conduct is often scattered and hidden — buried in complex datasets, website code, opaque algorithms, and digital traces that no individual or team could realistically review at scale. Modern legal technology, when built and applied responsibly, gives litigators a broader perspective. Darrow, for example, uses legal intelligence to detect high-merit matters in data privacy, consumer protection, and environmental law by connecting disparate data points and surfacing signals that indicate harm. The technology widens the net. Human litigators still decide what qualifies as harm and how to act on it. And while AI can help solve complex problems, independent legal judgment aligned with social values remains irreducibly human.

The second argument draws on legal scholarship to frame the relationship between AI and expertise. Rottman Frand cites law professor David Yosifon, who argues that AI will not so much take legal jobs as change what those jobs require. In his view, AI should handle mechanical tasks — sifting through thousands of documents for a single critical fact — while lawyers focus on what machines cannot supply: moral reasoning and ethical judgment. Yosifon likens AI to earlier technological shifts such as the printing press, which made legal texts more accessible without removing the need for lawyers to interpret laws and argue in court. His core claim maps directly onto the model Darrow practices: technology structures information and simulates certain forms of reasoning, but real decision-making belongs to human attorneys. The goal is to free lawyers to do more of the genuine work of lawyering, not less.

The third argument looks toward what legal technology authority Richard Susskind calls the era of the AI-empowered client — a future in which legal capability is embedded in the systems people already use, rather than accessed exclusively through a traditional retainer. For corporations, this means compliance built into operational processes. For individuals, it means accessible tools that help them recognize and enforce their rights before harm becomes irreversible. But even in this vision, human judgment sits at the center. The lawyers who matter most in this future will not only litigate cases — they will design the systems themselves, defining guardrails, setting standards of fairness, and determining when a problem exceeds what an automated system can safely handle.

Rottman Frand closes by grounding all three arguments in Darrow's day-to-day work. Rather than waiting for complaints to arrive one filing at a time, Darrow monitors public data for signs of illegal conduct and lets human litigators decide what rises to the level of a case worth pursuing. The opportunity for tomorrow's lawyers is to help build and govern the systems that will deliver legal capability at scale — and that work calls for exactly the qualities AI cannot supply: ethical reasoning, empathy for affected communities, and a clear sense of how legal norms should guide real-world behavior.