In this bylined piece for Artificial Lawyer, Darrow's VP of Litigation Partnerships Etia Rottman Frand draws on four and a half years leading a team at the intersection of law, business, and AI to share three lessons about what it takes to build and lead effectively in legal technology.
Rottman Frand opens with a grounding observation: the most important litigation decisions are often made long before a case is filed. For contingency-fee firms in particular, case selection is existential — every intake decision affects a firm's ability to continue advocating for future clients. Her team sits at exactly this moment of decision, working directly with plaintiff attorneys to evaluate potential matters surfaced through Darrow's legal intelligence. Every conversation involves tradeoffs, and the responsibility is to present information clearly enough for a partner to make an informed choice. That context shapes everything about how she thinks about building a team.
The first lesson is to hire for judgment over process adherence. In a fast-moving environment where no two conversations with a law firm partner unfold the same way, Rottman Frand looks for people who understand processes deeply but also know when to depart from them. The hiring process at Darrow reflects this directly: candidates are placed in a realistic simulation with minimal background (a conversation with a potential client about a matter they might pursue) and given almost complete freedom to structure it. The exercise reveals how candidates handle uncertainty, adapt when a conversation shifts unexpectedly, and represent the firm to someone with no prior context about Darrow. Because every candidate structures the conversation differently, the interviewers work without a script as well, which makes the assessment genuinely mutual. Strong judgment at this stage means that case opportunities are presented with a clear understanding of risk and reward, and that partners' time and investment decisions are respected.
The second lesson is to design for dissent. In high-stakes environments, healthy disagreement doesn't emerge organically; it has to be deliberately built into how an organization operates. Rottman Frand draws on models from national security and intelligence analysis, where analysts are often present in decision rooms precisely because their role is to confirm or challenge emerging conclusions. Organizations, she argues, need to create conditions where dissent is not just permitted but expected, regardless of seniority. The purpose is twofold: it surfaces edge cases and uncomfortable data points early enough to act on them, and it counters the hubris that naturally accumulates around authority and confidence. When dissent is treated as friction, organizations lose their capacity to self-correct, and risk accumulates unnoticed.
The third lesson concerns the role of technology in legal work. Rottman Frand draws on a foundational piece of legal scholarship — the transformation of disputes through naming, blaming, and claiming — to make the point that harm must become intelligible before it can be addressed. Most potential violations never reach litigation because the earlier transformations fail to occur: evidence stays fragmented, signals go unrecognized, and the connection between cause and harm is never made. Legal technology, at its best, reduces the friction that prevents these transformations. She closes with a metaphor that captures the right relationship between AI and judgment: flight instruments don't replace a pilot — they restore orientation when intuition becomes unreliable. That, she argues, is exactly how technology should function in legal practice.