How Can AI and Data Analysis Save a Legal System?

Jerusalem Post

The Jerusalem Post profiles Darrow co-founder and CTO Gila Hayat at the publication's Women Leaders Summit, in a conversation that moves beyond Darrow's product to examine the broader relationship between AI, data, democratic representation, and trust in legal institutions.

Hayat opens with a diagnosis: public trust in the legal system is eroding. Darrow's response is to build AI that identifies when companies break the law against large groups of people — surfacing violations that the system itself lacks the capacity to detect proactively. The platform scans publicly available data to generate litigation assets for legal professionals, transforming what was once invisible harm into actionable cases.

The interview also touches on a more nuanced challenge: AI reflects the biases embedded in the data it is trained on. Judicial opinions, case law, and historical legal records all carry the accumulated biases of the humans who produced them. Hayat's view is that the solution is not to avoid AI but to build systems grounded in facts about what actually happened to real people — creating a corrective dataset that counterbalances historical distortions rather than perpetuating them.

The most striking part of the conversation is Hayat's argument about data and democratic participation. Everything people say and do online is being ingested into systems that are already making consequential decisions about how society functions. For communities that feel underrepresented, the practical implication is clear: speaking up, sharing experiences, and creating a positive online presence are themselves forms of civic participation in an AI-shaped world. Silence, she argues, is also a data point — and not a neutral one.