
Law, Disrupted: My Conversation with John Quinn

Evyatar Ben Artzi

31/10/2022

In October 2022, I had the privilege of being interviewed by world-renowned trial lawyer John Quinn for his breakthrough podcast, Law Disrupted. We had a really profound conversation on the intersection of technology, data, AI, and litigation. I want to share a few highlights here.

To listen to the full interview, “Using Artificial Intelligence to Identify High Value Legal Claims,” click here.

John:

This is John Quinn, and this is Law Disrupted. And today we are speaking with Evyatar Ben Artzi, who is one of the co-founders of an innovative Israel-based company called Darrow. Darrow uses something they call “Justice Intelligence” – artificial intelligence to actually identify instances where companies are breaking the law; are not compliant with rules, for example, about usage of data; or where there is a disparity between what they tell the world they do and what research on the internet or analysis of data shows they actually do. They identify potential claims and classes of people who are injured by this type of activity. And it’s all powered by artificial intelligence. Evya, thanks for joining us here today.

Evya:

Happy to do it. Thanks for having me.

John:

Before we get into what your business does, could you give us a description of your background and how you got to where you are?

Evya:

I was a combat officer in the IDF. And I made the shift to law rather abruptly. I went to law school, where I met Elad, my co-founder, and studied law and cognitive science. We went on to clerk together at the Supreme Court, and witnessed the types of challenges that plaintiffs’ lawyers were facing when they were out there defending people’s rights, and I got the feeling something was wrong with the system. Like it was not a level playing field. And one of the biggest problems we noticed was that lawyers were always looking for more cases and spending a lot of time on this. And it wasn’t only time-consuming; sometimes it was just hard. You’re looking for a needle in a haystack, right? Elad and I met our third co-founder, Gila, who is a gifted data scientist from the intelligence unit in Israel. And the three of us met up and started doing this thing together, trying to build something that could scale.

John:

Yes, it’s an absolutely fascinating business. If we could back up for a minute. When you were in the Israeli military, you were in the IDF’s famous 8200 Intelligence Unit. And I know from having spent time in Israel and talking to a lot of people that many people in the startup communities and the software space come from this unit. It has spawned a lot of different companies, hasn’t it?

Evya:

Yes, absolutely. But I wasn’t in the 8200 unit myself. I was actually a fighter in the field. But my co-founder, Gila, was in 8200. And the combination of people who actually did the work on the ground and people who were in the intelligence unit – the eyes and ears that you would have, digitally speaking – is what created this.

John:

I just have to ask this question because I know it’s a story that has been repeated many times. Does the Israeli government… There must be some restrictions on the use people can make of what they learned in these elite intelligence units, like 8200, when they go out into private practice. They have learned something and then have an idea for a business. There must be some restrictions on what uses they can make of what’s proprietary or secret to the Israeli government.

Evya:

With some startups – not ours, but some startups – the government actually comes by and says, “Well, hey, we have a piece of it. We built these.” So there are some IP issues that happen. For us, it’s a bit different, because this was kind of a concept we used from military intelligence, but not anything specific. So it was our methodologies and know-how that enabled us to get into this.

John:

Well, this whole concept of using intelligence or data in order to identify claims, I find fascinating. And I have never come across anyone or any company who is doing this other than your company. Give us a little idea of how this works.

Evya:

Basically, it’s a new category of startups… Justice Intelligence companies usually are about making the justice system as a whole work better. And toward that goal, they help firms find better cases, reduce the due diligence costs of finding those cases, and bring them to court effectively. Our company is focused on class actions, mass actions, more of the big litigation that’s out there on the consumer side… The idea is that these companies help find new violations (new evidence, new plaintiffs, new underwriting). So that’s the broader category. But what we do at Darrow is a bit different, because we are teaching the law to a machine by turning that law into something machines can consume as rules. So we studied a lot of litigation, and we use it as our roadmap. Once the machine can understand what cases are “good” and what are “bad,” it knows what to look for, essentially. It becomes a search algorithm. If you ask the right questions, you can get indications of wrongdoing from real world data. That’s any text out there, right? We scan real world data (news feeds, social media, sometimes administrative documents like filings, environmental monitoring). We look at HTML source code, legal data from court dockets, and legislation. It all helps you track and detect those harmful events, connect the legal dots to say: okay, this harmful event actually occurred… And then determine how many people were harmed and what the legal outcome would be. Of course, that’s predictive, but it helps set the financial value of a case.
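To make the idea of “turning law into rules a machine can consume” a bit more concrete, here is a minimal, purely illustrative sketch of how patterns distilled from past litigation might be used to flag text for attorney review. The rules, sources, and documents are all hypothetical; this is not Darrow’s actual system.

```python
import re
from dataclasses import dataclass

# Hypothetical "rules" distilled from past litigation: each cause of action
# is represented by text patterns that tend to indicate a possible violation.
CAUSE_OF_ACTION_PATTERNS = {
    "privacy": [r"shares? .* with third parties", r"sold (our|their|user) data"],
    "environmental": [r"(strong|bad|chemical) odor", r"contaminated (water|soil|air)"],
}

@dataclass
class Signal:
    cause_of_action: str
    source: str
    snippet: str

def scan(documents):
    """Scan real-world text (news, social media, filings) for indications of wrongdoing."""
    signals = []
    for source, text in documents:
        for cause, patterns in CAUSE_OF_ACTION_PATTERNS.items():
            for pattern in patterns:
                match = re.search(pattern, text, flags=re.IGNORECASE)
                if match:
                    signals.append(Signal(cause, source, match.group(0)))
    return signals

docs = [
    ("social_media", "Everyone on my street is complaining about a strong chemical odor again."),
    ("news", "The app quietly shares location data with third parties, researchers found."),
]
for s in scan(docs):
    print(f"[{s.cause_of_action}] {s.source}: \"{s.snippet}\" -> route to attorney review")
```

In practice a keyword match like this is only the crudest first filter; the point is simply that once “what a good case looks like” is encoded as rules, finding candidates becomes a search problem over public text.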

John:

I assume a lot of these cases would not otherwise be discovered. Claims that wouldn’t have been discovered if you didn’t have the ability to sift through these data sets and assess or compare them to lawsuits that have been successful, or claims that have been filed.

Evya:

That’s correct. I think the idea is that usually, when a lawyer looks for something, they look for the smoking gun. They are looking for one piece of data to tell them: okay, there was a violation here. And usually it comes up in a news article, when you see something there and you’re like, “Okay, there’s something actionable and I can take this to court. I have enough data here and, with some knowledge that I already have, I know this company did something wrong.” But when you look at it more broadly, and a machine can kind of memorize the whole internet and then read the next piece of news and the next piece of social media, then when you read just a tweet you know to refer to that story that happened to a company somewhere in the past, and you can say, “Okay, now I have a story” from combining three or four or sometimes ten different data points. So that’s kind of putting the puzzle pieces together, not just finding that smoking gun. So yes, I think these are usually cases law firms do not find on their own, or maybe they would find, but it would take significantly longer.

John:

Let me see if I can understand how this works. You’re taking news feeds, public information, public filings… all this information. And you are taking that data and comparing it to other sets of data from which claims have surfaced? Is that essentially it? And you’re trying to match up or model whether this new set of data suggests that there may be a claim that is like this other one that has already been filed. Is that essentially it?

Evya:

Yeah, that’s essentially it. When you think about it in technological terms, it’s a knowledge graph, built from a legal point of view. So you can start to look at each data point and ask, well, is this negligence? Is this a breach of contract? And when you ask these questions – and the sub-questions, of course, and the sub-conditions of each cause of action – then you start getting answers. And once you have enough confidence that some cluster of data points together makes a case, or that a similar story happened in the past and you already have it in case law… then you could say: okay, so this might be a potential case; now a lawyer should look at it and decide whether it’s actionable.
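Here is a toy illustration of the “cluster of data points” idea, assuming each data point carries a company name, a candidate cause of action, and a confidence score. The schema, the numbers, and the way evidence is combined are all hypothetical, not Darrow’s.

```python
from collections import defaultdict

# Hypothetical data points: (company, cause_of_action, confidence between 0 and 1).
data_points = [
    ("AcmeApp", "privacy", 0.35),        # a tweet complaining about oddly specific ads
    ("AcmeApp", "privacy", 0.40),        # a tracker found in the app's web page source
    ("AcmeApp", "privacy", 0.30),        # an old news story about a similar practice
    ("RiverCo", "environmental", 0.25),  # a single odor complaint -- not enough on its own
]

def cluster_candidates(points, threshold=0.7):
    """Group data points by (company, cause of action) and surface clusters whose
    combined evidence crosses a review threshold."""
    clusters = defaultdict(list)
    for company, cause, conf in points:
        clusters[(company, cause)].append(conf)

    candidates = []
    for (company, cause), confs in clusters.items():
        # Naive evidence combination: probability that at least one signal is real,
        # treating signals as independent (a big simplification).
        combined = 1.0
        for c in confs:
            combined *= (1.0 - c)
        combined = 1.0 - combined
        if combined >= threshold:
            candidates.append((company, cause, round(combined, 2), len(confs)))
    return candidates

for company, cause, score, n in cluster_candidates(data_points):
    print(f"{company}: potential {cause} case ({n} data points, combined score {score}) -> attorney review")
```

The point of the sketch is the shape of the workflow: individually weak signals accumulate against a company, and only clusters that cross a threshold get handed to a lawyer.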

John:

Could you walk us through a couple of examples, perhaps, where your data sifting has yielded a potential case?

Evya:

I can’t give examples of actual cases. But the idea is, I think, that the place where we’re the most comfortable today and feel very strong is privacy. In the world of privacy, most violations are contradictions between companies’ legal documents – their Terms of Service or Privacy Policies – and what they actually do. So data from the source code of what the company’s website is doing, or what the company’s app is doing, allows you to compare what they’re saying and what they’re doing. So if you look at that data, it’s sometimes enough to find a case. Of course, you would need a lot more data, like what the company’s financials are, and whether they have been sued for this in the past. And a lot of information about the strength of the case and similar cases that are like it. And whether the language in the specific Terms of Service or Privacy Policy is the relevant language to give consent, or is informative enough for a consumer to decide whether to hand out their personal data or not. But those are one type of privacy case. Of course, there are others, like data breaches and data leaks, and misuse of private information, where it doesn’t matter what you say in the privacy policy – what you’re doing in itself is illegal or non-compliant in some way. Environmental protection – we can find cases of pollution that harm large communities who are unaware of their illnesses, or of damages caused by that pollution in water, air or land. It also happens in antitrust cases, where we find big cartels or places where monopolies exist. And there’s a problem with that as well. Basically, any anti-competitive behavior has happened in the past in some way, and today recreates itself in a different market… so we can track that. And the same thing happens with consumer protection, where usually it’s online consumer protection, although we do have a lot of cases where it’s physical products. But those cases are based on the idea that the company is breaching the contract or doing something that is just illegal.
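As a rough, hypothetical illustration of the “what they say versus what they do” comparison: the tracker domains and policy text below are invented, and a real analysis of consent language, financials, and prior suits would be far more involved than this.

```python
# Hypothetical: third-party domains detected in a site's HTML source / network requests.
detected_third_parties = {
    "ads.example-tracker.com",
    "pixel.example-analytics.com",
    "cdn.example.com",
}

# Hypothetical: what the privacy policy discloses about sharing.
policy_text = """
We share limited usage data with our analytics provider, pixel.example-analytics.com,
to improve the service. We do not sell or share personal data with advertisers.
"""

disclosed = {d for d in detected_third_parties if d in policy_text}
undisclosed = detected_third_parties - disclosed

print("Disclosed in policy:", sorted(disclosed))
print("Present in source code but not disclosed:", sorted(undisclosed))

# An undisclosed ad-tech domain is only an *indication*; an attorney still has to
# assess the consent language, the company's financials, and similar past cases.
```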

John:

I mean, this is fascinating. A number of tech companies, large and small, have gotten into trouble recently for, you know, sharing data with third parties. And sometimes explicitly contrary to what they tell the users they’re doing. And your technology can detect that this sharing is happening.

Evya:

That’s correct. At the end of the day, it’s about trust. And people trust that when they provide their personal data to you, you’ll use it only for good, first of all, and only do the things they expect you to do with it. And I think there’s something today in the world of privacy where companies are kind of learning that it doesn’t matter if you check all the boxes – that’s not enough to be compliant. You need to be really compliant, and that is the good fight, and that is the ESG fight, in my opinion. We’ve been getting into a world where our data is out there so much, we’re not really in control of our lives. So if you look online today, most of the things that you’re seeing are personalized to you. And if data you didn’t know was shared is being used against you – if you didn’t know that data is responsible for what you’re seeing – you’re susceptible to it. And that could be advertising, but it could also be a lot of other types of content. I think that goes to the core of the right to autonomy. Basically, people’s autonomy is in danger. And I think that could change with the right privacy laws, right?

John:

I mean, it is remarkable to me. I saw a report – you may have seen this – in some litigation that Facebook is involved in. And a special master was actually appointed to interview engineers under oath to try to get to the bottom of what data Facebook has, exactly, on its users. And from the reports I read, amazingly, the Facebook engineers essentially said: we cannot answer that question. No one person can answer that question. Put us under oath – we can’t tell you exactly what data we have from our users. That’s one remarkable thing I heard recently. The second – I was talking to the general counsel of a global fund, and this person told me they were very concerned because they realized they don’t know what data they collect. Actually, they don’t know where it’s stored, and they’re not entirely sure what uses are made of it. Now, this is a fund that invests in a lot of different companies, a private equity fund, and this person candidly acknowledged they were quite nervous about the fact that they don’t know the answers to these things. I’m wondering whether there is also a potential business in helping companies understand – sort of map out – what data they have. Because a lot of companies are just grappling with this. I mean, they’ve grown. They’ve been collecting data for years. This whole privacy thing has kind of snuck up on them. They suddenly realize: hey, this is a huge exposure; we better hire a full-time privacy lawyer to oversee this. I think there’s a lot of demand for this type of service that helps companies understand what they’ve got and what they actually do with it.

Evya:

In the future, that might be a business direction for Darrow, after we do what we can with the enforcement market. There’s a kind of deeper problem there. Have you read Industry Unbound? It’s a book about the inside story of privacy. The author spent a lot of time in tech companies in the compliance area, and also with the actual developers, the product people. He did research for three years and then wrote a book about it. Basically, his inference from this, which might be sad, is that companies have no real incentive to comply with the law. The idea is that it’s easier today to do other things; there are other strategies to get rid of the problem of privacy, when all you need to do is, as you say, hire a lawyer, right? Hire a lawyer, call them your Chief Privacy Officer. Usually, don’t give them a budget, and let them do the work. Let them do what they call self-governance, or corporate management of privacy.

John:

Well, I think that’s changing, because you’re seeing settlements of privacy class actions in the hundreds of millions of dollars. You now have a statute in California where, if there’s been a breach and you can’t show that you’ve complied with best practices, there are liquidated damages per consumer, per class member, which can add up very, very fast. And there are similar laws under consideration in other states. There’s a bill before Congress, a federal privacy law, that would create a private right of action for violations. So I think that what you said is an accurate statement of the law, maybe now and certainly in the past. But I really think that’s changing. There are gonna be huge penalties for companies that don’t comply.

Evya:

I’m so hopeful that you’re right. But my feeling, from what I’m doing in the field today, and also from the people I talk to on the class action side – some on the defense, actually, but mostly on the plaintiffs’ side – is that the optimistic indicator is that that’s the trend, right? We’re seeing something move. But the problem is that most of the liquidated damages we see in statutes don’t really get to court. Most of these cases settle out of court, and most of them settle for dimes on the dollar. Cents on the dollar, sometimes.

John:

Depends on the size of the class. Yeah, of course. I mean, I just recently settled one for $450 million, in a class of 60–80 million people. And I think there was like $8 a class member, or something in that range. They came up with that as their damages theory.

Evya:

So if the damages really are $8 per victim, and the statutory damages are sometimes a thousand dollars or $2,000… we have the Wiretap Act, where we can sometimes have $10,000, right? The idea that we’re looking at such a small amount of damages at the end of the day seems to me like something that is not driving companies to comply more and more with the law, but actually to say: well, okay, we now understand what this specific price is, so we can kind of drop that price as we go along in litigation. Because the last one was only thirty cents on the dollar, and you’re a new case and you’re settling earlier, so why not 25 cents on the dollar? I think there’s a trend towards positive enforcement of the law, but we’re still not there. We still need a lot more enforcement to get there.

John:

So let’s talk about another area where you’re helping to identify claims. As you mentioned, pollution. It’s interesting to me that, you know, sitting at your computer in Tel Aviv, you might be able to pick up the fact that some residents in an area of Louisiana or someplace may have environmental claims. How are you able to do that? What data are you looking at? 

Evya:

So, first of all, I think that social media is underestimated in that area, right? People complain all the time about things we don’t really understand. And they do it location-based, all the time. So that data is aggregated. And we know to say: okay, here’s a bunch of complaints rising in this area about bad odor. It doesn’t really matter what exactly it is. But you’re seeing that that in itself is really important, and you can see it as a trend. So that’s one way of kind of finding the indicator of something going wrong, but then you need to correlate that with actual data that monitors the environment. So there are a lot of sensors out there. Some, of course, are proprietary, but others are just publicly available (but not publicly noticeable). And those types of sensors allow us to find the correlation between those complaints, or any type of chatter online that says there’s something wrong, and hard indicators that there’s a problem. And when you connect that to the factories in the area, the people who are dumping in that area, the people who are putting bad materials into oceans or other bodies of water… that kind of allows you to say: okay, we have an initial case here. There’s a lot of work to be done later on. And lawyers, good litigators, are the only ones who can do it sometimes. But having that initial indication allows you to help communities that don’t even know that they’re being wronged.
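A simplified sketch of the correlation step described here, using invented complaint counts and sensor readings; real environmental data, and the statistics behind an actual case, are far more careful than this.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical weekly data for one area: geotagged odor complaints scraped from
# social media versus readings from a publicly available air-quality sensor.
weekly_complaints = [2, 3, 2, 11, 14, 12, 4, 3]               # complaint counts
weekly_h2s_ppb    = [1.0, 1.2, 0.9, 6.5, 7.8, 7.1, 1.5, 1.1]  # hydrogen sulfide, parts per billion

r = correlation(weekly_complaints, weekly_h2s_ppb)
print(f"Pearson correlation between complaints and sensor readings: {r:.2f}")

# A strong correlation, plus discharge records from nearby facilities, is still
# only an initial indication; litigators then do the real work of building the case.
```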

John:

What would be the beginning of finding one of these cases? Are you querying social media data sets for people complaining about bad odors or tastes? Or are you monitoring some type of air and water monitor and detecting an uptick? Or some combination of the two?

Evya:

So it needs to be a combination. Because having just complaints about something is not enough. But you have sort of triggers that are based on past litigation. We’re in an era now where there’s a lot of litigation that has already concluded and that lived in the era of social media. So we know how social media looks when people find out there’s something wrong with the environment. So now, comparing historic data about what happened to what’s going on today allows you to get a better understanding of what the triggers are and what you’re looking for.

John:

That’s fascinating. And another area you mentioned was cartel, competition, antitrust kinds of claims. There, are you looking at pricing patterns, pricing movements in an industry? How do you detect those kinds of claims? 

Evya:

Pricing is harder where it’s a physical product and you don’t always know the price without physically going to the store. But there is a lot of data about pricing online, and there’s a lot of data about pricing in different areas where we’re talking about eCommerce. So there, pricing is available and you can look at it. But those are not the easiest cases. The easiest cases are more like conspiracy – that happens when a company has some platform that was open but now closes it, or does anything to exclude others from a market that they’re in. Using market power from another market that it has a monopoly in. Those kinds of activities are much easier to detect. And that happens all the time. You can see it by fusing both financial data from the public markets and data about what these companies are actually doing with their products, with their launches of a new product, and the prices of those products. Of course, I think in antitrust, the problem is much larger, because finding a claim is not the end of the road, right? I think the idea is that these cases require not only an initial indication but a lot of data to support it. So we work with our partner law firms to actually build the cases up after there’s been an initial indication… I think the million-dollar question for anyone practicing law is: what’s the priority? What is the most painful thing that I could solve for society? And when people ask that question, sometimes they’re regarded as idealists, right? And people say, well, what’s the thing that could make the most money? And this kind of connects to our founding story, because we actually wanted to be an NGO when we started out. But then Gila was like, “Well, I’m not going to be the CTO of an NGO…” She said, we’ll never be independent. If we’re an NGO, we’ll always be reliant on someone else’s donations and never really get justice done. You won’t have a good incentive to find all of the legal violations in the world. We’re focusing on discovering the violation, trying to understand where the most cases are, and where it is the most beneficial to society. We measure social ROI – we measure how these cases really return on investment. What does that mean for society? How much money do we bring back to the American public?

John:

Do you assign a value to social ROI? You actually have metrics for that?

Evya:

Yes. We have metrics. We call it “Gross Litigation Value,” and we’re looking at the valuable litigation that is created based on our platform. The more valuable the litigation, the better it is for us. So the idea is to find the cases that are the most valuable. So of course, we’re looking at the preferences of our partners and clients: what they want from the platform. We do surveys all the time, but we’re also looking at what the potential social ROI of a case would be, and trying to get the cases that are most beneficial to society. We’re trying to create a world of frictionless justice, and to uncover every legal violation out there. But there’s a priority, and that’s what is just bigger right now.

John:

So, how do you work with lawyers? I mean, you don’t file these cases yourselves.

Evya:

Lawyers use our platform. 

John:

Tell us how you work with law firms and lawyers to pursue claims that you identify.

Evya:

We work with law firms to find these cases. The law firms use our platform to source the cases and then they go on and file them, of course. They pay to use the platform. And we get a lot of benefits just from the fact that these cases get to court and succeed there. Our ROI is measured exactly like that.

John:

Are most of the cases generated by your research and data filed in the US or elsewhere? Are you working on other jurisdictions, or just the US?

Evya:

We started out in Israel. We did a few cases to check that we could actually do it. But to build a really big platform, you have to do it in English. That’s the only way. So we built our NLP in English, and then started doing this in the US back in January 2021.

John:

Alright. Can you give us some idea about how large your operations have been? How many law firms you work with or how many cases you’ve been involved in? 

Evya:

We’ve been involved with over one hundred cases altogether in the US, and we work with dozens of law firms nationwide.

John:

I’ve always said, when people talk about artificial intelligence and machine learning, that I’ve thought litigators would be the last to be replaced. Certainly advocacy. And I hope we never reach a situation where judges are using AI programs to decide how long sentences should be, or whether somebody should get out on parole, or whether a motion to compel discovery should be granted, and things like that. But this business that you have, for the first time, has made me wonder about the potential for artificial intelligence to make inroads into litigation. I mean, for a long time now, we have had programs that help us manage data, produce data in discovery, help manage witness testimony and correlations between testimony and documents, and things such as that. But this is a significant step beyond all that.

Evya:

We can answer that question by looking at our clients. Our clients are great partners. They want to scale their business. They’re looking at this as an opportunity for any lawyer in the firm to become the “rainmaker.” Basically, they generate a steady stream of cases that are meritorious cases to work on. So for us, those are the clients, and I think they are more data-oriented, looking for new ways to become those rainmakers. I think in the past it was this model of relationship-based origination of cases. That’s changing today. I think some people say the legal industry is slow to embrace change. That the data revolution is not for lawyers.

John:

We’re the last adopters. Especially in the common law world. We go forward into the future by looking at what we did in the past.

Evya:

That’s what people say. But I think machine learning in itself is exactly that – looking at the past to know what to do in the future, right? At the end of the day, you have training data which allows you to predict what’s going to happen later on. So, lawyers were basically the first adopters of using data, of archiving data. In Persia, like thousands of years ago, people saved every legal document they had. That’s the first thing we have from those cultures. So I think legal data itself is very, very old… And then it became something people were hesitant to adopt.

John:

Do you see a time when you’ll be able to predict outcomes in cases? I can see, you know, repetitive cases like product liability cases. You should get to a point where you can predict outcomes, I would think.

Evya:

We do it today.

John:

How about the typical complex breach of contract case, securities case, or antitrust case? Lawyers who handle those cases, they’re gonna say to you: oh wait, this case is unique. It’s complicated. You can’t just compare it to any other case. I mean, do you think there’s anything to that?

Evya:

Of course there’s something to that, because at the end of the day, lawyers know what they’re doing. I think the issue is time, right? How long does it take you to predict the outcome of a case? If it takes you the time it takes a judge to rule on that case, then that’s too long, right? You need something really fast. I think that for us, the idea is that, for ethical considerations, you would want each case to be reviewed by a human being who actually has a moral compass, to come and say what is right and what is wrong, and whether this is a good case or not.

John:

Evya, for lawyers and others who listen to this, who want to learn more. What’s the best way for them to learn more about Darrow to get in touch with you or to see if there’s an opportunity to work together?

Evya:

They can contact us on our website, www.darrow.ai. Or send me an email – I’m at evya@darrow.ai.

John:

Well, this has been a very fascinating discussion. Thanks very much.

Evya:

Thank you for having me, John.

John:

You’ve been listening to Law Disrupted with me, John Quinn. You can sign up to receive an email when a new episode drops at our website, law-disrupted.fm. If you enjoyed the show, please share a link on social media and follow @JBQLaw or @QuinnEmanuel. Thank you for tuning in.