When VentureBeat asked Andrew Burt why he was starting an AI-focused law firm, Burt was quick to clarify that it’s about AI and analytics. But that didn’t answer the underlying question of why the world needs a law firm focused so precisely on this one area.
“The thesis behind the law firm is that traditional legal expertise on its own is not sufficient,” said Burt, a Yale Law School alum. His partner is data scientist Patrick Hall, and together they aim to provide legal acumen around AI and analytics that’s bolstered by technical understanding. “If we are going to successfully manage the risks of AI and advanced analytics, we need both of these types of expertise commingled,” added Burt.
Called bnh.ai (techy shorthand for “Burt and Hall”), the firm is located in Washington, D.C., which Burt says confers a key advantage. “There’s a rule in D.C. It’s called 5.4b, and it basically allows Washington, D.C. to be the only place in the country where lawyers and non-lawyers can jointly run law firms together,” he explained. That’s why Hall, who is not an attorney, can be a partner in this law firm.
Hall is an adjunct professor who teaches graduate courses in data mining and machine learning in the Department of Decision Sciences at George Washington University, and he’s also the senior director for data science products at H2O.ai. Burt’s other job is as chief legal officer at Immuta, which advises companies on legal and ethical uses of data and will incubate the new law firm.
Burt said that although bnh.ai is emerging from a sort of stealth, it has been operational for weeks and already has some clients. That early work hasn’t yet settled the firm’s direction, but the team knows it will include AI regulation policy, perhaps an obvious focus given the D.C. location. For Burt, such policy work stands at an ideal intersection between technological knowledge and legal expertise. “Our plans are very ambitious, and we certainly hope to do that. Over the last few years, we have been informally advising regulators and policymakers on how to approach a lot of the issues raised by AI,” Burt said.
But what about legal challenges from people who feel they’ve been injured in some way by an AI system, like potential discrimination in a job hunt, accidents caused by autonomous vehicles, or violations of data privacy rights? Burt acknowledged that there’s much to mine in these areas, and he didn’t rule out the possibility that the firm would litigate cases. “The truth is we don’t know, because we’re just starting out, but so far the answer seems to be that customers are engaging us kind of prelitigation,” he said. “And the idea is to figure out what could go wrong, and then how do we minimize impact? Or, frankly, what [is] going wrong, and how do we stop it? So there’s an incident response component to this as well.”
It all comes down to liability. Burt said that when companies — whether small or large (as in, Fortune 100) — put resources into AI projects, they start to see that AI is a huge liability because of its powerful capabilities. Burt broadly classifies AI liability into three categories: fairness concerns, privacy, and security. He sees the issue of interpretability as an umbrella over all three. “If you can’t interpret your AI, it’s very hard to understand what types of liabilities it’s causing.”
AI presents novel problems that, naturally, have legal ramifications. For example, there’s debate about whether an AI can hold a patent or copyright a written work. As the medical field adopts more machine learning and computer vision tools in patient diagnostics, questions about physician liability continue to percolate. Meanwhile, lawmakers are wrestling with how to understand and regulate facial recognition.
Though readily acknowledging that AI demands a new generation of laws and regulations, Burt asserts that existing legal precedent can be adapted or applied to these new problems. “I have to disagree with the point that all of this is new,” he said. “One of my frustrations is that frequently [in] these conversations, people act as if we’re starting from square zero or square one. And that’s not true. There are lots of different frameworks we can point to.” A key example he flagged is SR 11-7, Federal Reserve supervisory guidance on managing risk from algorithmic models. It’s been on the books since 2011.
His adamant position on precedent is perhaps somewhat controversial. But regardless of whether AI presents wholly novel legal challenges or not, no one would disagree that navigating them is difficult. This is what Burt and Hall are leaning into with their AI-focused law firm. “I can’t even tell you the number of situations I’ve been in where the biggest question, kind of standing in the way of the adoption of artificial intelligence, was not technical. It was legal,” he said.
Author: Seth Colaner.
Source: VentureBeat