
Reid Hoffman on AI, defense, and ethics when scaling a startup

LinkedIn cofounder and Greylock Partners investor Reid Hoffman tells executives running fast-scaling startups, the kind that aim to double in size every few months, to build ethics into their businesses.

As companies plan for the future and grow their engineering or sales ranks, they should consider what can go wrong, he said, and hire people whose dedicated job is risk management. (Former DARPA director Regina Dugan suggested the same at an AI conference for CEOs held by Samsung last year.) Next, he added, companies can develop a risk framework that sorts risks by severity: anything that poses a catastrophic risk to individuals, a systemic risk to company systems, or a risk to a large number of users should be handled proactively, so the company can manage its biggest dangers while still moving fast enough to stay competitive with other startups.
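
Hoffman described that framework only in broad strokes. As a purely illustrative sketch (the tier names, the `Risk` type, and the `triage` function below are invented for this example, not anything he or the article specifies), sorting risks by severity might look like this in Python:

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """Hypothetical severity tiers, loosely based on the categories Hoffman names."""
    CATASTROPHIC_TO_INDIVIDUALS = 3
    SYSTEMIC_TO_COMPANY = 2
    BROAD_USER_IMPACT = 1
    ROUTINE = 0


@dataclass
class Risk:
    description: str
    level: RiskLevel


def triage(risks: list[Risk]) -> list[Risk]:
    """Return risks ordered most-severe-first, so the worst get proactive attention."""
    return sorted(risks, key=lambda r: r.level.value, reverse=True)


if __name__ == "__main__":
    # Invented example entries, for illustration only.
    backlog = [
        Risk("Ranking bug degrades feeds for many users", RiskLevel.BROAD_USER_IMPACT),
        Risk("Model update could leak personal data", RiskLevel.CATASTROPHIC_TO_INDIVIDUALS),
        Risk("Single point of failure in the auth service", RiskLevel.SYSTEMIC_TO_COMPANY),
    ]
    for risk in triage(backlog):
        print(f"[{risk.level.name}] {risk.description}")
```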

Hoffman, who coauthored the book Blitzscaling, joined former White House chief data scientist DJ Patil and Stanford University political science professor Amy Zegart on Tuesday at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) fall symposium on AI ethics, governance, and policy, held at the Hoover Institution on Stanford's campus. HAI got its start earlier this year with support from Hoffman and other well-known players in the AI space, including IBM Research director Dario Gil, Microsoft Research director Eric Horvitz, and AWS CEO Andy Jassy.

Scaling quickly matters a lot to digital-first companies, which are racing to outperform every other company that has a server and an internet connection, he said. “That speed to scale really matters a lot. You can still do that while thinking, OK, we as the initial founding team are thinking a little bit about what kind of things can go wrong. Then as we scale to a multi-threaded organization, we can hire people whose job it is to say ‘That’s what I do, is make sure I identify those [risks],’ and then working with the rest of the organization on what are the ways we don’t steer into those,” Hoffman said.

Hoffman sits on the board of the Stanford Institute for Human-Centered Artificial Intelligence and was a major backer of the Ethics and Governance of Artificial Intelligence Fund launched at MIT and Harvard University in January 2017.

Later, when the conversation moved to AI applications in health care, he argued that ethics is not about avoiding every single negative outcome; it is about weighing the tradeoffs between risks of negative outcomes and working to keep those risks low.

“The problem that people typically use when they use the word ‘ethics’ is they try to have a zero percent chance of a bad outcome, and a zero percent chance of a bad outcome means you’re going to do almost nothing and move super slow, and a lot of people in this case are going to die from cancer that you otherwise could’ve solved,” Hoffman said.

“The humanist questions, the ethics question is about saying which questions do we need to ask when we’re building features on these data sets, and then which questions need to be asked in order for us to have a sense of belief that we’re moving in the right direction towards having more justice in society,” he added.

In the conversation, moderated by Zegart, Patil and Hoffman talked about AI, U.S.-China relations, the creation of AI for the benefit of mankind, what they’re worried about in the years ahead, and potential tech regulation from Washington, D.C.

When asked to predict the biggest negative consequence of AI in the next two decades, Patil and Hoffman agreed it will be a cyberattack by an asymmetric actor, akin to the terrorist attacks the U.S. faced on 9/11. “I think the version you see of this is a massive cyberattack that does very large-scale destruction, and there will be some form of AI or machine learning in there,” he said.

Patil may be best known as the person who coined the term “data scientist” while at LinkedIn more than a decade ago.

Just as he has urged entrepreneurs to take a “tour of duty” in government, Patil urged technologists to consider government jobs themselves. “When technologists don’t show up, you’re going to get poor policy decisions,” he said.

Patil said former FBI director James Comey made a massive error when he failed to work with encryption specialists, something he called “an unfortunate miss in leadership.” On the flip side, he pointed to former Secretary of Defense Ash Carter’s creation of the Defense Innovation Board as an example of the government incorporating tech expertise well.

Earlier in the day, former Google CEO Eric Schmidt warned against overregulation of AI, while former European Parliament member Marietje Schaake said regulators need to be empowered to defend democracy and fair market practices.

The Defense Innovation Board, chaired by Schmidt, will share a final draft of its AI ethics recommendations for the Department of Defense at a meeting Thursday. Hoffman, who also sits on that board, went further than Patil in defending work with the military. (He is also on the board of Microsoft, which landed a $10 billion Department of Defense cloud computing contract on Friday.)

“The actual function that the DoD is trying to provide for is, how do you keep peace? So if you say I’m opting out and not helping people, not only are you endangering people who put their lives on the line, not only are you not doing your duty as a citizen, but you’re undermining the actual function, which is how do we keep peace,” Hoffman said.


Author: Khari Johnson
Source: VentureBeat
