
Dumb AI is a bigger risk than strong AI

The year is 2052. The world has averted the climate crisis thanks to finally adopting nuclear power for the majority of power generation. Conventional wisdom now holds that nuclear power plants are a solved problem; Three Mile Island is a punchline rather than a disaster. Fears about nuclear waste and plant blowups have been alleviated primarily through better software automation. What we didn't know is that the software for all nuclear power plants, made by a few different vendors around the world, shares the same bias. After two decades of flawless operation, several unrelated plants fail in the same year. The council of nuclear power CEOs realizes that everyone who knows how to operate Class IV nuclear power plants is either dead or retired. We now have to choose between modernity and unacceptable risk.

Artificial Intelligence, or AI, is having a moment. After a multi-decade "AI winter," machine learning has awakened from its slumber to find a world of technical advances, like reinforcement learning and transformers, alongside computational resources that are now fully baked and able to make use of these advances.

AI's ascendance has not gone unnoticed; in fact, it has spurred much debate. The conversation is often dominated by those who are afraid of AI, ranging from ethical AI researchers worried about bias to rationalists contemplating extinction events. Their concerns tend to revolve around AI that is hard to understand or too intelligent to control, ultimately end-running the goals of us, its creators. AI boosters usually respond with a techno-optimist tack. They argue that these worrywarts are simply wrong, pointing to their own abstract arguments, as well as hard data on the good AI has already done, to imply that it will keep doing good for us in the future.

Both of these views miss the point. An ethereal form of strong AI isn't here yet and probably won't be for some time. Instead, we face a bigger risk, one that is here today and only getting worse: we are deploying lots of AI before it is fully baked. In other words, our biggest risk is not AI that is too smart but AI that is too dumb. Our greatest risk looks like the vignette above: AI that is not malevolent but stupid. And we are ignoring it.


Dumb AI is already out there

Dumb AI is a bigger risk than strong AI principally because the former actually exists, while it is not yet known for sure whether the latter is actually possible. Perhaps Eliezer Yudkowsky put it best: “the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

Real AI is in actual use, from manufacturing floors to translation services. According to McKinsey, fully 70% of companies reported generating revenue from using AI. These are not trivial applications, either: AI is being deployed today in mission-critical functions, functions most people still erroneously think are far away, and there are many examples.

The US military is already deploying autonomous weapons (specifically, quadcopter mines) that do not require human kill decisions, even though we do not yet have an autonomous weapons treaty. Amazon deployed an AI-powered resume-sorting tool, then retracted it for sexism. Facial recognition software used by actual police departments is resulting in wrongful arrests. Epic Systems' sepsis prediction model is frequently wrong even though it is in use at hospitals across the United States. A $62 million clinical contract involving IBM's Watson was even canceled because its recommendations were "unsafe and incorrect."

The obvious objection to these examples, put forth by researchers like Michael Jordan, is that these are actually examples of machine learning rather than AI and that the terms should not be used interchangeably. The essence of this critique is that machine learning systems are not truly intelligent, for a host of reasons, such as an inability to adapt to new situations or a lack of robustness against small changes. This is a fine critique, but there is something important about the fact that machine learning systems can still perform well at difficult tasks without explicit instruction. They are not perfect reasoning machines, but neither are we (if we were, we would presumably never lose games to imperfect programs like AlphaGo).

Usually, we avoid dumb-AI risks through testing. But this breaks down in part because we test these technologies in less arduous domains, where the tolerance for error is higher, and then deploy the same technology in higher-risk fields. In other words, the AI models behind Tesla's Autopilot and Facebook's content moderation are built on the same core technology, neural networks, yet it certainly appears that Facebook's models are overzealous while Tesla's are too lax.

Where does dumb AI risk come from?

First and foremost, there is a dramatic risk from AI that is built on fundamentally sound technology but completely misapplied. Some fields are simply overrun with bad practices. For example, in microbiome research, one meta-analysis found that 88% of papers in its sample were so flawed as to be plainly untrustworthy. This is a particular worry as AI gets more widely deployed; there are far more use cases than there are people who know how to carefully develop AI systems, or how to deploy and monitor them.

Another important problem is latent bias. Here, "bias" does not just mean discrimination against minorities, but bias in the more technical sense: a model that errs consistently in a particular, unanticipated direction. Bias can come from many places, whether a poor training set, a subtle implication of the math, or an unanticipated incentive in the fitness function. It should give us pause, for example, that every social media filtering algorithm creates a bias toward outrageous behavior, regardless of which company, country or university produced the model. There may be many other model biases that we haven't yet discovered; the big risk is that these biases may have a long feedback cycle and only be detectable at scale, which means we will only become aware of them in production, after the damage is done.
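
To make "biased in a particular direction" concrete, here is a toy sketch in Python with scikit-learn (the data, model and drift are invented for illustration, not drawn from any system mentioned here): a model that looks fine on its training distribution starts erring with a consistent sign once production data drifts outside it, exactly the kind of pattern that only becomes visible at scale, in production.

```python
# Toy illustration of latent, directional bias (hypothetical data).
# The model fits the training regime well, but once production inputs
# drift outside that regime its errors all point the same way.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training data covers x in [0, 1]
x_train = rng.uniform(0, 1, size=(5000, 1))
y_train = np.sin(3 * x_train[:, 0]) + rng.normal(0, 0.05, 5000)
model = LinearRegression().fit(x_train, y_train)

# Production data drifts to x in [1, 2]
x_prod = rng.uniform(1, 2, size=(5000, 1))
y_true = np.sin(3 * x_prod[:, 0])
errors = y_true - model.predict(x_prod)

print("training error (mean):", (y_train - model.predict(x_train)).mean())  # ~0
print("production error (mean):", errors.mean())  # large and consistently negative
```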

There is also a risk that models carrying such latent risk are too widely distributed. Percy Liang at Stanford has noted that so-called "foundation models" are now deployed quite widely, so a problem in a foundation model can create unexpected issues downstream. The nuclear power vignette at the start of this essay is an illustration of precisely that kind of risk.

As we continue to deploy dumb AI, our ability to fix it worsens over time. When the Colonial Pipeline was hacked, its CEO noted that the company could not switch to manual mode because the people who had historically operated the manual pipelines were retired or dead, a phenomenon called "deskilling." In some contexts you might want to maintain a manual alternative, like teaching military sailors celestial navigation in case of GPS failure, but this becomes increasingly infeasible as society grows ever more automated: eventually the cost is so high that it defeats the purpose of automation. Increasingly, we forget how to do what we once did for ourselves, creating the risk of what Samo Burja calls "industrial exhaustion."

The solution: not less AI, smarter AI

So what does this mean for AI development, and how should we proceed?

AI is not going away. In fact, it will only get more widely deployed. Any attempt to deal with the problem of dumb AI has to address the short-to-medium-term issues mentioned above as well as the longer-term ones, and to do so without depending on the deus ex machina that is strong AI.

Thankfully, many of these problems are potential startups in themselves. Estimates of the AI market's size vary, but it can easily exceed $60 billion, growing at 40% a year. In such a big market, each problem can be a billion-dollar company.

The first important issue is faulty AI: systems developed or deployed in ways that fly in the face of best practices. There needs to be better training, both white-labeled for universities and as career training, and there is room for a "General Assembly for AI" to provide it. Many basic issues, from proper implementation of k-fold validation to production deployment, can be fixed by SaaS companies that do the heavy lifting. These are big problems, each of which deserves its own company.
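
As a concrete illustration of what "proper implementation of k-fold validation" tends to mean in practice, here is a minimal sketch in Python with scikit-learn (a hypothetical example, not code from any vendor mentioned here): preprocessing lives inside the pipeline, so it is re-fit on each training fold and never leaks information from the held-out fold into training.

```python
# A minimal sketch of k-fold cross-validation done properly:
# the scaler is part of the pipeline, so each fold fits its own
# preprocessing and the held-out data stays truly held out.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("AUC per fold:", scores.round(3), "mean:", round(scores.mean(), 3))
```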

The next big issue is data. Whether your system is supervised or unsupervised (or even symbolic!), a large amount of data is needed to train and then test your models. Getting the data can be very hard, but so can labeling, developing good metrics for bias, making sure that it is comprehensive, and so on. Scale.ai has already proven that there is a large market for these companies; clearly, there is much more to do, including collecting ex-post performance data for tuning and auditing model performance.
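
Collecting ex-post performance data is the part most teams skip, so here is a hypothetical sketch (in Python with pandas; the column names and numbers are invented) of the kind of post-deployment audit that turns logged predictions and outcomes into a simple per-group error metric:

```python
# Hypothetical ex-post audit: compare a deployed model's error rate
# across subgroups using logged predictions and observed outcomes.
import pandas as pd

logs = pd.DataFrame({
    "group":      ["a", "a", "a", "b", "b", "b", "b"],  # e.g., a region or demographic
    "prediction": [1, 0, 1, 1, 1, 0, 1],
    "outcome":    [1, 0, 0, 0, 1, 0, 0],
})

logs["error"] = (logs["prediction"] != logs["outcome"]).astype(int)
error_by_group = logs.groupby("group")["error"].mean()

print(error_by_group)                                             # error rate per subgroup
print("disparity:", error_by_group.max() - error_by_group.min())  # one crude bias metric
```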

Lastly, we need to make actual AI better. We should not fear research and startups that make AI better; we should fear their absence. The primary problems come not from AI that is too good, but from AI that is too bad. That means investing in techniques to decrease the amount of data needed to make good models, in new foundation models, and more. Much of this work should also aim to make models more auditable, with attention to explainability and scrutability. While these will be companies too, many of these advances will require R&D spending within existing companies and research grants to universities.

That said, we must be careful. Our solutions may end up making problems worse. Transfer learning, for example, could prevent error by allowing different learning agents to share their progress, but it also has the potential to propagate bias or measurement error. We also need to balance the risks against the benefits. Many AI systems are extremely beneficial. They help the disabled navigate streets, allow for superior and free translation, and have made phone photography better than ever. We don’t want to throw out the baby with the bathwater.
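
To see how shared progress can also share flaws, consider a minimal transfer-learning sketch in PyTorch (a hypothetical example on dummy data; the task and class count are invented): only a new head is trained, so whatever the pretrained backbone learned, including any biases baked into its weights, flows into every downstream model built on top of it.

```python
# Minimal transfer-learning sketch: reuse a pretrained ImageNet backbone,
# train only a new classification head. Any bias in the frozen backbone
# is inherited by every model fine-tuned this way.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False          # freeze the shared representation

num_classes = 3                          # illustrative downstream task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy data
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(x), y)
loss.backward()
optimizer.step()
```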

We also need to not be alarmists. We often penalize AI unfairly for errors because it is a new technology. The ACLU found that Congressman John Lewis was falsely matched to a mugshot by facial recognition software; his status as an American hero is usually used as a "gotcha" for tools like Rekognition, but the human error rate for police lineups can be as high as 39%! It is like when Tesla batteries catch fire: obviously, every fire is a failure, but electric cars catch fire much less often than cars with combustion engines. New can be scary, but Luddites shouldn't get a veto over the future.

AI is very promising; we just need to make it easy to make it truly smart at every step of the way, to avoid real harm and, potentially, catastrophe. We have come so far. From here, I am confident we will only go farther.

Evan J. Zimmerman is the founder and CEO of Drift Biotechnologies, a genomic software company, and the founder and chairman of Jovono, a venture capital firm.



Author: Evan J. Zimmerman, Drift Biotechnologies
Source: VentureBeat

