AI algorithms could disrupt our ability to think

Last year, the U.S. National Security Commission on Artificial Intelligence concluded in a report to Congress that AI is “world altering.” AI is also mind altering, as the machine increasingly becomes the mind. This is an emerging reality of the 2020s. As a society, we are learning to lean on AI for so many things that we could become less inquisitive and more trusting of the information AI-powered machines provide. In other words, we may already be outsourcing our thinking to machines and, as a result, losing a portion of our agency.

The trend toward greater application of AI shows no sign of slowing. Private investment in AI reached an all-time high of $93.5 billion in 2021, double the prior year's total, according to the Stanford Institute for Human-Centered Artificial Intelligence. And the number of AI-related patent filings in 2021 was 30 times greater than in 2015, a sign the AI gold rush is running at full force. Fortunately, much of what is being achieved with AI will be beneficial, as evidenced by AI helping to solve scientific problems ranging from protein folding to Mars exploration and even communicating with animals.

Most AI applications are based on machine learning and deep learning neural networks that require large datasets. For consumer applications, this data is gleaned from personal choices, preferences, and selections on everything from clothing and books to ideology. From this data, the applications find patterns, leading to informed predictions of what we would likely need or want or would find most interesting and engaging. Thus, the machines are providing us with many useful tools, such as recommendation engines and 24/7 chatbot support. Many of these apps appear useful — or, at worst, benign.
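To make that pattern-finding concrete, here is a minimal sketch in Python of one common approach, item-based collaborative filtering. It is illustrative only: the ratings matrix, user indices, and function names are invented for this example, not drawn from any particular product.

```python
# A minimal sketch of the pattern-finding behind many recommendation
# engines: item-based collaborative filtering with cosine similarity.
# The ratings matrix below is invented purely for illustration.
import numpy as np

# Rows are users, columns are items (e.g., books); values are ratings,
# with 0 meaning "not yet rated."
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two item-rating vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user: int, k: int = 1) -> list[int]:
    """Score each unrated item by its similarity to items the user liked."""
    scores = {}
    for item in range(ratings.shape[1]):
        if ratings[user, item] > 0:
            continue  # skip items the user has already rated
        sims = [
            cosine_similarity(ratings[:, item], ratings[:, other]) * ratings[user, other]
            for other in range(ratings.shape[1])
            if ratings[user, other] > 0
        ]
        scores[item] = sum(sims)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(user=0))  # items most like what user 0 already rated highly
```

The key point is that the system never asks what the user wants; it infers it from what similar choices looked like in the past, which is precisely why its suggestions feel so effortless to accept.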

An example many of us can relate to is the AI-powered app that provides driving directions. These apps are undoubtedly helpful, keeping people from getting lost. I have always been very good at directions and reading physical maps; after driving to a location once, I have no problem getting there again without assistance. But now I have the app on for nearly every drive, even to destinations I have driven to many times. Maybe I’m not as confident in my sense of direction as I thought; maybe I just want the company of the soothing voice telling me where to turn; or maybe I’m becoming dependent on the app. I do worry now that without it, I might no longer be able to find my way.

Perhaps we should pay more attention to this not-so-subtle shift in our reliance on AI-powered apps. We already know they diminish our privacy. If they also diminish our human agency, that could have serious consequences. If we trust an app to find the fastest route between two places, we are likely to trust other apps and will increasingly move through life on autopilot, just as our cars will in the not-too-distant future. And if we also unconsciously digest whatever is presented in our news feeds, social media, search results, and recommendations, perhaps without questioning it, will we lose the ability to form opinions and interests of our own?

The dangers of digital groupthink

How else could one explain the rise of QAnon, the completely unfounded conspiracy theory that elite Satan-worshipping pedophiles in U.S. government, business, and the media are seeking to harvest children’s blood? The theory started with a series of posts on the message board 4chan, then spread rapidly through other social platforms via recommendation engines. We now know, ironically with the help of machine learning, that the initial posts were likely created by a South African software developer with little knowledge of the U.S. Nevertheless, the number of people who believe the theory continues to grow, and by some measures it rivals mainstream religions in popularity.

According to a story published in the Wall Street Journal, the intellect weakens as the brain grows dependent on phone technology. The same likely holds for any information technology where content flows our way without our having to work to learn or discover it on our own. If that’s true, then AI, which increasingly presents content tailored to our specific interests and reflective of our biases, could create a self-reinforcing syndrome: it simplifies our choices, satisfies immediate needs, weakens our intellect, and locks us into an existing mindset.

NBC News correspondent Jacob Ward argues in his new book The Loop that through AI apps we have entered a new paradigm, one with the same choreography repeated. “The data is sampled, the results are analyzed, a shrunken list of choices is offered, and we choose again, continuing the cycle.” He adds that by “using AI to make choices for us, we will wind up reprogramming our brains and our society … we’re primed to accept what AI tells us.”

The cybernetics of conformity

A key part of Ward’s argument is that our choices are shrunk because the AI is presenting us with options similar to what we have preferred in the past or are most likely to prefer based on our past. So our future becomes more narrowly defined. Essentially, we could become frozen in time — a form of mental homeostasis — by the apps theoretically designed to help us make better decisions. This reinforcing worldview is reminiscent of Don Juan explaining to Carlos Castaneda in A Separate Reality that “the world is such and such, or so-and-so only because we tell ourselves that that is the way it is.”

Ward echoes this when he says, “The human brain is built to accept what it’s told, especially if what it’s told conforms to our expectations and saves us tedious mental work.” The positive feedback loop of AI algorithms regurgitating our desires and preferences deepens the information bubbles we already experience: it reinforces our existing views, adds to polarization by making us less open to different points of view and less able to change, and shapes us into people we did not consciously intend to be. This is essentially the cybernetics of conformity, the machine becoming the mind while abiding by its own internal algorithmic programming. In turn, it makes us, as individuals and as a society, simultaneously more predictable and more vulnerable to digital manipulation.
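The narrowing Ward describes can be made concrete with a toy simulation, a minimal sketch in Python built entirely on invented numbers: an engine recommends items in proportion to past clicks, and a user primed to accept what is shown clicks whatever surfaces. The loop alone, not any change in the user's tastes, tends to shrink the diversity of what the engine offers.

```python
# A toy simulation of the feedback loop described above: the engine
# recommends in proportion to past clicks, the user clicks what is shown,
# and the menu narrows over time. All parameters are invented.
import math
import random

random.seed(42)

N_ITEMS = 10
counts = [1.0] * N_ITEMS  # prior click counts; start with a uniform menu

def entropy(weights):
    """Shannon entropy of the recommendation distribution, in bits."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return -sum(p * math.log2(p) for p in probs if p > 0)

for step in range(1, 501):
    # The engine surfaces items in proportion to past engagement...
    shown = random.choices(range(N_ITEMS), weights=counts, k=1)[0]
    # ...and the user, primed to accept what is offered, clicks it.
    counts[shown] += 1.0
    if step in (1, 50, 500):
        print(f"step {step:3d}: diversity = {entropy(counts):.2f} bits")

# Diversity starts near log2(10) ≈ 3.32 bits and tends to fall:
# early random clicks compound, and the "shrunken list" shrinks further.
```

Nothing about the simulated user drives the narrowing; chance clicks early on compound through the loop itself, which is the mechanism behind the shrinking list of choices.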

Of course, it is not really AI that is doing this. The technology is simply a tool that can be used to achieve a desired end, whether that is selling more shoes, persuading people toward a political ideology, controlling the temperature in our homes, or talking with whales. There is intent implied in its application. To maintain our agency, we must insist on an AI Bill of Rights, as proposed by the U.S. Office of Science and Technology Policy. More than that, we soon need a regulatory framework that protects our personal data and our ability to think for ourselves. The E.U. and China have taken steps in this direction, and the current administration is moving toward similar measures in the U.S. Clearly, now is the time for the U.S. to get more serious about this endeavor, before we become non-thinking automatons.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.



Author: Gary Grossman, Edelman
Source: VentureBeat
