With the Metaverse on the way, an AI bill of rights is urgent

There is a lot more than the usual amount of handwringing over AI these days. Former Google CEO Eric Schmidt and former US Secretary of State and National Security Advisor Henry Kissinger put out a new book last week warning of AI’s dangers. Fresh AI warnings have also been issued by professors Stuart Russell (UC Berkeley) and Yuval Noah Harari (Hebrew University of Jerusalem). Op-eds from the editorial board at the Guardian and Maureen Dowd at the New York Times have amplified these concerns. Facebook — now rebranded as Meta — has come under growing pressure for algorithms that create social toxicity, but it is hardly alone. The White House has called for an AI bill of rights, and the Financial Times argues this should extend globally. Worries over AI are flying faster than a gale-force wind.

The concerns point to shortcomings in existing AI implementations and the inherent dangers posed by its use in employment, housing, credit, commerce, criminal sentencing, and healthcare. And yet, AI has brought about a multitude of significant advances that would otherwise not be possible — from revolutionizing our understanding of biology and saving energy in data centers to offering trusted career advice, developing computer code, identifying cancers in patients, and even opening up the possibility of communicating with other species. In these areas and more, AI is increasingly adding value to our experience and becoming interwoven into the fabric of daily life.

AI is a classic double-edged sword, much as other major technologies have been since the start of the Industrial Revolution. Burning carbon drives the industrial world but leads to global warming. Nuclear fission provides cheap and abundant electricity, though it could be used to destroy us. The Internet boosts commerce and provides ready access to nearly infinite amounts of useful information, yet also offers an easy path for misinformation that undermines trust and threatens democracy. AI finds patterns in enormous and complex datasets to solve problems that people cannot, though it often reinforces inherent biases and is being used to build weapons in which life-and-death decisions could be automated. The danger in this dichotomy was best described by sociobiologist E.O. Wilson at a Harvard debate, where he said: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and God-like technology.”

Where will our latest God-like technology take us?

Harari imagines an AI future where the rich will have access to the latest technologies, such as brain implants and genetic engineering, leading to greater inequalities and class differentiation. Author Jeanette Winterson has a more sanguine view of how people and AI can live together. Her latest book, 12 Bytes, is a series of essays that imagines a future where artificial intelligence is smart enough to live alongside humans. Yet both Harari and Winterson agree that AI changes at least one fundamental aspect of humanity. As Winterson puts it: “Once artificial intelligence ceases to be a tool, if it does, and becomes a player in the game, something alongside us, then Homo Sapiens is no longer top of the tree.”

Both authors also talk about how our species may evolve into something else entirely as AI gains in sophistication, to the point where our bodies may no longer be necessary. In that eventuality, we would become purely digital consciousness. But how then would we act in the world? Futuristic literature introduces some possibilities. We might act purely in a digital realm, as described by Neal Stephenson in Fall; or, Dodge in Hell, where the consciousness of a person is uploaded onto a computer network and they are still around in a sense. Or perhaps a person’s digital consciousness could be downloaded through the network into a robot, as described by William Gibson in The Peripheral.

Add to this mix the potential of an “artificial friend,” an emotionally intelligent android as portrayed by Kazuo Ishiguro in Klara and the Sun. Or “digients” (short for “digital entities”), as described by Ted Chiang in “The Lifecycle of Software Objects,” where artificial intelligences created within a purely digital world (much like the impending metaverse) inhabit a shared digital space where they can interact with people. In Chiang’s vision, digients can also transcend the digital to temporarily inhabit a physical body and move and act in the real world. When we consider multiple digital beings and realms, many relationship possibilities emerge between human and artificial intelligences. These and other AI-enabled possibilities point to the potential to transform life as we know it, and to create lifetimes of employment for ethicists.

Although these possibilities are still some years in the future, they could happen sooner than we think. The concept of the metaverse — a digital shared space in which people and artificial intelligences live and interact — was until recently in the same futuristic category as the scenarios above. First envisioned by Stephenson in his 1992 novel Snow Crash, the concept was subsequently expanded by Ernest Cline in Ready Player One. This vision will soon be realized, less than 30 years after its original conception.

Tech influencer Jon Radoff explains how the metaverse will be enabled by, populated with, and supported by AI. This previously imaginary world is quickly becoming real with AI’s underpinning, moving from science fiction to fact. It could be a digital paradise, or the most addictive thing ever created. I am guessing the latter: a ubiquitous medium offering new ways to create, play, shop, socialize, and work online. Like any addiction, this could become a public health problem. Nevertheless, an array of companies is now getting ready for the metaverse: Meta, of course, but also Microsoft, Nike, Dropbox, Nvidia, Niantic, and more. New Zealand tech company Soul Machines is already creating “digital people” for the metaverse that have “teachable” digital brains. These sound a lot like digients and are being developed in part to become a digital workforce. What could go wrong?

Is an AI bill of rights enough?

The metaverse represents something new, a step change in AI development where digital people – artificially intelligent digital constructs – will be participating alongside humans. They will be in our games, selling to us, and sometimes standing in for us through avatars, essentially personal digital twins. The metaverse will likely be a gateway that spawns technologies leading to the sci-fi scenarios described above. Ethics experts are already asking who will build and control the metaverse and how privacy will be protected.

It is not only ethics experts who are expressing concerns about the metaverse and data privacy risks, which brings us back to the White House idea of an AI bill of rights to guard people against potential misuse and abuse of transformative AI technologies. With the EU and China moving in somewhat similar directions, the chorus calling for greater ethics and regulation of AI technology is growing louder. Even Russia recently developed an AI Code of Ethics. As if there were not already ample reasons to pursue these initiatives, the advent of the metaverse adds greater urgency. However, non-binding ethics discussions and codes are not enough.

It is possible that existing laws to protect privacy, for example, could be better applied and prioritized, and that an AI bill of rights is simply a distraction from their enforcement. An AI bill of rights is a good idea, but these rights are just words if they are not backed up by statute. Ideally, such a list of rights would provide the framework agencies and legislatures need to enact administrative rules and laws.

Though not widely known, the US government has made a start at regulating the use of AI by banning biased and unexplainable algorithms in decisions that affect consumers. Much more needs to be done to protect people and to hold accountable those who misuse the technology. Though the initial metaverse could be clunky and not all that compelling, it is likely only a matter of a few years before those problems are solved. Now is the time for meaningful AI regulations and standards. An AI bill of rights is an important and useful step in that direction.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.

Author: Gary Grossman, Edelman
Source: VentureBeat
