Hitting the Books: What do we want our AI-powered future to look like?

Once the shining city on a hill that the rest of the world looked to for leadership and guidance, America has seen its moral high ground steadily erode in recent decades — an erosion that has rapidly accelerated since Trump’s corrupt, self-dealing tenure in the White House began. Our corporations, and the technologies they develop, are certainly no better. Amazon treats its workers like indentured servants at best, Facebook’s algorithms actively promote genocide overseas and fascism here in the States, and Google doesn’t even try to live up to its own maxim of “don’t be evil” anymore.

In her upcoming book, The Power of Ethics: How to Make Good Choices in a Complicated World, Susan Liautaud, Chair of the Council of the London School of Economics and Political Science, lays out an ambitious four-step plan to recalibrate our skewed moral compass, illustrating how effective ethical decision-making can be used to counter the damage done by those in power and create a better, fairer and more equitable world for everyone. In the excerpt below, Liautaud explores the “blurring boundaries” of human-AI relations and how we can ensure that this emerging technology is used for humanity’s benefit rather than just becoming another Microsoft Tay.

The Power of Ethics by Susan Liautaud
Simon & Schuster

Excerpt from THE POWER OF ETHICS by Susan Liautaud. Copyright © 2021 by Susan Liautaud. Reprinted by permission of Simon & Schuster, Inc., NY.


Blurred boundaries—the increasingly smudged juncture where machines cross over into purely human realms—stretch the very definition of the edge. They diminish the visibility of the ethical questions at stake while multiplying the power of the other forces driving ethics today. Two core questions demonstrate why we need to continually reverify that our framing prioritizes humans and humanity in artificial intelligence.

First, as robots become more lifelike, humans (and possibly machines) must update regulations, societal norms, and standards of organizational and individual behavior. How can we avoid leaving control of ethical risks in the hands of those who control the innovations, or keep machines from deciding on their own? A non-binary, nuanced assessment of robots and AI, with attention to who is programming them, does not mean tolerating a distortion of how we define what is human. Instead, it requires ensuring that our ethical decision-making integrates the nuances of the blur and that the decisions that follow prioritize humanity. And it means proactively representing the broad diversity across humanity — ethnicity, gender, sexual orientation, geography and culture, socioeconomic status, and beyond.

Second, a critical recurring question in an Algorithmic Society is: Who gets to decide? For example, if we use AI to plan traffic routes for driverless cars, assuming we care about efficiency and safety as principles, then who gets to decide when one principle is prioritized over another, and how? Does the developer of the algorithm decide? The management of the company manufacturing the car? The regulators? The passengers? The algorithm making decisions for the car? We have not come close to sorting out the extent of the decision power and responsibility we will or should grant robots and other types of AI—or the power and responsibility they may one day assume with or without our consent.

One of the main principles guiding the development of AI among many governmental, corporate, and nonprofit bodies is human engagement. For example, the artificial intelligence principles of the Organisation for Economic Co-operation and Development emphasize the human ability to challenge AI-based outcomes. The principles state that AI systems should “include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.” Similarly, Microsoft, Google, research lab OpenAI, and many other organizations include the capacity for human intervention in their set of principles. Yet it’s still unclear when and how this works in practice. In particular, how do these controllers of innovation prevent harm—whether from car accidents or from gender and racial discrimination due to artificial intelligence algorithms trained on non-representative data? In addition, certain consumer technologies are being developed that eliminate human intervention altogether. For example, Eugenia Kuyda, the founder of a company manufacturing a bot companion and confidante called Replika, believes that consumers will trust the confidentiality of the app more because there is no human intervention.

In my opinion, we desperately need an “off” switch for all AI and robotics. In some cases, we need to plant a stake in the ground with respect to outlier, clearly unacceptable robot and AI powers. For example, giving robots the ability to indiscriminately kill innocent civilians with no human supervision, or deploying facial recognition to target minorities, is unacceptable. What we should not do is quash the opportunities AI offers, such as locating a lost child or a terrorist or dramatically increasing the accuracy of medical diagnoses. We can equip ourselves to get in the arena. We can influence the choices of others (including companies and regulators, but also friends and co-citizens), and make more (not just better) choices for ourselves, with a greater awareness of when a choice is being taken away from us. Companies and regulators have a responsibility to help make our choices clearer, easier, and informed: think first about who gets to (and should get to) decide, and how you can help others be in a position to decide.

Now turning to the aspects of the framework uniquely targeting blurred boundaries:

Blurred boundaries fundamentally require us to step back and reconsider whether our principles define the identity we want in this blurry world. Do the most fundamental principles—the classics about treating each other with respect or being accountable—hold up in a world in which what we mean by “each other” is blurry? Do our principles focus sufficiently on how innovation impacts human life and the protection of humanity as a whole? And do we need a separate set of principles for robots? My answer to the latter is no. But we do need to ensure that our principles prioritize humans over machines.

Then, application: Do we apply our principles in the same way in a world of blurred boundaries? Thinking of consequences to humans will help. What happens when our human-based principles are applied to robots? If our principle is honesty, is it acceptable to lie to a bot receptionist? And do we distinguish among different kinds of robots and lies? If you lie about your medical history to a diagnostic algorithm, it would seem that you have little chance of receiving an accurate diagnosis. Do we care whether robots trust us? If the algorithm needs some form of codable trust in order to assure the off switch works, then yes. And while it may be easy to dismiss the emotional side of trust given that robots don’t yet experience emotion, here again we ask what the impact could be on us. Would behaving in an untrustworthy manner with machines negatively affect our emotional state or spread mistrust among humans?

Blurred boundaries increase the challenge of obtaining and understanding information. It’s hard to imagine what we need to know—and that’s before we even get to whether we can know it. Artificial intelligence is often invisible to us; companies don’t disclose how their algorithms work; and we lack the technological expertise to assess the information.

But some key points are clear. Speaking about robots as if they are human is inaccurate. For example, many of the functions of Sophia—a lifelike humanoid robot—are invisible to the average person. But thanks to the Hanson Robotics team, which aims for transparency, I learned that Sophia tweets @RealSophiaRobot with the help of the company’s marketing department, whose character writers compose some of the language and extract the rest directly from Sophia’s machine learning content. And yet, the invisibility of many of Sophia’s functions is essential to the illusion of her seeming “alive” to us.

Also, we can demand transparency about what really matters to us from companies. Maybe we don’t need to know how the bot fast-food employee is coded, but we need to know that it will accurately process our food allergy information and confirm that the burger conforms to health and safety requirements.

Finally, when we look closer, some blur isn’t as blurry as it might first seem. Lilly, the creator of a male romantic robotic companion called inMoovator, doesn’t consider her robot to be a human. The concept of a romance between a human and a machine is blurry, but she openly acknowledges that her fiancé is a machine.

For the time being, responsibility lies with the humans creating, programming, selling, and deploying robots and other types of AI—whether it’s David Hanson, a doctor who uses AI to diagnose cancer, or a programmer who develops the AI that helps make immigration decisions. Responsibility also lies with all of us as we make the choices we can about how we engage with machines and as we express our views to try to shape both regulation and society’s tolerance levels for the blurriness. (And it bears emphasizing that holding responsibility as a stakeholder does not make robots any more human, nor does it give them the same priority as a human when principles conflict.)

We also must take care to consider how robots might be more important for those who are vulnerable. So many people are in difficult situations where human assistance is not safe or available, whether due to cost, being in an isolated or conflict zone, inadequate human resources, or other reasons. We can be more proactive in considering stakeholders. Support the technology leaders who shine a light on the importance of the diversity of data and perspectives in building and regulating the technology—not just sorting out the harm. Ensure that non-experts from a wide variety of backgrounds, political perspectives, and ages are lending their views, reducing the risk that blur-creating technologies contribute to inequality.

Blurred boundaries also compromise our ability to see potential consequences over time, leading to blurred visibility. We don’t yet have enough research or insight into potential mutations. For example, we don’t know the long-term psychological or economic impact of robot caregivers, or the impact on children growing up with AI in social media and digital devices. And just as we’ve seen social media platforms improve connections and give people a voice, we’ve also seen that they can be addictive, a mental health concern, and weaponized to spread compromised truth and even violence.

I would urge companies and innovators creating seemingly friendly AI to go one step further: Build in technology breaks—off switches—more often. Consider where the benefits of their products and services might not be valuable enough to society to warrant the additional risks they create. And we all need to push ourselves harder to use the control we have. We can insist on truly informed consent. If our doctor uses AI to diagnose, we should be told that, including the risks and benefits. (Easier said than done, as doctors cannot be expected to be AI experts.) We can limit what we say to robots and AI devices such as Alexa, or even whether we use them at all. We can redouble our efforts to model good behavior to children around these technologies, humanoid or not. And we can urgently support political efforts to prioritize and improve regulation, education, and research.


Author: A. Tarantola
Source: Engadget
