
The future of American democracy hinges on ethical AI



Earlier this summer, the National Artificial Intelligence Research Resource (NAIRR) Task Force released a request for information (RFI) on how to build an implementation roadmap for a shared AI research infrastructure. Along with requests for ideas on how this shared resource should be owned and implemented, it asked for guidance on how best to ensure that privacy, civil liberties, and civil rights are protected going forward. To accomplish this objective, values-based ethical reasoning education and training resources must be at the core of the Task Force’s strategy.

What’s at stake

Congress’s passage of the National Defense Authorization Act for Fiscal Year 2021, directing the Biden White House to create the NAIRR Task Force, could be as consequential to America’s democratic ideals as numerous wars, policies, and civil rights movements in our past.

While the early public announcements of the NAIRR Task Force do not refer explicitly to foreign governments, make no mistake that geopolitical competition with China, Russia, and other nation-states looms large in the urgency of its mission.

Not since the Manhattan Project and the race to develop the atomic bomb has a technology been as important in its potential to reshape the balance of power between Western democracy and what Stanford’s Institute for Human-Centered Artificial Intelligence calls “digital authoritarianism.” Similar to the nuclear arms race, the path the United States takes in developing and deploying this technology will determine the ambit of freedom and quality of life for billions of people on Earth. The stakes are that high.

The precedents are clear

While the stated delivery date for the NAIRR Task Force’s report and roadmap is not until November 2022, it is important to keep in mind that ensuring ethics in AI that uphold America’s values is a long process, and one central to the American identity. Yet the precedent for an ethical, inclusive roadmap is written in our history, and we can look to the military, medical, and legal professions for examples of how to do so successfully.

The military. On July 26, 1948, President Harry Truman issued Executive Order 9981 to initiate the desegregation of the military. This led to the establishment of the President’s Committee on Equality of Treatment and Opportunity in the Armed Services and one of the most consequential ethics and values reports in United States history. But it’s worth noting that it wasn’t until January 22, 2021, that retired four-star General Lloyd Austin III was confirmed as the first Black Secretary of Defense. Embedding ethics and American values in the AI-related disciplines will require the same sustained and unrelenting effort.

The medical field. The Code of Medical Ethics of the American Medical Association (AMA) is considered the gold standard in ethics and values in a professional discipline, dating all the way back to the fifth century BCE and the ideals of the Greek physician Hippocrates to “relieve suffering and promote well-being in a relationship of fidelity with the patient.” Despite this deep and rich history of ethics at the core of medicine, it took until 1977 for Johns Hopkins to become the first medical school in the nation to implement a required course on medical ethics in its core curriculum.

The law. Bar associations began to introduce ethical codes for attorneys and judges in the United States in the early 1800s, but it was not until the early twentieth century and the widespread adoption of the Harvard Method in law schools that legal ethics were tied to professional responsibility and a clear set of moral duties to society was embedded into legal education and the profession.

The road ahead

The AI-related disciplines (computer science, engineering, and design) are far behind other professions in ethics requirements, education, and training. There are, however, dozens of promising tech ethics-driven organizations and initiatives working to promote and unify ethical reasoning education and training in AI.

Higher education. Embedding ethics and values training into the core curriculum of every college-trained engineer, designer, and computer scientist must be a core goal of any national AI strategy.

To this end, the Markkula Center for Applied Ethics at Santa Clara University is one of the most prolific producers of actionable technology ethics curricula, case studies, and decision-making training for students and practitioners. Likewise, MIT has started to develop an AI-specific ethics curriculum for this purpose and should also be consulted during the implementation planning process. Moreover, AI ethics institutes are being created all over the world and represent fertile ground for resources the NAIRR Task Force can draw on.

While most of these efforts focus on higher education and current professionals, the Task Force also has an opportunity to begin sharing values and ethics resources with the large STEM-focused high school programs emerging across the country. The Committee on STEM Education of the National Science and Technology Council has highlighted the need for more ethics education at all levels of STEM education, and the NAIRR Task Force has the opportunity to distribute and unify those resources.

Public-private partnerships and consortiums. Leading public-private and professional organizations are building best-in-class offerings that train AI professionals on methods to build ethically sound AI. Consulting these outside groups will be essential as the NAIRR Task Force forges ahead with its national AI strategy.

For example, the World Economic Forum (WEF)’s Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning Platform is having a significant impact on governments and corporations around the world through consulting, publicly available research, white papers, ethical toolkits, and case studies. These products can help accelerate the benefits and mitigate the risks of artificial intelligence and machine learning.

Similarly, the Responsible AI Institute (RAI) has created the first independent, accredited certification program for responsible AI. In fact, RAI has already been tapped by the United States Department of Defense Joint Artificial Intelligence Center (JAIC) to embed ethical and values-driven responsible AI guardrails into procurement practices.

Looking ahead, it will take years to embed ethics and values into the AI-related professional disciplines, but it is possible. As the NAIRR Task Force builds its roadmap, the team must reference our history, provide resources for ethical training in university settings, scale that training to high school STEM programs, and work with professional organizations to deliver best-in-class materials that upskill those currently in the industry. If we are going to win the AI innovation race while preserving our democratic principles, we must start here and we must start now.



Author: Will Griffin, Hypergiant Industries
Source: VentureBeat

