Implementing AI responsibly means adopting AI in a manner that’s ethical, transparent, and accountable, as well as consistent with laws, regulations, norms, customer expectations, and organizational values. “Responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable.
But organizations often underestimate the challenges of attaining this. According to Boston Consulting Group (BCG), fewer than half of the enterprises that achieve AI at scale have fully mature, responsible AI deployments. Organizations’ AI programs commonly neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation, BCG analysts found.
The Responsible AI Institute (RAI) is among the consultancies aiming to help companies realize the benefits of AI implemented thoughtfully. An Austin, Texas-based nonprofit founded in 2017 by the University of Texas, USAA, Anthem, and CognitiveScale, the organization works with academics, policymakers, and nongovernmental organizations with the goal of “unlocking the potential of AI while minimizing unintended consequences.”
According to chairman and founder Manoj Saxena, adopting AI responsibly requires a holistic, end-to-end approach, ideally led by a multidisciplinary team. There are multiple ways that AI checks can be put into production, including:
- Maintaining awareness of the context in which AI will be used and could create biased outcomes.
- Engaging product owners, risk assessors, and users in fact-based conversations about potential biases in AI systems.
- Establishing a process and methodology to continually identify, test, and fix biases (a minimal example of such a check follows this list).
- Continuing to invest in new research on bias and AI to make black-box algorithms more responsible and fair.
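One way to operationalize that “identify, test, and fix” loop is a recurring check on model outputs. The following is a minimal sketch, not RAI’s own methodology: it computes the demographic parity gap, i.e. the spread in positive-prediction rates across groups, on a toy pandas DataFrame. The column names and data are hypothetical.

```python
# Hypothetical bias check: compare positive-prediction rates across
# groups defined by a sensitive attribute. Column names are assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Spread between the highest and lowest positive-prediction rate
    across groups (0.0 means all groups receive identical rates)."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy data: binary predictions (1 = approved) for two groups.
toy = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})
print(demographic_parity_gap(toy, "group", "approved"))  # ~0.33
```

Run on a schedule against production predictions, a widening gap becomes an early signal that a model or its input data has drifted toward biased outcomes.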
“[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact,” Saxena told VentureBeat via email. “[They also need to] invest more to ensure members who are designing the systems are diverse.”
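Checking whether sourced data is representative can start with something as simple as comparing a training set’s demographic mix against external benchmarks. The sketch below assumes a hypothetical `age_band` column and made-up population shares; real benchmarks might come from census or customer-base figures.

```python
# Hypothetical representativeness check: compare the demographic mix of
# a training set against assumed population benchmarks.
import pandas as pd

benchmark = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed shares

train = pd.DataFrame({"age_band": ["18-34"] * 70 + ["35-54"] * 20 + ["55+"] * 10})
observed = train["age_band"].value_counts(normalize=True)

for band, expected in benchmark.items():
    share = float(observed.get(band, 0.0))
    flag = "UNDER-REPRESENTED" if share < expected - 0.10 else "ok"
    print(f"{band}: dataset {share:.2f} vs population {expected:.2f} [{flag}]")
```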
Involving stakeholders
Mark Rolston, founder of global product design consultancy Argodesign and advisor at RAI, anticipates that trust in AI systems will become as paramount as the rule of law has been to the past several hundred years of progress. As AI grows into more abstract concept-processing capabilities, he believes, the needs around trust and validation will become even more critical.
“Society is becoming increasingly dependent on AI to support every aspect of modern life. AI is everywhere. And because of this we must build systems to ensure that AI is running as intended — that it is trustworthy. The argument is fundamentally that simple,” Rolston told VentureBeat in an interview. “Today we’re bumping up on the fundamental challenge of AI being too focused on literal problem solving. It’s well-understood that the future lies in teaching AI to think more abstractly … For our part as designers, that will demand the introduction of a whole new class of user interfaces that convey those abstractions.”
Saxena advocates for AI to be designed, deployed, and managed with “a strong orientation toward human and societal impact,” noting that, unlike traditional rules-based computing paradigms, AI evolves over time. Guardrails need to be established to ensure that the right data is fed into AI systems, he says, and that models are rigorously tested to guarantee positive outcomes.
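One concrete form such a guardrail could take is an automated pre-deployment gate. The sketch below is an assumption about how that might look, not a method the article prescribes: it blocks a release when any group’s selection rate falls below 80% of the best-treated group’s, the common “four-fifths rule” heuristic.

```python
# Hypothetical pre-deployment fairness gate using the four-fifths rule.
# The 0.8 threshold and column names are assumptions, not a standard API.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Ratio of the lowest to the highest selection rate across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.min() / rates.max())

def fairness_gate(df: pd.DataFrame, group_col: str, pred_col: str,
                  threshold: float = 0.8) -> None:
    """Raise if any group's rate is below `threshold` of the best group's."""
    ratio = disparate_impact_ratio(df, group_col, pred_col)
    if ratio < threshold:
        raise RuntimeError(
            f"Disparate impact ratio {ratio:.2f} is below {threshold}; blocking release"
        )
```

Wired into a CI pipeline, a check like this means a model that treats groups too unevenly never ships without a human signing off.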
Responsible AI practices can also deliver major business value. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and in turn will punish those that don’t. The study suggests there’s both reputational risk and a direct bottom-line impact for companies that don’t approach the issue thoughtfully.
“As the adoption of AI continues into all aspects of our personal and professional lives, the need for ensuring that these AI systems are transparent, accountable, bias-free, and auditable is only going to grow exponentially … On the technology and academic front, responsible AI is going to become an important focus for research, innovation, and commercialization by universities and entrepreneurs alike,” Saxena said. “With the latest regulations on the power of data analytics from the FTC and EU, we see hope in the future of responsible AI that will merge the power and promise of AI and machine learning systems with a world that is fair and balanced.”
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Kyle Wiggers
AI Staff Writer
VentureBeat