Trustworthy AI is now within reach

The artificial intelligence (AI) boom began in earnest in 2012 when Alex Krizhevsky, in collaboration with Ilya Sutskever and Geoffrey Hinton (Krizhevsky's Ph.D. advisor), created AlexNet, which went on to win that year's ImageNet Large Scale Visual Recognition Challenge. The goal of the annual competition, which had begun in 2010, was to classify the 1.2 million high-resolution photographs in the ImageNet training set into 1,000 different classes; in other words, to correctly tell a dog from a cat.

AlexNet was a deep convolutional neural network and the first entrant to break 75% top-5 accuracy in the competition. Perhaps more impressively, it cut the best prior top-5 error rate on ImageNet from roughly 26% to 15.3%. It also established, arguably for the first time, that deep learning had substantive real-world capabilities. Among other applications, this paved the way for the visual recognition systems used across industries from agriculture to manufacturing.

This deep learning breakthrough triggered accelerated adoption of AI. But beyond the unquestioned genius of these and other early practitioners of deep learning, it was the confluence of several major technology trends that boosted AI. The internet, mobile phones and social media produced a data explosion, and data is the fuel for AI. Computing continued its metronome-like Moore's Law advance, with performance roughly doubling every 18 to 24 months, enabling the processing of vast amounts of data. The cloud provided ready access to data from anywhere and lowered the cost of large-scale computing. And software advances, largely open source, led to a flourishing of AI code libraries available to anyone.

The AI gold rush

All of this led to an exponential increase in AI adoption and a gold rush mentality. Research from management consulting firm PwC shows global GDP could be up to 14% higher in 2030 as a result of AI, the equivalent of an additional $15.7 trillion — making it the biggest commercial opportunity in today’s economy. According to Statista, global AI startup company funding has grown exponentially from $670 million in 2011 to $36 billion in 2020. Tortoise Intelligence reported that this more than doubled to $77 billion in 2021. In the past year alone, there have been over 50 million online mentions of AI in news and social media.

All of that is indicative of the groundswell of AI development and implementation. Already present in many consumer applications, AI is now gaining broad adoption in the enterprise. According to Gartner, 75% of businesses are expected to shift from piloting to operationalizing AI by 2024.

It is not only deep learning that is driving this. Deep learning is a subset of machine learning (ML), parts of which have existed for several decades. A wide variety of ML algorithms is in use, powering everything from email spam filters to predictive maintenance for industrial and military equipment. ML has benefited from the same technology trends that are driving AI development and adoption.

With the rush to adoption have come some notable missteps. AI systems are essentially pattern-recognition technologies that scour existing data, most of it collected over many years. If the datasets on which AI is trained contain biased data, the output of the algorithms can reflect that bias. As a consequence, there have been chatbots that went terribly awry, hiring systems that reinforce gender stereotypes, inaccurate and possibly biased facial recognition systems that lead to wrongful arrests, and loan decisions that perpetuate historical bias.
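To make the data-bias problem concrete, here is a minimal, self-contained Python sketch of one common check, the disparate impact ratio, applied to an invented toy hiring dataset. The data is purely illustrative; the 0.8 threshold is the "four-fifths rule" used in U.S. employment-selection guidelines:

```python
from collections import defaultdict

def disparate_impact(records, group_key, outcome_key):
    """Ratio of favorable-outcome rates between the least- and
    most-favored groups. Values below 0.8 are a common red flag
    (the "four-fifths rule" used in U.S. hiring audits)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += int(r[outcome_key])
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Toy, hypothetical hiring records, skewed against group "B":
applicants = (
    [{"group": "A", "hired": True}] * 40 + [{"group": "A", "hired": False}] * 60 +
    [{"group": "B", "hired": True}] * 20 + [{"group": "B", "hired": False}] * 80
)

ratio, rates = disparate_impact(applicants, "group", "hired")
print(rates)             # {'A': 0.4, 'B': 0.2}
print(round(ratio, 2))   # 0.5, well below 0.8, so the data warrants scrutiny
```

A model trained naively on records like these will tend to reproduce the two-to-one disparity, which is exactly the failure mode behind the hiring and lending examples above.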

A clear need for Trustworthy and Responsible AI

These and other problems have prompted legitimate concerns and given rise to the field of AI ethics. There is a clear need for Responsible AI, which is essentially the commitment to do no harm with AI algorithms. That requires eliminating bias from datasets, or otherwise mitigating it. Bias can also be introduced unconsciously into the algorithms themselves by the people who develop them, and it must be identified and countered there as well. Finally, the operation of AI systems must be explainable, so that there is transparency in how insights and decisions are reached.

The goal of these endeavors is to ensure that AI systems not only do no specific harm but are trustworthy. As Forrester Research notes in a recent blog post, this is critical for business, which cannot afford to ignore the ethical debt that AI technology has accrued.

Responsible AI is not easy, but it is critically important to the future of the AI industry. New AI applications where this could be an issue are coming online all the time, such as systems that help determine which U.S. Army candidates deserve promotion. Recognizing that the problem exists has focused considerable effort over the last few years on developing corrective measures.

The birth of a new field

There is good news on this front, as techniques and tools have been developed to mitigate algorithmic bias and other problems at different points in AI development and implementation, whether in the original design, at deployment or after the system is in production. These capabilities are giving rise to the emerging field of algorithmic auditing and assurance, which will build trust in AI systems.
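As one illustration of what a design-stage (pre-processing) fix can look like, the sketch below implements a simplified version of the well-known reweighing technique, popularized by open-source fairness toolkits such as IBM's AIF360. This standalone version is a rough approximation for illustration, not that library's API. It assigns each training example a weight so that group membership and outcome become statistically independent in the weighted data:

```python
from collections import Counter

def reweigh(records, group_key, outcome_key):
    """Pre-processing mitigation: weight each example by the ratio of
    its (group, outcome) pair's expected frequency (if group and
    outcome were independent) to its observed frequency."""
    n = len(records)
    group_counts = Counter(r[group_key] for r in records)
    outcome_counts = Counter(r[outcome_key] for r in records)
    pair_counts = Counter((r[group_key], r[outcome_key]) for r in records)

    weights = []
    for r in records:
        g, o = r[group_key], r[outcome_key]
        expected = group_counts[g] * outcome_counts[o] / n
        weights.append(expected / pair_counts[(g, o)])
    return weights

# Reusing the skewed toy hiring data from the earlier sketch:
applicants = (
    [{"group": "A", "hired": True}] * 40 + [{"group": "A", "hired": False}] * 60 +
    [{"group": "B", "hired": True}] * 20 + [{"group": "B", "hired": False}] * 80
)
weights = reweigh(applicants, "group", "hired")
print(weights[0], weights[100])   # A-hired: 0.75, B-hired: 1.5
# After weighting, both groups have an identical 30% effective hire rate.
```

Passed as sample weights to any weight-aware learner, these values remove the statistical association between group and outcome before training ever begins, which is precisely the kind of original-design intervention an audit would seek to verify.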

Besides bias, there are other issues in building Trustworthy AI, including the ability to explain how an algorithm reaches its recommendations, to ensure that results are replicable and accurate, to protect privacy and data, and to secure systems against adversarial attack. The auditing and assurance field will address all of these issues, as found in research done by Infosys and University College London. The purpose is to provide standards, practical codes and regulations that assure users of the safety and legality of algorithmic systems.
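On the explainability point specifically, one widely used, model-agnostic baseline is permutation importance: shuffle a single input feature and measure how much the model's accuracy drops. The sketch below uses a stand-in model and synthetic data, neither of which is drawn from the Infosys/UCL research, to show the idea:

```python
import random

def permutation_importance(predict, X, y, feature_idx, trials=20, seed=0):
    """Model-agnostic explainability baseline: the average accuracy
    drop when one feature's values are shuffled across examples."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == label for r, label in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature_idx] + (v,) + row[feature_idx + 1:]
                    for row, v in zip(X, column)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Stand-in "model": predicts 1 when feature 0 exceeds a threshold.
# Feature 1 is pure noise, so its importance should be near zero.
predict = lambda row: 1 if row[0] > 0.5 else 0
data_rng = random.Random(1)
X = [(data_rng.random(), data_rng.random()) for _ in range(500)]
y = [1 if x0 > 0.5 else 0 for x0, _ in X]

print(permutation_importance(predict, X, y, feature_idx=0))  # large (~0.5)
print(permutation_importance(predict, X, y, feature_idx=1))  # ~0.0
```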

There are four primary activities involved; a sketch of how audit tooling might record them follows the list.

Development: An audit will have to account for the process of development and documentation of an algorithmic system.

Assessment: An audit will have to evaluate an algorithmic system’s behaviors and capacities.

Mitigation: An audit will have to recommend service and improvement processes for addressing high-risk features of algorithmic systems.

Assurance: An audit will be aimed at providing a formal declaration that an algorithmic system conforms to a defined set of standards, codes of practice or regulations.
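To make these four activities concrete, here is one way an auditor's tooling might record a finding against them. The schema is hypothetical, invented for this sketch rather than drawn from any published standard:

```python
from dataclasses import dataclass, field
from enum import Enum

class Activity(Enum):
    DEVELOPMENT = "development"   # how the system was built and documented
    ASSESSMENT = "assessment"     # observed behaviors and capacities
    MITIGATION = "mitigation"     # remediation of high-risk features
    ASSURANCE = "assurance"       # conformance to standards or regulations

@dataclass
class AuditFinding:
    system: str
    activity: Activity
    standard: str                 # the code of practice or regulation checked
    passed: bool
    evidence: list[str] = field(default_factory=list)

# Example: recording the disparate-impact result from the earlier sketch.
finding = AuditFinding(
    system="toy-hiring-model",
    activity=Activity.ASSESSMENT,
    standard="four-fifths rule (disparate impact ratio >= 0.8)",
    passed=False,
    evidence=["disparate impact ratio = 0.5 on a 200-applicant sample"],
)
print(finding)
```

Structured this way, an assurance declaration becomes a checkable property: every required finding for a system exists and has passed.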

Ideally, a business would build these concepts in from the beginning of an AI project to protect itself and its customers. If widely implemented, this would produce an ecosystem of Trustworthy and Responsible AI: algorithmic systems would be properly appraised, all plausible measures for reducing or eliminating risk would be undertaken, and users, providers and third parties would be assured of the systems' safety.

Only a decade ago, AI was practiced mostly by a small group of academics. The development and adoption of these technologies has since expanded dramatically. For all the considerable advances, there have been shortcomings, and many of them can be addressed and resolved with algorithmic auditing and assurance. Given the wild ride AI has taken over the last 10 years, that would be no small accomplishment.

Bali (Balakrishna) DR is senior vice president, service offering head — ECS, AI and automation at Infosys.
