
Escalating concerns for AI in 2023 and what can be done



When people think of artificial intelligence (AI), what comes to mind is a cadre of robots uniting as sentient beings to overthrow their masters.

Of course, while this is still far outside the realm of possibility, throughout 2022 AI still wove its way into the daily lives of consumers. It arrives in the form of recommendation engines when they shop online, automatically suggested answers to customer service questions drawn from a knowledge base, and grammar suggestions when they write an email.

This trend follows what was established last year. According to McKinsey’s “The State of AI in 2021” report, 57% of companies in emerging economies had adopted some form of AI, up from 45% in 2020. In 2022, an IBM survey found that, though AI adoption is gradual, four out of five companies plan to leverage the technology at some point soon.

In 2023, I expect the industry to further embrace AI as a means of continuing software's evolution. Users will see the technology provide contextual understanding of written and spoken language, help them reach decisions faster and more accurately, and surface the bigger story behind disparate data points in more useful, applicable ways.


Privacy a central topic

These exciting developments are not without mounting concerns. I expect privacy, or lack thereof, to remain a central topic of discussion and fear among consumers (in fact, I believe that if the metaverse doesn’t take off, it will be due to privacy concerns).

AI models also need to be trained, and current processes for doing so carry a high likelihood of introducing biases, such as misunderstanding spoken language or skewing data points. Meanwhile, the media and international governance have not caught up to where AI currently sits and where it is headed in 2023.

Despite all these problems, AI is going to move the needle for enterprises in 2023, and in turn, they will capitalize by improving experiences and processes, continuing what AI has been doing for the last handful of years. Doing so will require a sharp focus on what might go wrong, what's currently going right, and how to swiftly apply forward-thinking solutions. Here's more on all three and how companies can kick off the process.

AI fully enters the mainstream

As mentioned before, mainstream adoption of AI is on the way. Devices, apps, and experience platforms are all likely to come equipped with AI right from the get-go. No longer will consumers be able to opt in, which will accelerate and heighten the concerns about AI that already exist in the mainstream.

Privacy reigns supreme in this regard. Given how many public data breaches have occurred over the last few years — including those at LinkedIn, MailChimp, and Twitch — consumers are understandably wary of giving out personal information to tech companies. It’s unfortunate because consumers have proven that they are willing to share some personal information if it leads to a better experience. In fact, according to the 2022 Global Data Privacy report by Global Data and Marketing Alliance, 49% of consumers are comfortable with providing personal data and 53% believe doing so is of paramount importance for maintaining the modern tech landscape.

One of the central issues is that there isn't any consensus on what best practices look like across the industry; it's tough to gather data responsibly if the concept of ethical collection is fluid. AI isn't necessarily new, but the technology is still in its nascent stages, and governance has not yet matured to the point where there is any consistency across companies. For example, California has enacted a strong privacy law that protects consumers — the California Consumer Privacy Act (CCPA) — yet, at this moment, it remains one of the only states to have taken direct action. (Some states, such as Utah and Colorado, have legislation in the pipeline.)

Full transparency a must

To prepare for the inevitability of AI-first technology, companies should demonstrate full transparency by providing easy access to their privacy policies — or, if none exist, composing them as soon as possible and making them readily available on the company's website.

Privacy policy composition is still driven by 1998 guidance from the Federal Trade Commission (FTC), which stipulates that policies contain these five elements:

  • Notice/Awareness: Consumers must be made aware that their information is going to be collected, how that information will be used, and who will be receiving it;
  • Choice/Consent: Consumers have the opportunity to opt-in or opt-out of data collection, and to what degree;
  • Access/Participation: Consumers can view their data at any point and implement tweaks as needed;
  • Integrity/Security: Consumers are provided with the steps a company is taking to ensure their data remains secure and accurate while obscuring irrelevant personal details;
  • Enforcement/Redress: Finally, consumers must understand how troubleshooting will occur and what consequences exist for poor handling of data.

Granular language, while generally frowned upon in communicating with a non-tech-savvy audience, is welcome in this instance, as consumers with a full understanding of how their data gets used are more likely to share bits and pieces.

Biases in AI must be eliminated

Biases are often invisible, even if their effects are pronounced, which means their elimination is difficult to guarantee. And, despite its advanced state, AI in 2023 remains just as prone to biases as its human counterparts. Sometimes, this technology has trouble parsing accents; perhaps it fails to present a balanced set of data points; at times, it could eschew accessibility and disenfranchise a cohort of users.

Biases are usually introduced early in the process. AI needs to be trained, and many companies opt either to purchase synthetic data from third-party vendors — which is prone to its own distinct biases — or to let the model comb the general internet for contextual clues. However, no one is regulating or monitoring the world wide web (it's worldwide, after all) for biases, and they're likely to creep into an AI platform's foundation.

Financial investments in AI aren't going to shrink anytime soon, so in 2023 it's particularly important to establish processes and best practices that scrub out as many biases, known or unknown, as quickly as possible.

Human safeguards

One of the most effective safeguards against bias is keeping humans between the data collection and processing phases of AI training. For example, here at Zoho, some employees work alongside AI in combing through publicly available data to first scrub any trace of personally identifiable information — not only to protect these individuals, but to ensure only crucial pieces of information make it through. Then, the data is further distilled to include only what's relevant.

For example, an AI system that will be reaching out to pregnant women does not require behavior data on women who are not pregnant, and it’s unreasonable to expect AI to make this distinction right away.
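For illustration only, here is a minimal sketch of what that two-step distillation might look like: scrub obvious PII first, then keep only the records a human-defined relevance rule allows through. The field names, regex patterns, and relevance rule are hypothetical, not Zoho's actual pipeline.

```python
import re

# Illustrative patterns only; a production scrubber would cover many more PII types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def scrub_pii(text: str) -> str:
    """Replace obvious personally identifiable details with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


def distill(records: list[dict], is_relevant) -> list[dict]:
    """Scrub PII first, then keep only records a human-defined rule deems relevant."""
    cleaned = [{**r, "text": scrub_pii(r["text"])} for r in records]
    return [r for r in cleaned if is_relevant(r)]


# Hypothetical records; only the target audience survives the distillation step.
records = [
    {"text": "Call me at +1 415 555 0100", "audience": "expectant_parent"},
    {"text": "Email jane@example.com for details", "audience": "other"},
]
print(distill(records, lambda r: r["audience"] == "expectant_parent"))
```

The point is the ordering: identifying details come out before any relevance judgment or model training ever touches the data.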

An important thing to remember about bias is that it remains an evolving concept and a moving target, particularly as access to data improves. That’s why it’s essential for companies to ensure that they are routinely scanning for new information and accordingly updating their criteria for bias. If the company has been treating its data like code — with proper tags, version control, access control, and coherent data branches — this process can be completed more swiftly and effectively.
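The article doesn't prescribe tooling, but as one hedged illustration of "treating data like code," the sketch below versions the bias criteria themselves, so every re-scan can be traced back to a tagged set of thresholds. The group names and thresholds are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class BiasCriteria:
    """Versioned bias-check criteria, updated as understanding of bias evolves."""
    version: str
    updated: date
    # Minimum share each group should hold in the training data (illustrative).
    min_group_share: dict[str, float] = field(default_factory=dict)


def scan(records: list[dict], criteria: BiasCriteria) -> dict[str, float]:
    """Report groups that fall below the thresholds in the current criteria."""
    total = len(records)
    gaps = {}
    for group, minimum in criteria.min_group_share.items():
        share = sum(r.get("group") == group for r in records) / total
        if share < minimum:
            gaps[group] = share
    return gaps


criteria_v2 = BiasCriteria(
    version="2.0",  # bump the tag whenever the thresholds change
    updated=date(2023, 1, 1),
    min_group_share={"non_native_speakers": 0.2},
)
records = [{"group": "native_speakers"}] * 9 + [{"group": "non_native_speakers"}]
print(scan(records, criteria_v2))  # {'non_native_speakers': 0.1}
```

Bumping the version tag whenever the criteria change — much like tagging a release — keeps old scan results interpretable and makes the routine re-scans described above straightforward to automate.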

The media narrative remains relentless

At the center of the above two issues sits the media, which is prone to repeat and reemphasize two conflicting narratives. On the one hand, the media reports that AI is a marvelous piece of technology with the potential to revolutionize our daily lives in both obvious and unseen ways. On the other, though, they continue to insinuate that AI technology is one step away from taking people’s jobs and declaring itself supreme overlord of Earth.

As AI technology becomes more ubiquitous in 2023, expect the media's current approach to remain mostly the same. It's reasonable to anticipate a slight increase in stories about data breaches, though, as broader access to AI raises the chances that any given consumer finds themselves affected.

This trend could exacerbate a bit of a catch-22: AI cannot truly improve without increased adoption, yet adoption numbers are likely to stagnate due to lags in technology improvement.

Companies can pave their own trail away from the media's low-grade fear-mongering by embracing direct-to-consumer (D2C) marketing. The strongest way to subvert media narratives is for companies to build one of their own through word of mouth. Once consumers get their hands on the technology itself, they can better understand its wow factor and its potential to save them countless hours on basic tasks — or on tasks they hadn't even considered could be tackled by AI. This marketing tack also affords companies a chance to get ahead of news stories by accentuating their privacy policies and comprehensive protocols in the event of an issue.

Customers guide the future of AI

Best of all, a strong customer base in 2023 opens lines of communication between vendor and client. Direct, detailed feedback drives relevant, comprehensive updates to AI. Together, companies and their customers can forge an AI-driven future that pushes the technology envelope while remaining responsible with safe, secure, and unbiased data collection.

Just don’t tell the robots.

Ramprakash “Ram” Ramamoorthy is head of labs and AI research at Zoho.




