Regardless of sector, companies of all sizes are moving to integrate large language models (LLMs) into their workflows to drive efficiencies and deliver better customer experiences.
However, a new Salesforce survey found that this so-called "race" to build out generative AI as quickly as possible might come at the cost of a "trust gap" with customers. The company's sixth State of the Connected Customer report, based on data gathered between May 3 and July 14, 2023, surveyed more than 14,000 consumers and business buyers across 25 countries. It shows that even though customers and buyers are generally open to the use of AI for better experiences, many still don't trust companies to use AI ethically.
The findings highlight a major issue that enterprises implementing gen AI need to address in order to deliver the best possible AI experiences to their customers and keep their business growing.
What does trust mean for AI?
While the concept of “trust” seems simple at first, the reality is it can be very complex and multifaceted. For instance, a person might trust the quality of a company’s product but not its efforts toward sustainability. Similarly, they might not trust the company to protect their data.
For AI, trust is rooted in ethical principles: the system should adhere to well-defined guidelines around fundamental values such as individual rights, privacy and non-discrimination. According to the Salesforce survey, this is where the problem appears.
Of the more than 14,000 respondents surveyed, 76% said they trust companies to make honest claims about their products and services, but nearly 50% said they do not trust them to use AI ethically.
While respondents highlighted multiple concerns, the most prominent were the lack of transparency and the lack of a human in the loop to validate the AI's output, which more than 80% of them demanded. Just 37% of respondents said they trust AI to deliver responses as accurate as a human's.
Other concerns they flagged included data security risks, the possibility of bias (where the system may discriminate against a gender, for example), and unintended consequences for society.
Business buyers remain more confident
Among the survey respondents, business buyers expressed more optimism toward AI than consumers did, with 73% saying they are open to businesses using the technology to deliver better experiences. In contrast, just 51% of consumers shared the same view.
What's intriguing is that overall sentiment has actually dipped since 2022, when generative AI, capable of producing new content in a matter of seconds, came onto the scene. Last year, as many as 82% of business buyers and 65% of consumers were open to the use of AI for better experiences, Salesforce said.
Notably, on the vendor side, optimism remains sky-high, with a majority of professionals at the forefront of customer engagement (from IT and marketing to sales and service teams) saying generative AI will help their companies serve customers better.
What can businesses do?
Even though companies cannot hold off on AI implementation (after all, they have to stay relevant in today's dynamic environment), the survey found that a few key steps can help them win consumers' trust and get customers on board with the shift.
The first, as mentioned above, would be ensuring a greater level of transparency and human validation of AI's outputs. More than half of the customers surveyed said this would boost their trust. Beyond that, 49% of respondents said companies should give them more control over where and how AI is applied in engagement, such as opportunities to opt out; 39% called for third-party ethics review; and 36% sought government oversight.
Other suggested steps included industry standards for AI implementation, solicitation of customer feedback on how to improve AI’s use, training on diverse datasets and making the underlying algorithms publicly available.
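To make these suggestions concrete, here is a minimal, hypothetical sketch (not from the report) of how a customer-engagement flow might honor an opt-out preference, route AI drafts through a human reviewer, and label the result transparently. The function and field names are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: an engagement flow that respects an AI opt-out,
# keeps a human in the loop and labels AI-assisted replies transparently.
from dataclasses import dataclass


@dataclass
class Customer:
    id: str
    ai_opt_out: bool  # preference the customer controls, per the survey's "opt out" suggestion


def generate_draft(prompt: str) -> str:
    # Placeholder for a call to an LLM; a real system would use an actual model client.
    return f"[AI draft reply to: {prompt}]"


def human_review(draft: str) -> str:
    # Placeholder human-in-the-loop gate; in practice this would block until an agent approves or edits.
    return draft


def respond(customer: Customer, prompt: str) -> str:
    if customer.ai_opt_out:
        # Honor the opt-out: skip AI entirely and route to a human agent.
        return "Your request has been routed to a human agent."
    approved = human_review(generate_draft(prompt))
    # Transparency: tell the customer that AI was involved and that a human reviewed it.
    return approved + "\n(Drafted with AI assistance and reviewed by a human.)"


if __name__ == "__main__":
    print(respond(Customer(id="c-123", ai_opt_out=False), "Where is my order?"))
```

The point of the sketch is the ordering of the checks, not the specific code: the opt-out is evaluated before any model call, and nothing reaches the customer without passing the human-review step and carrying an AI-use disclosure.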
“As brands find new ways to keep up with rising customer expectations, they must also consider diverse viewpoints among their (targeted) base,” Michael Affronti, SVP and GM for Salesforce Commerce Cloud, said in a press release.
“Leading with strong values and ethical use of emerging technologies like generative AI will be a key indicator of future success,” he added.
Author: Shubham Sharma
Source: Venturebeat
Reviewed By: Editorial Team