Believe it or not, artificial intelligence is now a part of everyday life. It is built into phones, TVs, home appliances, transportation and tools used at work. However, increased adoption does not necessarily translate into acceptance. A recent panel discussion on AI trust surfaced a paradox that will shape the next phase of deployment: people use AI every day, often without realising it, yet many feel uneasy when asked directly whether they trust the technology.
Research presented during the panel discussion ‘The Next 20 Years: A Bright Future for Vision AI’ at Samsung’s First Look at CES indicated that respondents, especially in developed economies, are more inclined to reject AI than to accept it. The panel contended that this gap stems from confusion about what counts as AI, rather than outright rejection of the technology. AI already powers many essential services, such as email systems, navigation tools, airline check-ins and logistics platforms, but users rarely think of these as AI-driven. As a result, perception shapes the debate more than direct experience does.
Making trust visible rather than assumed
One of the central arguments was that trust cannot remain an abstract promise. A senior executive from Samsung said the rapid spread of AI features has outpaced people’s understanding of how their data is handled. According to him, AI often feels like a hidden process, which creates anxiety around privacy and control.

He said, “Trust must be communicated through clear signals.” Predictable behaviour was identified as the first requirement: systems should act consistently and not push users into features they do not understand. The second signal was strong local data protection, with sensitive information processed and stored on the device rather than automatically sent to the cloud. The third was ecosystem-level security, in which connected devices authenticate and protect one another, reducing single points of failure.
In this framing, trust becomes part of product design rather than a marketing claim.
Personalisation with limits
Another speaker approached trust from the user’s point of view. She argued that people want personalisation, but only if they can see how it works. Transparency about whether data is processed locally or remotely, and the ability to control those choices, were described as baseline expectations.
She also outlined three pillars providers must meet and said, “Credibility means the system performs reliably and admits uncertainty. Benevolence refers to AI acting in the user’s interest, such as remembering preferences to reduce effort. Integrity means the system behaves consistently over time and does not quietly expand its scope.” As AI becomes more proactive, she said, these principles matter more than feature counts.
Safety versus surveillance
The discussion then turned to a harder question: where protection ends and surveillance begins. One panellist warned that many safety arguments rely on collecting more data, without addressing who controls that data once collected. She described this as a conflict between defending against low-resource misuse and preventing abuse by high-resource actors such as states or large organisations.
Advertising-driven business models were identified as a specific risk. When users cannot trace how their data is monetised, trust erodes, even if systems claim to improve safety or convenience. Explainability, she argued, must extend beyond model outputs to include where data travels after a command is executed.
Why developing markets appear more open
The panel also examined why respondents in developing economies appear more accepting of AI. One speaker rejected the idea that this reflects lower exposure. He pointed out that AI tools are already common in these markets, from creative applications to mobility and payments. The difference, he suggested, lies in narrative. In many developed markets, AI is discussed in terms of fear of job loss or loss of control, while in other regions, it is framed as a practical tool.
Looking toward 2030, he said long-term trust will depend on output quality, dependability and formal risk approval. In enterprises, insurers and compliance teams often decide whether AI can scale. In consumer markets, convenience drives adoption first, with trust acting as a threshold rather than a purchase motive.
Disinformation, incentives, and responsibility
On misinformation, the panel disagreed sharply. Some argued that technology can counter technology, using detection tools and improved literacy. Others said this ignores incentive structures that reward engagement over accuracy. Regulation, they noted, often arrives after damage is done. Financial incentives for safer behaviour were suggested as a faster lever than rules alone.
From an industry perspective, the Samsung executive said companies must invest in internal AI safety testing and collaborate with partners such as Google and Microsoft to share threat research rather than working in isolation.
Responding to a question from The Mobile Indian on two practical points, hallucinations and standardisation, one panellist argued that hallucinations are not a fixed ceiling, pointing to work on confidence estimation and systems that can say “I don’t know” rather than forcing an answer. Another response returned to the product layer: transparency, a clear explanation of what data is used, and user choice between on-device processing and cloud features, including settings that adjust response sensitivity.
On standards, the moderator noted that political incentives can push countries toward separate approaches rather than uniform rules, even if standardisation would help users.
Trust as lived experience
The discussion closed on a practical note. Trust in AI, the moderator concluded, grows through direct experience, clear privacy guardrails, visible user choice and confidence that skills and jobs will not be discarded without support. He said, “The message was not that trust will arrive on its own, but that it must be built into systems people already rely on.”
As AI shifts from novelty to infrastructure, the panel’s consensus was narrow but clear: trust will not be won through ambition alone. It will be earned through design decisions users can see, understand and control.
Author: Sandeep Budki
Source: The Mobile Indian
Reviewed By: Editorial Team