AI or BS: Distinguishing artificial intelligence from trade show hype

Though it’s a coincidence that I’m writing this article roughly one year after my colleague Khari Johnson railed against the “public nuisance” of “charlatan AI,” the annual Consumer Electronics Show (CES) clearly inspired both missives. At the tail end of last year’s show, Khari called out a seemingly fake robot AI demo at LG’s CES press conference, noting that for society’s benefit, “tech companies should spare the world overblown or fabricated pitches of what their AI can do.”

Having spent last week at CES, I found it painfully obvious that tech companies — at least some of them — didn’t get the message. Once again, there were plenty of glaring examples of AI BS on the show floor, some standing out like sore thumbs while others blended into the crowded halls of the massive event.

AI wasn’t always poorly represented, though: There were some legitimate and legitimately exciting examples of artificial intelligence at CES. And all the questionable AI pitches were more than counterbalanced by the automotive industry, which is doing a better job than others at setting expectations for AI’s growing role in its products and services, even if its own marketing isn’t quite perfect.

When AI is more artificial than intelligent

Arguably the biggest AI sore thumb at CES was Neon, a Samsung-backed project that claims to be readying “artificial human” assistants to have conversations and assist users with discrete tasks later this year. Set to ethereal music that recalled Apple’s memorable reveal video for the original Apple Watch, the absurdly large Neon booth filled dozens of screens with life-sized examples of virtual assistants, including a gyrating dancer, a friendly police officer, and multiple female and male professionals. As we noted last week, the assistants looked “more like videos than computer-generated characters.”

The problem, of course, is that the assistants were indeed videos of humans, not computer-generated characters. Samsung subsidiary Star Labs filmed people to look like cutting-edge CG avatars against neutral backgrounds, but the only “artificial human” element was the premise that the humans were indeed artificial. Absent more conspicuous disclosures, booth visitors had no clue that this was the case unless they stooped down to the ground and noticed, at the very bottom of the giant displays, a small white fine-print disclaimer: “Scenarios for illustrative purposes only.”

I can’t think of a bigger example of “charlatan AI” at CES this year than an entire large booth dedicated to fake AI assistants, but there wasn’t any shortage of smaller examples of the misuse or dilution of “AI” as a concept. The term was all over booths at this year’s show, both explicit (“AI”) and implied (“intelligence”), as likely to appear on a new television set or router as in an advanced robotics demonstration.

As just one example of small-scale AI inflation, TCL tried to draw people to its TVs with an “AI Photo Animator” demonstration that added faux bubbles to a photo of a mug of beer, or steam to a mug of tea. The real-world applications of this feature are questionable at best, and the “AI” component — recognizing one of several high-contrast props when held in a specific location within an image — is profoundly limited. It’s unclear why anyone would be impressed by a slow, controlled, TV-sized demo of something less impressive than what Snapchat and Instagram do in real time on pocketable devices every day; describing it as “AI” with so little intelligence felt like a stretch.

When AI’s there, but to an unknown extent

Despite last year’s press conference “AI robot” shenanigans, I’m not going to say that all of LG’s AI initiatives are nonsense. To the contrary, I’ll take the company seriously when it says that its latest TVs are powered by the α9 Gen3 AI Processor (that’s Alpha 9, styled in an almost mathematical format), which it claims uses deep learning technology to upscale 4K images to 8K, selectively optimize text and faces, or dynamically adjust picture and sound settings based on content.

Unlike an artificial human that looks completely photorealistic while having natural conversations with you, these are bona fide tasks that AI can handle in the year 2020, even if I’d question the actual balance of algorithmic versus true AI processing that’s taking place. Does an LG TV with the α9 Gen3 processor automatically learn to get better over time at upscaling videos? Can it be told when it’s made a mistake? Or is it just using a series of basic triggers to do the same types of things that HD and 4K TVs without AI have been doing for years?

Because of past follies, these types of questions over the legitimacy of AI now dog both LG and other companies that are exhibiting similar technologies. So when Ford and Agility Robotics offered an otherwise remarkable CES demonstration of a bipedal package loading and delivery robot — a walking, semi-autonomous humanoid robot that works in tandem with a driverless van — the question wasn’t so much whether the robot could move or generally perform its tasks, but whether a human hiding somewhere was actually controlling it.

For the record, the robot appeared to be operating independently — more or less. It moved with the unsettling gait of Boston Dynamics’ robotic dog Spot, grabbing boxes from a table, then walking over and placing them in a van, as well as making the trip in reverse. At one point, a human gave a box on the table a little push toward the robot to help it recognize and pick up the object. So even if the demo was slightly tainted by human interaction, the AI tasks it was apparently completing autonomously were thousands of times more complicated than adding bubbles to a static photo of someone holding a fake beer mug.

Automotive autonomy is a good but imperfect model for quantifying AI for end users

Automotive companies have been somewhat better at disclosing the actual extent of a given car AI system’s autonomy, though the lines dividing engineers from marketers obviously vary from company to company. Generally, self-driving car and taxi companies describe their vehicles’ capabilities using the Society of Automotive Engineers’ J3016 standard, which defines six “levels” of car automation, from “no automation” (level 0) up through slight steering and/or acceleration assistance (level 1), highway-capable autopilot (level 2), semi-autonomous but human-monitored autopilot (level 3), full autonomous driving in mapped, fair-weather situations (level 4), and full autonomous driving in all conditions (level 5).
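For readers who want that taxonomy in a more concrete form, here’s a minimal sketch in Python of the six levels as a simple enum. The descriptions are paraphrased from this article rather than SAE’s formal wording, and the supervision helper is an illustrative rule of thumb, not part of the standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased from the descriptions above."""
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # slight steering and/or acceleration assistance
    PARTIAL_AUTOMATION = 2      # highway-capable autopilot, human still drives
    CONDITIONAL_AUTOMATION = 3  # semi-autonomous, but human-monitored autopilot
    HIGH_AUTOMATION = 4         # full self-driving in mapped, fair-weather situations
    FULL_AUTOMATION = 5         # full self-driving in all conditions

def human_must_supervise(level: SAELevel) -> bool:
    """Illustrative rule of thumb: at levels 0-3 a human must stay engaged or be
    ready to take over; at levels 4-5 the system handles its defined conditions."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

# Example: a nominally level 4 robotaxi should not need constant supervision
# within its mapped operating area.
print(human_must_supervise(SAELevel.HIGH_AUTOMATION))  # False
```

The point of encoding it this simply is that a rider or buyer shouldn’t need to know which AI techniques are under the hood — only which level the vehicle actually delivers.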

It’s worth noting that end users don’t need to know which specific AI techniques are being used to achieve a given level of autonomy. Whether you’re buying or taking a ride in an autonomous car, you just need to know that the vehicle is capable of no, some, or full autonomous driving in specific conditions, and SAE’s standard does that. Generally.

When I opened the Lyft app to book a ride during CES last week, I was offered the option to take a self-driving Aptiv taxi, notably at no apparent discount or surcharge compared with regular rates, so I said yes. Since even prototypes of level 5 vehicles are pretty uncommon, I wasn’t shocked that Aptiv’s taxi was a level 4 vehicle, or that a human driver was sitting behind the steering wheel with a trainer in the adjacent passenger seat. I also wasn’t surprised that part of the “autonomous” ride actually took place under human control.

But I wasn’t expecting the ratio of human to autonomous control to be as heavily tilted as it was in favor of the human driver. Based on how often the word “manual” appeared on the front console map, my estimate was that the car was only driving itself a quarter or a third of the time, and even then under constant human monitoring. That’s low for a vehicle that, by the “level 4” definition, should have been capable of fully driving itself on a mild day with no rain.

The trainer suggested that they were engaging manual mode to override the car’s predispositions, which would have delayed us due to abnormally heavy CES traffic and atypical lane blockages. Even so, my question after the experience was whether “full autonomy” is really an appropriate term for car AI that needs a human (or two) to tell it what to do. Marketing aside, the ride felt closer to SAE level 3 than level 4.

Applying the automotive AI model to other industries

After canvassing as many of CES’s exhibits as I could handle, I’m convinced that the auto industry’s broad embrace of level 0 to level 5 autonomy definitions was a good move, even if those definitions are sometimes (as with Tesla’s “Autopilot”) somewhat fuzzy. So long as the levels stay defined or become clearer over time, drivers and passengers should be able to make reasonable assumptions about the AI capabilities of their vehicles, and prepare accordingly.

Applying the same type of standards across other AI-focused industries wouldn’t be easy, but a basic implementation would be to set up a small collection of straightforward levels. Level 0 would disclose no AI, with 1 for basic AI that might assist with one- or two-step, previously non-AI tasks (say, TV upscaling), 2 for more advanced multi-step AI, 3 for AI that’s capable of learning and updating itself, and so on. The definitions might vary between product types, or they might broadly correspond to larger industry norms.
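To show how lightweight such a disclosure scheme could be, here’s a rough sketch in Python. The level names, boundaries, and labeling helper are purely illustrative assumptions built on the proposal above — there is no such industry standard today.

```python
from enum import IntEnum

class AIDisclosureLevel(IntEnum):
    """Hypothetical consumer AI disclosure levels, mirroring the proposal above.
    Names and boundaries are illustrative only, not an industry standard."""
    NO_AI = 0          # no AI involved at all
    BASIC_AI = 1       # assists with one- or two-step, previously non-AI tasks (e.g., TV upscaling)
    MULTI_STEP_AI = 2  # more advanced, multi-step AI
    LEARNING_AI = 3    # AI capable of learning and updating itself over time

def disclosure_label(level: AIDisclosureLevel) -> str:
    """Turn a level into the kind of plain-language footnote a spec sheet could carry."""
    descriptions = {
        AIDisclosureLevel.NO_AI: "no AI",
        AIDisclosureLevel.BASIC_AI: "basic AI (single-step assistance)",
        AIDisclosureLevel.MULTI_STEP_AI: "multi-step AI",
        AIDisclosureLevel.LEARNING_AI: "self-learning AI",
    }
    return f"AI Level {int(level)}: {descriptions[level]}"

# Example: a TV whose "AI" is a fixed upscaling pipeline would disclose
print(disclosure_label(AIDisclosureLevel.BASIC_AI))  # AI Level 1: basic AI (single-step assistance)
```

However the levels end up being defined, the value is in the disclosure itself: a buyer could tell at a glance whether a product’s “AI” is a fixed trick, a multi-step system, or something that actually learns.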

In my view, disclosure of actual AI capabilities is already overdue, and the problem will only get worse once products marketed with “AI” begin conspicuously failing to meet their claims. If consumers discover, for instance, that LG’s new AI washing machines don’t actually extend “the life of garments by 15 percent,” class action lawyers may start taking AI-boosting tech companies to the cleaners. And if numerous AI solutions are otherwise overblown or fabricated — the equivalent of level 0 or 1 performance when they promise to deliver level 3 to 5 results — the very concept of AI will quickly lose whatever currency it presently has with consumers.

It’s probably unrealistic to hope that companies inclined to toss the word “AI” into their press releases or marketing materials would provide at least a footnote disclosing the product’s current/as-demonstrated and planned final states of autonomy. But if the alternative is continued overinflation or fabrication of AI functionality where it doesn’t actually perform or exist, the CE industry as a whole will be a lot better off in the long term if it starts self-policing these claims now, rather than being held accountable for it in the courts of public opinion — or real courts — later.


Author: Jeremy Horwitz
Source: Venturebeat
