Artificial stupidity: ‘Move slow and fix things’ could be the mantra AI needs

“Let’s not use society as a test-bed for technologies that we’re not sure yet how they’re going to change society,” warned Carly Kind, director at the Ada Lovelace Institute, an artificial intelligence (AI) research body based in the U.K. “Let’s try to think through some of these issues — move slower and fix things, rather than move fast and break things.”

Kind was speaking as part of a recent panel discussion at Digital Frontrunners, a conference in Copenhagen that focused on the impact of AI and other next-gen technologies on society.

The “move fast and break things” ethos embodied by Facebook’s rise to internet dominance is one that has been borrowed by many a Silicon Valley startup: develop and swiftly ship an MVP (minimum viable product), iterate, learn from mistakes, and repeat. These principles are relatively harmless when it comes to developing a photo-sharing app, social network, or mobile messaging service, but in the 15 years since Facebook came to the fore, the technology industry has evolved into a very different beast. Large-scale data breaches are a near-daily occurrence, data-harvesting on an industrial level is threatening democracies, and AI is now permeating just about every facet of society, often to humans’ chagrin.

Although Facebook officially ditched its “move fast and break things” mantra five years ago, the crux of many of today’s problems comes down to the fact that companies are approaching AI with the same ethos they applied to the products of yore: “full steam ahead, and to hell with the consequences.”

‘Artificial stupidity’

Above: 3D rendering of robots speaking no evil, hearing no evil, seeing no evil. Image Credit: Getty Images / Westend61

This week, news emerged that Congress has been investigating how facial recognition technology is being used by the military in the U.S. and abroad, with lawmakers noting that the technology simply isn’t accurate enough yet.

“The operational benefits of facial recognition technology for the warfighter are promising,” a letter from Congress read. “However, overreliance on this emerging technology could also have disastrous consequences if faulty or inaccurate facial scans result in the inadvertent targeting of civilians or the compromise of mission requirements.”

The letter went on to note that the “accuracy rates for images depicting black and female subjects were consistently lower than for those of white and male subjects.”

While there are countless other examples of how far AI still has to go in addressing algorithmic bias, the broader issue at play here is that AI simply isn’t good or trustworthy enough across the board.
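To make the disparity described in the congressional letter concrete: audits of face recognition systems typically report accuracy broken down by demographic group rather than as a single headline figure. The sketch below, which uses entirely hypothetical records and group labels, shows the kind of per-group tally such an evaluation might produce.

```python
# Hypothetical sketch of a per-group accuracy audit for a face recognition
# system. The records, group labels, and numbers are illustrative only.
from collections import defaultdict

# Each record: (demographic group, ground truth is a match?, system predicted a match?)
evaluation_records = [
    ("white_male", True, True),
    ("white_male", False, False),
    ("black_female", True, False),   # false non-match
    ("black_female", False, True),   # false match
    # ... a real audit would use thousands of labeled records per group
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, truth, predicted in evaluation_records:
    totals[group] += 1
    if truth == predicted:
        correct[group] += 1

for group, n in totals.items():
    print(f"{group}: {correct[group] / n:.0%} accuracy over {n} records")
```

A consistent gap between per-group figures like these is exactly the pattern the letter points to.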

“Everyone wants to be at the cutting edge, or the bleeding edge — from universities, to companies, to government,” said Dr. Kristinn R. Thórisson, an AI researcher and founder of the Icelandic Institute for Intelligent Machines, speaking in the same panel discussion as Carly Kind. “And they think artificial intelligence is the next [big] thing. But we’re actually in the age of artificial stupidity.”

Thórisson is a leading proponent of what is known as artificial general intelligence (AGI), which is concerned with integrating disparate systems to create a more complex AI with humanlike attributes, such as self-learning, reasoning, and planning. Depending on whom you ask, AGI is coming in five years, it’s a long way off, or it’s never happening. Thórisson, however, evidently does believe that AGI will happen one day. When that will be, he is not so sure, but what he is sure of is that today’s machines are not as smart as some may think.

“You use the word ‘understanding’ a lot when you’re talking about AI, and it used to be that people put ‘understanding’ in quotation marks when they talked about it in the context of AI,” Thórisson said. “When it comes down to it, these machines don’t really understand anything, and that’s the problem.”

For all the positive spin on how amazing AI now is at trumping humans at poker, Go, or Honor of Kings, there are numerous examples of AI fails in the wild. By most accounts, driverless cars are nearly ready for prime time, but there is plenty of evidence to suggest that obstacles remain before they can be left to their own devices.

For instance, news emerged this week that regulators are investigating Tesla’s recently launched automated Smart Summon feature, which allows drivers to remotely beckon their car inside a parking lot. In the wake of the feature’s official rollout last week, a number of users posted videos online showing crashes, near-crashes, and a generally comical state of affairs.

This isn’t to pour scorn on the huge advances that have been made by autonomous carmakers, but it shows that the fierce battle to bring self-driving vehicles to market can sometimes lead to half-baked products that perhaps aren’t quite ready for public consumption.

Crossroads

The growing tension — between consumers, corporations, governments, and academia — around the impact of AI technology on society is palpable. With the tech industry prizing innovation and speed over iterative testing at a slower pace, there is a danger of things getting out of hand — the quest to “be first,” or to secure lucrative contracts and keep shareholders happy, might just be too alluring.

All the big companies, from Facebook, Amazon, and Google through to Apple, Microsoft, and Uber, are competing on multiple business fronts, with AI a common thread permeating it all. There has been a concerted push to vacuum up all the best AI talent, either through acquiring startups or simply hiring the top minds from the best universities. And then there is the issue of securing big-name clients with big dollars to spend — Amazon and Microsoft are currently locking horns to win a $10 billion Pentagon contract for delivering AI and cloud services.

In the midst of all this, tech firms are facing increasing pressure over their provision of facial recognition services (FRS) to the government and law enforcement. Back in January, a coalition of more than 85 advocacy groups penned an open letter to Google, Microsoft, and Amazon, urging them to cease selling facial recognition software to authorities — before it’s too late.

“Companies can’t continue to pretend that the ‘break then fix’ approach works,” said Nicole Ozer, technology and civil liberties director for the American Civil Liberties Union (ACLU) of California. “History has clearly taught us that the government will exploit technologies like face surveillance to target communities of color, religious minorities, and immigrants. We are at a crossroads with face surveillance, and the choices made by these companies now will determine whether the next generation will have to fear being tracked by the government for attending a protest, going to their place of worship, or simply living their lives.”

Then in April, two dozen AI researchers working across industry and academia called on Amazon specifically to stop selling its Rekognition facial recognition software to law enforcement agencies. The crux of the problem, according to the researchers, is that there isn’t sufficient regulation to control how the technology is used.

Above: An illustration shows Amazon Rekognition’s support for detecting faces in crowds.
Image Credit: Amazon
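For context on what a service like this exposes to customers, here is a minimal, hypothetical sketch of how face detection on a single image can be invoked through Rekognition’s public API using the boto3 Python SDK. It assumes AWS credentials are already configured and a local file named crowd.jpg exists; it illustrates how accessible the capability is to any paying customer, and is not a reproduction of any law enforcement deployment.

```python
# Minimal sketch: detect faces in a single image with Amazon Rekognition via boto3.
# Assumes AWS credentials are configured and "crowd.jpg" exists locally.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("crowd.jpg", "rb") as image_file:
    response = client.detect_faces(
        Image={"Bytes": image_file.read()},
        Attributes=["DEFAULT"],  # bounding box, pose, quality, confidence
    )

for face in response["FaceDetails"]:
    box = face["BoundingBox"]
    print(f"Face at ({box['Left']:.2f}, {box['Top']:.2f}) "
          f"with confidence {face['Confidence']:.1f}%")
```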

“We call on Amazon to stop selling Rekognition to law enforcement as legislation and safeguards to prevent misuse are not in place,” the researchers wrote. “There are no laws or required standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties.”

However, Amazon later went on record to say that it would provide facial recognition technology to any federal government agency, so long as its use is legal.

These controversies aren’t limited to the U.S. either; this is a global problem that countries and companies everywhere are having to tackle. London’s King’s Cross railway station hit the headlines in August when it was found to have deployed facial recognition technology in CCTV security cameras, raising questions not only of ethics but also of legality. A separate report also revealed that local police had submitted photos of seven people for use in conjunction with King’s Cross’s facial recognition system, in a deal that was not disclosed until yesterday.

All these examples serve to feed the argument that AI development is outpacing society’s ability to put adequate checks and balances in place.

Pushback

Digital technology has often moved too fast for regulation or external oversight to keep up, but we’re now starting to see major regulatory pushback, particularly relating to data privacy. The California Consumer Privacy Act (CCPA), which is due to take effect on January 1, 2020, is designed to enhance the privacy rights of consumers across the state, while Europe is currently weighing a new ePrivacy Regulation, which covers an individual’s right to privacy regarding electronic communications.

But the biggest regulatory advance in recent times has been Europe’s General Data Protection Regulation (GDPR), which stipulates all manner of rules around how companies should manage and protect their customers’ data. Huge fines await any company that contravenes GDPR, as Google found out earlier this year when it was hit with a €50 million ($57 million) fine by French data privacy body CNIL for “lack of transparency” over how it personalized ads. Elsewhere, British Airways (BA) and hotel giant Marriott were slapped with $230 million and $123 million fines, respectively, over gargantuan data breaches. Such fines may serve as incentives for companies to better manage data in the future, but in some respects the regulations we’re starting to see now are too little, too late: the privacy ship has sailed.

“Rolling back is a really difficult thing to do — we’ve seen it around the whole data protection field of regulation, where technology moves much faster than regulation can move,” Kind said. “All these companies went ahead and started doing all these practices; now we have things like the GDPR trying to pull some of that back, and it’s very difficult.”

Looking back over the past 15 years or so, a period during which cloud computing and ubiquitous computing have taken hold, there are perhaps lessons to be learned about how society should proceed with AI research, development, and deployment.

“Let’s slow things down a bit before we roll out some of this stuff, so that we do actually understand the societal impacts before we forge ahead,” Kind continued. “I think what’s at stake is so vast.”


Author: Paul Sawers
Source: VentureBeat
