It’s 2022, and developments in the AI industry are off to a slow — but nonetheless eventful — start. While the spread of the Omicron variant put a damper on in-person conferences, enterprises aren’t letting the pandemic disrupt the course of technological progress.
John Deere previewed a tractor that uses AI to find its way to a field on its own and plow the soil without instructions. As Wired's Will Knight points out, it, and self-driving tractors like it, could help address the growing labor shortage in agriculture; employment of agricultural workers is expected to increase just 1% from 2019 to 2029. But such tractors also raise questions about vendor lock-in and the role of human farmers alongside robots.
For example, farmers could become increasingly reliant on Deere’s systems for decision-making. The company could also use the data it collects from the autonomous tractors to develop features that it then gates behind a subscription, taking away farmers’ autonomy.
Driverless tractors are a microcosm of the growing role of automation across industries. As countless reports warn, while AI could lead to increased productivity, profitability, and creativity, these gains won’t be evenly distributed. AI will complement roles in fields where there’s no substitute for skilled workers, like health care. But in industries relying on standard routines, AI has the potential to replace rather than support jobs.
A report by American University suggests that legislators address these gaps by focusing on restructuring school curricula to reflect the changing skill demands. Regulation has a role to play, too, in preventing companies from monopolizing AI in certain industries to pursue consumer-hostile practices. The right solution — or, more accurately, a mix of solutions — remains elusive. But the mass-market advent of self-driving tractors is yet another reminder that technology often runs ahead of policymaking.
Regulating algorithms
Speaking of regulators, China this week further detailed its plans to rein in the algorithms apps use to recommend what consumers buy, read, and watch online. According to a report in the South China Morning Post, companies that use these "recommender" algorithms will be required to "promote positive energy" and to allow users to decline the suggestions their services offer.
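In practice, honoring an opt-out like this can be as simple as falling back to a non-personalized ordering whenever a user declines algorithmic suggestions. Here's a minimal sketch of that pattern; the `Item` fields, the `build_feed` function, and the opt-out flag are hypothetical illustrations, not drawn from the regulation or any real codebase:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    item_id: str
    published_at: float  # unix timestamp
    relevance: float     # score from a personalized recommender model

def build_feed(items: List[Item], personalization_opt_out: bool) -> List[Item]:
    """Return a feed that honors the user's choice to decline
    algorithmic recommendations."""
    if personalization_opt_out:
        # Fall back to a non-personalized ordering: newest first.
        return sorted(items, key=lambda i: i.published_at, reverse=True)
    # Default: rank by the recommender's relevance score.
    return sorted(items, key=lambda i: i.relevance, reverse=True)
```

The design choice matters: an opt-out that merely hides the ranking while still applying it would not satisfy a requirement that users be able to decline recommendations outright.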
The move — which will impact corporate giants including Alibaba, Tencent, and TikTok owner ByteDance, among others — is aimed at bringing the Chinese tech industry to heel. But it also reflects a broader effort by governments to prevent abuse of AI technologies for profit at any cost.
Beyond the European Union’s (EU) comprehensive AI Act, a government think tank in India has proposed an AI oversight board to establish a framework for “enforcing responsible AI principles.” In the U.K., the government launched a national standard for algorithmic transparency, which recommends that public sector bodies in the country explain how they’re using AI to make decisions. And in the U.S., the White House released draft guidance that includes principles for U.S. agencies when deciding whether — and how — to regulate AI.
A recent Deloitte report predicts that 2022 will see increased discussion about regulating AI “more systematically,” although the coauthors concede that enacting proposals into regulation will likely happen in 2023 (or beyond). Some jurisdictions may even try to ban — and, indeed, have banned — whole subfields of AI, like facial recognition in public spaces and social scoring systems, the report notes.
Why now? AI is becoming ubiquitous, which is attracting greater regulatory scrutiny. The technology's implications for fairness, bias, discrimination, diversity, and privacy are also coming into clearer view, as is the geopolitical leverage that AI regulations could give countries that implement them early.
Regulating AI will not be easy. AI systems remain difficult to audit, and it can't always be guaranteed that the data used to train them is "free of errors and complete" (as the EU's AI Act would require). Moreover, countries could pass conflicting regulations, making it challenging for companies to comply with all of them. But Deloitte presents as a best-case scenario the emergence of a "gold standard," as happened with the EU's General Data Protection Regulation (GDPR) in the privacy arena.
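To see why "free of errors and complete" is such a high bar, consider what an automated audit of training data can actually verify. A basic check catches missing values or duplicates, but passing it proves nothing about sampling bias, label noise, or underrepresented groups. The sketch below assumes a tabular dataset; the column checks and the `training_data.csv` path are hypothetical:

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame) -> dict:
    """Run basic data-quality checks. Passing these is necessary
    but not sufficient for data to be 'free of errors and complete';
    sampling bias and mislabeled examples can survive all of them."""
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "constant_columns": [c for c in df.columns if df[c].nunique() <= 1],
    }

# Hypothetical usage with an illustrative file path:
report = audit_training_data(pd.read_csv("training_data.csv"))
print(report)
```

A clean report here tells a regulator only that the obvious defects are absent, which is part of why auditing AI systems remains hard.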
“More regulations over AI will be enacted in the very near term. Though it’s not clear exactly what those regulations will look like, it is likely that they will materially affect AI use,” Deloitte writes. That’s a safe bet.
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Kyle Wiggers
Senior Staff Writer
VentureBeat