Microsoft and OpenAI, the AI research lab in which Microsoft has invested over $1 billion, today submitted a document to the U.S. government describing how a “digitally transformed” export controls system might work and the benefits it could provide. The organizations suggest that their proposed solutions could bring commercial benefits to users, as well as a more powerful, dynamic, and targeted method for controlling U.S. exports of foundational technologies.
Following a mandate in the Export Control Reform Act of 2018, the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) undertook efforts to identify and control exports of “emerging” or “foundational” technologies ostensibly vital to national security. In comment periods ending in January 2019 and earlier this week, BIS solicited comments from industry on how to identify these technologies and approach controlling them.
Microsoft and OpenAI take issue with restrictions promulgated via traditional export control approaches. Restrictions based on performance criteria, they assert, would ignore the fact that technologies meeting the same criteria have both beneficial and nefarious use cases. Microsoft and OpenAI also speculate that hard-and-fast rules might fail to keep pace with technological developments and quickly become outdated. Moreover, they argue restrictions could cut off U.S. companies’ access to global markets, even in allied countries that aren’t bound by similar regulations.
“This overly broad approach would have the unintended consequence of foreclosing beneficial uses,” Sarah O’Hare O’Neal, associate general counsel of global trade at Microsoft, wrote in a blog post. “Taking facial recognition as an example, the same digital biometrics technology, software and hardware capture and analyze information to identify people, whether for the purpose of finding a terrorist or a missing child versus finding an oppressed dissident or minority.”
The solution, according to Microsoft and OpenAI, is fourfold:
- Software features designed into technologies that enable controls against prohibited uses and users. These features could include, for example, identity verification systems and information flow controls to discern whether facts and criteria are consistent with authorized users and uses (a rough illustration follows this list).
- Hardware roots of trust built into devices that contain sensitive technologies, namely identity verification through coprocessors akin to those used to secure mobile payments and prevent cheating on video game consoles.
- Tamper-resistant tools for technologies including software and hardware applications that can harden infrastructures against “subversion.”
- AI techniques to “more dexterously” identify and restrict problematic end users or uses by continuously improving to incorporate government policy changes or observations from unauthorized user or use attempts. Microsoft and OpenAI cite GPT-3, a large neural language model trained on a range of internet data to complete texts that users enter, as an example of such a technique already in development. (Notably, Microsoft recently agreed to exclusively license GPT-3 from OpenAI.)
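The filing describes these controls only at a high level. As a rough sketch of the first idea, a software feature might gate access to a controlled capability behind a verified identity and a declared use case checked against policy. The names, policy values, and structure below are invented for illustration and do not come from the Microsoft/OpenAI document:

```python
# Hypothetical sketch of a software-side export-control gate: access to a
# controlled capability is granted only if an upstream identity check passed,
# the user's jurisdiction is on an allowed list, and the declared use case is
# not prohibited. All policy values here are illustrative placeholders.
from dataclasses import dataclass

ALLOWED_JURISDICTIONS = {"US", "CA", "GB", "JP"}   # illustrative only
PROHIBITED_USES = {"mass-surveillance", "weapons-targeting"}

@dataclass
class AccessRequest:
    user_id: str
    jurisdiction: str        # e.g. derived from identity verification
    declared_use: str        # e.g. "medical-imaging", "missing-persons"
    identity_verified: bool  # result of an upstream identity check

def authorize(request: AccessRequest) -> bool:
    """Return True only if the request passes every policy check."""
    if not request.identity_verified:
        return False
    if request.jurisdiction not in ALLOWED_JURISDICTIONS:
        return False
    if request.declared_use in PROHIBITED_USES:
        return False
    return True

if __name__ == "__main__":
    ok = authorize(AccessRequest("u-123", "US", "missing-persons", True))
    blocked = authorize(AccessRequest("u-456", "US", "mass-surveillance", True))
    print(ok, blocked)  # True False
```

In the companies’ framing, such checks would be paired with hardware roots of trust and tamper resistance so the policy logic itself could not simply be stripped out of an exported product.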
Microsoft and OpenAI suggest that these methods combined could lay the groundwork for systems that secure supply chains and protect critical infrastructure. They also posit that such methods could encourage corporate social responsibility across the industry, helping to ensure technologies aren’t used in “destructive and dangerous” ways.
“For example, OpenAI is working collaboratively with its customers on AI-driven systems to direct model outputs so that they conform to customer expectations and OpenAI’s mission that AI benefit all of humanity, such as being able to provide reliable safeguards against user-generated hate speech,” O’Neal wrote. “And Microsoft has long publicly supported regulations on the use of facial recognition technology and has committed to self-enforce similar restrictions based on our Facial Recognition Principles. More recently, Microsoft imposed gating restrictions on its Custom Neural Voice service, a synthetic voice generating technology with incredible benefits, such as allowing people with degenerative diseases to preserve and project their own voices from a computing device when they can no longer speak.”
Microsoft and OpenAI’s comments come not only in response to the Export Control Reform Act, but after a surge in tech nationalism globally. China imposed new rules around tech exports, with the country’s Ministry of Commerce adding 23 items to its restricted list, including technologies such as personal information push services based on data analysis and AI interactive interface technology. And following Nvidia’s announcement earlier this year that it intends to acquire U.K.-based chipmaker Arm, the majority of U.K.-based IT experts said the government should intervene to protect the country’s tech sector, according to a survey from the industry’s professional body, The Chartered Institute for IT.
Author: Kyle Wiggers
Source: Venturebeat