It has been a great year for artificial intelligence. Companies are spending more on large AI projects, and new investment in AI startups is on pace for a record year. All this investment and spending is yielding results that are moving us all closer to the long-sought holy grail — artificial general intelligence (AGI). According to McKinsey, many academics and researchers maintain that there is at least a chance that human-level artificial intelligence could be achieved in the next decade. And one researcher states: “AGI is not some far-off fantasy. It will be upon us sooner than most people think.”
A further boost comes from AI research lab DeepMind, which recently submitted a compelling paper titled “Reward is Enough” to the peer-reviewed journal Artificial Intelligence. The authors posit that reinforcement learning, a form of deep learning based on behavior rewards, will one day lead to the replication of human cognitive capabilities and, ultimately, AGI. Such a breakthrough would allow for instantaneous calculation and perfect memory, yielding an artificial intelligence that would outperform humans at nearly every cognitive task.
We are not ready for artificial general intelligence
Despite assurances from stalwarts that AGI will benefit all of humanity, there are already real problems with today’s single-purpose narrow AI algorithms that call this assumption into question. According to a Harvard Business Review story, when AI examples from predictive policing to automated credit-scoring algorithms go unchecked, they represent a serious threat to our society. A recently published Pew Research survey of technology innovators, developers, business and policy leaders, researchers, and activists reveals skepticism that ethical AI principles will be widely implemented by 2030, owing to a widespread belief that businesses will prioritize profits and governments will continue to surveil and control their populations. If it is so difficult to enable transparency, eliminate bias, and ensure the ethical use of today’s narrow AI, then the potential for unintended consequences from AGI appears astronomical.
And that concern is just for the actual functioning of the AI. The political and economic impacts of AI could produce a range of outcomes, from a post-scarcity utopia to a feudal dystopia. It is possible, too, that both extremes could coexist. For instance, if the wealth generated by AI is distributed throughout society, it could contribute to the utopian vision. However, we have already seen AI concentrate power, with a relatively small number of companies controlling the technology. That concentration of power sets the stage for the feudal dystopia.
Perhaps less time than thought
The DeepMind paper describes how AGI could be achieved. Estimates of how far off that is range from 20 years to never, although recent advances suggest the timeline will fall at the shorter end of that spectrum, and possibly sooner still. I argued last year that GPT-3 from OpenAI has moved AI into a twilight zone between narrow and general AI. GPT-3 is capable of many different tasks with no additional training: it can produce compelling narratives, generate computer code, autocomplete images, translate between languages, and perform math calculations, among other feats, including some its creators did not plan. This apparent multifunctional capability does not sound much like the definition of narrow AI. Indeed, it is much more general in function.
Even so, today’s deep-learning algorithms, including GPT-3, are not able to adapt to changing circumstances, a fundamental distinction that separates today’s AI from AGI. One step toward adaptability is multimodal AI, which combines the language processing of GPT-3 with other capabilities such as visual processing. For example, based upon GPT-3, OpenAI introduced DALL-E, which generates images based on the concepts it has learned. Using a simple text prompt, DALL-E can produce “a painting of a capybara sitting in a field at sunrise.” Though it may never have “seen” such a picture before, it can combine what it has learned of paintings, capybaras, fields, and sunrises to produce dozens of images. It is thus multimodal, more capable, and more general than GPT-3, though still not AGI.
Researchers from the Beijing Academy of Artificial Intelligence (BAAI) in China recently introduced Wu Dao 2.0, a multimodal-AI system with 1.75 trillion parameters. This is just over a year after the introduction of GPT-3 and is an order of magnitude larger. Like GPT-3, multimodal Wu Dao — which means “enlightenment” — can perform natural language processing, text generation, image recognition, and image generation tasks. But it can do so faster, arguably better, and can even sing.
Conventional wisdom holds that achieving AGI is not necessarily a matter of increasing computing power and the number of parameters of a deep learning system. However, there is a view that complexity gives rise to intelligence. Last year, Geoffrey Hinton, the University of Toronto professor who is a pioneer of deep learning and a Turing Award winner, noted: “There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses.” Synapses are the biological equivalent of deep learning model parameters.
Wu Dao 2.0 has apparently achieved this number. BAAI Chairman Dr. Zhang Hongjiang said upon the 2.0 release: “The way to artificial general intelligence is big models and [a] big computer.” Just weeks after the Wu Dao 2.0 release, Google Brain announced a deep-learning computer vision model containing two billion parameters. While it is not a given that the trend of recent gains in these areas will continue apace, there are models that suggest computers could have as much power as the human brain by 2025.
[Chart: projected computing power relative to the human brain. Source: Mother Jones]
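To put those numbers side by side, here is a quick back-of-the-envelope comparison in Python, using the parameter counts cited above plus GPT-3's widely reported 175 billion parameters. It is an illustrative calculation only, not a claim that parameters and synapses are truly equivalent.

```python
# Illustrative back-of-the-envelope comparison (not a claim of equivalence
# between parameters and synapses); figures are those cited in the text,
# plus GPT-3's widely reported parameter count.
HINTON_SYNAPSE_MARK = 1_000_000_000_000       # ~1 trillion synapses per cubic cm of brain
MODELS = {
    "GPT-3": 175_000_000_000,                 # widely reported figure
    "Google Brain vision model": 2_000_000_000,
    "Wu Dao 2.0": 1_750_000_000_000,
}

for name, params in MODELS.items():
    ratio = params / HINTON_SYNAPSE_MARK
    print(f"{name}: {params:,} parameters = {ratio:.3f}x the one-trillion mark")
```

On these rough figures, Wu Dao 2.0 is the first system to clear the one-trillion threshold Hinton described.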
Expanding computing power and maturing models pave the road to AGI
Reinforcement learning algorithms attempt to emulate humans by learning how to best reach a goal through seeking out rewards. With AI models such as Wu Dao 2.0 and computing power both growing exponentially, might reinforcement learning — machine learning through trial and error — be the technology that leads to AGI as DeepMind believes?
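To make the mechanism concrete, here is a minimal, hypothetical sketch of reinforcement learning in Python: a tabular Q-learning agent that learns a trivial walk-to-the-goal task from nothing but a scalar reward. The corridor environment, parameter values, and names are invented for illustration; this is the textbook technique in miniature, not DeepMind's code.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only): an agent learns,
# purely from a scalar reward signal, to walk down a short 1-D corridor
# and reach the goal cell at the far end.
NUM_STATES = 6               # cells 0..5; cell 5 is the goal
ACTIONS = (-1, +1)           # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for every (state, action) pair
Q = {(s, a): 0.0 for s in range(NUM_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Environment dynamics: move, and pay out reward 1 only at the goal."""
    nxt = min(max(state + action, 0), NUM_STATES - 1)
    done = nxt == NUM_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(300):                      # episodes of trial and error
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, occasionally explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy should now be "move right" (+1) in every cell.
print({s: greedy(s) for s in range(NUM_STATES - 1)})
```

The same loop of acting, observing a reward, and updating a value estimate underlies the far larger systems described below, with deep neural networks standing in for the lookup table.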
The technique is already widely used and gaining further adoption. For example, self-driving car companies like Wayve and Waymo are using reinforcement learning to develop the control systems for their cars. The military is actively using reinforcement learning to develop collaborative multi-agent systems, such as teams of robots that could work side by side with future soldiers. McKinsey recently helped Emirates Team New Zealand prepare for the 2021 America’s Cup by building a reinforcement learning system that could test any type of boat design in digitally simulated, real-world sailing conditions. This allowed the team to achieve a performance advantage that helped it secure its fourth Cup victory.
Google recently used reinforcement learning on a dataset of 10,000 computer chip designs to develop its next-generation TPU, a chip designed specifically to accelerate AI application performance. Work that had taken a team of human design engineers many months can now be done by AI in under six hours. Thus, Google is using AI to design chips that can be used to create even more sophisticated AI systems, further speeding up the already exponential performance gains through a virtuous cycle of innovation.
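As a simplified illustration of how a design task like chip placement can be expressed as a reward signal for such an agent, the sketch below scores a candidate layout by penalizing estimated wirelength and block overlap. The scoring terms, weights, and data structures are hypothetical stand-ins, not the objective Google actually optimized.

```python
from dataclasses import dataclass

# Hypothetical illustration: casting chip placement as a reward for an RL agent.
# A placement is a set of rectangular blocks at (x, y) positions; the "reward"
# penalizes long wires and overlapping blocks. The terms and weights are
# invented stand-ins for this sketch.

@dataclass
class Block:
    x: float
    y: float
    w: float
    h: float

def wirelength(a: Block, b: Block) -> float:
    """Manhattan distance between block centers, a cheap proxy for wire cost."""
    return abs((a.x + a.w / 2) - (b.x + b.w / 2)) + abs((a.y + a.h / 2) - (b.y + b.h / 2))

def overlap(a: Block, b: Block) -> float:
    """Overlapping area between two blocks (good placements avoid this)."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return max(dx, 0.0) * max(dy, 0.0)

def reward(blocks: list[Block], nets: list[tuple[int, int]]) -> float:
    """Higher is better: negative weighted sum of wirelength and overlap."""
    wl = sum(wirelength(blocks[i], blocks[j]) for i, j in nets)
    ov = sum(overlap(a, b) for k, a in enumerate(blocks) for b in blocks[k + 1:])
    return -(1.0 * wl + 10.0 * ov)

# Two connected blocks: placed side by side vs. overlapping
print(reward([Block(0, 0, 2, 2), Block(3, 0, 2, 2)], nets=[(0, 1)]))  # short wire, no overlap
print(reward([Block(0, 0, 2, 2), Block(1, 0, 2, 2)], nets=[(0, 1)]))  # overlapping: lower reward
```

An agent that repeatedly proposes placements and is scored this way can, in principle, learn layout strategies without being shown human-designed examples.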
While these examples are compelling, they are still narrow AI use cases. Where is the AGI? The DeepMind paper states: “Reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization and imitation.” In other words, the authors argue that AGI will arise naturally from reinforcement learning as models grow more sophisticated and computing power expands.
Not everyone buys into the DeepMind view, and some are already dismissing the paper as a PR stunt meant to keep the lab in the news more than advance the science. Even so, if DeepMind is right, then it is all the more important to instill ethical and responsible AI practices and norms throughout industry and government. With the rapid rate of AI acceleration and advancement, we clearly cannot afford to take the risk that DeepMind is wrong.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
Author: Gary Grossman, Edelman
Source: Venturebeat