Top minds in machine learning predict where AI is going in 2020

AI is no longer poised to change the world someday; it’s changing the world now. As we begin a new year and decade, VentureBeat turned to some of the keenest minds in AI to revisit progress made in 2019 and look ahead to how machine learning will mature in 2020. We spoke with PyTorch creator Soumith Chintala, University of California professor Celeste Kidd, Google AI chief Jeff Dean, Nvidia director of machine learning research Anima Anandkumar, and IBM Research director Dario Gil.

Everyone always has predictions for the coming year, but these are people shaping the future today — individuals with authority in the AI community who treasure scientific pursuit and whose records have earned them credibility. While some predict advances in subfields like semi-supervised learning and the neural symbolic approach, virtually all the ML luminaries VentureBeat spoke with agree that great strides were made in Transformer-based natural language models in 2019 and expect continued controversy over tech like facial recognition. They also want to see the AI field grow to value more than accuracy.

If you’re interested in taking a look back, last year we spoke with people like Facebook AI Research chief scientist Yann LeCun, Landing.ai founder Andrew Ng, and Accenture global responsible AI lead Rumman Chowdhury.

Soumith Chintala

Director, principal engineer, and creator of PyTorch

Depending on how you gauge it, PyTorch is the most popular machine learning framework in the world today. A derivative of the Torch open source framework introduced in 2002, PyTorch became available in 2016 and has steadily grown its ecosystem of extensions and libraries.

This fall, Facebook released PyTorch 1.3 with quantization and TPU support, alongside Captum, a deep learning interpretability tool, and PyTorch Mobile. There are also things like PyRobot and PyTorch Hub for sharing code and encouraging ML practitioners to embrace reproducibility.

In a conversation with VentureBeat this fall at PyTorch Dev Con, Chintala said he saw few breakthrough advances in machine learning in 2019.

“I actually don’t think we’ve had a groundbreaking thing … since Transformer, basically. We had ConvNets in 2012 that reached prime time, and Transformer in 2017 or something. That’s my personal opinion,” he said.

He went on to call DeepMind’s AlphaGo groundbreaking in its contributions to reinforcement learning, but he said the results are hard to implement for practical tasks in the real world.

Chintala also believes the evolution of machine learning frameworks like PyTorch and Google’s TensorFlow — the overwhelming favorites among ML practitioners today — has changed how researchers explore ideas and do their jobs.

“That’s been a breakthrough in the sense that it’s making them move one or two orders of magnitude faster than they used to,” he said.

This year, Google and Facebook’s open source frameworks introduced quantization, which trades reduced numerical precision for faster, smaller models. In the years ahead, Chintala expects “an explosion” in the importance and adoption of tools like PyTorch’s JIT compiler and compilers for neural network hardware accelerators like Glow.

“With PyTorch and TensorFlow, you’ve seen the frameworks sort of converge. The reason quantization comes up, and a bunch of other lower-level efficiencies come up, is because the next war is compilers for the frameworks — XLA, TVM, PyTorch has Glow, a lot of innovation is waiting to happen,” he said. “For the next few years, you’re going to see … how to quantize smarter, how to fuse better, how to use GPUs more efficiently, [and] how to automatically compile for new hardware.”
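To make the quantization trend concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch, the feature that shipped with PyTorch 1.3; the toy model and its dimensions are hypothetical.

```python
import torch
import torch.nn as nn

# A toy network standing in for a real model (hypothetical sizes).
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Dynamic quantization converts the weights of the listed module types
# to 8-bit integers and quantizes activations on the fly at inference,
# shrinking the model and speeding up CPU execution.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```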

Like most of the other industry leaders VentureBeat spoke with for this article, Chintala predicts the AI community will place more value on AI model performance beyond accuracy in 2020 and begin turning attention to other important factors, like the amount of power it takes to create a model, how output can be explained to humans, and how AI can better reflect the kind of society people want to build.

“If you think about the last five, six years, we’ve just focused on accuracy and raw numbers like ‘Is Nvidia’s model more accurate? Is Facebook’s model more accurate?’” he said. “I actually think 2020 will be the year when we start thinking [in a more complex way], where it doesn’t matter if your model is 3% more accurate if it … doesn’t have a good interpretability mechanism [or meet other criteria].”

Celeste Kidd

Celeste Kidd is director of Kidd Lab at the University of California, Berkeley, where she and her team explore how kids learn. Their insights can help the creators of neural networks who are attempting to train models in ways not too dissimilar to raising a child.

“Human babies don’t get tagged data sets, yet they manage just fine, and it’s important for us to understand how that happens,” she said.

One thing that surprised Kidd in 2019 was the number of neural net creators who casually disparage their own work, or that of other researchers, as incapable of doing something a baby can do.

When you average baby behavior at the population level, she said, you see evidence that babies understand some things, but they are definitely not perfect learners, and that kind of talk paints an overly rosy picture of what babies can do.

“Human babies are great, but they make a lot of errors, and a lot of the comparisons that I saw people casually making, they were making to sort of idealize baby behavior at the population level,” she said. “I think that it’s likely that there’s going to be an increased appreciation for the connection between what you currently know and what you want to understand next.”

In AI, the phrase “black box” has been around for years now. It’s used to critique neural networks’ lack of explainability, but Kidd believes 2020 may spell the end of the perception that neural networks are uninterpretable.

“The black box argument is bogus … brains are also black boxes, and we’ve made a lot of progress in understanding how brains work,” she said.

To dispel this perception of neural networks, Kidd points to the work of people like Aude Oliva, executive director of the MIT-IBM Watson AI Lab.

“We were talking about this, and I said something about the system being a black box, and she chastised me reasonably [saying] that of course they’re not a black box. Of course you can dissect them and take them apart and see how they work and run experiments on them, the same [as] we do for understanding cognition,” Kidd said.

Last month, Kidd delivered the opening keynote address at the Neural Information Processing Systems (NeurIPS) conference, the largest annual AI research conference in the world. Her talk focused on how human brains hold onto stubborn beliefs, as well as on attention systems and Bayesian statistics.

The Goldilocks zone for the delivery of information, she said, is between a person’s previous interests and understandings and what’s surprising to them. People tend to engage less with overly surprising content.

She then said there’s no such thing as a neutral tech platform, and she turned her attention to how the makers of content recommendation systems can manipulate people’s beliefs. Systems built in pursuit of maximum engagement can have a significant impact on how people form beliefs and opinions.

Kidd closed her talk by addressing the misperception among men in machine learning that being alone with a female colleague will lead to sexual harassment allegations and end a man’s career. That misperception, she said, can instead damage the careers of women in the field.

For speaking out about sexual misconduct at the University of Rochester, Kidd was named Time Person of the Year in 2017, alongside other women who helped bring about what we now call the #MeToo movement for the equitable treatment of women. At the time, Kidd thought speaking up would end her career.

In 2020, she wants to see increased awareness of the real-life implications of tech tools and technical decisions and a rejection of the idea that the makers of tools aren’t responsible for what people do with them.

“I’ve heard a lot of people try to defend themselves by saying, ‘Well I’m not the moderator of truth,’” she said. “I think that there has to be increased awareness of that being a dishonest stance.”

“We really need to, as a society and especially as the people that are working on these tools, directly appreciate the responsibility that that comes with.”

Jeff Dean

Dean has led Google AI for nearly two years, but he’s been at Google for two decades. He is the architect of many of the company’s early search and distributed computing systems and was an early member of Google Brain.

Dean spoke with VentureBeat last month at NeurIPS, where he delivered talks on machine learning for ASIC semiconductor design and ways the AI community can address climate change, which he said is the most important issue of our time. In his talk about climate change, Dean discussed the idea that AI can strive to become a zero-carbon industry and that AI can be used to help change human behavior.

He expects to see progress in 2020 in the fields of multimodal learning, which is AI that relies on multiple media for training, and multitask learning, which involves networks designed to complete multiple tasks at once.
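As a concrete illustration of multitask learning (a sketch, not Google’s method), the standard hard-parameter-sharing setup in PyTorch gives several tasks one shared trunk and a separate head per task; all names and dimensions below are illustrative.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one shared trunk, one head per task."""
    def __init__(self, in_dim=128, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classify_head = nn.Linear(hidden, 10)  # e.g. 10-way classification
        self.regress_head = nn.Linear(hidden, 1)    # e.g. scalar regression

    def forward(self, x):
        h = self.trunk(x)  # features shared by both tasks
        return self.classify_head(h), self.regress_head(h)

net = MultiTaskNet()
logits, value = net(torch.randn(4, 128))
# Training typically minimizes a weighted sum of the per-task losses.
```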

Unequivocally, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of natural language models based on the Transformer, the architecture Chintala referred to earlier as one of the biggest breakthroughs in AI in recent years. Google open-sourced BERT, a Transformer-based model, in 2018. And a number of the top-performing models released this year, according to the GLUE leaderboard — like Google’s XLNet, Microsoft’s MT-DNN, and Facebook’s RoBERTa — were based on Transformers. XLNet 2 is due out later this month, a company spokesperson told VentureBeat.

Dean pointed to the progress that has been made, saying “… that whole research thread I think has been quite fruitful in terms of actually yielding machine learning models that [let us now] do more sophisticated NLP tasks than we used to be able to do.” But he added that there’s still room for growth. “We’d still like to be able to do much more contextual kinds of models. Like right now BERT and other models work well on hundreds of words, but not 10,000 words as context. So that’s kind of [an] interesting direction.”
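The context limit Dean describes is easy to see in standard BERT configurations, which cap input at 512 subword tokens. A minimal sketch, assuming a recent version of the Hugging Face transformers library:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

long_text = "machine learning " * 5000  # far longer than BERT can attend to

# BERT's learned positional embeddings cap inputs at 512 subword tokens,
# so longer documents must be truncated or processed in chunks.
encoded = tokenizer(long_text, truncation=True, max_length=512)
print(len(encoded["input_ids"]))  # 512
```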

Dean said he wants to see less of an emphasis on slight state-of-the-art advances in favor of creating more robust models.

Google AI will also work to advance new initiatives, like Everyday Robot, an internal project introduced in November 2019 to make robots that can accomplish common tasks in the home and workplace.

Anima Anandkumar

Anandkumar joined GPU maker Nvidia following her time as a principal scientist at AWS. At Nvidia, AI research continues across a number of areas, from federated learning for health care to autonomous driving, supercomputers, and graphics.

One area of emphasis for Nvidia and Anandkumar in 2019 was simulation frameworks for reinforcement learning, which are getting more popular and mature.

In 2019, we saw the rise of Nvidia’s Drive autonomous driving platform and Isaac robotics simulator, as well as models that produce synthetic data from simulations and generative adversarial networks, or GANs.

Last year also ushered in the rise of AI like StyleGAN, a network that can make people question whether they’re looking at a computer-generated human face or a real person, and GauGAN, which can generate landscapes with a paintbrush. StyleGAN2 made its debut last month.

GANs are technologies that can blur the lines of reality, and Anandkumar believes they can help with major challenges the AI community is trying to tackle, like robotic hand grasping and autonomous driving. (Read more about the progress GANs made in 2019 in this report by VentureBeat AI staff writer Kyle Wiggers.)

Anandkumar also expects to see progress in the year ahead from iterative algorithms, self-supervision, and self-training, methods through which models improve themselves using unlabeled data.

“All kinds of different iterative algorithms I think are the future, because if you just do one feed-forward network, that’s where robustness is an issue. Whereas if you try to do many iterations and you adapt iterations based on the kinds of data or the kind of accuracy requirements you want, there’s much more chance of achieving that,” she said.
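One common instance of the self-training Anandkumar describes is pseudo-labeling: a model trained on labeled data assigns labels to unlabeled examples, keeps only its confident predictions, and retrains on the enlarged set. A minimal PyTorch sketch, with every name hypothetical:

```python
import torch
import torch.nn.functional as F

def pseudo_label(model, unlabeled_x, threshold=0.95):
    """Return the unlabeled examples the model labels with high confidence."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        conf, labels = probs.max(dim=1)
    keep = conf >= threshold  # discard low-confidence predictions
    return unlabeled_x[keep], labels[keep]

# One self-training round (model and data assumed to exist):
#   1. train model on (labeled_x, labeled_y)
#   2. new_x, new_y = pseudo_label(model, unlabeled_x)
#   3. retrain on the union of labeled and pseudo-labeled data, then repeat
```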

Anandkumar sees numerous challenges for the AI community in 2020, like the need to create models made especially for specific industries in tandem with domain experts. Policymakers, individuals, and the AI community will also need to grapple with issues of representation and the challenge of ensuring data sets used to train models account for different groups of people.

“I think [the issues with facial recognition are] so easy to grasp, but there are so many [other areas] where … people don’t realize there are privacy issues with the use of data,” she said.

Facial recognition gets the most attention, Anandkumar said, because it’s easy to understand how that can violate an individual’s privacy, but there are a number of other ethical issues for the AI community to confront in 2020.

“We will have increasing scrutiny in terms of how the data is collected and how it’s used. I think it’s happening in Europe, but in the U.S. we’ll certainly see more of that, and for [the] right reasons, from groups like the National Transportation Safety Board [NTSB] and the FTA [Federal Transit Administration],” she said.

One of the great surprises of 2019, in Anandkumar’s view, was the rate at which text generation models progressed.

“2019 was the year of language models, right? Now, for the first time, we got to the point of more coherent text generation and generation at the length of paragraphs, which wasn’t possible before [and] which is great,” Anandkumar said.

In August 2019, Nvidia introduced the Megatron natural language model. With 8 billion parameters, Megatron was the largest known Transformer-based AI model at the time. Anandkumar said she was surprised by the way people began characterizing models as having personalities or characters, and she looks forward to seeing more industry-specific text models.

“We are still not at the stage of dialogue generation that’s interactive, that can keep track and have natural conversations. So I think there will be more serious attempts made in 2020 in that direction,” she said.

Developing frameworks to control text generation will be more challenging than, say, developing frameworks for images, where models can be trained to identify people or objects. Text generation models also come with challenges of their own, like defining what counts as a fact for a neural model.

Finally, Anandkumar said she was heartened to see Kidd’s speech at NeurIPS get a standing ovation and by signs of a growing sense of maturity and inclusion within the machine learning community.

“I feel like right now is the watershed moment,” she said. “In the beginning is where it’s hard to even make small changes, and then the dam breaks. And I hope that’s what it is, because to me it feels like that, and I hope we can keep up the momentum and make even bigger structural changes and make it for all groups, everybody here, to thrive.”


Dario Gil

Gil heads a group of researchers actively advising the White House and enterprises around the world. He believes major leaps forward in 2019 include progress around generative models and the increasing quality with which plausible language can be generated.

He predicts continued progress toward more efficient training with reduced-precision architectures. The development of more efficient AI models was an emphasis at NeurIPS, where IBM Research introduced techniques for deep learning with 8-bit precision.

“It’s still so broadly inefficient the way we train deep neural networks with existing hardware with GPU architectures,” he said. “So a really fundamental rethinking on that is very important. We’ve got to improve the computational efficiency of AI so we can do more with it.”

Gil cited research suggesting that compute demand for ML training doubles every three and a half months, far outpacing the growth rate predicted by Moore’s law.
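IBM’s 8-bit training techniques are still research-grade, but the same efficiency push is visible in the reduced-precision tooling of mainstream frameworks. As an illustration, here is a minimal sketch of FP16 mixed-precision training with PyTorch’s torch.cuda.amp module (which arrived in releases after this article; the model and data are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

device = "cuda"  # mixed precision here requires a CUDA-capable GPU
model = nn.Linear(128, 10).to(device)             # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # rescales gradients to avoid FP16 underflow

x = torch.randn(32, 128, device=device)           # placeholder batch
y = torch.randint(0, 10, (32,), device=device)

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # run eligible ops in FP16
        loss = F.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```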

Gil is also excited about how AI can help accelerate scientific discovery, but IBM Research will be primarily focused on neural symbolic approaches to machine learning.

In 2020, Gil hopes AI practitioners and researchers will develop a focus on metrics beyond accuracy to consider the value of models deployed in production. Shifting the field toward building trusted systems instead of prioritizing accuracy above all else will be a central pillar of the continued adoption of AI.

“There are some members of the community that may go on to say, ‘Don’t worry about it, just deliver accuracy. It’s okay, people will get used to the fact that the thing is a bit of a black box,’ or they’ll make the argument that humans don’t generate explanations sometimes on some of the decisions that we make. I think it’s really, really important that we concentrate the intellectual firepower of the community to do much better on that. AI systems cannot be a black box on mission-critical applications,” he said.

To ensure AI is adopted by more people with data science and software engineering skills, Gil believes the field must shed the perception that AI is something only a limited number of machine learning wizards can do.

“If we leave it as some mythical realm, this field of AI, that’s only accessible to the select PhDs that work on this, it doesn’t really contribute to its adoption,” he said.

In the year ahead, Gil is particularly interested in neural symbolic AI. IBM will look to neural symbolic approaches to power things like probabilistic programming, where AI learns how to operate a program, and models that can share the reasoning behind their decisions.

“By [taking] this blended approach of a new contemporary approach to bring learning and reasoning together through these neural symbolic approaches, where the symbolic dimension is embedded in learning a program, we’ve demonstrated that you can learn with a fraction of the data that is required,” he said. “Because you learn a program, you end up getting something interpretable, and because you have something interpretable, you have something much more trusted.”
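To show what probabilistic programming looks like in practice, here is a minimal sketch using the open source Pyro library, an example of the genre rather than IBM’s own system: the model for a biased coin is an ordinary Python program with random choices, and inference recovers a posterior over the bias from observed flips.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def coin_model(flips):
    # Prior belief over the coin's bias, then one Bernoulli draw per flip.
    p = pyro.sample("p", dist.Beta(1.0, 1.0))
    with pyro.plate("flips", len(flips)):
        pyro.sample("obs", dist.Bernoulli(p), obs=flips)

flips = torch.tensor([1., 1., 0., 1., 1., 0., 1., 1.])  # toy data
mcmc = MCMC(NUTS(coin_model), num_samples=500, warmup_steps=200)
mcmc.run(flips)
print(mcmc.get_samples()["p"].mean())  # posterior mean of the coin's bias
```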

Issues of fairness, data integrity, and the selection of data sets will continue to garner a lot of attention, as will “anything that has to do with biometrics,” he said. Facial recognition gets a lot of attention, but it’s just the beginning. Speech data will be viewed with growing sensitivity, as will other forms of biometrics. He went on to cite Rafael Yuste, a professor at Columbia who works on neural technology and is exploring ways to extract neural patterns from the visual cortex.

“I give this as an example that everything that has to do with identity and the biometrics of people and the advances that AI makes in analyzing that will continue to be front and center,” Gil said.

In addition to neural symbolic and common sense reasoning, a flagship initiative of the MIT-IBM Watson AI Lab, Gil said IBM Research will also explore quantum computing for AI in 2020, as well as analog hardware for AI beyond reduced-precision architectures.

Final thoughts

Machine learning is continuing to shape business and society, and the researchers and experts VentureBeat spoke with see a number of trends on the horizon:

  • Advances in natural language models were a major story of 2019 as Transformers fueled great leaps forward. Look for more variations of BERT and Transformer-based models in 2020.
  • The AI industry should look for ways to value model outputs beyond accuracy.
  • Methods like semi-supervised learning, a neural symbolic approach to machine learning, and subfields like multitask and multimodal learning may progress in the year ahead.
  • Ethical questions around biometric data, like speech recordings, will likely remain controversial.
  • Compilers and approaches like quantization may grow in popularity for machine learning frameworks like PyTorch and TensorFlow as ways to optimize model performance.

Know about transformative technology VentureBeat should be covering? Email AI editor Seth Colaner, senior AI staff writer Khari Johnson, or staff writer Kyle Wiggers.


Author: Khari Johnson
Source: VentureBeat
