Centaur rising: How a decades-old paradigm is changing the way that top institutions look at AI

Are you ready to be a centaur?

What’s the most important part of an AI system?

Is it the terabytes of data you use to train the foundation model? The billions of weights and biases sitting at the pinnacle of your gradient tower? The meticulously engineered network architectures built on decades of brutally hard work?

What bottleneck are we stuck in?

Is it that our GPUs simply aren’t powerful enough? Do we need a few clever architecture tweaks and a couple of points of accuracy to lead us to full automation? Or maybe shoveling hundreds of millions of dollars into the money pit of data labeling will click us over into the future?

Or maybe, just maybe, we’ve been thinking about everything the wrong way. 

Maybe we have already moved into a new paradigm for AI as a direct result of the meteoric rise of deep learning techniques. Maybe the most important part of your AI system is the person operating it.

Power of the people

With all of the focus on full automation and level 5 autonomy, it seems almost foolish to focus on the person operating the system. After all, they’re only temporary. However, as the adoption of AI within the enterprise continues to accelerate, we’re seeing a very different picture come into view.

Overwhelmingly, the success of AI initiatives comes down to transparency, control, and trust. Eighty-four percent of enterprises still don’t trust AI, and a profound gap in specialized AI talent — compounded by the pandemic — is one of the top barriers to AI adoption. The modern appetite for automation can’t wait for a workforce of hundreds of thousands of AI experts that isn’t coming.

All of this points toward a critical need to rethink the way that we build AI systems. How do we empower the citizen data scientist and bring the next tranche of AI users into the fold? We have to stop thinking of these as autonomous systems with incidental humans. The development, operation and maintenance of these systems are all fundamentally centered around people.

Enter the centaur.

After Garry Kasparov’s famous loss to Deep Blue in 1997, the world watched with bated breath, wondering what the future looked like for humans in chess. One person who didn’t wait was Kasparov himself. In the truest expression of “if you can’t beat them, join them,” he teamed up with the chess program Fritz 5 to become the world’s first centaur. In 1998 he competed in the world’s first centaur chess competition against Veselin Topalov, who was paired with ChessBase 7.0.

Even today, with two more decades of AI progress under our belts, centaurs are competitive with the top AI in the world. Given the obvious complexities of benchmarking centaurs against pure chess AI, the exact state of the art is somewhat contentious, but Garry Kasparov claimed in 2017 that there was “no doubt” that “a human paired with a set of programs is better than playing against just the single strongest computer program in chess.”

The paradox here is that human control and direction add value even when the AI is performing at levels that are obviously superhuman. The assumption that sufficiently advanced AI will eliminate the need for humans seems false. Instead, we’re now tasked with creating the appropriate interface for mutualism between us and AI.

Even massive organizations, steadfast in their dedication to Artificial General Intelligence, have started to embrace more holistic approaches that recognize humans as a necessary part of the process. Obvious pieces of evidence include the progressive focus on few-shot learning over zero-shot learning, the closely related rise of prompt engineering, and Microsoft’s promotion of Machine Teaching.
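To make the few-shot distinction concrete, here is a minimal sketch in Python. The task, the example reviews, and the function names are all invented for illustration, and the actual model call is left to whatever completion API you happen to use; the point is only the difference between asking a model cold and asking it with a handful of human-written examples baked into the prompt.

```python
# Illustrative sketch: zero-shot vs. few-shot prompting.
# The prompts are real strings; the model call itself is a placeholder --
# swap in whichever completion API you actually use.

def zero_shot_prompt(review: str) -> str:
    """Ask the model to classify sentiment with no examples at all."""
    return (
        "Classify the sentiment of this review as positive or negative.\n\n"
        f"Review: {review}\nSentiment:"
    )

def few_shot_prompt(review: str) -> str:
    """Show the model a handful of human-labeled examples before asking."""
    examples = [
        ("The battery died within a week.", "negative"),
        ("Setup took two minutes and it just works.", "positive"),
        ("Support never answered my emails.", "negative"),
    ]
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return (
        "Classify the sentiment of each review as positive or negative.\n\n"
        f"{shots}\n\nReview: {review}\nSentiment:"
    )

if __name__ == "__main__":
    review = "The screen is gorgeous but the keyboard feels cheap."
    print(zero_shot_prompt(review))
    print("---")
    print(few_shot_prompt(review))
    # Send either prompt to the completion endpoint of your choice.
```

The few-shot version is, in effect, a person steering the model with a few well-chosen demonstrations rather than waiting for the model to get there on its own.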

Famously, OpenAI has even started to include human beings directly in its training architecture. In a recent paper, it dramatically outperformed state-of-the-art summarization models by integrating a human feedback loop into the structure of the experiment. That is a whole lot of human involvement for a field that is supposedly about automating humans away.
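For a sense of what that looks like, here is a deliberately toy sketch of such a feedback loop. None of this is OpenAI’s code; the candidate summaries, the ask_human stand-in, and the preference bookkeeping are all invented to show the shape of the loop, in which human comparisons become the training signal for a reward model that the summarizer is then tuned against.

```python
# Toy sketch of a human-feedback loop for summarization.
# Every function here is a stand-in invented for illustration;
# this is not OpenAI's pipeline, only the shape of the idea.

def generate_candidates(article: str, n: int = 2) -> list[str]:
    """Stand-in for a model producing n candidate summaries."""
    return [f"Candidate summary {i + 1} of: {article}" for i in range(n)]

def ask_human(candidate_a: str, candidate_b: str) -> str:
    """Stand-in for a human labeler picking the better summary.
    Here we pretend the labeler always prefers the shorter one."""
    return candidate_a if len(candidate_a) <= len(candidate_b) else candidate_b

def collect_preferences(articles: list[str]) -> list[tuple[str, str, str]]:
    """Gather (article, preferred, rejected) triples from human judgments."""
    preferences = []
    for article in articles:
        a, b = generate_candidates(article)
        winner = ask_human(a, b)
        loser = b if winner == a else a
        preferences.append((article, winner, loser))
    return preferences

if __name__ == "__main__":
    data = collect_preferences(["article one", "article two", "article three"])
    # In the real pipeline, triples like these train a reward model,
    # and the summarizer is fine-tuned to maximize that learned reward.
    print(f"Collected {len(data)} human preference judgments")
```

The human never writes a summary; they simply choose between machine outputs, and that judgment is what the system ultimately optimizes for.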

But we shouldn’t be surprised.

History repeats and repeats

The first industrial revolution started with steam power and iron, but it was built with the power loom and machine tools. The breakthrough was critical, but the interface with humans was the thing that changed the world.

The second industrial revolution started with steel and sparks, but it was built with the rail and the telegraph. Even though the technology was crude by modern standards, it wormed its way into the critical thoroughfares of everyday life.

The third industrial revolution started with digital logic and computability, but it was built with silicon, HTML and JavaScript. Just like in every other industrial revolution, those early advances have a timeless quality to them. Pong, MS Paint, Notepad — even with the radical improvements to technology since their release, their interfaces are still relevant and influential.

The fourth industrial revolution — the AI revolution — is ongoing. Data infrastructure, machine learning and cloud computing are key enablers, but the core technology will only be obvious in hindsight. What is clear is that the core interfaces that will echo through history have not yet been developed.

For all of the incredible work we have done in improving the technology, the vast majority of our AI interfaces have remained unchanged for decades. We are sorely lacking the interfaces we need to enable self-driving cars, automated assistants and other theoretically game-changing technology.

This is our challenge. The next generation of AI problems must center on user experience and human cognition as much as on the development and improvement of massive neural networks. We have to learn from the lessons of the past and recognize that the human/machine interfaces we build are not some intermediate state on our way to a utopian, automated future. They are the future.

So I’ll ask again.

Are you ready to be a centaur?

I am.

Slater Victoroff is founder and CTO of Indico Data.

Author: Slater Victoroff
Source: Venturebeat
