
AI chatbot frenzy: Everything everywhere (all at once) 




The Academy Award-winning film Everything Everywhere All at Once demonstrates that life is messy and unpredictable, implying — perhaps — that we should embrace chaos, find joy, learn to let go of our expectations and trust that everything will work out in the end.

This mindset echoes the way many are currently approaching AI. That said, experts are split on whether the technology will usher in a golden era of unlimited benefits or lead to our destruction. Bill Gates, for one, focuses mostly on the hopeful message in his recent Age of AI letter.

There is little doubt now that AI is hugely disruptive. Craig Mundie, the former chief research and strategy officer for Microsoft, knows a lot about technical breakthroughs. When Gates stepped down from his daily involvement with Microsoft in 2008, Mundie was tapped to fill his role as technological visionary.

Mundie said recently of the freshly launched GPT-4 and the updated ChatGPT: “This is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”


The possibilities of “superhuman” amounts of work

The current level of excitement around generative AI might simply reflect what Gartner's hype cycle calls the "peak of inflated expectations." AI has been in this position before, only to suffer through two "AI winters" when excitement outpaced actual accomplishments.

These periods were characterized by collapsed investment and general disinterest from all but a relatively small cadre of researchers. This time truly appears to be different, however, driven by the ongoing exponential growth of data, computing power and code, which is producing numerous impactful use cases.

For example, Fortune reported on work by Ethan Mollick, a Wharton professor of management. In only 30 minutes, he used generative AI tools to do market research, create a positioning document, write an email campaign, create a website, create a logo and hero image graphic, make a social media campaign for multiple platforms, develop a script and create a video.

In a detailed blog post, he said that "what it accomplished was superhuman," performing in half an hour what would normally have taken a team days. He then asked: "When we all can do superhuman amounts of work, what happens?"

A Cambrian explosion of generative AI

It is not an overstatement to say there is a Cambrian explosion of generative AI underway. This is especially true recently for chatbots powered by large language models (LLMs). The burst of activity was highlighted by the March 14 release of GPT-4, the latest LLM update from OpenAI. While GPT-4 was already in use within Bing Chat from Microsoft, the tech is now incorporated into ChatGPT and is rapidly being integrated into other products.

Google followed only a week later by formally launching Bard, its chatbot based on the LaMDA LLM. Bard had been announced several weeks earlier but is now available in preview mode, accessible via a waitlist. Initial reviews show similarities with ChatGPT, including the same capabilities (writing poems and code) and the same shortcomings (such as hallucinations).

Google is stressing that Bard is not a replacement for its search engine but, rather, a "complement to search": a bot that users can bounce ideas off of, use to generate writing drafts, or simply chat with about life.

Proliferation of generative AI

These were hardly the only significant generative AI announcements in recent weeks. Microsoft also announced that the image generation model DALL-E 2 is being incorporated into several of its tools. Google announced no fewer than five recent updates to its use of LLMs across its products.

Beyond these developments were several additional chatbot introductions. Anthropic launched Claude, a “constitutional AI” chatbot using a “principle-based” approach to aligning AI systems with human intentions. Databricks released open-source code that companies can use to create their own chatbots.

Meta released the LLaMA LLM as a research tool for the scientific community, which was quickly leaked online, enabling any interested party to download and modify the model. Researchers at Stanford University used one of the leaked Meta models as a starting point and trained it using ChatGPT APIs, resulting in a system they claim performs similarly to ChatGPT but was produced for only $600.
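In spirit, the Stanford approach is a form of distillation: a larger commercial model is queried to generate instruction-and-response pairs, and a smaller open model is then fine-tuned on them. The sketch below illustrates only that general recipe; the model names, prompts and training settings are placeholders rather than the Stanford team's actual pipeline, and it assumes the openai, transformers and datasets Python packages plus an OPENAI_API_KEY in the environment.

# Illustrative sketch of API-based distillation; not the Stanford pipeline.
from openai import OpenAI
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_instructions = [
    "Explain the difference between a list and a tuple in Python.",
    "Write a haiku about distributed computing.",
]

# Step 1: query the larger model to build instruction/response pairs.
pairs = []
for instruction in seed_instructions:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
    )
    pairs.append({"text": f"### Instruction:\n{instruction}\n\n"
                          f"### Response:\n{reply.choices[0].message.content}"})

# Step 2: supervised fine-tuning of a small open model on those pairs.
model_name = "EleutherAI/pythia-160m"  # small stand-in for a LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    tokens = tokenizer(batch["text"], truncation=True,
                       padding="max_length", max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

dataset = Dataset.from_list(pairs).map(tokenize, batched=True,
                                       remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilled-chatbot",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
)
trainer.train()

In a real run, the seed set would number in the tens of thousands of instructions, which is where a few-hundred-dollar API bill comes from.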

Transformative, but how?

The chatbot frenzy overshadowed other generative AI achievements, including the ability to reconstruct high-resolution and reasonably accurate images from brain activity. Unlike previous attempts, this latest effort, as documented in a research paper, didn’t need to train or fine-tune the AI models to create the images.

Instead, the reconstruction was achieved using diffusion models, the same class of model that underpins DALL-E 2, Midjourney, Stable Diffusion and other AI image generation tools. Journalist Jacob Ward says this discovery could one day allow humans to beam images to each other via brain-to-brain communication.

Image beaming is still somewhere in the future. What might be the next big thing is video generation from text prompts. News from Runway about version 2 of its video generator points to this near-term reality. For now, the generated clips are short, only a few seconds, but the potential is apparent.

All these recent AI advances are dizzying, even mesmerizing, leading to proclamations of an unimaginable transformation and a new age for humanity, which is entirely plausible. However, historian Yuval Harari cautions that this is an important moment to slow down. He reminds us that language is the operating system of human culture.

With the new LLMs, “A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization.” While the ceiling of benefits is sky-high, so are the downside risks. Harari’s perspective is warranted and timely.

Do these advances move us closer to artificial general intelligence?

While many believe that artificial general intelligence (AGI) will never be achieved, it is starting to look as though it may already have arrived. New research from Microsoft discusses GPT-4 and states that it is "a first step towards a series of increasingly generally intelligent systems."

As reported by Futurism, the paper adds: “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

GPT-4 is based on deep learning, and there have been questions about whether this is a suitable basis for creating AGI, the stated mission of OpenAI. Gary Marcus, a leading voice on AI issues, has argued for a hybrid AI model to achieve AGI, one that incorporates both deep learning and classical symbolic operations. It appears OpenAI is doing just this by enabling plug-ins for ChatGPT.

WolframAlpha is one of those plug-ins. As reported by Stephen Wolfram in Stratechery: “For decades, there’s been a dichotomy in thinking about AI between ‘statistical approaches’ of the kind ChatGPT uses, and ‘symbolic approaches’ that are in effect the starting point for Wolfram|Alpha.

“But now — thanks to the success of ChatGPT — as well as all the work we’ve done in making Wolfram|Alpha understand natural language — there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.”
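To make that dichotomy concrete, here is a toy sketch of the hybrid idea in Python, with the sympy library standing in for a symbolic engine like Wolfram|Alpha and a placeholder callable standing in for the language model. It is purely illustrative and says nothing about how the actual plug-in is implemented.

# Toy illustration of combining a statistical model with a symbolic engine.
# sympy stands in for Wolfram|Alpha; "llm" is a placeholder callable.
import re
import sympy

def symbolic_solve(expression: str) -> str:
    # Evaluate a math expression exactly, rather than trusting the LLM's arithmetic.
    return str(sympy.simplify(sympy.sympify(expression)))

def answer(question: str, llm) -> str:
    # Route exact-computation requests to the symbolic tool; everything else to the LLM.
    match = re.search(r"compute\s+(.+)", question, flags=re.IGNORECASE)
    if match:
        return symbolic_solve(match.group(1))
    return llm(question)

# Example: the arithmetic goes to sympy, the open-ended request to the model.
print(answer("compute (3**20 + 7) / 11", llm=lambda q: "(model-generated text)"))
print(answer("Write a limerick about plug-ins", llm=lambda q: "(model-generated text)"))

The division of labor is the point: exact computation is delegated to a tool, while open-ended language stays with the statistical model.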

Already, the plug-in is noticeably reducing hallucinations within ChatGPT, leading to more accurate and useful results. Even more significantly, the path to AGI just became much shorter.

Indeed, everything everywhere all at once.  

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



Author: Gary Grossman, Edelman
Source: Venturebeat
