Did I write this, or was it ChatGPT?
It’s hard to tell, isn’t it?
For the sake of my editors, I will follow that quickly with: I wrote this article (I swear). But the point is that it’s worth exploring generative artificial intelligence’s limitations and areas of utility for developers and users. Both are revealing. The same is true for Web3 and blockchain.
While we’re already seeing the practical applications of Web3 and generative AI play out in tech platforms, online interactions, scripts, games and social media apps, we’re also seeing a replay of the responsible AI and blockchain 1.0 hype cycles of the mid-2010s.
“We need a set of principles or ethics to guide innovation.” “We need more regulation.” “We need less regulation.” “There are bad actors poisoning the well for the rest of us.” “We need heroes to save us from AI and/or blockchain.” “Technology is too sentient.” “Technology is too limited.” “There is no enterprise-level application.” “There are countless enterprise-level applications.”
If you exclusively read the headlines, you will come out the other side with the conclusion that the combo of generative AI and blockchain will either save the world or destroy it.
All over again
We’ve seen this play (and every act and intermission) before with the hype cycles of both responsible AI and blockchain. The only difference this time is that the articles we’re reading about ChatGPT’s implications may, in fact, have been written by ChatGPT. And the term blockchain has a bit more heft behind it thanks to investment from Web2 giants like Google Cloud, Mastercard and Starbucks.
That said, it’s notable that OpenAI’s leadership recently called for an international regulatory body akin to the International Atomic Energy Agency (IAEA) to regulate and, when necessary, rein in AI innovation. The proactive move reflects an awareness of both AI’s massive potential and its potentially society-crumbling pitfalls. It also conveys that the technology itself is still in test mode.
The other significant subtext: Public sector regulation at the federal and sub-federal levels commonly limits innovation.
As with Web3, and whether or not regulatory action takes place, responsibility needs to be at the core of generative AI innovation and adoption. As the technology evolves rapidly, it’s important for vendors and platforms to assess every potential use case to ensure responsible experimentation and adoption. And, as OpenAI’s Sam Altman and Google’s Sundar Pichai notably point out, working with the public sector to evolve regulation is a significant part of that equation.
It’s also important to surface limitations, transparently report on them, and provide guardrails if or when issues become apparent.
While AI and blockchain have both been around for decades, AI’s impact in particular is now visible through ChatGPT, Bard and the broader field of generative AI players. Combined with Web3’s decentralized infrastructure, we’re about to witness an explosion of practical applications that automate interactions and advance Web3 in more visible ways.
From a user-centric perspective (whether we know it or not), generative AI and blockchain are both already transforming how people interact in the real world and online. Solana recently made it official with a ChatGPT integration, while the crypto exchange Bitget backed away from its own.
Promising or puzzling, the signals all point to the same conclusion: It remains to be seen where these technologies best intersect in the name of user experience and user-centric innovation. From where I sit as the head of a layer-1 blockchain built for scale and interoperability, the question becomes: How should AI and blockchain join forces in pursuit of Web3’s own ChatGPT moment of mainstream adoption?
Tools like ChatGPT and Bard will accelerate the next major waves of innovation on Web2 and Web3. The convergence of generative AI and Web3 will be like the pairing of peanut butter and jelly on fresh bread — but, you know, with code, infrastructure, and asset portability. And, as hype is replaced with practical applications and constant upgrades, persistent questions about whether these technologies will take hold in the mainstream will be toast.
So, what does all this mean for enterprise leaders?
Enterprise leaders should view generative AI as a tool worth exploring, testing, and after doing both, integrating. Specifically, they should focus efforts on exploring how the “generative” element can improve work outcomes internally with teams and externally with customers or partners. And they should continuously map out its enterprise-wide potential and limitations.
Equally important, in my book, is documenting where not to use generative AI. Don’t rely on the technology for anything that requires applying facts and hard data to outputs for community members, partners, teams or investors, and don’t rely on it for protocol upgrades, software engineering, coding sprints or international business operations.
On a practical level, enterprise leaders should consider incorporating generative AI into administrative workflows to keep day-to-day operations moving faster and more efficiently. Explore its seemingly universal utility for kicking off text- or code-heavy projects across engineering, marketing, business and executive functions. And since this tech changes by the day, leaders should evaluate every possible new use case to decide whether to responsibly experiment with it en route to adoption, a discipline that applies equally to work in Web3.
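The project-kickoff use case above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a vendor recommendation: the helper function, the field names and the commented-out API call are my own assumptions. The one non-negotiable piece is the last line of the prompt, which bakes human fact-checking into every draft.

```python
# Hypothetical sketch: wrapping prompt assembly in a small helper so that
# AI-drafted project briefs are consistent and always flagged for human review.

def build_kickoff_prompt(function: str, project: str, deliverables: list[str]) -> str:
    """Assemble a first-draft prompt for a project kickoff brief."""
    bullet_list = "\n".join(f"- {d}" for d in deliverables)
    return (
        f"Draft a one-page kickoff brief for the {function} team's "
        f"'{project}' project. Cover these deliverables:\n{bullet_list}\n"
        "Flag any claims that need a human fact-check."
    )

prompt = build_kickoff_prompt(
    "marketing", "Q3 launch", ["landing page copy", "email sequence"]
)

# The prompt would then go to a chat-completion endpoint of your choice,
# e.g. with the OpenAI Python client (assumed setup, untested here):
# response = client.chat.completions.create(
#     model="gpt-4", messages=[{"role": "user", "content": prompt}]
# )
```

The point of the wrapper is governance, not cleverness: teams get a uniform starting draft, and the fact-check instruction travels with every request.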
Mo Shaikh is cofounder and CEO of Aptos Labs.
Author: Mo Shaikh, Aptos Labs
Source: Venturebeat