
Just because we can’t trust generative AI (yet) doesn’t mean we should fear it




Although the release of ChatGPT brought with it plenty of chatter about generative AI’s revolutionary impact on technology, there’s been an equal focus on the technology’s shortcomings. Indeed, there have been heated debates about generative AI’s potentially hazardous impact on society, its potential for misuse, and the significant ethical concerns surrounding its development.

But from an IT and software development standpoint — where many predict generative AI will have the most telling impact going forward — one question, in particular, keeps coming up: How much can enterprises actually trust this technology to handle their critical and creative tasks?


The answer, at least right now, is not very much. The technology is too riddled with inaccuracies, too unreliable and too short on real-world context for enterprises to bank on it completely. There are also some very justified concerns about its security vulnerabilities, namely the way bad actors are using the technology to produce and spread misleading deepfake content.


All of these concerns should certainly lead businesses to question whether they can really ensure the responsible use of generative AI. But those concerns shouldn’t also instill fear. Yes, businesses must always balance caution against the technology’s vast possibilities. But enterprise decision-makers, and tech pros in particular, should already be used to acting responsibly when handed new innovations that promise to upend their entire industry.

Let’s break down why.

Learning from past innovations

Generative AI isn’t the first technology to be met with fear and skepticism. Even cloud computing, which has been nothing short of a saving grace since the start of the remote work revolution, caused alarms to sound among business leaders due to concerns about data security, privacy and reliability. Many organizations actually hesitated to adopt cloud solutions for fear of unauthorized access, data breaches and potential service outages.

Over time, however, as cloud providers improved security measures, implemented robust data protection protocols and demonstrated high reliability, organizations gradually embraced the cloud.

Open-source software (OSS) is another example. Initially, there were concerns it would lack quality, security and support compared to proprietary alternatives. Skepticism persisted due to the fear of unregulated code modifications and a perceived lack of accountability. But the open-source movement gained momentum, leading to the development of highly reliable and widely adopted projects such as Linux, Apache, and MySQL. Today, open-source software is pervasive across IT domains, offering cost-effective solutions, rapid innovation and community-driven support.

In other words, after an initial bout of caution, enterprises adopted and embraced these technologies.

Addressing generative AI’s unique challenges

This isn’t to minimize people’s worries about generative AI. There is, after all, a long list of unique — and justified — concerns surrounding the technology. For example, there are issues with fairness and bias that must be addressed before businesses can truly trust it. Generative AI models learn from existing data, which means they may inadvertently perpetuate biases and unfair practices present in the training dataset. These biases, in turn, can result in discriminatory or skewed outputs.
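The mechanism is easy to see in miniature. The sketch below is a deliberately crude toy, a bigram counter rather than anything resembling a production model, and its corpus and function names are invented for illustration. But it shows the core problem: a model that learns purely from statistical patterns in its training data will faithfully reproduce whatever skew that data contains.

```python
from collections import Counter, defaultdict

# Tiny corpus with a built-in skew: "said" is followed by "he"
# twice and "she" only once.
corpus = [
    "the engineer said he would ship the fix",
    "the engineer said he liked the design",
    "the nurse said she would check the chart",
]

# Count bigrams -- a crude stand-in for how a generative model
# absorbs statistical patterns from its training data.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the highest-count continuation learned from the corpus."""
    return bigrams[word].most_common(1)[0][0]

# The "model" reproduces the skew in its training data verbatim.
print(most_likely_next("said"))  # -> he
```

Nothing in the code is biased; the skew lives entirely in the data, which is exactly why curating and auditing training datasets matters so much.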

In fact, when our recent survey of 400 CIOs and CTOs about their adoption of, and views on, generative AI asked these leaders about their ethical concerns, “ensuring fairness and avoiding bias” was the most important ethical consideration they cited.

Inaccuracies or subtle “hallucinations” are another threat. These aren’t colossal errors, but they’re errors nonetheless. For instance, when I recently prompted ChatGPT to tell me more about my business, it falsely named three specific companies as past clients.

These are certainly concerns that must be addressed. But dig deeper and you find some that are perhaps overblown, too, like speculation that these AI-powered innovations will replace human talent. A quick Google search turns up headlines about the top 10 jobs at risk or why workers’ AI anxiety is warranted, and generative AI’s impact on software development is a particularly hot topic.

But if you ask IT professionals, this really isn’t a concern. Job loss actually ranked last among the ethical considerations of CIOs and CTOs in the aforementioned survey. Further, an overwhelming 88% said they believe generative AI cannot replace software developers, and half said they think it will actually increase the strategic importance of IT leaders.

Cracking the code to generative AI’s future

Enterprises need to approach generative AI with caution, just as they’ve had to do with other emerging technologies. But they can do so while also embracing its transformative potential to drive progress in the IT industry and beyond. The reality is that the technology is already reshaping IT and software development, and businesses will never be able to stop it.

And they shouldn’t want to stop it, given its promise to strengthen the capabilities of their best tech talent and improve the quality of software. These are capabilities they shouldn’t fear. At the same time, they’re capabilities they cannot fully appreciate until they address generative AI’s shortcomings. Only then will they maximize the power of generative AI to support IT and software development, improve efficiency and build more advanced software solutions.

Natalie Kaminski is cofounder and CEO of IT development firm JetRockets.



Author: Natalie Kaminski, JetRockets
Source: VentureBeat

