
How AI is reshaping the rules of business

Over the past few weeks, there have been a number of significant developments in the global discussion on AI risk and regulation. The emerging theme, both from the U.S. Senate hearing featuring OpenAI CEO Sam Altman and from the EU’s announcement of the amended AI Act, has been a call for more regulation.

But what has surprised some is the degree of consensus among governments, researchers and AI developers on this need for regulation. In his testimony before Congress, Sam Altman, the CEO of OpenAI, proposed creating a new government body that issues licenses for developing large-scale AI models.

He gave several suggestions for how such a body could regulate the industry, including “a combination of licensing and testing requirements,” and said firms like OpenAI should be independently audited.

However, while there is growing agreement on the risks, including potential impacts on people’s jobs and privacy, there is still little consensus on what such regulations should look like or what potential audits should focus on. At the first Generative AI Summit held by the World Economic Forum, where AI leaders from businesses, governments and research institutions gathered to drive alignment on how to navigate these new ethical and regulatory considerations, two key themes emerged:


The need for responsible and accountable AI auditing

First, we need to update our requirements for businesses developing and deploying AI models. This is particularly important when we question what “responsible innovation” really means. The U.K. has been leading this discussion, with its government recently providing guidance for AI through five core principles, including safety, transparency and fairness. There has also been recent research from Oxford highlighting that “LLMs such as ChatGPT bring about an urgent need for an update in our concept of responsibility.”

A core driver behind this push for new responsibilities is the increasing difficulty of understanding and auditing the new generation of AI models. To see this evolution, consider “traditional” AI versus LLM AI (large language model AI) in the example of recommending candidates for a job.

If a traditional AI model was trained on data in which employees of a certain race or gender are over-represented in senior-level jobs, it might reproduce that bias by recommending people of the same race or gender for those jobs. Fortunately, this is something that could be caught or audited by inspecting the data used to train these AI models, as well as the output recommendations.
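
To make this concrete, here is a minimal sketch of the kind of check described above, assuming a hypothetical tabular dataset with gender, seniority and recommended columns. It is an illustration of the auditing idea, not a prescribed methodology.

```python
# A minimal sketch of auditing a "traditional" recommendation model by
# inspecting both its training data and its output recommendations.
# Column names and data are hypothetical.
import pandas as pd

def audit_training_data(train_df: pd.DataFrame, group_col: str = "gender") -> pd.Series:
    """Share of senior-level rows held by each group in the training data."""
    senior = train_df[train_df["seniority"] == "senior"]
    return senior[group_col].value_counts(normalize=True)

def audit_recommendations(output_df: pd.DataFrame, group_col: str = "gender") -> pd.Series:
    """Recommendation rate per group: recommended candidates / all candidates."""
    return output_df.groupby(group_col)["recommended"].mean()

# Toy example: skewed training data and skewed output recommendations.
train_df = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "f"],
    "seniority": ["senior", "senior", "junior", "junior", "junior", "senior"],
})
output_df = pd.DataFrame({
    "gender": ["m", "m", "f", "f"],
    "recommended": [1, 1, 0, 1],
})

print(audit_training_data(train_df))     # m: 0.67, f: 0.33
print(audit_recommendations(output_df))  # m: 1.0,  f: 0.5
```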

With new LLM-powered AI, this type of auditing is becoming increasingly difficult, and at times impossible. Not only do we not know what data a “closed” LLM was trained on, but a conversational recommendation might introduce biases or “hallucinations” that are far more subjective.

For example, if you ask ChatGPT to summarize a speech by a presidential candidate, who’s to judge whether it is a biased summary?

Thus, it is more important than ever for products that include AI recommendations to take on new responsibilities, such as making those recommendations traceable, to ensure that the models behind them can, in fact, be bias-audited rather than relying on LLMs alone.
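
As an illustration of what “traceable” could mean in practice, here is a minimal sketch of a recommendation record that logs the model version and the structured inputs behind each recommendation so it can be bias-audited later. The field names are hypothetical, not any specific product’s schema.

```python
# A minimal sketch: every recommendation is logged with the model version and
# the structured inputs that produced it, so the decision can be replayed and
# audited later. Fields are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RecommendationRecord:
    candidate_id: str
    job_id: str
    model_name: str        # e.g. a versioned ranking model, not an opaque chat session
    model_version: str
    input_features: dict   # the structured features the model actually saw
    score: float           # the model's ranking score for this candidate
    recommended: bool
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a record an auditor can later join with (separately held) protected
# attributes to compare recommendation rates across groups.
record = RecommendationRecord(
    candidate_id="cand-123",
    job_id="job-456",
    model_name="ranker",
    model_version="3.2.0",
    input_features={"years_experience": 7, "skills_matched": 12},
    score=0.87,
    recommended=True,
)
```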

It is this boundary between what counts as a recommendation and what counts as a decision that is key to new AI regulations in HR. For example, the new NYC AEDT (automated employment decision tools) law is pushing for bias audits of technologies that specifically involve employment decisions, such as those that can automatically decide who is hired.
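
For a sense of what such an audit computes, here is a minimal sketch of an impact-ratio calculation of the kind bias audits for automated employment decision tools typically involve: each group’s selection rate divided by the highest group’s selection rate. The groups and numbers are hypothetical.

```python
# A minimal sketch of selection rates and impact ratios across groups.
# Input maps each group to (number selected, number of applicants).

def selection_rates(decisions: dict[str, tuple[int, int]]) -> dict[str, float]:
    return {group: selected / total for group, (selected, total) in decisions.items()}

def impact_ratios(decisions: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Example: group_b's ratio of 0.5 would flag a potential adverse impact.
print(impact_ratios({"group_a": (40, 100), "group_b": (20, 100)}))
# {'group_a': 1.0, 'group_b': 0.5}
```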

However, the regulatory landscape is quickly evolving beyond just how AI makes decisions and into how the AI is built and used.

Transparency around conveying AI standards to consumers

This brings us to the second key theme: the need for governments to define clearer and broader standards for how AI technologies are built and how these standards are made clear to consumers and employees.

At the recent OpenAI hearing, Christina Montgomery, IBM’s chief privacy and trust officer, highlighted that we need standards to ensure consumers are made aware every time they are engaging with a chatbot. This kind of transparency around how AI is developed, along with the risk of bad actors misusing open-source models, is central to the recent EU AI Act’s considerations around banning LLM APIs and open-source models.

The question of how to control the proliferation of new models and technologies will require further debate before the tradeoffs between risks and benefits become clearer. But what is becoming increasingly clear is that as the impact of AI accelerates, so does the urgency for standards and regulations, as well as awareness of both the risks and the opportunities.

Implications of AI regulation for HR teams and business leaders

The impact of AI is perhaps being felt most rapidly by HR teams, who are being asked both to give employees opportunities to upskill and to provide their executive teams with updated predictions and workforce plans for the new skills their business strategy will require.

At the two recent WEF summits on Generative AI and the Future of Work, I spoke with leaders in AI and HR, as well as policymakers and academics, about an emerging consensus: that all businesses need to push for responsible AI adoption and awareness. The WEF just published its “Future of Jobs Report,” which highlights that over the next five years, 23% of jobs are expected to change, with 69 million created and 83 million eliminated. That means at least 14 million people’s jobs are at risk.

The report also highlights that six in 10 workers will need upskilling or reskilling to do their jobs before 2027, yet only half of employees are seen to have access to adequate training opportunities today.

So how should teams keep employees engaged through this AI-accelerated transformation? By driving internal transformation that is focused on their employees: carefully creating a compliant, connected set of people and technology experiences that give employees greater transparency into their careers and the tools to develop themselves.

The new wave of regulations is shining a new light on how to consider bias in people-related decisions, such as in talent. And as these technologies are adopted by people both in and out of work, the responsibility is greater than ever for business and HR leaders to understand both the technology and the regulatory landscape, and to lean into driving a responsible AI strategy in their teams and businesses.

Sultan Saidov is president and cofounder of Beamery.

Author: Sultan Saidov, Beamery
Source: Venturebeat
