
2 clear and consistent paths toward effective, accelerated AI regulation

AI has been transformative, especially since the public release of ChatGPT. But for all the potential AI holds, its development at the current pace, if left unchecked, raises more than a few concerns. Leading AI research lab Anthropic (along with many others) is worried about the destructive power of AI, even as it competes with ChatGPT. Other concerns, including the elimination of millions of jobs, the collection of personal data and the spread of misinformation, have drawn the attention of various parties around the globe, particularly government bodies.

The U.S. Congress has increased its efforts over the last few years, introducing a series of bills that touch on transparency requirements for AI, a risk-based framework for the technology and more.

Acting on this in October, the Biden-Harris administration rolled out an Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, which offers guidance in areas including cybersecurity, privacy, bias, civil rights, algorithmic discrimination, education, workers’ rights and research, among others. The administration, as part of the G7, also recently introduced an AI code of conduct.

The European Union has also made notable strides with its proposed AI legislation, the EU AI Act. The act focuses on high-risk AI tools that may infringe on individuals’ rights, as well as on AI systems that form part of high-risk products, such as those used in aviation. It lists several controls that must be wrapped around high-risk AI, including robustness, privacy, safety and transparency. Where an AI system poses an unacceptable risk, it would be banned from the market.
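
To make that tiering concrete, here is a simplified sketch in Python (emphatically not legal advice) of how the act’s logic maps a use case to an obligation. The category sets are abbreviated illustrations, not the legislation’s actual annexes.

```python
# Simplified illustration of the EU AI Act's proposed risk-tier logic.
# The category sets below are abbreviated examples, not the real annexes.
UNACCEPTABLE_RISK = {"social scoring of citizens by public authorities"}
HIGH_RISK = {"safety component in aviation", "recruitment screening", "credit scoring"}

def treatment(use_case: str) -> str:
    """Map a use case to the obligation tier it would fall under."""
    if use_case in UNACCEPTABLE_RISK:
        return "prohibited: cannot be placed on the EU market"
    if use_case in HIGH_RISK:
        return "allowed with controls: robustness, privacy, safety, transparency"
    return "lower risk: lighter transparency obligations"

print(treatment("safety component in aviation"))
```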

Although there is much debate around the role government should play in regulating AI and other technologies, smart AI regulation is good for business, too: striking a balance between innovation and governance protects businesses from unnecessary risk and can give them a competitive advantage.

The role of business in AI governance

Businesses have a duty to minimize the repercussions of what they sell and use. Generative AI requires large amounts of data, raising questions about information privacy. Without proper governance, consumer loyalty and sales will falter as customers worry a business’s use of AI could compromise the sensitive information they provide.

What’s more, businesses must consider the potential liabilities of gen AI. If generated material closely resembles an existing work, it could expose a business to copyright infringement claims. An organization may even find itself in a position where the data owner seeks compensation for output that has already been sold.

Finally, it is important to remember that AI outputs can be biased, replicating the stereotypes present in society and coding them into systems that make decisions and predictions, allocate resources and define what we see and watch. Appropriate governance means establishing rigorous processes to minimize the risks of bias. This includes involving those who may be most affected in reviewing parameters and data, deploying a diverse workforce and curating data so that outputs meet the standard of fairness the organization has set.
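
As one concrete illustration of such a process, the sketch below (hypothetical field names and an illustrative threshold, not a prescribed metric) computes the gap in positive-outcome rates across groups and flags when it exceeds a policy-defined tolerance, so a human review can step in.

```python
# Minimal sketch of a bias check: flag when positive-outcome rates diverge
# across groups beyond a tolerance. Field names and threshold are illustrative.
from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    """Return the largest gap in positive-outcome rates between groups, plus the rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[outcome_key]))
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
gap, rates = demographic_parity_gap(decisions)
print(f"approval rates: {rates}, gap: {gap:.2f}")
if gap > 0.2:  # the tolerance is a policy choice, not a technical constant
    print("gap exceeds tolerance: route the model for human review")
```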

Moving forward, the crucial task for governance is to adequately protect people’s rights and best interests while also accelerating the adoption of a transformative technology.

A framework for regulatory practices

Proper due diligence can limit risk. However, it’s just as important to establish a solid framework as it is to follow regulations. Enterprises must consider the following factors.

Focus on the known risks and come to an agreement

While experts might disagree on the largest potential threat of unchecked AI, there is some consensus around jobs, privacy, data protection, social inequality, bias, intellectual property and more. Look at those consequences and evaluate the unique risks your type of business carries. If your company can agree on which risks to watch for, you can create guidelines that prepare it to tackle them when they arise and to take preventative measures.

For example, my company Wipro recently released a four-pillar framework for ensuring a responsible, AI-empowered future, built around individual, social, technical and environmental concerns. This is just one way companies can set strong guidelines for their ongoing interactions with AI systems.
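
As a rough illustration of what such guidelines can look like in practice, the sketch below models a simple internal risk register keyed to the four pillars. The schema, field names and example entries are my own illustrative assumptions, not part of Wipro’s published framework.

```python
# A minimal sketch of an internal AI risk register; the pillar names follow
# the framework described above, everything else is illustrative.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk: str          # e.g. "training data contains personal information"
    pillar: str        # one of: individual, social, technical, environmental
    likelihood: str    # low / medium / high, on a policy-defined scale
    mitigation: str    # the agreed preventative measure
    owner: str         # accountable team or role

@dataclass
class RiskRegister:
    entries: list = field(default_factory=list)

    def by_pillar(self, pillar: str) -> list:
        return [e for e in self.entries if e.pillar == pillar]

register = RiskRegister(entries=[
    RiskEntry("generated output reproduces copyrighted text", "individual",
              "medium", "similarity screening before publication", "legal"),
    RiskEntry("model amplifies demographic bias in hiring recommendations", "social",
              "high", "fairness review with affected stakeholders", "HR and data science"),
])

for entry in register.by_pillar("social"):
    print(entry.risk, "->", entry.mitigation)
```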

Get smarter with governance

Businesses that rely on AI need governance. Governance helps ensure accountability and transparency throughout the AI lifecycle, including documentation of how a model has been trained. It can minimize the risk of unreliable models, bias entering the model, drift in the relationships between variables and loss of control over processes. In other words, governance makes monitoring, managing and directing AI activities much easier.
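
One practical piece of that documentation is a training-provenance record kept for every model version. The sketch below shows a minimal, hypothetical record; the schema and field names are illustrative rather than a formal standard, but versioning something like it alongside the model is what lets monitoring and audit trace how a deployed model was trained and on what data.

```python
import json

# A minimal, hypothetical training-provenance record; the schema and field
# names are illustrative, not a formal standard or a specific vendor's format.
training_record = {
    "model_name": "customer-churn-classifier",   # hypothetical model
    "version": "1.3.0",
    "trained_on": "2023-11-01",
    "datasets": [
        {
            "name": "crm_export_2023q3",
            "contains_personal_data": True,
            "lawful_basis": "legitimate interest",
            "retention": "24 months",
        }
    ],
    "intended_use": "prioritize retention outreach; not for credit or employment decisions",
    "known_limitations": ["underrepresents customers under 25"],
    "bias_checks": [
        {"metric": "demographic parity gap", "value": 0.08, "threshold": 0.20}
    ],
    "approved_by": "AI governance board",
}

# Persisting each version of this record is what allows later audit of a model.
print(json.dumps(training_record, indent=2))
```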

Every AI artifact is a sociotechnical system: an AI system is a bundle of data, parameters and people. It isn’t enough to simply focus on the technological requirements of regulations; companies must also consider the social aspects. That’s why it has become increasingly important for everyone to be involved: businesses, academia, government and society in general. Otherwise, we will see a proliferation of AI developed by very homogeneous groups, which could lead to unimaginable issues.

Ivana Bartoletti is the global chief privacy officer for Wipro Limited.

Author: Ivana Bartoletti, Wipro
Source: Venturebeat
Reviewed By: Editorial Team
