
GPT-4: Algorithm auditing for responsible AI beyond human scale




Artificial intelligence (AI) is revolutionizing industries, streamlining processes, and, hopefully, on its way to improving the quality of life for people around the world — all very exciting news. That said, with the increasing influence of AI systems, it’s crucial to ensure that these technologies are developed and implemented responsibly.

Responsible AI is not just about adhering to regulations and ethical guidelines; it is the key to creating more accurate and effective AI models.

In this piece, we will discuss how responsible AI leads to better-performing AI systems; explore the existing and upcoming regulations related to AI compliance; and emphasize the need for software and AI solutions to tackle these challenges.

Why does responsible AI lead to more accurate and effective AI models?

Responsible AI describes a commitment to designing, developing and deploying AI models in a way that is safe, fair and ethical. By ensuring that models perform as expected — and do not produce undesirable outcomes — responsible AI can help to increase trust, protect against harm and improve model performance.


To be responsible, AI must be understandable. This has ceased to be a human-scale issue; we need algorithms to help us understand the algorithms.

GPT-4, the latest version of OpenAI’s large language model (LLM), is trained on the text and imagery of the internet, and as we all know, the internet is full of inaccuracies, ranging from small misstatements to full-on fabrications. While these falsehoods can be dangerous on their own, they also inevitably produce AI models that are less accurate and intelligent. Responsible AI can help us solve these problems and move toward developing better AI. Specifically, responsible AI can:

  1. Reduce bias: Responsible AI focuses on addressing biases that may inadvertently be built into AI models during development. By actively working to eliminate biases in data collection, training and implementation, AI systems become more accurate and provide better results for a more diverse range of users (a simple per-group check of this kind is sketched after this list).
  2. Enhance generalizability: Responsible AI encourages the development of models that perform well in diverse settings and across different populations. By ensuring that AI systems are tested and validated with a wide range of scenarios, the generalizability of these models is enhanced, leading to more effective and adaptable solutions.
  3. Ensure transparency: Responsible AI emphasizes the importance of transparency in AI systems, making it easier for users and stakeholders to understand how decisions are made and how the AI operates. This includes providing understandable explanations of algorithms, data sources and potential limitations. By fostering transparency, responsible AI promotes trust and accountability, enabling users to make informed decisions and promoting effective evaluation and improvement of AI models.
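
To make the first two points concrete, a responsible AI workflow typically includes checks that compare a model's performance across subgroups before deployment. The sketch below is a minimal, illustrative example of such a check; the group labels, data and the 0.1 gap threshold are invented for illustration and are not a standard.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each subgroup (illustrative helper)."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Invented labels, predictions and a group attribute, for illustration only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, groups)
print(per_group)

# Flag the model if performance diverges sharply between groups;
# the 0.1 gap is an arbitrary illustrative threshold, not a standard.
gap = max(per_group.values()) - min(per_group.values())
if gap > 0.1:
    print(f"Warning: accuracy gap of {gap:.2f} between groups")
```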

Regulations on AI compliance and ethics

In the EU, the General Data Protection Regulation (GDPR) was adopted in 2016 (and took effect in 2018) to enforce strict rules around data privacy.

Enterprises quickly realized that they needed software to track where and how they were using consumer data, and then ensure that they were complying with those regulations.

OneTrust is a company that emerged quickly to provide enterprises with a platform to manage their data and processes as they relate to data privacy. OneTrust has experienced incredible growth since its founding, much of it driven by GDPR.

We believe the current and near-future state of AI regulation resembles where data privacy regulation stood in 2015/2016: the importance of responsible AI is beginning to be recognized globally, with various regulations emerging to drive ethical AI development and deployment.

  1. EU AI Act
    In April 2021, the European Commission proposed new regulations — the EU AI Act — to create a legal framework for AI in the European Union. The proposal includes provisions on transparency, accountability and user rights, aiming to ensure AI systems are safe and respect fundamental rights. We believe that the EU will continue to lead the way on AI regulation. The EU AIA is anticipated to pass by the end of 2023, with the legislation then taking effect in 2024/2025.
  2. AI regulation and initiatives in the U.S.
    The EU AIA will likely set the tone for regulation in the U.S. and other countries. In the U.S., governing bodies, such as the FTC, are already putting forth their own sets of rules, especially related to AI decision-making and bias; and NIST has published a Risk Management Framework that will likely inform U.S. regulation.

So far, at the federal level, there has been little movement on regulating AI; the Biden administration has published the AI Bill of Rights, non-binding guidance on the design and use of AI systems. Congress is also reviewing the Algorithmic Accountability Act of 2022, which would require impact assessments of AI systems to check for bias and effectiveness. But these measures are not moving quickly toward passage.

Interestingly (but maybe not surprisingly), a lot of the early efforts to regulate AI in the U.S. are at the state and local level, with much of this legislation targeting HR tech and insurance. New York City has already passed Local Law 144, also known as the NYC Bias Audit Mandate, which takes effect in April 2023 and prohibits companies from using automated employment decision tools to hire candidates or promote employees in NYC unless the tools have been independently audited for bias.
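
At its core, a bias audit of this kind compares selection rates across categories: each category's selection rate is divided by the rate of the most favorably treated category to produce an impact ratio. The sketch below is a minimal illustration of that arithmetic; the category names and counts are invented, and the 0.8 flag echoes the EEOC's four-fifths rule rather than the text of Local Law 144.

```python
def impact_ratios(selections):
    """selections maps each category to (number selected, number of applicants)."""
    rates = {cat: selected / total for cat, (selected, total) in selections.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Invented counts for a hypothetical automated screening tool.
selections = {
    "group_1": (48, 120),  # 40% selection rate
    "group_2": (30, 100),  # 30% selection rate
    "group_3": (12, 60),   # 20% selection rate
}

for category, ratio in impact_ratios(selections).items():
    # The 0.8 cutoff echoes the EEOC four-fifths rule and is illustrative only.
    flag = "  <- review" if ratio < 0.8 else ""
    print(f"{category}: impact ratio {ratio:.2f}{flag}")
```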

California has proposed similar employment regulations related to automated decision systems, and Illinois already has legislation in effect regarding the use of AI in video interviews.

In the insurance sector, the Colorado Division of Insurance has proposed a regulation, known as the Algorithm and Predictive Model Governance Regulation, that aims to “protect consumers from unfair discrimination in insurance practices.”

The role of software in ensuring responsible AI

It is quite clear that regulators (starting in the EU and then expanding elsewhere) and businesses will be taking AI systems and related data very seriously. Major financial penalties will be levied — and we believe business reputations will be put at risk — for non-compliance and for mistakes that stem from a failure to understand AI models.

Purpose-built software will be required to track and manage compliance; regulation will serve as a major tailwind for technology adoption. Specifically, the crucial roles of software solutions in managing the ethical and regulatory challenges associated with responsible AI include:

  1. AI model tracking and inventory: Software tools can help organizations maintain an inventory of their AI models, including their purpose, data sources and performance metrics. This enables better oversight and management of AI systems, ensuring that they adhere to ethical guidelines and comply with relevant regulations (a minimal sketch of such an inventory record follows this list).
  2. AI risk assessment and monitoring: AI-powered risk assessment tools can evaluate the potential risks associated with AI models, such as biases, data privacy concerns and ethical issues. By continuously monitoring these risks, organizations can proactively address any potential problems and maintain responsible AI practices.
  3. Algorithm auditing: In the future, we can expect the emergence of algorithms capable of auditing other algorithms — the holy grail! Given the vast amounts of data and computing power that go into these models, this is no longer a human-scale problem. Automated auditing will allow for real-time, unbiased assessments of AI models, ensuring that they meet ethical standards and adhere to regulatory requirements.
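
As a concrete illustration of the first point above, a model inventory entry might record a model's purpose, owner, data sources, performance metrics and audit history so that overdue reviews can be flagged automatically. The fields and the one-year audit window in this sketch are assumptions chosen for illustration, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    """One entry in an AI model inventory (illustrative fields only)."""
    name: str
    purpose: str
    owner: str
    data_sources: list
    performance_metrics: dict
    risk_level: str = "unreviewed"
    last_bias_audit: Optional[date] = None

    def is_audit_overdue(self, max_age_days: int = 365) -> bool:
        """Flag models never audited or audited more than max_age_days ago."""
        if self.last_bias_audit is None:
            return True
        return (date.today() - self.last_bias_audit).days > max_age_days

# Example entry for a hypothetical screening model.
record = ModelRecord(
    name="resume-screener-v2",
    purpose="Rank job applicants for recruiter review",
    owner="talent-engineering",
    data_sources=["applicant_tracking_system", "public_job_postings"],
    performance_metrics={"auc": 0.81},
    last_bias_audit=date(2022, 11, 1),
)
print(record.is_audit_overdue())
```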

These software solutions not only streamline compliance processes but also contribute to the development and deployment of more accurate, ethical and effective AI models. By leveraging technology to address the challenges of responsible AI, organizations can foster trust in AI systems and unlock their full potential.

The importance of responsible AI

In summary, responsible AI is the foundation for developing accurate, effective and trustworthy AI systems; by addressing biases, enhancing generalizability, ensuring transparency and protecting user privacy, responsible AI leads to better-performing AI models. Complying with regulations and ethical guidelines is essential in fostering public trust and acceptance of AI technologies, and as AI continues to advance and permeate our lives, the need for software solutions that support responsible AI practices will only grow.

By embracing this responsibility, we can ensure the successful integration of AI into society and harness its power to create a better future for all!

Aaron Fleishman is a partner at Tola Capital.



