
3 ways businesses can ethically and effectively develop generative AI models

President Biden is meeting with AI experts to examine the dangers of AI. Sam Altman and Elon Musk are publicly voicing their concerns. Consulting giant Accenture became the latest to bet on AI, announcing plans to invest $3 billion in the technology and double its AI-focused staff to 80,000. Other consulting firms are making similar moves, and tech giants like Microsoft, Alphabet and Nvidia have joined the fray.

Major companies aren’t waiting for the bias problem to disappear before they adopt AI, and regulation will take time to arrive. That makes it all the more urgent to address bias, one of the biggest challenges facing all of the major generative AI models.

Because every AI model is constructed by humans and trained on data collected by humans, it’s impossible to eliminate bias entirely. Developers should strive, however, to minimize the amount of “real-world” bias they replicate in their models.

Real-world bias in AI

To understand real-world bias, imagine an AI model trained to determine who is eligible to receive a mortgage. Training that model based on the decisions of individual human loan officers — some of whom might implicitly and irrationally avoid granting loans to people of certain races, religions or genders — poses a massive risk of replicating their real-world biases in the output.

The same goes for models that are meant to mimic the thought processes of doctors, lawyers, HR managers and countless other professionals.

AI offers a unique opportunity to standardize these services in a way that avoids bias. Conversely, failing to limit the bias in our models poses the risk of standardizing severely defective services to the benefit of some and at the expense of others.

Here are three key steps that founders and developers can take to get it right:

1. Pick the right training method for your generative AI model

ChatGPT, for example, is a large language model (LLM), a type of machine learning model that absorbs enormous quantities of text and infers relationships between the words within it. On the user side, that means the LLM answers a question by filling in the blank with the most statistically probable word given the surrounding context.
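
To make that concrete, here is a minimal sketch of next-word prediction using the openly available GPT-2 model as a stand-in. This is not how ChatGPT itself is served; it only illustrates the idea of scoring candidate next words by probability:

```python
# Minimal sketch: score candidate next words by probability, given the context.
# Uses the open GPT-2 model as a stand-in; illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient was prescribed medication for high blood"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, conditioned on the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

The model is not retrieving a vetted fact; it is completing a pattern learned from its training data, which is exactly where bias enters.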

But there are many ways to train machine learning models on data. Some health tech models, for example, rely on big data in that they train their AI on the records of individual patients or the decisions of individual doctors. For founders building industry-specific models, such as medical or HR AI, such big-data approaches can introduce more bias than necessary.

Let’s picture an AI chatbot trained to correspond with patients and produce clinical summaries of their medical presentations for doctors. Built with the approach described above, the chatbot would craft its output by consulting the data (in this case, the records) of millions of other patients.

Such a model might produce accurate output at impressive rates, but it also imports the biases of millions of individual patient records. In that sense, big-data AI models become a cocktail of biases that’s hard to track, let alone fix.

An alternative, especially for industry-specific AI, is to train your model on the gold standard of knowledge in your industry so that as little bias as possible is transferred. In medicine, that’s peer-reviewed medical literature. In law, it could be the legal texts of your country or state, and for autonomous vehicles, it might be the actual traffic rules rather than data from individual human drivers.

Yes, even those texts were produced by humans and contain bias. But considering that every doctor strives to master medical literature and every lawyer spends countless hours studying legal documents, such texts can serve as a reasonable starting point for building less-biased AI.
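
As a rough sketch of what that looks like in practice, the hypothetical snippet below filters a training corpus down to vetted source types before any training or fine-tuning takes place. The document structure and source labels are illustrative, not a real medical data pipeline:

```python
# Illustrative only: keep gold-standard sources, drop individual-level records.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source_type: str  # e.g., "peer_reviewed", "clinical_guideline", "patient_record"

GOLD_STANDARD = {"peer_reviewed", "clinical_guideline"}

def build_training_corpus(documents: list[Document]) -> list[str]:
    """Return only text drawn from vetted, gold-standard sources."""
    return [d.text for d in documents if d.source_type in GOLD_STANDARD]

corpus = build_training_corpus([
    Document("Hypertension management guideline, 2023 revision...", "clinical_guideline"),
    Document("Patient #4821 chart notes...", "patient_record"),  # excluded
])
print(len(corpus))  # 1
```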

2. Balance literature with changing real-world data

There’s tons of human bias in my field of medicine, but it’s also a fact that different ethnic groups, ages, socio-economic groups, locations and sexes face different levels of risk for certain diseases. African Americans suffer from hypertension at higher rates than Caucasians do, and Ashkenazi Jews are notably more vulnerable to certain illnesses than other groups.

Those are differences worth noting, as they factor into providing the best possible care for patients. Still, it’s important to understand the root of these differences in the literature before injecting them into your model. Are doctors prescribing a certain medication to women at higher rates because of bias, and is that medication putting women at higher risk for a certain disease?

Once you understand the root of the bias, you’re much better equipped to fix it. Let’s go back to the mortgage example. Fannie Mae and Freddie Mac, which back most mortgages in the U.S., found people of color were more likely to earn income from gig-economy jobs, Business Insider reported last year. That disproportionately prevented them from securing mortgages because such incomes are perceived as unstable — even though many gig-economy workers still have strong rent-payment histories.

To correct for that bias, Fannie Mae decided to factor rent-payment history into credit-evaluation decisions. Founders must build adaptable models that can balance official, evidence-based industry literature with changing real-world facts on the ground.
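
The toy model below sketches that idea. The features, numbers and labels are hypothetical, and this is not Fannie Mae’s actual system; it only shows how a rent-payment-history variable can be added to a credit-evaluation model alongside conventional income measures:

```python
# Illustrative sketch only: a toy credit-evaluation model, not a real system.
# It shows how adding a rent-payment-history feature can change the score for
# an applicant whose gig income looks "unstable" on paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [income_stability, debt_to_income, on_time_rent_rate]  (hypothetical)
X = np.array([
    [0.9, 0.30, 0.95],
    [0.4, 0.25, 0.98],   # gig worker: low "stability", excellent rent history
    [0.8, 0.60, 0.40],
    [0.3, 0.55, 0.30],
    [0.7, 0.35, 0.90],
    [0.2, 0.40, 0.20],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = loan repaid

without_rent = LogisticRegression().fit(X[:, :2], y)  # ignores rent history
with_rent = LogisticRegression().fit(X, y)            # includes rent history

applicant = np.array([[0.35, 0.28, 0.97]])  # gig income, strong rent record
print("without rent history:", without_rent.predict_proba(applicant[:, :2])[0, 1])
print("with rent history:   ", with_rent.predict_proba(applicant)[0, 1])
```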

3. Build transparency into your generative AI model

To detect and correct for bias, you’ll need a window into how your model arrives at its conclusions. Many AI models don’t trace back to their originating sources or explain their outputs.

Such models often produce confident responses with stunning accuracy; just look at ChatGPT’s miraculous success. But when they get an answer wrong, it’s almost impossible to determine what went wrong and how to prevent inaccurate or biased output in the future.

Considering that we’re building a technology that will transform everything from work to commerce to medical care, it’s crucial for humans to be able to spot and fix the flaws in its reasoning — it’s simply not enough to know that it got the answer wrong. Only then can we responsibly act upon the output of such a technology.
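
One way to make that kind of window concrete is to require every generated claim to carry a pointer to the source it was derived from. The sketch below is purely illustrative (the data structures, identifiers and excerpt are hypothetical), but it shows the sort of traceability that lets a human audit an output:

```python
# Illustrative sketch of output traceability: every generated claim carries a
# reference to the source passage it was derived from, so a reviewer can audit
# and correct the reasoning. Structures and sources here are hypothetical.
from dataclasses import dataclass

@dataclass
class SourcedClaim:
    claim: str
    source_id: str  # e.g., a citation to peer-reviewed literature
    excerpt: str    # the passage the claim is grounded in

def explain(answer: list[SourcedClaim]) -> None:
    for item in answer:
        print(f"- {item.claim}\n  source: {item.source_id} ({item.excerpt[:60]}...)")

explain([
    SourcedClaim(
        claim="Hypertension prevalence differs across ethnic groups.",
        source_id="PMID:12345678",  # hypothetical identifier
        excerpt="In a cohort of 10,000 adults, prevalence of hypertension varied...",
    ),
])
```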

One of AI’s most promising value propositions for humanity is to cleanse a great deal of human bias from healthcare, hiring, borrowing and lending, justice and other industries. That can only happen if we foster a culture among AI founders that works toward finding effective solutions for minimizing the human bias we carry into our models.

Dr. Michal Tzuchman-Katz, MD, is cofounder, CEO and chief medical officer of Kahun Medical.

Author: Michal Tzuchman-Katz, MD, Kahun Medical
Source: VentureBeat
