How will AI be used ethically in the future? AI Responsibility Lab has a plan

As the use of AI across all industries and nearly every aspect of society grows, there is an increasingly obvious need to have controls in place for responsible AI.

Responsible AI is about making sure that AI is used ethically, respects personal privacy and generally avoids bias. A seemingly endless stream of companies, technologies and researchers is tackling issues associated with responsible AI. Now the aptly named AI Responsibility Labs (AIRL) is joining the fray, announcing $2 million in pre-seed funding alongside a preview launch of the company’s Mission Control software-as-a-service (SaaS) platform.

Leading AIRL is the company’s CEO, Ramsay Brown, who trained as a computational neuroscientist at the University of Southern California, where he spent much of his time mapping the human brain. His first startup, originally known as Dopamine Labs and later rebranded as Boundless Mind, focused on behavioral engineering and on using machine learning to predict how people will behave. Boundless Mind was acquired by Thrive Global in 2019.

At AIRL, Brown and his team are taking on the issues of AI safety, making sure that AI is used responsibly in a way that doesn’t harm society or the organizations that are using the technology.

“We founded the company and built the software platform for Mission Control to start with helping data science teams do their job better and more accurately and faster,” Brown said. “When we look around the responsible AI community, there are some people working on governance and compliance, but they are not talking to data science teams and finding out what actually hurts.”

What data science teams need to create responsible AI

Brown stated emphatically that no organization is likely to set out to build an AI that is purposefully biased or that uses data in an unethical fashion.

Rather, what typically happens in a complex development effort, with many moving pieces and many different people, is that data is unintentionally misused or machine learning models are trained on incomplete data. When Brown and his team asked data scientists what was missing and what hurt development efforts, respondents said they were looking for project management software more than a compliance framework.

“That was our big ‘a-ha’ moment,” he said. “The thing that teams actually missed was not that they didn’t understand regulations, it’s that they didn’t know what their teams were doing.”

Brown noted that two decades ago, software engineering was revolutionized by dashboard tools like Atlassian’s Jira, which helped developers build software faster. Now, his hope is that AIRL’s Mission Control will become the equivalent dashboard for data science, helping data teams build technologies with responsible AI practices.

Working with existing AI and MLops frameworks

There are multiple tools that organizations can use today to help manage AI and machine learning workflows, sometimes grouped together under the industry category of MLops.

Popular technologies include AWS SageMaker, Google Vertex AI, Domino Data Lab and BigPanda.

Brown said that one of the things his company has learned while building out its Mission Control service is that data science teams have many different tools they prefer to use. AIRL isn’t looking to compete with MLops and existing AI tools, but rather to provide an overlay on top of them for responsible AI usage. To that end, AIRL has developed an open API endpoint so that a team using Mission Control can pipe in data from any platform and have it feed into its monitoring processes.
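
To make that overlay pattern concrete, here is a minimal sketch of what piping platform data into such an endpoint might look like. AIRL hasn’t published its API schema, so the URL, payload fields and error handling below are all hypothetical.

```python
import requests

# Hypothetical endpoint; AIRL has not published its actual API schema.
MISSION_CONTROL_URL = "https://mission-control.example.com/v1/events"

# An illustrative event from an MLops platform; the field names are
# assumptions, chosen only to show the "pipe in any data" pattern.
event = {
    "source": "sagemaker",           # originating platform
    "run_id": "training-run-0042",   # identifier from that platform
    "dataset": "customers_2022_q1",  # dataset used in the run
    "metrics": {"accuracy": 0.91},   # anything worth monitoring
}

# Push the event to the monitoring overlay. A real integration would also
# handle auth tokens, retries and error responses.
response = requests.post(MISSION_CONTROL_URL, json=event, timeout=10)
response.raise_for_status()
```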

AIRL’s Mission Control provides a framework for teams to take what they’ve been doing with ad hoc approaches and create standardized processes for machine learning and AI operations.

Brown said that Mission Control enables users to take data science notebooks and turn them into repeatable processes and workflows that operate within configured parameters for responsible AI usage. In such a model, the data is connected to a monitoring system that can alert an organization if a policy is violated. For example, he noted that if a data scientist uses a dataset that policy doesn’t permit for a given machine learning operation, Mission Control can catch that automatically, raise a flag to managers and pause the workflow.
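
A minimal sketch of that kind of automated policy gate follows. The policy table, function names and alerting hook are hypothetical illustrations of the behavior Brown describes, not AIRL’s actual implementation.

```python
# Map each workflow to the datasets approved by policy for that use.
DATASET_POLICY = {
    "churn-model-training": {"customers_opted_in_2022"},
}

def notify_managers(workflow: str, dataset: str) -> None:
    """Stand-in for an alerting hook (email, Slack, a dashboard flag)."""
    print(f"ALERT: {workflow!r} attempted to use unapproved dataset {dataset!r}")

def run_workflow(workflow: str, dataset: str) -> None:
    """Pause the workflow unless the dataset is approved for this use."""
    allowed = DATASET_POLICY.get(workflow, set())
    if dataset not in allowed:
        notify_managers(workflow, dataset)
        raise RuntimeError(f"workflow paused: {dataset!r} is not approved")
    print(f"training on {dataset!r}...")  # proceed only when policy allows

run_workflow("churn-model-training", "customers_opted_in_2022")  # passes
```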

“This centralization of information creates better coordination and visibility,” Brown said. “It also lowers the probability that systems with really gnarly and undesirable outcomes end up in production.”

Looking out to 2027 and the future of responsible AI

Looking out to 2027, AIRL has a roadmap to address more advanced concerns around AI usage, including the potential arrival of artificial general intelligence (AGI). The company’s 2027 focus is on enabling an effort it calls the Synthetic Labor Incentive Protocol (SLIP). The basic idea is to have some form of smart contract for using AGI-powered labor in the economy.

“We’re looking at the advent of artificial general intelligence as a logistical, business- and society-level concern that needs to be spoken about not in ‘sci-fi terms,’ but in practical incentive-management terms,” Brown said.
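
AIRL has described SLIP only at this conceptual level, but a rough sketch may help ground the idea. Everything below (the contract fields, the verification step) is a speculative illustration of what a smart contract for synthetic labor might record, not a published specification.

```python
from dataclasses import dataclass

@dataclass
class SyntheticLaborContract:
    """Speculative sketch of terms a SLIP-style contract might record."""
    provider: str        # party operating the AGI system
    consumer: str        # party purchasing the synthetic labor
    task: str            # description of the work to be performed
    compensation: float  # agreed payment for the work
    audit_log_uri: str   # where the system's actions are logged for review

    def payout_due(self, work_verified: bool) -> bool:
        # Release compensation only once the work has been verified:
        # the incentive-management piece Brown alludes to.
        return work_verified
```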



Author: Sean Michael Kerner
Source: VentureBeat
