
New Open License Generator helps ensure AI is used responsibly



Responsible AI is one of the most significant topics of discussion in technology today. 

Model developers want to ensure generative AI and large language models (LLMs) are not used negligently or maliciously. While this is giving rise to AI regulations around the world, AI builders and the organizations using models need methods for recourse now. 

This demand has given rise to the adoption of licenses with specific behavioral use clauses — including those offered through the not-for-profit Responsible AI Licenses (RAIL). These documents legally restrict how AI models, code and training data can be used when shared. 

To further promote both customization and standardization as gen AI adoption accelerates, RAIL has announced the Rail License Generator. With the new tool, AI developers can select artifacts to be licensed and apply usage restrictions from a curated catalog. 


“A foundation model is in most cases designed to be extremely versatile — it’s something that understands all types of different languages, it can be used in downstream applications with little fine-tuning,” Daniel McDuff, co-chair of RAIL, told VentureBeat. 

He continued: “In the past, there wasn’t as much need to restrict applications. But now these models are so generic or versatile that they can be repurposed very easily. We think these licenses are a necessity.”

Since they were first introduced in 2018, RAIL licenses have spread to 41,700 model repositories. Notable models with behavioral use clauses include Hugging Face’s BLOOM, Meta’s Llama 2, Stable Diffusion and Grid. 

Today with the release of the Rail License Generator, the goal is to grow that number further by lowering barriers to access. The tool was created by the RAIL Working Group on Tooling and Procedural Governance, led by Jesse Josua Benjamin, Scott Cambo and Tim Korjakow (who is also lead developer and project owner).

The generator provides a step-by-step process to create customized licenses. Users first choose a license type, which populates an initial template. 

Types of licenses include the following (restated in the sketch after the list): 

  • Open RAIL: Developers have freedom to use, distribute and modify licensed artifacts, so long as they (and downstream users) follow behavioral restrictions. 
  • Research RAIL: Developers can use, distribute and modify licensed artifacts only for research (not commercial) purposes, and must follow behavioral use restrictions.
  • RAIL: These do not include behavioral-use clauses, but may impose additional terms around who can use the licensed artifact and how.
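
For readers who think in code, the differences between the families reduce to a small permissions matrix. Here is a minimal, hypothetical restatement of the list above in Python; the structure and key names are ours for illustration, not RAIL’s official terminology.

```python
# Minimal restatement of the three RAIL license families described above.
# The structure and key names are illustrative, not official RAIL terms.
RAIL_FAMILIES = {
    "Open RAIL": {
        "free_to_use_distribute_modify": True,
        "research_only": False,
        "behavioral_restrictions": True,   # bind licensees and downstream users
    },
    "Research RAIL": {
        "free_to_use_distribute_modify": True,
        "research_only": True,             # no commercial use
        "behavioral_restrictions": True,
    },
    "RAIL": {
        "free_to_use_distribute_modify": True,  # may carry extra terms on who/how
        "research_only": False,
        "behavioral_restrictions": False,       # no behavioral-use clauses
    },
}
```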

In the second step, users choose which artifacts to license and share. This typically means deciding carefully which code, algorithms or models associated with the AI system to release. Users then select additional restrictions from several categories appropriate to their particular system, as in the sketch below.
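
Continuing the sketch, this step might be captured as a small configuration object. The field names and restriction strings below are hypothetical examples, not the generator’s actual catalog entries.

```python
from dataclasses import dataclass, field

@dataclass
class RailLicenseConfig:
    """Hypothetical model of a license being assembled in the generator."""
    license_type: str                                  # e.g. "Open RAIL"
    artifacts: set[str] = field(default_factory=set)   # what is being shared
    restrictions: dict[str, list[str]] = field(default_factory=dict)

config = RailLicenseConfig(license_type="Open RAIL")
config.artifacts = {"model weights", "source code"}
# Additional restrictions, grouped by domain category.
config.restrictions = {
    "General": ["No discrimination", "No disinformation"],
    "Health": ["No fully automated medical diagnosis"],
}
```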

Finally, the license is exported. The full license text can be downloaded in LaTeX, raw text and Markdown formats, and users also receive PNG downloads of their chosen domain icons and QR codes linking to the final license.
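
The export step is conventional enough to sketch end to end. The snippet below writes the license text in two of the mentioned formats and renders a QR code with the widely used third-party qrcode package; the URL, filenames and license text are placeholders, since RAIL has not published the generator’s implementation.

```python
# Illustrative export step: write the license text and render a QR code.
# Requires the third-party "qrcode" package (pip install "qrcode[pil]").
import qrcode

license_url = "https://example.org/licenses/my-open-rail-license"  # placeholder
license_text = "Open RAIL License\n\n1. Definitions\n..."          # placeholder

# Save the full license text in raw-text and Markdown form.
for filename in ("LICENSE.txt", "LICENSE.md"):
    with open(filename, "w", encoding="utf-8") as f:
        f.write(license_text)

# Render a PNG QR code that links to the hosted license.
qrcode.make(license_url).save("license_qr.png")
```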

The goal is to support people who don’t have access to teams of lawyers, McDuff noted (although large and small parties alike are using RAIL licenses). 

He described a “layer of insecurity” when it comes to writing license documents. Language must be tailored for specific domains, context of use and type of AI artifact. Oftentimes, those building AI “don’t feel comfortable writing such a thing because they aren’t lawyers, they are computer scientists or developers or researchers.”

While they could craft their own legal terms, they can be “unsure whether it would be enforceable, whether it has holes in it.”

With the Rail License Generator, “it takes a matter of minutes to create a license once you know the types of clauses you want to include,” said McDuff. “This is a way of codifying ethical principles in something that does have legal teeth.”

AI complicates traditional open scientific processes

Openness — and open-source — is a core principle of scientific research and helps develop new technologies, RAIL researchers write. When assets and findings are made freely available, they can be verified and technology can be tested and audited. 

This process has undoubtedly benefited AI, too. But the fast-evolving technology also introduces challenges, particularly in the case of foundation models that can be easily applied across domains. 

While their creators may have built something “really performant with good intentions,” the model’s versatility means it can be used in unintended or nefarious ways, said McDuff. Decentralization heightens this risk, as downstream users present challenges when it comes to accountability and recourse. 

“Open-source is beneficial in many cases, but it’s also more nuanced when you have tools that a single actor can take and have a large downstream impact with, such as disseminating disinformation,” said McDuff. 

Danish Contractor, co-chair of RAIL, pointed out that it can be difficult for developers and users to know what’s restricted where.

“A lot of people think if ‘AI can do X, then AI can do Y,’” said Contractor. 

However, if a developer is releasing a medical model, it could be misapplied — purposefully or not — in robotics or military domains. 

It’s important for those most knowledgeable to communicate in effective, enforceable ways, and to have access to tooling that tracks license violations and enforces their terms, Contractor emphasized. 

McDuff agreed that behavioral restrictions provide “some consistency but also some diversity in terms of clauses that are included. Different people have thought deeply about how models could be used, how nuanced restrictions could be appropriate.”

At the same time, standardization is critical, he said, as there are clauses that “everyone across the board” would adopt; restrictions on discrimination and disinformation are universal, for instance. 

“There’s some need for standardization and support for tooling for licenses that are appropriate for different types of projects,” said McDuff. 

The goal is to give people a tool to develop licenses that are “standardized, familiar” and provide flexibility — and that also provide some level of “legalese,” which is important and impossible to avoid entirely. 

“The risk is simply too high for a company like Google or Microsoft or other entities large or small to use open-source code in a way that violates its license,” said McDuff. 


Author: Taryn Plumb
Source: VentureBeat
Reviewed By: Editorial Team

