
How should AI be governed? Open source syllabus offers framework

Kevin Frazier has ambitious plans to involve more voices in crucial conversations guiding artificial intelligence (AI). As an assistant professor of law at St. Thomas University spearheading an effort to create new legal educational tools for the AI sector, he recognizes inclusive global cooperation will be key to navigating AI’s complex societal impacts.

In a recent interview with VentureBeat, Frazier outlined his development of an open-source legal syllabus providing teaching materials on AI, law and policy. “Absent having a foundational understanding of what it means to build an AI model, what inputs and outputs go into those models? You need to have that background if you’re going to be able to at least fake it till you make it in some of these AI governance conversations,” Frazier said.

The modular curriculum covers foundational AI concepts, risks and legal frameworks. It also includes lectures from scholars in the field to foster knowledge of technology and its implications. Frazier hopes this will cultivate informed, multidisciplinary dialogue on shaping oversight frameworks.

Drawing on lessons from parallel technologies and on calls for broader representation, Frazier envisions “living documents” like this syllabus as a way to “arrive at principles-based solutions.” By welcoming global participation, such initiatives stand the best chance of guiding responsible progress as these technologies reshape society.


Current AI discourse has come from “a pretty exclusive group”

Frazier’s motivation to involve more voices in AI governance talks stems from his observations of the current landscape.

“We really see that so much of the policy conversation and legal conversation has been a pretty exclusive group of folks,” Frazier noted. He sees a need for more inclusive, representative discussions given the technology’s wide-ranging implications.

Frazier said governance efforts to date have been “punctuated by ideas of self-regulation by CEOs who may or may not have some question marks next to their name now.” To him, building understanding from a diversity of knowledgeable stakeholders is key.

Developing informed perspectives worldwide is central to Frazier’s vision. As he said, “If we want to see AI be governed in a way that’s really reflective of the fact that it is going to be something that unleashes previously unknown risk, previously unknown benefits, then we really need to make sure that this is an inclusive and expansive research agenda.”

His open-source syllabus directly aims to cultivate such expertise. “This template syllabus is a perpetual work in progress that is intended to build a community of scholars who want to make sure that whether you attend St. Thomas University or the Harvard Kennedy School, you have some opportunity to receive an education that sets you up to be a voice and a player in this AI governance conversation,” Frazier remarked.

Learning from Parallel Experiences with Emerging Technologies

When envisioning effective AI governance frameworks, Frazier finds lessons in overseeing other technologies reshaping society. He pointed to geoengineering (also called climate engineering) as an example.

Like AI, geoengineering introduces complex risks: large-scale environmental modifications carry wide-reaching and long-term implications. Activities aiming to curb climate change by altering Earth’s energy balance, such as solar radiation modification, could affect populations and ecosystems worldwide.

However, Frazier noted that geoengineering discussions, much like early AI policy talks, have involved a limited set of voices. He observed that the “legal community and regulatory community more broadly, really lacks an understanding of this underlying technology.”

Without input from scientific communities that understand geoengineering’s technical realities, governance frameworks have struggled to emerge. Similarly, AI, with its potential to transform nearly every industry and community, requires guidance informed by technical expertise.

By cultivating knowledge of AI systems’ inner workings, Frazier’s open syllabus aims to foster the kind of multidisciplinary, inclusive conversations needed to develop governance aligned with such a pervasive technology’s diverse impacts and opportunities.

A Call for Broader Representation in AI

In addition to lessons from parallel technologies, Frazier’s initiatives respond to calls for more inclusive participation in global issues. He was particularly motivated by a Member of Parliament from Tanzania who challenged scholars on this front.

During an event hosted by the Future Society, the MP emphasized the importance of “actively soliciting participation from communities in the global south,” according to Frazier. Those from developing regions stand to experience AI’s impacts profoundly yet have limited seats at the governance table.

The MP speaks for populations that are most deeply affected by AI yet least involved in discussions to date, and that perspective underscored the imperative to broaden the viewpoints shaping the conversation. For Frazier, the interaction reinforced the need “to do a better job” of involving more diverse communities.

Partnerships and Business Engagement Critical to Progress

Frazier’s open syllabus exemplifies his vision of cultivating progress through collaborative partnerships. By building connections among AI policy educators and making resources widely accessible, it aims to foster the knowledge sharing that advances inclusive governance.

The modular structure supports ongoing evolution as other institutions contribute localized expertise to the living document. The Legal Priorities Project, the Center for AI Safety and other scholars have provided feedback and support for the syllabus.

Business decision-makers also have a role to play, according to Frazier. He said leaders’ engagement is crucial because these discussions will shape rules with long-term impacts on operations and innovation.



Author: Bryson Masse
Source: VentureBeat
Reviewed By: Editorial Team
