
Canada wants to be the first country to implement AI regulations: Minister of Innovation

Canada aims to be the first country in the world with official regulations covering the emerging artificial intelligence sector, said François-Philippe Champagne, Canada’s Minister of Innovation, Science and Industry in a speech on Wednesday.

“The world is looking at us to lead in how we’re going to define the guardrails that are going to be put in place here in Canada and inspire the rest of the world,” he said.

In his remarks at the ALL IN conference on artificial intelligence regulations in Montreal, Quebec, Champagne noted that “AI is in the minds of everyone, but also in the minds of leaders around the world, and they expect us to act.” 

An emerging national AI strategy

Canada doubled down on its national AI strategy this year. 


“Canada will now have a voluntary AI code of conduct which is going to be focused on advanced generative AI,” said Champagne.

The code of conduct, which several major Canadian AI companies have signed on to, including the white-hot enterprise AI startup Cohere, along with Coveo, Ada, and larger enterprises such as BlackBerry and OpenText, aims to “demonstrate to Canadians that the systems that they’re using are going to be safe and certainly further public interest.” It is intended to build trust while national legislation is developed.

The code of conduct follows lawmakers’ introduction of Bill C-27 last year, also known as the Digital Charter Implementation Act, an effort to modernize privacy laws and establish regulations around AI use as the technology advances and proliferates rapidly.

The bill aims to implement Canada’s new Digital Charter, which focuses on protecting privacy and personal information online.

It updates Canada’s privacy laws for the first time in over 20 years to account for developments like facial recognition, emotion detection algorithms, and other new uses of data and artificial intelligence.

Bill C-27 would also establish a new federal Artificial Intelligence and Data Act (AIDA), which builds accountability measures for how companies manage and use Canadians’ personal data, creates rights around their data, and implements guidelines for the ethical development and application of AI technologies.

Proposed AI laws have proven controversial

But some activists and even some tech industry leaders have criticized the Canadian government’s efforts so far, arguing that the proposed bill and the voluntary code of conduct either do too little to protect people’s rights or go too far in imposing onerous new red tape on innovation.

In a joint letter addressed to the Minister of Innovation, over 30 civil society organizations and experts have raised serious concerns that AIDA fails to adequately protect citizens’ rights and freedoms. 

The letter argues that AIDA, as currently proposed, puts economic interests above considerations of human rights impacts, and criticizes large definitional gaps and uncertainty for leaving major aspects of the law vague and without substance.

Most worrying to some activists is the lack of any meaningful public consultation in the development of AIDA. The letter notes that international peers have done much more substantial cross-sectoral work to thoughtfully develop AI governance rules.

To address these shortcomings, the signatories are calling for AIDA to be removed outright from Bill C-27, under which it is currently proposed. This would allow time for AIDA to be properly scrutinized, reopened for public input, and improved through revisions before being brought forward again. Leaving AIDA as is, they argue, risks Canadians’ trust in the regulatory approach to such an important emerging technology.

The Minister acknowledged the criticism, saying that through meetings with experts, “we realized that while we are developing a law here in Canada, it will take time, and I think that if you ask people [in] industry, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products.” The voluntary code of conduct is a response to those concerns.

Shopify CEO Tobi Lütke took to X, the social platform formerly known as Twitter, to complain that there is no “need for more referees in Canada.”

In a meeting with the House of Commons Standing Committee on Industry, Science and Technology on Tuesday, the minister announced that further amendments to the legislation will be coming to address the issues raised by outside groups.

Canada has a long record of AI involvement

Canada has been proactively working to develop a framework for responsible AI. The Minister highlighted some of the key steps Canada has already taken, including launching the first national AI strategy in 2017 with almost $500 million in funding, which helped position Canada as a leader in AI from the start. Canada also partnered with France in 2018 to create what became the Global Partnership on AI (GPAI), which brings together experts to develop best practices on AI.

Internationally, the Minister said Canada is “actively engaged in what we call the Hiroshima AI process… and we’re working to make sure that we have a common approach with like-minded countries to managing the arising opportunities coming from generative AI while also tackling the issues that our citizens want us to tackle.” Alignment with international partners is a priority, he said.

In his remarks, the Minister emphasized that “people expect us to come out of this summit with answers to their concerns, but also to demonstrate [to] the world the opportunities.”



Author: Bryson Masse
Source: VentureBeat
Reviewed By: Editorial Team
