The Canadian government plans to consult with the public about the creation of a “voluntary code of practice” for generative AI companies.
According to The National Post, a note detailing the consultations was accidentally posted on the government of Canada’s “Consulting with Canadians” website. The posting, spotted by University of Ottawa professor Michael Geist and shared on social media, revealed that engagement with stakeholders started on August 4 and would end on September 14.
The voluntary code of practice for gen AI systems will be developed through Innovation, Science and Economic Development Canada (ISED), and aims to ensure that participating firms adopt safety measures, testing protocols and disclosure practices.
“ISED officials have begun conducting a brief consultation on a generative AI voluntary code of practice intended for Canadian AI companies with dozens of AI experts, including from academia, industry and civil society, but we don’t have an open link to share for further public consultation,” ISED spokesperson Audrey Champoux said in an email to VentureBeat.
More information would be released soon, she said.
Initial step before binding regulations

Internal documents, first reported by The Logic, outline how the voluntary code of practice would have companies build trust in their systems and transition smoothly to forthcoming regulatory frameworks. The initiative would serve as an initial step before binding regulations are implemented. The code of practice is being developed in consultation with AI companies, academics and civil society to ensure it is effective and comprehensive.
Conservative Party of Canada member of Parliament Michelle Rempel, who leads a multi-party caucus focused on advanced technologies, expressed surprise at the consultation's appearance. Rempel emphasized the importance of the government engaging with Parliament on a non-partisan basis to avoid polarization on the issue.
“Maybe if it was an actual mistake the department will reach out to us … it’s certainly no secret that we exist,” Rempel told The National Post.
In a follow-up series of tweets, Minister of Innovation, Science and Industry François-Philippe Champagne reiterated the need for “new guidelines on advanced generative AI systems.”
“These consultations will inform a crucial part of Canada’s next steps on artificial intelligence and that’s why we must take the time to hear from industry experts and leaders,” said Champagne.
“While I thank the National Post for correcting its article, I still want to make some things clear,” Champagne wrote. “Canada is a world leader in trusted and responsible AI. It is essential that we create new guidelines on advanced generative AI systems.”
Guardrails to protect individuals who use AI

By committing to these guardrails, companies are encouraged to ensure that their AI systems do not engage in activities that could harm users, such as impersonation or providing improper advice.
They are also encouraged to train their AI systems on representative datasets to minimize biased outputs and to employ techniques like “red teaming” to identify and rectify flaws in their systems.
The code also emphasizes the importance of clear labeling of AI-generated content to avoid confusion with human-created material and to enable users to make informed decisions. Additionally, companies are encouraged to disclose key information about the inner workings of their AI systems to foster trust and understanding among users.
Early support grows, but concerns remain

Big tech companies like Google, Microsoft and Amazon responded favorably to the government’s plans, telling The Logic that they would participate in the consultation process. Amazon supports “effective risk and use case-based guardrails” that give companies “legal certainty,” its spokesperson Sandra Benjamin told The Logic.
Not everyone was satisfied, though. University of Ottawa digital policy expert Geist responded to Champagne’s tweet, calling for more engagement with the “broader public.”
“Incredible that @FP_Champagne can post a tweet stream on the private generative AI consultation and *still* not include any reference to the importance of hearing from the broader public,” Geist tweeted. https://t.co/GjV2FIMVET
The Canadian government’s efforts in the field of gen AI are not limited to voluntary guardrails. The government has also proposed legislation, including the Artificial Intelligence and Data Act (AIDA), which sets requirements for “high-impact systems.”
However, the specific criteria and regulations for these systems will be defined by ISED, and they are expected to come into effect at least two years after the bill becomes law.
By developing this code of practice, Canada is taking an active role in shaping the development of responsible AI practices globally. The code aligns with similar initiatives in the United States and the European Union and demonstrates the Canadian government’s commitment to ensuring that AI technology evolves in a way that benefits society as a whole.
Author: Bryson Masse
Source: VentureBeat