My responses to the fifth theme of the Canada AI Survey, focusing on building safe AI systems and strengthening public trust.
This is part 5 of “Responses to the Canada AI Survey” - a series containing my responses to the eight themes of the Canadian government’s public consultation on artificial intelligence.
Theme 5: Building Safe AI Systems and Strengthening Public Trust in AI
Q1: How can Canada build public trust in AI technologies while addressing the risks they present? What are the most important things to do to build confidence?
The most important thing is to do the “right thing” - even when the right thing is admitting to uncertainty or to mistakes. External communication from researchers, government and industry needs to be proactive. The media will need to find a way to report on potential and known issues while avoiding the catastrophizing and exaggeration incentivized by their business model. Falling behind in communication after incidents will quickly erode trust.
Grounding public trust in AI in the public’s AI literacy would be extremely unwise. It would also be unfair to the public: politicians and policy makers have yet to demonstrate such AI literacy themselves. Confidence will be built over time by avoiding incidents - and by minimizing the politicisation of use cases, especially those in the public sector.
Q2: What frameworks, standards, regulations and norms are needed to ensure AI products in Canada are trustworthy and responsibly deployed?
Most professional sectors already have frameworks and oversight bodies that address safety and ethics (medicine, construction, transportation, etc.). We should not duplicate their efforts but instead let them take the lead and listen to their calls for additional legislation where needed. Otherwise, blanket legislation will have unpredictable and unintended consequences.
Some domains will remain without adequate oversight. I submit that these domains were already fraught with integrity and ethical concerns that needed addressing well before our current AI moment. These should be a priority. Oversight should be created that is independent of any particular technology and able to manage the underlying incentives giving rise to the problems.
Q3: How can Canada proactively engage citizens and businesses to promote responsible AI use and trust in its governance? Who is best placed to lead which efforts that fuel trust?
Incentives are key. Promoting responsible AI use proactively means recognizing and tackling misaligned incentives. Trust is earned; it is an outcome of other actions. Any PR campaign, advertisement or body whose primary purpose is “fuelling” trust has the opposite effect as soon as there is reason to doubt that trust - especially in the age of social media. This question feels misguided in positioning trust as a primary goal. Just do the right thing.