Thought Experiment
Here’s a thought experiment worth doing.
Pick any of the many recent cases where an algorithm has steered a product off the rails. If you need one to choose from, here are some examples:
Amazon's AI recruitment tool shows clear bias against women.
Uber's self-driving car fatally strikes a pedestrian despite a test driver at the wheel.
YouTube Kids' autoplay steers children down questionable rabbit holes of content.
Apple's credit algorithm sets limits based on gender rather than creditworthiness.
The COMPAS judicial recidivism tool shows bias against Black defendants.
Instagram's automated content curation harms the mental health of teenagers, particularly girls.
Tesla's Autopilot is implicated in multiple crashes in which motorcyclists were struck from behind, despite a driver at the wheel.
Now here comes the experiment. Run your preferred AI governance or enablement framework against any such scenario.
Would these problems have been prevented?
Would they have been detected early?
How would your organisation have weighed the lapses in safety and ethics against business objectives if the algorithm had been providing value?
Each of the above was a public relations headache for the company involved. Most of these issues were raised by the public, but only after great harm had already been done.
Is there a single framework that would guard a company, its employees, and the users of its products against such future harms, and against those yet to be imagined?