This is part 5 of “101 Ways AI Can Go Wrong” - a series exploring the interaction of AI and human endeavor through the lens of the Crossfactors framework.
This topic is another high-level one with many subcategories - and like most of the factors I’m discussing, it’s not exclusive to AI.
Public Safety, in this context, refers to the potential harms or impacts of a technology or product on the general public and society at large. These harms can be social or environmental, or can affect mental and physical health, personal security, and more.
Why It Matters (Context and Impact)
The public safety risks of new products and technologies are not always obvious, and misaligned incentives often keep them from being recognized quickly enough. In these cases, it takes insurmountable public backlash or an undeniable body of evidence to mobilise agencies to protect the public.
Real-World Example (Make It Relatable)
I’m going to name lots of things here so that it doesn’t look like I’m picking on anything in particular.
- The impact of social media on mental health, particularly in teenagers.
- Facial recognition and other crime surveillance technologies leading to wrongful arrests.
- Autonomous vehicles being tested in public settings.
- Alternative motorized mobility devices (e-scooters, e-skateboards, monowheels) leading to loss of control and injuries.
- E-cigarettes and vaping products leading to lung damage and addiction.
- Deepfakes leading to broad misinformation or targeted phishing scams.
- Smart and Internet-of-Things devices exposing their owners to cybersecurity risks.
- Chatbots promoting eating disorders and other types of self-harm.
- Targeted advertising disclosing sensitive personal information such as pregnancy or sexual orientation.
- Delivery robots limiting the mobility of actual humans, particularly those using mobility devices.
- Traffic navigation apps erroneously routing evacuating drivers into natural disasters.
- Patient monitoring tools failing to recognize critical health trajectories.
- News summarization tools misrepresenting actual events.
- AI meal planners suggesting poisonous ingredients.
Want more? Have a look at the AI Incident Database here: https://incidentdatabase.ai/
Key Dimensions (Break It Down)
With great scale comes great responsibility - rare events and edge cases become difficult to address at massive scale as they cease to be rare.
Second-order effects - many harms at scale end up having higher-order effects. For example, using an LLM for a simple task like multiplication requires orders of magnitude more energy than conventional computation (see the rough sketch below). Another example is the erosion of trust in information sources as misinformation dilutes them.
You can rarely put the genie back in the bottle - I’d appeal to the individual’s own sense of morals and ethics here. By the time the harms become undeniable, it is often very hard to go back to how things were. Think clearly and think early.
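To make the scale and energy arguments above concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it - the failure rate, the interaction volume, and the per-operation energy costs - is an assumed, illustrative number chosen for round arithmetic, not a measurement from any real system.

```python
# Back-of-the-envelope sketches for the two scale arguments above.
# All numbers are assumed, illustrative figures -- not measurements.

# 1. "Rare" events stop being rare at massive scale.
failure_rate = 1e-6          # assumed: one-in-a-million harmful edge case
daily_interactions = 1e9     # assumed: a billion interactions per day
print(f"Expected harmful edge cases per day: {failure_rate * daily_interactions:,.0f}")
# -> 1,000 incidents every single day, from a "one in a million" event

# 2. Energy cost of using an LLM for a task a CPU handles natively.
cpu_multiply_joules = 1e-9   # assumed: ~1 nanojoule for one CPU multiplication
llm_query_joules = 1e3       # assumed: ~1 kilojoule (~0.3 Wh) per LLM query
print(f"Energy ratio (LLM query / CPU multiply): {llm_query_joules / cpu_multiply_joules:.0e}")
# -> roughly 12 orders of magnitude more energy for the same arithmetic
```

Swap in your own estimates; the conclusion is not sensitive to the exact values, only to the orders of magnitude involved.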
Take-away
Most innovators want to see their work impact as many lives as possible. They should spend a commensurate amount of effort thinking about the ways in which their innovation could harm people in the long term.