This is part 9 of “101 Ways AI Can Go Wrong” - a series exploring the interaction of AI and human endeavor through the lens of the Crossfactors framework.
What is it?
Algorithm abandonment is the process by which an organization gives up the use of an algorithm entirely, as opposed to modifying, repairing, or replacing it. An algorithm may be abandoned because its flaws are recognized as systemic, or because it causes harms that must be mitigated immediately.
Why It Matters
The need for a managed life cycle for the creation, deployment and updating of algorithms is well established. Ideally, a problematic algorithm is identified within such a framework during the design and testing phase - before widespread deployment.
However, the criteria for discontinuing the use of an algorithm are not always part of such a framework. Even when they are, unexpected outcomes or interactions may fall outside the technical scope of the algorithm, or occur too far downstream to be captured by it. As a result, rolling back an established, widely deployed algorithm is often difficult and frequently requires public or legal pressure.
Real-World Examples
The Benefits Tech Advocacy Hub (www.btah.org) maintains a repository of case studies in which states used algorithms to assign resources or determine eligibility in ways that were unfair and harmful to recipients. Unfortunately, these problems are usually not discovered until the technology is widely deployed, at which point media attention and legal action are often required to force the abandonment of these tools.
Key Dimensions
The 6 D’s of abandonment (from Johnson et al. 2024).
Discovery - initial recognition of the problematic system
Diagnosis - evidence of the system's impacts is gathered
Dissemination - the issue is amplified and escalated
Dialogue - discussion or debate between the system's critics and its owners
Decision - a course of action is chosen, including the exploration of other options such as repair
Death - the algorithm is taken out of use, formalizing the recognition that it was problematic
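The six stages above form an ordered process: a case should not be declared "dead" without passing through discovery, diagnosis, and the stages in between. As a minimal illustration (not part of Johnson et al.'s framework), the ordering could be modeled as a simple state machine; the class and algorithm names here are hypothetical:

```python
from enum import Enum


class Stage(Enum):
    """The six D's of algorithm abandonment, in order."""
    DISCOVERY = 1
    DIAGNOSIS = 2
    DISSEMINATION = 3
    DIALOGUE = 4
    DECISION = 5
    DEATH = 6


class AbandonmentTracker:
    """Tracks one algorithm's progress through the abandonment stages."""

    def __init__(self, name: str):
        self.name = name
        self.stage = Stage.DISCOVERY  # every case starts at discovery

    def advance(self) -> Stage:
        """Move to the next stage; refuse to advance past Death."""
        if self.stage is Stage.DEATH:
            raise ValueError(f"{self.name} has already been taken out of use")
        self.stage = Stage(self.stage.value + 1)
        return self.stage


# Hypothetical example: walk a case from discovery to death.
tracker = AbandonmentTracker("eligibility-scoring-v2")
while tracker.stage is not Stage.DEATH:
    tracker.advance()
print(tracker.stage.name)  # DEATH
```

The point of enforcing the ordering in code is the same as in the framework: "death" is only meaningful once evidence has been gathered and alternatives such as repair have been weighed.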
Take-away
Does your organization have the communication and leadership in place to quickly pull the plug on a problematic algorithm?