Participatory Placation

This is part 8 of “101 Ways AI Can Go Wrong” - a series exploring the interaction of AI and human endeavor through the lens of the Crossfactors framework.

View all posts in this series or explore the Crossfactors Framework

What is it?

Participatory placation is the practice of creating illusory involvement for users in a process while minimizing the importance of their input - or ignoring it entirely. This instills a false sense of agency or collaboration in the user within an automated process, while the resulting decisions or outcomes are presented under this false pretense.

Why It Matters

Human-centered design and human-in-the-loop approaches are usually touted as best practices in the design and deployment of AI systems. However, the goals of such systems, when combined with the need for user engagement or satisfaction, can unintentionally lead to participatory placation.

Such design traits are in service of wider goals, whether as part of a system a user employs to achieve a task or as part of a customer experience. There is a risk that elements of participatory placation could be uncovered, casting the system in a negative light and creating backlash. Such backlash could be particularly damaging with a popular consumer product, or with expert users who may lose trust in a system and actively choose to interfere with the intended workflow.

Real-World Example

Participatory placation may have its roots in the earliest days of industrial automation. In such settings, humans were often needed to address problems as they arose or to identify malfunctions and edge cases, even though the automated process could often complete the task entirely under normal conditions. When problems did arise, a human operator with an immediate understanding of the situation was needed to prevent downtime. A human who had not been paying attention while the process was performing optimally might be slow or ineffective in identifying such a problem. To address this lack of situational awareness, dummy tasks were introduced into automated processes to keep humans engaged. Although these tasks could have been automated, they were left to a human to keep them busy and attentive to the overall process.

Today, there are many more real-world examples in both the physical and digital worlds.

Key Dimensions

Change management - In the case of industrial processes, it’s easy to understand that such design traits may be motivated by change management. The same may be true for consumer products when dealing with technological trends. For example, in the earlier days of the internet, printing a web page could produce badly formatted results, so websites that offered printer-friendly formatting typically displayed a button for it. As browsers and web design improved, this button became superfluous because the browser could print the website well natively. Nevertheless, for some time it remained common for websites to make this button available because users expected it.

The illusion of control - The placebo effect is very real and has even been shown to affect physical and mental health outcomes. In the case of systems or tasks, humans can perform better, or be more satisfied with a system’s performance, if they believe they have made choices that affected the system in a positive way.
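The illusion of control can be made concrete with a small, hypothetical sketch (all names here are illustrative, not drawn from any real system): a function that asks the user for a preference, acknowledges it in the interface, and then produces the same output regardless of what was chosen.

```python
# Hypothetical sketch of a "placebo control": the user's choice is
# collected and acknowledged, but the underlying decision ignores it.

def recommend(items: list[str], user_priority: str) -> list[str]:
    """Pretend to rank items by the user's chosen priority."""
    # The user's input is echoed back, creating a sense of agency...
    print(f"Ranking by your priority: {user_priority!r}")
    # ...but the actual ranking is fixed and never consults it.
    return sorted(items)

results = recommend(["banana", "apple", "cherry"], user_priority="freshness")
print(results)  # always alphabetical, whatever priority was given
```

Because `user_priority` never reaches the ranking logic, every user sees identical results while believing their choice shaped the outcome - the essence of participatory placation.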

Ethical decay - Improved outcomes and satisfaction can be used to justify unethical or problematic system behaviours. In particular, if these design choices become the norm, those involved in their implementation may deflect criticism or accountability for harmful outcomes.

Take-away

Participatory placation is a subtle and insidious issue in human-centered design. Have you ever made choices in a piece of technology only to remain skeptical of their effect?