Latency

This is part 23 of “101 Ways AI Can Go Wrong” - a series exploring the interaction of AI and human endeavor through the lens of the Crossfactors framework.


Our connected world is only possible because we mitigate this one problem - yet AI often makes it worse.


What is it?

Latency is the delay introduced by information transfer, by processing, or by the blocking of other processes while waiting on either - particularly when it affects the user experience or the performance of another process.
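The components of that definition can be made concrete with a minimal sketch. The pipeline below is hypothetical (`fetch_input` and `run_model` are stand-ins, with `time.sleep` simulating work): end-to-end latency is the sum of each stage's delay, plus any time one stage spends blocked waiting on another.

```python
import time

def timed(label, fn, *args):
    """Run fn and report its wall-clock latency in milliseconds."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return result

def fetch_input():
    """Stand-in for network transfer."""
    time.sleep(0.05)
    return "payload"

def run_model(payload):
    """Stand-in for model inference; blocked until fetch_input returns."""
    time.sleep(0.12)
    return payload.upper()

data = timed("transfer", fetch_input)
out = timed("processing", run_model, data)
```

Note that `run_model` cannot even begin until `fetch_input` completes - the blocking is itself a source of latency, separate from either stage's own cost.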

Why It Matters

Latency’s real-world impact on user experience is usually obvious and will erode or completely destroy usability. But the technical factors that can cascade to degrade latency are not always present in testing.

Latency can also introduce delays and bottlenecks in background and system processes, where delays can stack or domino in ways that are challenging to trace.

But there’s a bigger issue yet. The individual contributions of each input in an AI system are often opaque. Latency can misalign inputs, or even become an input in its own right, affecting the outcome in an undesired way and perhaps even negating the information of an otherwise valuable source.
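One defensive pattern is to timestamp every input and drop anything older than a freshness budget before fusing, so a late-arriving value is excluded rather than silently skewing the result. A minimal sketch, with an assumed `MAX_AGE_S` budget and illustrative sensor values:

```python
import time

MAX_AGE_S = 0.5  # assumed freshness budget; tune per application

def fuse(readings, now=None):
    """Average only the readings that are still fresh.

    A high-latency (stale) input is dropped rather than allowed to
    misalign the fused output.
    """
    now = time.time() if now is None else now
    fresh = [value for ts, value in readings if now - ts <= MAX_AGE_S]
    if not fresh:
        raise RuntimeError("all inputs stale; refusing to produce an answer")
    return sum(fresh) / len(fresh)

now = time.time()
readings = [
    (now - 0.1, 10.0),  # fresh
    (now - 0.2, 12.0),  # fresh
    (now - 3.0, 99.0),  # arrived late; excluded from the average
]
fused = fuse(readings, now)
```

Refusing to answer when every input is stale is a deliberate choice here: a confidently wrong fusion is usually worse than an explicit failure.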

Real-World Example

NVIDIA’s CloudXR technology is meant to deliver low-latency augmented reality experiences by moving computationally expensive processing to the cloud instead of the device (usually a VR headset). In practice, latency persists, and even small delays are hard to tolerate in an immersive environment because they lead to motion sickness.

Other examples include Google’s Stadia cloud gaming service and Humane’s wearable AI Pin, both of which struggled with latency in practice.

Key Dimensions

User expectations - users will tolerate different amounts of latency in different contexts. Ten years ago, a user would have waited several seconds for a video to start playing, but that is no longer the case. A user may still wait several seconds for the results of a specialized search (e.g. internal company documentation) while, ironically, expecting near-instantaneous results from a general search engine.

Real-world conditions - innumerable real-world factors can interact with a system to create latency where none existed in testing.

UI/UX best practices - many best practices have been established through extensive usability and user-behavior testing, and new interfaces should be tested against them. However, new best practices still need to be established for AI-driven and non-deterministic workflows.

Take-away

Don’t make your users wait; if you must, tell them why and for how long.
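The take-away can be sketched in a few lines. The function name, query, and expected duration below are all illustrative, and `time.sleep` stands in for the real work - the point is simply to set expectations up front and report the actual wait afterward:

```python
import time

def long_search(query, expected_s=3.0):
    """Tell the user why they are waiting and roughly how long,
    instead of leaving them with a frozen screen."""
    print(f"Searching internal documents for '{query}' "
          f"(usually takes about {expected_s:.0f} s)...")
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the actual search
    elapsed = time.perf_counter() - start
    print(f"Done in {elapsed:.1f} s.")
    return ["result-1", "result-2"]  # illustrative results

results = long_search("quarterly report")
```

Even when the delay itself cannot be reduced, an honest estimate and a visible completion message change how the wait is perceived.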