Crossfactors arise at the intersection of human endeavors with technology, usually in unexpected and undesired ways.
They are hidden AI traps.
I discovered crossfactors while developing solutions for radiologists to leverage AI in their workflow. Several human factors engineering (also known as engineering psychology) principles affected their ability to interpret medical images with technology.
I realized that all experts, such as pilots and lawyers, are affected by such principles. I also realized that we are all somewhat experts in our own daily tasks, such as driving and communicating.
There is tremendous interest in leveraging AI across nearly all industries and functions. The social and business imperatives to do it right are recognized by most.
But few are thinking specifically about the hidden AI traps, especially if they fall outside of current AI ethics and governance frameworks.
I’ve searched, but could not find a comprehensive list.
So I created my own. It is a work in progress and you can find a summarized version below.
Alphabetical List of Crossfactors
A
- Accountability
- Accuracy (model inference)
- Active, adaptive, and continuous learning
- Adoption
- Adversarial behavior
- Affordance
- Algorithm abandonment
- Algorithm repair
- Algorithm visibility
- Algorithmic bias
- Algorithmic choice complacency
- Algorithmic harm
- Algorithmic influence
- Algorithmic surrender
- Algospeak
- Anticipatory gap
- Authority gradients
B
- Babysitting
- Backwards compatibility
C
- Clumsy automation
- Cognitive accessibility
- Cognitive offloading
- Collingridge dilemma
- Communication
- Composability
- Consumer trends
- Content dilution
- Content milling
- Contestability
- Context collapse
- Coordination neglect
- Copyright
- Costs
- Cultural accessibility
D
- Data applicability
- Data bias
- Data consent
- Data drift
- Data governance
- Data leak
- Data noise
- Data poisoning
- Data quality
- Data scraping
- Data sourcing
- Decision fatigue
- Dependability
- Design errors
- Diffusion of responsibility
- Digital divide
- Digital sovereignty
- Disembodiment of self
- Dissociation
E
- Economic impact
- Emergent behaviors
- Emergent misalignment
- Emotional bonding
- Enshittification
- Environmental impact
- Ethics
- Explainability
F
- Failure cascades
- Failure visibility
- Fairness
- Feedback latency
- Feedback loop
- Fingerprinting (content)
- Friction
- Fulfillment (inc. job satisfaction)
- Functional allocation
G
- Goal alignment
- Goal setting
H
- Hallucination, confabulation
- Human bias amplification
- Human garbage can effect
- Human informational needs
- Human machine consensus
- Human substitution effects
- Hypewashing
I
- Imagined proximity
- Implementation costs
- Incentives
- Induced complacency
- Information integrity
- Interface accessibility
- Irreversibility
L
- Labor dynamics
- Latency
- Legacy infrastructure
- Legacy technologies
- Loss of human coordination
- Lumberjack effect
M
- Machiavellian manipulation
- Marginalization
- Mental health
- Mindful friction
- Misappropriated anthropomorphism (humanlikeness)
- Misinformation
- Mode confusion
- Model bias
- Model completeness
O
- Organizational trust
- Over-reliance
P
- Partial automation
- Participatory placation
- Perceived control
- Physical reliability
- Planned obsolescence
- Privacy
- Programming errors
- Public safety
- Public trust
R
- Regulatory
- Right to repair
S
- Scalability
- Situational awareness
- Skill decay
- Social impact
- Spectral limitations
- Synthetic delays
- System complexity
- System uncertainty
T
- Takeover
- Task incompletion
- Tech refusal
- Techno-optimism
- Trust calibration
- Trust inversion
U
- User goals
- User safety
- User satisfaction
- User trust
V
- Vibeware
I welcome outreach from those interested in cross-referencing these against their own use cases.