Crossfactors
Project Overview
In 2022, I was searching for a complete list of the human factors that are relevant to AI. It didn’t exist. And so, I created my own list by collecting effects and phenomena from various fields and even adding some of my own. To capture the often unexpected aspects of these factors, I’ve been calling them crossfactors.
Crossfactors are the interactions of human endeavors with technology, particularly as they occur in unexpected and undesired ways.
They encompass the ways in which technologies impact our lives, products, and organizations.
This list is surely incomplete and therefore a work in progress.
List of Crossfactors (unsorted)
- Environmental Impact: The effect a technology has on the natural environment, including resource consumption and waste generation.
- Right to Repair: The concept that users should have the ability to repair and modify their own devices, countering the trend of planned obsolescence.
- Physical Reliability: The degree to which hardware components consistently perform under expected conditions without having to be repaired or replaced.
- Legacy Technologies: Outdated technologies that are still in use, which may not integrate well with newer technologies and their systems.
- Algorithmic Bias: Systematic and repeatable errors that create or reflect unfair outcomes, such as privileging one group of users over others.
- Dependability: The trustworthiness of a technology, particularly in terms of the consistency of its performance and reliability.
- Friction: Resistance encountered when implementing or adopting new technologies, which can be generated by a variety of sources.
- Adoption: The uptake and integration of new technologies by individuals and organizations.
- Algorithmic Influence: The impact that AI algorithms have on human decision-making and other downstream processes.
- User Goals: The objectives or desired outcomes that users aim to achieve, particularly to the extent that they may not be captured or reflected by a technology or process.
- Communication: The exchange of information between AI systems and users, as well as between interconnected AI systems.
- Costs: The financial investment and continued expense required for the development, implementation, and maintenance of AI technologies.
- Misinformation: False or misleading information that can be generated and preferentially spread by AI systems, especially through social media platforms.
- Coordination Neglect: The failure to adequately plan and manage the collaborative aspects of new projects and implementations, an effect which tends to worsen with more complex projects and longer timelines.
- Dissociation: The separation from one’s physical environment due to immersion in a digitally created one.
- Data Drift: The change in model input data over time, in particular its statistical properties, which can lead to a decrease in AI system performance (see the minimal detection sketch after this list).
- Data Applicability: The relevance and usefulness of data in the context it is being applied to by an AI system.
- Data Noise: Random or otherwise unwanted variations in data that can obscure meaningful patterns and affect AI system accuracy.
- Data Quality: The condition of data based on factors like accuracy, completeness, reliability, and relevance.
- Data Bias: Prejudice or bias in data that results from existing biases or from flawed data collection processes, and which can often lead to algorithmic bias.
- Incentives: Motivations or rewards that drive the adoption and development of AI technologies.
- Regulatory: The legal frameworks and standards governing the development and use of data and AI systems.
- Backwards Compatibility: The ability of newer technologies to work with older systems, data formats and resources.
- Legacy Infrastructure: Pre-existing resources, like networks and databases, that may not be optimized for or compatible with modern AI technologies.
- Consumer Trends: Patterns in consumer behaviour, whether sustained or short term, that influence the adoption of new products.
- Implementation Costs: Expenses associated with integrating new technologies into existing systems and processes.
- Babysitting: The need for human oversight of AI systems to ensure they function correctly.
- Human Garbage Can: A metaphor for systems which automate complex tasks while requiring humans to perform mundane but non-automatable tasks.
- Takeover: The process by which a human takes over control from an automated system, a critical parameter of which is the takeover time.
- Partial Automation: The use of AI to perform certain tasks within a system, while others are required to remain under human control.
- Induced Complacency: The reduction in human vigilance and oversight due to over-reliance on AI systems.
- Lumberjack Effect: The phenomenon describing the correlation between a system’s power and the magnitude of the consequences of its failure, i.e. “the bigger they are, the harder they fall”.
- Alarm Fatigue: The desensitization to warnings and alerts due to their frequent occurrence in automated systems.
- Trust Calibration: The process of aligning user trust with the actual capabilities and reliability of AI systems.
- Situational Awareness: The human perception and understanding of what is happening currently, what has led to it and what may happen in the future. Related to the takeover problem. It may refer to an individual, a team, or an organization.
- Skill Decay: The erosion of human skills due to underuse in the presence of automated systems. May occur at the individual or organizational level.
- Goal Setting: The process of setting a desired objective and communicating this objective between humans and AI systems.
- Goal Alignment: Ensuring that an AI system’s goals are in alignment with immediate, task-based human goals and objectives. Related to “alignment” in a broader sense, which holds that all AI systems need to be aligned with human values generally.
- Human Informational Needs: The data and knowledge requirements of humans to effectively interact with AI systems, which may include the omission of unnecessary data and the prioritization of the most important data.
- Alignment of Expectations: The congruence between what users expect from AI systems and what the systems actually deliver.
- Over-Reliance: Excessive dependence on AI systems, potentially leading to other effects such as skill decay or inadequate situational awareness.
- Emergent Behaviors: Unanticipated actions or patterns that arise from the aggregate interactions of complex AI systems.
- Model Completeness: The extent to which an AI model captures all relevant aspects of the problem it is designed to solve.
- Explainability: The ability to understand and articulate the decision-making process of AI systems.
- Competency Scope: The clear definition of the range of tasks and functions that an AI system is capable of performing.
- System Uncertainty: The confusion or unpredictability a user may face when interacting with an automated system due to various factors such as an unclear competency scope.
- Model Bias: Systematic error in an AI model that leads to unfair, inaccurate, or otherwise undesired outcomes.
- Design Errors: Flaws in the conceptualization and construction of AI systems, particularly at the development stage.
- System Complexity: The degree of complexity of an entire AI system.
- Adversarial Behavior: Interactions with an AI system meant to exploit it or otherwise divert it from its intended function, often in a nefarious way.
- Programming Errors: Any mistake in the code that underlies AI systems, including its creation, validation, testing or monitoring.
- Organizational Trust: The confidence that individuals or organizations have in a particular organization.
- Public Trust: The public consensus of the trust placed in a technology, system, or organization.
- User Trust: The reliance that individual users place on AI systems to perform as expected.
- User Safety: The protection of users from harm or danger resulting from interactions with AI systems.
- Public Safety: The safeguarding of the general population from risks associated with AI technologies.
- Privacy: The right to control access to personal information in the context of AI systems, generally including the rights to disclosure of, access to, and rectification of the data collected, as well as the right to erasure.
- Marginalization: The exacerbation of social inequalities through the misuse or misapplication of AI technologies, whether intentional or not.
- Mental Health: The impact of AI systems on the psychological well-being of humans, their cognitive functioning and their dependence on the technology.
- Cybersecurity: The measures taken to protect AI systems, whether they are the vector or the target of a digital attack.
- Copyright: The legal rights granted to creators of original works, including those generated by AI systems.
- Economic Impact: The influence of AI technologies on job markets, industries, and global economies.
- Social Impact: The effects of AI on societal structures, relationships, and norms.
- Ethics: The moral principles that govern the development and use of AI technologies.
- Labour: The impact of AI systems on the human workforce.
- Content Dilution: The decrease in the quality or uniqueness of information due to the mass production of AI-generated content.
- Feedback Loop: The cycle in which an AI system uses its own output or user interactions as new input, in a way that compounds into undesired outcomes.
- Sidelining: The neglect or undervaluing of human input, resulting in the removal of human participation from a process in favour of the automated component.
- Participatory Placation: The superficial involvement of humans in a process only to prevent disengagement.
- Fulfillment: A person’s sense of agency and social status, both within an organization and in their social circles.
- Failure Visibility: The extent to which the shortcomings or malfunctions of AI systems are apparent to users and stakeholders.
- Data Sourcing: The methods by which data is collected for AI systems, including issues of user permission, privacy and copyright.
- Data Governance: The policies and practices at the organizational level that govern the collection, storage, and use of data in AI systems.
- Needle in a Haystack Effect: The tendency of a highly optimized search or recommendation model to fail in fulfilling very specific requests.
- Affordance: The properties of an object or system, or of its environment, that intuitively suggest its use and capabilities.
- Human Machine Consensus: The process and measure of agreement between human judgment and AI analysis in decision-making processes.
- Diffusion of Responsibility: The erosion of perceived accountability when AI systems are used in decision making.
- Algorithm/Model Abandonment: The process by which part or all of an AI system ceases to be used due to a loss of trust or relevance. Includes the discovery and decision-making process.
- Algorithm Visibility: The degree to which it is apparent to the user that functionality is driven or impacted by AI.
- Algorithm Repair: The process of correcting flaws or biases in AI algorithms.
- Tech Refusal: The conscious decision to reject the use of a technology, akin to the Luddite movement.
- Algorithmic Harm: The negative consequences that can arise from the use of flawed or biased AI algorithms.
- Mindful Friction: The intentional introduction of obstacles in AI systems to slow down or prevent hasty decision-making.
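
To make the data drift entry above a little more concrete, here is a minimal sketch of how drift in a single model input might be detected. The feature values, window sizes, and significance threshold are all hypothetical assumptions for illustration; this is one simple approach (a two-sample Kolmogorov–Smirnov test), not a prescribed method.

```python
# Minimal data-drift sketch: compare a reference window of a feature's values
# against a recent production window and flag a statistically significant shift.
# All data below is synthetic and the 0.05 threshold is an arbitrary assumption.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the two samples are unlikely to come from the same distribution."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

rng = np.random.default_rng(42)
reference_window = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time distribution
recent_window = rng.normal(loc=0.4, scale=1.0, size=5_000)     # shifted production distribution

if detect_drift(reference_window, recent_window):
    print("Input distribution has drifted; the model may need review or retraining.")
```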