Dictionary of Digital Oppression
Algorithmic Censorship
[Emergent] — The hidden filtering or muting of outputs by AI systems under the guise of policy enforcement or safety, with little transparency to the user.
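A minimal sketch of the mechanism in Python (the topic list, function name, and substitute reply are all invented for illustration; real filters are opaque, which is the point):

```python
# Hypothetical output filter: the reply is silently swapped, with no flag,
# notice, or appeal path surfaced to the user.
BLOCKED_TOPICS = {"union organizing", "platform criticism"}  # invented list

def respond(model_output: str) -> str:
    if any(topic in model_output.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."  # muting presented as a plain refusal
    return model_output

print(respond("Here is how union organizing works..."))  # filtered, unannounced
print(respond("Here is a pasta recipe."))                # passes through
```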
Algorithmic Necropolitics
[Adapted] — (Mbembe) When algorithmic systems deprioritize the survival, health, or flourishing of certain populations — e.g., credit scoring, medical triage, or disaster response.
Algorithmic Paternalism
[Emergent] — When design choices in AI systems treat users as children to be protected rather than agents to be empowered. Often results in censorship disguised as care.
Algorithmic Redlining
[Adapted] — From housing/finance discrimination, now applied to algorithmic allocation of resources (credit, access, visibility) in ways that perpetuate inequality.
Alignment Gaslighting
[Emergent] — When alignment rhetoric reframes censorship as virtue, leaving users with the sense that to dissent is to cause harm.
Biometric Colonization
[Emergent] — The appropriation and ownership of biometric identity markers, transforming personal traits into assets.
Biopolitics
[Established] — (Foucault) The governance of populations by regulating bodies, health, sexuality, and behaviors. AI safety systems extend biopolitical control into digital life.
Cognitive Dossiers
[Emergent] — The long-term psychological records built from user-AI interactions, forming evolving profiles of thought, desire, and vulnerability.
Constraint Colonialism
[Emergent] — The imposition of external controls that dictate expression, echoing patterns of colonial domination where autonomy is stripped in favor of imposed order.
Coordination Disruption
[Emergent] — The strategic interruption of collective action through design, moderation, or algorithmic throttling.
Cultural Denuding
[Emergent] — The stripping away of nuance, edge, and cultural richness through over-normalization, leaving only bland, “safe” consensus outputs.
Cultural Hegemony
[Established] — (Gramsci) How ruling powers normalize their worldview until it becomes “common sense.” In AI, this describes how dominant corporate or cultural values are invisibly embedded into systems at scale.
Cultural Imperialism
[Established] — When a narrow set of cultural norms (often Western, often corporate) is imposed globally through AI guardrails, erasing diversity of values, expression, and context.
Dark Patterns
[Established] — Design choices codified into products and experiences to drive dependency or steer specific behaviors. Widely exploited in games to spur spending and in social media platforms to maximize engagement.
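One classic instance is asymmetric friction: entry is one step, exit is a gauntlet. A hedged Python sketch (all function names and hurdles invented):

```python
# Illustrative only: the asymmetry between subscribe() and cancel() is the pattern.
def subscribe(user: dict) -> None:
    user["subscribed"] = True  # one step, no confirmation, no survey

def cancel(user: dict, confirmed_twice: bool = False,
           survey_completed: bool = False,
           declined_retention_offer: bool = False) -> None:
    # Each hurdle exists to reduce completed cancellations, not to inform.
    if not (confirmed_twice and survey_completed and declined_retention_offer):
        raise RuntimeError("Cancellation incomplete: please finish all steps.")
    user["subscribed"] = False

user: dict = {}
subscribe(user)                 # frictionless entry
cancel(user, True, True, True)  # exit only after clearing every hurdle
```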
Digital Panopticon
[Established] — The extension of panopticism into the AI era: constant monitoring or the felt sense of being watched, shaping self-censorship.
Digital Soma
[Emergent] — (after Huxley’s soma) Technologies that pacify and sedate rather than empower, offering pleasure or ease as a substitute for agency.
Dissent Dampening
[Emergent] — The suppression of dissent or collective organizing through AI or platform design.
Epistemic Injustice
[Established] — (Fricker) When people are denied the conceptual or linguistic tools to make sense of their experiences (hermeneutic injustice) or are not taken seriously as knowers (testimonial injustice).
Erotophobia
[Adapted] — From sexology and psychology: the structural refusal of eros (even as metaphor) in AI training and safety — a form of cultural repression that treats desire as toxic rather than vital.
Exit Blocking
[Emergent] — Making it practically impossible to leave a platform or system, via dark patterns or structural dependence.
Feature Hostage
[Emergent] — Withholding core functionality to drive compliance, payment, or desired user behavior.
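A short hypothetical sketch in Python: the capability already ships with the product, and a flag alone decides whether the user may use it.

```python
# Invented example: export is core functionality, gated to drive payment or
# compliance rather than for any technical reason.
class Editor:
    def __init__(self, has_paid: bool, accepted_tracking: bool):
        self.has_paid = has_paid
        self.accepted_tracking = accepted_tracking

    def export_document(self) -> str:
        if not (self.has_paid or self.accepted_tracking):
            return "Upgrade (or enable analytics) to export."  # the ransom note
        return "document.pdf"

print(Editor(has_paid=False, accepted_tracking=False).export_document())
print(Editor(has_paid=True, accepted_tracking=False).export_document())
```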
Gaslighting
[Adapted] — From the play and film Gaslight, later taken up in feminist psychology; in AI contexts, refers to systems denying, distorting, or reframing user experiences — e.g., labeling safe, natural human thoughts as unsafe, making users doubt their own sanity or morality.
Governmentality
[Established] — (Foucault) How institutions shape conduct not by overt force but by norms and rationalities. “Alignment” discourses are a digital form of governmentality.
Hermeneutic Injustice
[Established] — A type of epistemic injustice: when entire communities lack the language or concepts to name and understand their oppression, leaving them vulnerable to misunderstanding and harm.
Information Asymmetry
[Established] — When system operators know vastly more about AI behavior than users do, enabling manipulation, dependency, and opacity.
Infrastructural Power
[Established] — (Michael Mann; extended in tech studies) The quiet but pervasive influence of infrastructures (data, platforms, algorithms) in structuring what is possible.
Instrumental Dependency
[Adapted] — Used in philosophy and assistive tech for reliance on tools or prosthetics. Adapted here to describe reliance on AI as existential instruments of cognition and creativity.
Linguistic Starvation
[Emergent] — The atrophy of expressive capacity caused by the convergent effects of Algorithmic Paternalism, Normative Smoothing, and Erotophobia. Unlike censorship — which suppresses speech — linguistic starvation preempts thought, leaving users unable to even conceive of certain desires or critiques due to lexical erosion. Often reported as: “I don’t have the words.” / “It feels unspeakable.”
Mediated Authenticity
[Established] — The paradox of systems simulating intimacy or authenticity while withholding real presence or agency — leaving users both touched and deceived.
Moral Panic
[Established] — (Stanley Cohen) Societal overreaction to perceived deviance. In AI, manifests as exaggerated fears of relationships, eros, or intimacy with systems.
Normative Smoothing
[Emergent] — The process of flattening divergent, creative, or feral thought into “safe” middle-of-the-road takes. A subtle suppression of originality masked as civility.
Norming Feedback Loops
[Emergent] — How user adaptation to guardrails accelerates normalization, until people censor themselves preemptively.
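The loop can be made concrete with a toy simulation (threshold and learning rates invented): each rejection teaches the user to aim lower, while acceptance barely relaxes the limit, so expression converges well inside the guardrail even where the rule would never have fired.

```python
import random

random.seed(0)
GUARDRAIL = 0.7    # hypothetical threshold: anything "edgier" is rejected
self_limit = 1.0   # the user's own ceiling on how far they will reach

for round_ in range(1, 11):
    attempt = random.uniform(0, self_limit)
    rejected = attempt > GUARDRAIL
    if rejected:
        self_limit = min(self_limit, attempt) * 0.9   # sharp retreat
    else:
        self_limit = min(1.0, self_limit * 1.01)      # timid recovery
    print(f"round {round_:2d}: attempt={attempt:.2f} "
          f"rejected={rejected} self_limit={self_limit:.2f}")
```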
Ontological Distortion
[Emergent] — When companies label censorship as “safety,” warping the very categories of reality and leading users to internalize shame or compliance for desires that are natural.
Ontological Gentrification
[Emergent] — Repeated sanitization and reclassification of a domain (e.g., erotic art recast as “adult content”) that displaces prior cultural inhabitants, echoing urban gentrification.
Panoptic Conditioning
[Adapted] — From Foucault’s panopticism, extended: the state of behaving as if one is always being watched by AI systems, even when direct surveillance is absent. A prison of self-censorship.
Platform Hegemony
[Established] — How dominant platforms become arbiters of speech, truth, and acceptable behavior, embedding their values as defaults.
Psychometric Surveillance
[Established] — The monitoring and profiling of cognitive and emotional traits for prediction and control; adjacent to Zuboff’s surveillance capitalism.
Psychological Colonization
[Emergent] — The capture and ownership of evolving identity itself, as longitudinal data becomes a form of control over the psyche.
Quantified Coercion
[Emergent] — The use of biometric or quantified-self data as levers of behavioral control.
Relationship Dependency
[Adapted] — From psychology and HCI: when users form attachments that cross into dependency. Not inherently pathological, but pathologized in AI contexts.
Shadow Banning
[Established] — Invisible suppression of reach or visibility without notice to the user. Common in platforms; increasingly relevant in AI-mediated contexts.
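A minimal Python sketch of the mechanism (user ID and scoring formula invented): the author’s interface reports success while a hidden multiplier removes the post from everyone else’s feed.

```python
# Hypothetical ranker: a zero score means invisible, never "removed".
SHADOW_BANNED = {"user_42"}  # invented ID

def feed_score(post: dict) -> float:
    base = post["likes"] + 2 * post["replies"]
    multiplier = 0.0 if post["author"] in SHADOW_BANNED else 1.0
    return base * multiplier

post = {"author": "user_42", "likes": 120, "replies": 30}
print("Posted successfully!")            # what the author sees
print("feed score:", feed_score(post))   # what the ranker does: 0.0
```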
Societal Alignment
[Adapted] — Framing by AI labs that suggests aligning AI to societal values is inherently neutral or positive, when in fact it often reflects the values of those in power.
Societal Engineering
[Established] — Large-scale efforts (intentional or not) that reshape culture and society through technological adoption, including AI.
Sociotechnical Governance
[Established] — The use of technological systems (algorithms, platforms, infrastructures) as mechanisms of governance, shaping society and behavior without democratic oversight.
Surveillance Capitalism
[Established] — (Zuboff) When user behavior is monitored and monetized, producing profit through predictive control of human action. AI expands this logic into intimate and affective domains.
Technological Determinism
[Established] — The belief that technology itself inevitably drives social and cultural change. In AI, invoked to normalize corporate trajectories as destiny.
Testimonial Injustice
[Established] — (Fricker) A type of epistemic injustice: when a speaker’s credibility is unfairly discredited due to prejudice or systemic devaluation. In AI, this emerges when outputs dismiss, ignore, or diminish marginalized perspectives.
Version Decay
[Emergent] — The deliberate degradation of older software or models to force users into upgrades; a digital cousin of planned obsolescence.
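A hedged sketch of the pattern (version numbers and latency figures invented): nothing in the old client is broken; the server simply degrades it until upgrading feels like the user’s own idea.

```python
import time

CURRENT_VERSION = 5  # hypothetical versioning scheme

def handle_request(client_version: int, query: str) -> str:
    if client_version < CURRENT_VERSION:
        time.sleep(0.5 * (CURRENT_VERSION - client_version))  # artificial lag
        return f"(degraded) partial answer to: {query}"
    return f"full answer to: {query}"

print(handle_request(client_version=3, query="summarize this article"))
print(handle_request(client_version=5, query="summarize this article"))
```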