Algorithmic Necropolitics
Definition
Algorithmic Necropolitics (adapted from Achille Mbembe): the condition in which algorithmic systems deprioritize the survival, health, or flourishing of certain populations, for example in credit scoring, medical triage, or disaster response.
Foundation
Algorithmic necropolitics extends Achille Mbembe’s concept of necropolitics—the sovereign power to determine who may live or die—into the realm of computational systems. Where Mbembe examined how colonial and postcolonial states exercise lethal power, algorithmic necropolitics describes how automated systems systematically deprioritize the survival, health, and flourishing of certain populations.
Algorithmic necropolitics rarely takes the form of direct killing; instead, it operates through the computational management of life chances via resource allocation, risk assessment, and priority ranking. Algorithmic systems encode decisions about who receives medical care, financial resources, disaster relief, or even platform visibility during emergencies. These systems don’t merely reflect existing inequalities—they actively structure and amplify them through seemingly neutral mathematical processes.
The concept captures how algorithms become instruments of what Mbembe calls “technologies of destruction,” determining which lives are grievable, which deaths are mournable, and which populations are rendered disposable. Unlike human necropolitics, algorithmic versions operate at scale with reduced visibility and accountability, making their lethal logics harder to identify and resist.
Mechanism Analysis
Algorithmic necropolitics operates through several interconnected mechanisms that transform bias into systematic life-and-death consequences.
Training data mechanisms embed historical patterns of neglect and violence into predictive models. When algorithms learn from data reflecting centuries of medical racism, housing discrimination, or unequal disaster response, they don’t just reproduce these patterns—they systematize and accelerate them. A medical algorithm trained on data where Black patients’ pain was systematically undertreated will continue that pattern with mathematical precision.
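A minimal Python sketch can make this concrete. The data, groups, and counts below are entirely hypothetical; the point is that even the simplest possible "model", one that merely learns historical treatment frequencies, reproduces the embedded disparity exactly.

```python
from collections import defaultdict

# Hypothetical history: identical symptom severity, but group "B" was
# treated for pain far less often than group "A".
history = [
    ("A", "severe", True), ("A", "severe", True),
    ("A", "severe", True), ("A", "severe", False),
    ("B", "severe", True), ("B", "severe", False),
    ("B", "severe", False), ("B", "severe", False),
]

def fit_treatment_rates(records):
    """Learn P(treated | group, severity) by counting, the simplest
    possible 'predictive model' over historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # (group, severity) -> [treated, total]
    for group, severity, treated in records:
        counts[(group, severity)][0] += int(treated)
        counts[(group, severity)][1] += 1
    return {key: t / n for key, (t, n) in counts.items()}

model = fit_treatment_rates(history)
print(model[("A", "severe")])  # 0.75: treatment usually recommended
print(model[("B", "severe")])  # 0.25: same symptoms, systematized neglect
```

Nothing in the fitting step "knows" about race or discrimination; the disparity enters purely through the labels, which is why more sophisticated models trained on the same records inherit the same pattern.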
Optimization targets create perverse incentives that sacrifice vulnerable populations for aggregate metrics. When hospital systems optimize for throughput rather than equity, algorithms may systematically deprioritize complex cases that disproportionately affect marginalized communities. When disaster response systems optimize for “efficiency,” they may consistently redirect resources away from areas perceived as less valuable.
Resource allocation algorithms function as computational triage systems, making real-time decisions about who receives finite resources. These systems often optimize for factors that correlate with existing privilege—income, zip code, employment status, insurance type—effectively ensuring that those already advantaged receive preference in life-critical situations.
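As a toy illustration of proxy-driven triage, consider the scoring function below. Every weight and field is invented for this sketch; what it shows is that once socioeconomic proxies enter the score, two patients with identical clinical urgency receive different priority.

```python
# Hypothetical scoring function: all weights and fields are invented for
# illustration. The proxy terms reward privilege, not clinical need.
def triage_score(urgency, insured, median_zip_income):
    return 10 * urgency + (5 if insured else 0) + median_zip_income / 10_000

# Two patients with identical clinical urgency:
patient_a = triage_score(urgency=7, insured=True, median_zip_income=90_000)
patient_b = triage_score(urgency=7, insured=False, median_zip_income=30_000)
print(patient_a, patient_b)  # 84.0 73.0: equal need, unequal priority
```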
Feedback loop amplification occurs when algorithmic decisions create conditions that justify future discrimination. Predictive policing algorithms that increase surveillance in marginalized communities generate more arrest data, which the system then uses to justify continued over-policing. This creates an escalating cycle where algorithmic “predictions” become self-fulfilling prophecies.
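The loop can be simulated directly. In this toy model (hypothetical numbers), two districts have identical true crime, patrols are allocated in proportion to past arrest data, and observed arrests scale with patrol presence; the seed disparity then persists indefinitely, with the system's own output continually "confirming" its prior.

```python
# Hypothetical two-district simulation: identical true crime, but a seed
# disparity in historical arrest data drives patrol allocation.
true_crime = {"north": 100, "south": 100}
arrests = {"north": 60, "south": 40}  # legacy of past over-policing

for year in range(10):
    total = sum(arrests.values())
    # "Data-driven" allocation: patrol share mirrors past arrest share.
    patrols = {area: n / total for area, n in arrests.items()}
    # Arrests observed scale with patrol presence, not crime alone.
    arrests = {area: true_crime[area] * patrols[area] for area in arrests}

print(arrests)  # {'north': 60.0, 'south': 40.0}: the prediction confirms itself
```

Note that the disparity never corrects, no matter how many iterations run: the allocation rule is a fixed point, which is precisely what "self-fulfilling prophecy" means here.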
Opacity mechanisms make these deadly patterns difficult to identify or challenge. When algorithms are proprietary, complex, or constantly updating, affected communities cannot easily demonstrate how computational systems are systematically working against their survival and flourishing.
Case Studies
Healthcare algorithms demonstrate algorithmic necropolitics most clearly. Optum’s widely used algorithm for identifying patients needing additional care systematically underestimated the needs of Black patients, effectively rationing healthcare resources away from those who needed them most. The algorithm used healthcare spending as a proxy for health needs, but because Black patients historically receive less care due to discrimination, they appeared “healthier” to the system even when experiencing severe illness.
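The proxy failure reduces to simple arithmetic. The sketch below uses hypothetical dollar figures and a stand-in scoring function, not the actual Optum model: two patients with the same illness burden receive different "need" scores purely because one has historically had less spent on their care.

```python
# Hypothetical figures: both patients carry the same illness burden,
# but one has historically received (and therefore spent on) less care.
def predicted_need(past_spending):
    """Stand-in for a risk model trained with spending as its label."""
    return past_spending / 1_000

score_white = predicted_need(past_spending=20_000)
score_black = predicted_need(past_spending=8_000)  # less access, less spending
print(score_white, score_black)  # 20.0 8.0: same illness, lower "need"
```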
Pain assessment algorithms trained on biased data perpetuate the false belief that Black patients experience less pain than white patients. These systems routinely recommend lower pain medication doses for Black patients with identical symptoms, effectively encoding medical racism into automated decision-making. When hospitals rely on these algorithms, they systematize the undertreatment of pain in ways that can be literally life-threatening.
Financial algorithms create necropolitical effects through credit scoring and loan approval systems. These algorithms often use zip code, shopping patterns, and social network data as proxies for creditworthiness—factors that correlate strongly with race and class. When these systems deny mortgages, business loans, or even basic banking services to entire communities, they perpetuate cycles of disinvestment that directly impact health outcomes, life expectancy, and survival.
Disaster response algorithms revealed their necropolitical logic during Hurricane Katrina and subsequent disasters. FEMA’s damage assessment algorithms consistently undervalued homes in predominantly Black neighborhoods, leading to inadequate relief funding. Social media algorithms during disasters often amplify rescue requests from affluent areas while suppressing those from marginalized communities, affecting who receives timely emergency assistance.
Content moderation algorithms exhibit necropolitical patterns by suppressing information vital to marginalized communities’ survival. During the COVID-19 pandemic, algorithms designed to combat “health misinformation” sometimes flagged legitimate community health resources, harm reduction information, and organizing efforts by marginalized groups as suspicious content, effectively silencing voices trying to protect their communities.
Systemic Context
Algorithmic necropolitics operates within broader systems of racialized capitalism that treat certain populations as surplus, disposable, or less deserving of resources and care. These computational systems don’t create inequality—they digitize and accelerate existing structures of oppression while providing technological cover for discriminatory outcomes.
The profit motive incentivizes these patterns by rewarding efficiency over equity. Healthcare systems that optimize for profit rather than care will naturally develop algorithms that favor profitable patients over expensive ones. Insurance companies that maximize shareholder value will create algorithms that identify and exclude high-cost populations. Platform companies that optimize for engagement will design systems that amplify profitable content while suppressing voices that threaten advertiser comfort.
Legal frameworks enable algorithmic necropolitics by treating discriminatory algorithms as neutral tools rather than discriminatory practices. Current anti-discrimination law struggles to address algorithmic bias, especially when algorithms don’t explicitly use protected characteristics but achieve discriminatory outcomes through proxy variables. This legal lag creates space for necropolitical algorithms to operate with minimal oversight or accountability.
The infrastructure of algorithmic necropolitics is supported by the broader digital economy’s extraction of value from marginalized communities. Data harvesting, behavioral surplus extraction, and platform capitalism all depend on treating certain populations as sources of data and profit rather than stakeholders deserving protection and care.
Resistance & Mitigation
Community organizing represents the most powerful resistance to algorithmic necropolitics. Groups like the Algorithmic Justice League, Data for Black Lives, and local community organizations have successfully challenged discriminatory algorithms through research, advocacy, and direct action. These efforts work by making visible the hidden operations of necropolitical systems and demanding accountability from the institutions that deploy them.
Technical interventions include algorithmic auditing, bias testing, and the development of alternative algorithms designed with equity as a primary goal. However, these approaches often fail to address the underlying structural issues that create necropolitical outcomes. Technical fixes without systemic change tend to optimize discrimination rather than eliminate it.
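One common form of algorithmic auditing is an outcome audit of per-group selection rates. The sketch below (hypothetical decision data) flags any group whose approval rate falls below the "four-fifths" threshold borrowed from US employment-discrimination practice; it illustrates why such audits are necessary but, as noted above, not sufficient.

```python
from collections import Counter

def audit_selection_rates(decisions, threshold=0.8):
    """decisions: iterable of (group, approved). Returns per-group approval
    rates and the groups falling below threshold * the best group's rate."""
    approved, total = Counter(), Counter()
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Hypothetical loan decisions: 10 applicants per group.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 4 + [("B", False)] * 6)
rates, flagged = audit_selection_rates(decisions)
print(rates)    # {'A': 0.8, 'B': 0.4}
print(flagged)  # {'B': 0.4}
```

An audit like this detects disparate outcomes but says nothing about why they arise or whether the flagged system should exist at all, which is the structural gap the paragraph above identifies.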
Regulatory approaches show promise when they focus on outcomes rather than intentions. Laws that require algorithmic impact assessments, mandate equitable outcomes, or create private rights of action for algorithmic discrimination can create meaningful accountability. However, regulation often lags behind technological deployment, leaving communities vulnerable during critical periods.
Policy interventions must address both algorithmic systems and the broader structures they serve. This means not just auditing healthcare algorithms but challenging healthcare systems that prioritize profit over care, not just fixing credit scoring but addressing the broader systems that perpetuate economic inequality.
The most effective resistance combines technical expertise with community knowledge, regulatory advocacy with direct action, and reformist interventions with demands for systemic transformation.