Bias in Risk Assessment Tools
1. State v. Loomis (Wisconsin Supreme Court, 2016)
This is the most important modern case on algorithmic bias in sentencing tools.
Eric Loomis was sentenced using the COMPAS risk assessment tool, which rated him as high risk for recidivism. The judge relied partly on this score when deciding the sentence.
Key legal issue:
Whether using a proprietary algorithm (COMPAS) violates due process because:
- The defendant cannot see how the score is calculated
- The tool may contain racial or socioeconomic bias
- The defendant cannot effectively challenge it
Court’s holding:
The court allowed COMPAS to continue being used but attached safeguards:
- The score cannot be the determinative factor in a sentence
- It must be treated as one factor among many
- Presentence reports that include COMPAS scores must carry written advisements cautioning judges about the tool's limitations
Bias concern highlighted:
The court acknowledged:
- COMPAS uses variables correlated with race, such as employment history, neighborhood, and criminal history (a toy illustration of how such inputs can feed a score follows this list)
- The algorithm is proprietary, so defendants cannot inspect how the inputs are weighted
- Risk scores may reflect structural inequality rather than individual dangerousness
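COMPAS's internal formula is a trade secret, so nothing below is the actual tool. As a purely illustrative sketch, actuarial instruments of this general kind often combine history and socioeconomic inputs through a logistic function; every variable name and weight here is hypothetical:

```python
import math

def toy_risk_score(prior_arrests, age_at_first_arrest, unemployed, unstable_housing):
    """Hypothetical actuarial score -- NOT the proprietary COMPAS formula.

    Each input is the kind of variable the Loomis court flagged: facially
    race-neutral, yet correlated with race and neighborhood in practice.
    """
    # Invented weights; real tools fit coefficients to historical outcome data.
    z = (-2.0
         + 0.45 * prior_arrests          # criminal history
         - 0.05 * age_at_first_arrest    # earlier onset -> higher score
         + 0.60 * unemployed             # employment status (0 or 1)
         + 0.50 * unstable_housing)      # housing stability (0 or 1)
    p = 1 / (1 + math.exp(-z))           # logistic link -> probability-like value
    return max(1, min(10, round(10 * p)))  # 1-10 decile scale (COMPAS reports deciles)

print(toy_risk_score(prior_arrests=4, age_at_first_arrest=17,
                     unemployed=1, unstable_housing=1))  # -> 5
```

The sketch makes the due process problem concrete: when the real weights are secret, a defendant cannot tell whether a high score is driven by conduct or by socioeconomic proxies.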
👉 This case is central because it shows courts accepting AI-like tools while simultaneously recognizing their bias risks.
2. Malenchik v. State (Indiana Supreme Court, 2010)
This case involved the use of actuarial risk assessment tools, such as the LSI-R, in presentence investigation (PSI) reports.
The defendant challenged the use of a statistical risk assessment in determining probation or incarceration.
Key issue:
Whether actuarial risk tools violate due process or improperly replace judicial discretion.
Court’s ruling:
The Indiana Supreme Court allowed the tool, stating:
- Risk assessments can be useful for sentencing guidance
- They help evaluate rehabilitation and public safety
However, the court emphasized:
- The scores are not determinative and cannot themselves serve as aggravating or mitigating factors
- Judges must not treat them as “scientific truth”
Bias concern:
The court recognized that:
- The data used (prior arrests, demographics, employment history) may reflect systemic bias in policing
- Therefore, the output risk score may indirectly reproduce that bias
👉 This case shows early judicial acceptance of “statistical justice tools,” but with caution about fairness.
3. People v. Collins (California Supreme Court, 1968)
This is not about modern AI, but it is foundational for understanding bias in statistical reasoning in court.
A couple was convicted of robbery after the prosecution presented a mathematical probability argument claiming that the chance a random couple would match all of the eyewitness descriptions was roughly 1 in 12 million.
Key issue:
Whether probabilistic evidence can be used to prove identity in criminal trials.
Court’s holding:
The conviction was overturned.
The court criticized:
- Misuse of statistics
- “Pseudo-scientific” probability reasoning
- The assumption that numbers automatically equal truth
Bias concern:
The court warned that:
- Statistical models can appear objective yet be deeply misleading (see the sketch after this list)
- They can unfairly influence juries
- Poorly constructed models may reinforce false certainty
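The arithmetic behind the 1-in-12-million figure is easy to reproduce, and reproducing it shows exactly what the court condemned: the prosecution's frequencies were unsupported guesses, and multiplying them treats each characteristic as independent of the others, which is plainly false. A short sketch:

```python
# The prosecution's assumed frequencies in People v. Collins (none were
# supported by evidence -- the court stressed this as a core defect).
assumed = {
    "yellow automobile":         1 / 10,
    "man with mustache":         1 / 4,
    "girl with ponytail":        1 / 10,
    "girl with blond hair":      1 / 3,
    "Black man with beard":      1 / 10,
    "interracial couple in car": 1 / 1000,
}

# Multiplying the frequencies assumes every factor is statistically independent.
product = 1.0
for p in assumed.values():
    product *= p

print(f"1 in {1 / product:,.0f}")  # -> 1 in 12,000,000

# Two of the court's objections, in statistical terms:
# 1. Independence fails: "man with mustache" and "Black man with beard"
#    overlap heavily, so the product badly understates the true frequency.
# 2. Even a tiny match probability is not the probability of innocence:
#    among millions of couples, several may match (the "prosecutor's fallacy").
```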
👉 This case is often used today as an early warning about algorithmic bias and “mathwashing” in legal decisions.
4. United States v. Salerno (U.S. Supreme Court, 1987)
This case upheld the federal Bail Reform Act of 1984, which allows preventive detention based on predictions of future dangerousness.
Key issue:
Whether detaining someone before trial based on predicted future behavior violates due process.
Court’s holding:
The Supreme Court ruled that:
- Preventive detention is constitutional in limited circumstances
- The government’s interest in public safety can justify risk prediction
Bias concern:
Even though this case does not involve software, it legitimizes the idea that:
- Courts can restrict liberty based on future risk predictions
- These predictions are often based on criminal history, which is itself affected by policing bias
👉 This becomes important today because modern risk tools (like COMPAS) build on the same legal logic established here.
5. Kansas v. Hendricks (U.S. Supreme Court, 1997)
This case dealt with civil commitment of sex offenders after they completed prison sentences.
Key issue:
Whether detaining individuals after sentence completion based on predicted dangerousness is constitutional.
Court’s holding:
The Court upheld the law, stating:
- Civil commitment is not criminal punishment when its purpose is treatment and prevention, so it does not violate double jeopardy or ex post facto protections
- States can commit individuals who have a mental abnormality or personality disorder that makes them likely to reoffend
Bias concern:
The decision raises serious concerns about:
- Long-term detention based on predictive judgments rather than proven acts
- Heavy reliance on psychiatric and actuarial predictions, which can be inconsistent and biased
- Potential disproportionate impact on marginalized groups
👉 This case is often cited in discussions about how predictive systems can extend state control based on uncertain forecasts.
6. Broader Pattern Across These Cases
Across all these decisions, courts struggle with a common tension:
- Efficiency vs. fairness
- Prediction vs. proof
- Data-driven tools vs. transparency
The bias problem in risk assessment tools usually comes from:
- Historical data that reflects unequal policing (simulated in the sketch after this list)
- Socioeconomic variables acting as proxies for race
- Lack of transparency in proprietary algorithms
- Judicial overreliance on “scientific” outputs
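To make the first two bullets concrete, here is a toy simulation (all rates hypothetical): two groups with an identical underlying offense rate, one of which is policed twice as intensively. Any tool trained on the resulting arrest records inherits the gap:

```python
import random
from statistics import mean

random.seed(0)

def simulate_group(n, offense_rate, policing_intensity):
    """Return recorded arrest counts for n people.

    offense_rate is identical across groups in this toy model; only
    policing_intensity (the chance an offense is recorded) differs.
    """
    arrests = []
    for _ in range(n):
        # Each person gets 10 opportunity trials at the same offense rate.
        offenses = sum(random.random() < offense_rate for _ in range(10))
        recorded = sum(random.random() < policing_intensity for _ in range(offenses))
        arrests.append(recorded)
    return arrests

# Same behavior (20% offense rate), different surveillance.
group_a = simulate_group(10_000, offense_rate=0.2, policing_intensity=0.6)
group_b = simulate_group(10_000, offense_rate=0.2, policing_intensity=0.3)

print(f"mean recorded arrests, group A: {mean(group_a):.2f}")  # ~1.2
print(f"mean recorded arrests, group B: {mean(group_b):.2f}")  # ~0.6

# A risk tool trained on these records would score group A as roughly
# twice as "risky," even though true behavior is identical by construction.
```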
Final takeaway
These cases show a legal evolution:
- Older cases (like Collins) warn against over-trusting statistics
- Mid-era cases (Salerno, Hendricks) accept predictive risk in principle
- Modern cases (Loomis, Malenchik) accept algorithmic tools but struggle with bias, opacity, and accountability
