In an era increasingly reliant on data-driven decision-making, algorithms are permeating various aspects of our lives, including the criminal justice system. One such algorithm, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), is used across the United States to assess a defendant's likelihood of recidivism – the tendency to re-offend. However, concerns have arisen regarding the fairness and potential bias of these algorithms, particularly concerning racial disparities.
This article delves into a comprehensive analysis conducted by ProPublica, an independent, non-profit newsroom, which investigated the COMPAS algorithm's accuracy and potential bias. Their findings shed light on the complexities and challenges of using algorithms in criminal justice and raise crucial questions about fairness and equity.
COMPAS is a risk assessment tool developed by Northpointe, Inc. It employs a questionnaire that defendants answer during the booking process. The responses are then fed into the software, generating scores that predict the "Risk of Recidivism" and "Risk of Violent Recidivism." These scores range from 1 to 10, categorized as "Low" (1-4), "Medium" (5-7), and "High" (8-10). Judges and probation officers use these scores to inform decisions regarding bail, sentencing, and parole.
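The banding logic itself is simple. The sketch below shows one way to express it in Python; the function is purely illustrative and is not part of Northpointe's proprietary software, which computes the underlying decile score from the questionnaire responses.

```python
def score_category(decile_score: int) -> str:
    """Map a COMPAS decile score (1-10) to its risk band.

    Band boundaries follow the article: Low (1-4), Medium (5-7), High (8-10).
    """
    if not 1 <= decile_score <= 10:
        raise ValueError("COMPAS decile scores run from 1 to 10")
    if decile_score <= 4:
        return "Low"
    if decile_score <= 7:
        return "Medium"
    return "High"
```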
ProPublica obtained two years' worth of COMPAS scores from Broward County, Florida, encompassing data from 18,610 individuals scored in 2013 and 2014. Focusing on pretrial assessments, they narrowed the dataset to 11,757 individuals. They then cross-referenced these scores with public criminal records from the Broward County Clerk's Office to determine each individual's criminal history before and after the COMPAS assessment.
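For readers who want to retrace this step, ProPublica published its data and analysis code (github.com/propublica/compas-analysis). A minimal pandas sketch of the pretrial filtering, assuming the column names in that published dataset, might look like this:

```python
import pandas as pd

# Column names follow ProPublica's published dataset; treat them as
# assumptions if working from a different extract.
df = pd.read_csv("compas-scores-two-years.csv")

# Keep cases where the COMPAS screening happened close to the arrest,
# the recidivism outcome is known, and the charge was not a mere
# traffic offense -- the filters ProPublica describes in its methodology.
pretrial = df[
    df["days_b_screening_arrest"].between(-30, 30)
    & (df["is_recid"] != -1)
    & (df["c_charge_degree"] != "O")
    & (df["score_text"] != "N/A")
]
print(len(pretrial))
```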
Defining recidivism was critical to the analysis. ProPublica adopted Northpointe's definition, which includes "a finger-printable arrest involving a charge and a filing for any uniform crime reporting (UCR) code," occurring after the initial COMPAS scoring. They also adhered to the FBI's definition of violent crime for analyzing violent recidivism.
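Operationally, this definition reduces to a date comparison: a person counts as a recidivist if a qualifying arrest falls after their COMPAS screening. The sketch below assumes a hypothetical per-person table holding the screening date and the date of the first qualifying arrest afterward; the column names are illustrative stand-ins, not fields from the actual records.

```python
import pandas as pd

def flag_recidivism(df: pd.DataFrame,
                    screening_col: str = "screening_date",
                    arrest_col: str = "first_ucr_arrest_after") -> pd.Series:
    """Return 1 where a qualifying (finger-printable, UCR-coded) arrest
    follows the COMPAS screening date, 0 otherwise (NaT means no arrest)."""
    followed = df[arrest_col].notna() & (df[arrest_col] > df[screening_col])
    return followed.astype(int)
```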
ProPublica's analysis revealed significant racial disparities in COMPAS's risk predictions, both in how scores were assigned and in how often those scores proved wrong for black versus white defendants.
To further examine the factors influencing COMPAS scores, ProPublica developed logistic regression models. These showed that age was a significant predictor: defendants under 25 were 2.5 times as likely to receive a higher risk score as middle-aged offenders, even after controlling for other variables. Race also proved a substantial factor: black defendants were 45 percent more likely to receive a higher score than white defendants, even after controlling for prior crimes and future recidivism.
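ProPublica fit its models in R; a rough Python equivalent using statsmodels is sketched below. It predicts whether a defendant received a Medium or High score rather than a Low one, controlling for covariates like those the article names; the column names again follow ProPublica's published dataset and should be treated as assumptions.

```python
import numpy as np
import statsmodels.formula.api as smf

# Outcome: scored Medium/High rather than Low, mirroring ProPublica's model.
pretrial = pretrial.copy()
pretrial["high_score"] = (pretrial["score_text"] != "Low").astype(int)

model = smf.logit(
    "high_score ~ C(race, Treatment('Caucasian')) + C(age_cat) + C(sex)"
    " + priors_count + C(c_charge_degree) + two_year_recid",
    data=pretrial,
).fit()

# Exponentiated coefficients are odds ratios; ProPublica reported an
# odds ratio of about 1.45 for black versus white defendants.
print(np.exp(model.params).round(2))
```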
While COMPAS demonstrated some predictive value, its overall accuracy was modest: ProPublica's survival analysis found a concordance of about 63.6 percent, below the roughly 70 percent often cited as the benchmark for a reliable instrument. The study also showed that the algorithm performed unevenly across racial subgroups. High-risk white defendants re-offended at 3.61 times the rate of low-risk white defendants, while for black defendants the corresponding ratio was 2.99, meaning the scores separated future re-offenders from non-re-offenders less sharply for black defendants.
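ProPublica measured this predictive accuracy with a Cox proportional hazards model. A sketch using the lifelines library follows; the survival-frame column names here are hypothetical stand-ins for follow-up time, the recidivism event flag, and dummy variables for the Medium and High score bands (Low as baseline).

```python
from lifelines import CoxPHFitter

# Hypothetical survival frame built from the filtered data.
surv = pretrial[["follow_up_days", "recidivated", "medium_band", "high_band"]]

cph = CoxPHFitter()
cph.fit(surv, duration_col="follow_up_days", event_col="recidivated")
cph.print_summary()          # hazard ratios per score band

# lifelines also reports a concordance index; ProPublica found roughly
# 0.636 overall, below the ~0.70 rule of thumb.
print(cph.concordance_index_)
```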
Contingency table analysis further quantified the discrepancies in predictive accuracy between white and black defendants. Black defendants who did not recidivate were misclassified as higher risk at nearly twice the rate of white defendants (44.9 percent versus 23.5 percent), while white defendants who did recidivate were mislabeled as low risk far more often than black defendants (47.7 percent versus 28.0 percent).
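Those error rates come from two-by-two tables of predicted band against observed two-year recidivism, computed separately by race. A pandas sketch, reusing the columns assumed above:

```python
import pandas as pd

# Treat Medium/High as the positive ("higher risk") prediction and the
# two-year recidivism flag as ground truth, as ProPublica did.
for race in ["African-American", "Caucasian"]:
    grp = pretrial[pretrial["race"] == race]
    table = pd.crosstab(grp["high_score"], grp["two_year_recid"])
    fp = table.loc[1, 0]          # flagged higher risk, did not re-offend
    tn = table.loc[0, 0]
    fn = table.loc[0, 1]          # rated low, did re-offend
    tp = table.loc[1, 1]
    print(race,
          "false positive rate:", round(fp / (fp + tn), 3),
          "false negative rate:", round(fn / (fn + tp), 3))
```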
ProPublica's investigation raises critical questions about the fairness and potential bias of algorithms used in the criminal justice system. These findings echo concerns voiced by policymakers and researchers alike, highlighting the need for further scrutiny and transparency in the development and implementation of risk assessment tools.
As Eric Holder, former U.S. Attorney General, stated, these measures, while created with good intentions, might "exacerbate unwarranted and unjust disparities" within the criminal justice system.
The use of algorithms in criminal justice is a complex issue, carrying both potential benefits and serious risks. While these tools can offer valuable insights and improve efficiency, it is essential to acknowledge their limitations and potential for bias. Continuous evaluation, transparency, and a commitment to fairness are paramount to ensure that algorithms like COMPAS do not perpetuate or exacerbate existing inequalities within the criminal justice system.
By understanding the nuances of how these algorithms function and their impact on different populations, we can work toward a more equitable and just criminal justice system for all.