Wednesday, December 13, 2023

A Cautionary Tale on Science in the Courtroom

Recent years have seen a wave of scientific and technological tools designed to further the pursuit of justice in a judicial context. Two such examples are the incorporation of neuroscientific evidence in the court of law and the use of predictive AI algorithms in criminal rehabilitation. In theory, both purport to introduce impartial third-party evidence into the judicial space, but is that the case in practice? This article outlines the lessons learned from the use of AI in criminal rehabilitation as a cautionary tale for the incorporation of neuroscience in the courtroom.

In the article “Algorithms Were Supposed to Reduce Bias in Criminal Justice—Do They?”, Molly Callahan draws on the expertise of Dr. Ngozi Okidegbe in a critical analysis of the use of AI algorithms across many contexts, with a notable example in criminal rehabilitation. The article describes a hypothetical in which an algorithm generates a recidivism risk score that is provided to a judge as part of a larger report on a convicted person. The risk score indicates how likely that person is to be a repeat offender, so if the judge takes the score into account, it can influence the eventual sentence. Although this algorithmically generated score appears to be impartial third-party evidence, the algorithm, and thus the score it produces, is built on a foundation of inequity and bias. Such algorithms are trained on data sets laden with the historical inequities of human society, and they have, for example, disproportionately miscategorized Black people compared to white people. Yet, given human nature, a judge may take an algorithmically generated score at face value, neglecting the underlying bias; in fact, the article notes that a judge may assign disproportionate weight to the score precisely because of its apparent impartiality. Dr. Okidegbe therefore stresses caution, critical thinking, patience, and consultation with minority populations before rushing to implement technological advancements in the judicial context and elsewhere (Callahan, 2023). Although proponents of such algorithms may argue that they enhance objectivity, impartiality, and justice, it remains to be seen whether such algorithms will mitigate, maintain, or even exacerbate existing biases.
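The problem can be made concrete with a small audit. The sketch below is purely illustrative: the records and group labels are invented, and the helper function is hypothetical, but it shows the kind of group-wise error-rate comparison that has been used to reveal disparities in real risk-scoring tools. Even a score that looks reasonable overall can flag one group’s non-reoffenders as high risk far more often than another’s.

```python
# Minimal sketch (invented data): auditing a recidivism risk score for
# group-wise error-rate disparities. Nothing here reflects a real tool
# or real case records; it only illustrates the mechanism.

def false_positive_rate(records, group):
    """Share of people in `group` who did NOT reoffend but were
    nonetheless flagged as high risk by the algorithm."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return float("nan")
    flagged = sum(r["high_risk"] for r in non_reoffenders)
    return flagged / len(non_reoffenders)

# Toy records: group membership, the algorithm's flag, and the actual outcome.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    print(f"group {g}: false positive rate = {false_positive_rate(records, g):.2f}")
# group A: 0.50, group B: 0.00 -> the score miscategorizes group A's
# non-reoffenders far more often, even though both groups contain
# the same mix of outcomes.
```

Audits of exactly this shape, run on actual case records rather than toy data, are how the disparities described above were documented.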


The dangers of incorporating AI into judicial processes serve as a cautionary tale for the advent of neuroscience, as well as other sciences, in the court of law. In the article “Law and Neuroscience,” Jones et al. outline a growing role for neuroscientific evidence in the courtroom (Jones et al., 2013). The authors offer a clear and well-reasoned justification for how, when, where, and why neuroscience can and should be used in the courtroom, and they elucidate the ways it can be incorporated, including buttressing, challenging, detecting, sorting, intervening, explaining, and predicting. However, heeding the advice of Dr. Okidegbe, it is vital that judicial officials and neuroscientists carefully scrutinize any neuroscientific theory before it is incorporated into the court of law. Just as algorithms are demonstrably biased by their input data, neuroscientific findings can be biased by non-representative sampling of human subjects, with implications that exacerbate biases related to gender, sex, sexual orientation, and race. Several neuroscientific ideas have later been modified or disproven once broader, more representative samples were studied. Historically, many foundational studies in neuroscience drew on population samples that were disproportionately white and male, raising lasting concerns about the generalizability and bias of their findings. Before implementing neuroscientific ideas in the courtroom, we must carefully consider and mitigate the ways they could further exacerbate existing biases.
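To see how non-representative sampling quietly distorts a finding, consider a toy simulation. Everything below is invented, including the subgroup names and the assumed difference in the measured trait; the point is only the mechanism: when one subgroup dominates the sample, the estimate drifts toward that subgroup and away from the population truth.

```python
# Minimal sketch (simulated data): a convenience sample that over-represents
# one subgroup yields a biased estimate of the population mean.
import random

random.seed(0)

def trait(subgroup):
    # Hypothetical assumption: the measured trait differs across subgroups.
    mean = 10.0 if subgroup == "over_sampled" else 14.0
    return random.gauss(mean, 2.0)

# A balanced population: half and half.
population = ["over_sampled"] * 5000 + ["under_sampled"] * 5000
true_mean = sum(trait(s) for s in population) / len(population)

# A convenience sample that is 90% one subgroup, echoing early studies
# whose participants were disproportionately drawn from one demographic.
sample = ["over_sampled"] * 900 + ["under_sampled"] * 100
sample_mean = sum(trait(s) for s in sample) / len(sample)

print(f"population mean ~ {true_mean:.2f}")     # close to 12.0
print(f"skewed-sample mean ~ {sample_mean:.2f}")  # close to 10.4, biased low
```

A court weighing a finding built on the skewed sample would, in effect, be weighing evidence about one subgroup as if it described everyone.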


References:

Callahan, M. (2023, February 23). Algorithms were supposed to reduce bias in criminal justice—do they? Boston University. https://www.bu.edu/articles/2023/do-algorithms-reduce-bias-in-criminal-justice/

Jones, O. D., Marois, R., Farah, M. J., & Greely, H. T. (2013). Law and neuroscience. The Journal of Neuroscience, 33(45), 17624–17630. https://doi.org/10.1523/jneurosci.3254-13.2013

