Intelligent machines are becoming an ever-present reality in almost every aspect of our lives. From asking Alexa about the weather to asking Siri to call mom, these machines are streamlining our decision-making. Streamlining promotes efficiency, a persistent policy consideration in assessing how well the judicial system is functioning: we want our courts to be “speedy” and just. The efficiency offered by these smart machines has led several courts to integrate them into much of the pre-trial phase of cases. In these cases, the AI produces a score representing the defendant’s flight risk. The score is the result of running individual characteristics of the defendant, each reduced to a quantitative value, through an algorithm of statistically relevant factors. Judges may then weigh this score, among other considerations, in deciding whatever issue is before them.
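To make those mechanics concrete, here is a minimal sketch of how such a weighted-factor risk score might be computed, written in Python. Everything in it is hypothetical: the factor names, weights, cutoffs, and the logistic form are stand-ins for exposition, not the internals of any deployed tool.

```python
# Hypothetical sketch of a weighted-factor risk score.
# Factor names, weights, and cutoffs are invented for illustration;
# they do not reflect any real pretrial risk assessment instrument.

import math

# Each defendant characteristic is reduced to a number,
# then combined with a fixed, pre-determined weight.
WEIGHTS = {
    "prior_failures_to_appear": 0.9,
    "prior_convictions": 0.5,
    "age_at_arrest": -0.04,   # older defendants score lower in this sketch
    "employed": -0.6,
}
BIAS = -1.0

def risk_score(defendant: dict) -> float:
    """Logistic combination of weighted factors -> score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def risk_band(score: float) -> str:
    """Map the continuous score to the coarse label a judge actually sees."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

if __name__ == "__main__":
    defendant = {"prior_failures_to_appear": 2, "prior_convictions": 3,
                 "age_at_arrest": 24, "employed": 0}
    score = risk_score(defendant)
    print(f"score={score:.2f}, band={risk_band(score)}")
```

The point of the sketch is structural: the “process” behind the score is nothing more than fixed weights and cutoffs, and it is precisely that process, rather than the output, that defendants have so far been unable to inspect, as discussed below.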


Several concerns have already been raised about the use of this kind of technology in the courtroom. Critics have asserted that these algorithms might supplant the judge’s own judgment, whether through laziness or a misplaced belief in the algorithm’s authority, and might insulate biases behind the mask of a neutral process. A more specific concern arises from the precedent set by a recent case, State v. Loomis, in which the defendant was denied a challenge to a sentence based partially on an algorithm’s determination that he was “high risk.” The defendant sought to assess the validity of the algorithm, yet the court held that knowledge of the algorithm’s output, not its process, was sufficient. The main worry raised by this case is the insulation of these algorithms from both defendant and judicial scrutiny. The U.S. Supreme Court ultimately denied the petition for certiorari, so the Wisconsin Supreme Court’s decision stands.


A concern that seems not yet to have been raised is that political pressure on elected judges may turn these algorithms into the per se decisionmaker. It is no secret that elected judges impose harsher sentences, particularly when an election is drawing near. Elected judges who use these risk assessment algorithms will likely treat the determination as a mandate rather than a recommendation, lest they hand political opponents fuel for attack ads accusing them of being “soft on crime.” If that happens, the fate of defendants before these judges will be determined by a cold, uncaring machine rather than by a person who can account for the nuances that make each case more than a math equation.


If laziness or misplaced deference has not already established these algorithms as the per se decisionmaker, the threat of political pressure likely will.


– Chris Burkett


