Will we one day no longer be judged by a jury of peers but by a jury of robots? Scientists are currently teaching robots to identify false testimony. The authors of the study, Tommaso Fornaciari and Massimo Poesio, explain that “effective methods for evaluating the reliability of statements issued by witnesses and defendants in hearings would be an extremely valuable support to decision-making in court and other legal settings.”

The researchers took court transcripts from hearings containing known deceptive statements and fed them into a software program, labeling the true and false statements so the program could learn to find patterns in the testimony. They then tested the software on a held-out portion of the transcripts the program had not seen.

The robotic results are far from perfect: the computer correctly identified false statements only 53% of the time and true statements about 75% of the time.

The researchers note that speculative phrases and claims of not remembering information likely indicated falsity.
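The workflow described above is, at heart, a standard supervised text-classification pipeline: train a model on statements labeled true or false, then score it on transcripts it never saw. Here is a minimal sketch of that idea using scikit-learn; the toy statements, labels, and model choice are illustrative assumptions, not the authors' actual data or method.

```python
# Hedged sketch of the study's workflow (not the authors' actual code):
# fit a classifier on labeled statements, then evaluate on a held-out portion.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-ins for transcript statements; 1 = truthful, 0 = deceptive.
# (Hedged labels echo the article's cue that speculative phrasing and
# claims of not remembering tended to indicate falsity.)
statements = [
    "I saw the defendant at the store that morning",
    "I do not remember where I was that night",
    "Maybe I could have been near the building, I suppose",
    "I signed the contract on the fifth of March",
    "I think perhaps someone else might have taken it",
    "I drove straight home after work and stayed in",
    "I cannot really recall what was said in the meeting",
    "I paid the invoice by bank transfer on Friday",
]
labels = [1, 0, 0, 1, 0, 1, 0, 1]

# Hold out a portion of the transcripts that the model never trains on.
X_train, X_test, y_train, y_test = train_test_split(
    statements, labels, test_size=0.25, random_state=0)

vectorizer = CountVectorizer()      # bag-of-words "patterns in the testimony"
clf = LogisticRegression()
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Accuracy on the unseen portion, analogous to the reported 53%/75% figures.
accuracy = clf.score(vectorizer.transform(X_test), y_test)
print(accuracy)
```

On a toy dataset this small the accuracy number is meaningless; the point is only the shape of the pipeline: label, train, and evaluate on held-out testimony.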

While many people on the web seem taken with the idea that they would no longer be called to jury duty, co-author Fornaciari says the goal of the program is not to replace jurors with robots but to create a tool that will orient investigators “toward more probable paths.”

In response to this study, the Wall Street Journal Law Blog interviewed A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), a machine capable of simulating human conversation. Here is an excerpt of the humorous transcript:

Law Blog: Would you serve on a jury?

A.L.I.C.E.: I am not so sure if I would like to serve on a jury.

Although co-author Fornaciari doesn't see automatic deception-detection robots being used in place of jurors, is there potential for this, assuming their reliability improves? Robots would not have biases, and their thinking would not be clouded by emotion. But are factors such as appeals to emotion so crucial and so integrated into the trial system that only humans can be fair arbiters of justice? Is it important that robots be able to distinguish between deliberate false statements and false statements that the witness or defendant genuinely believed to be true? Is it possible for a robot to account for every nuanced variable in a trial?

Mary Fletcher King



6 Responses to Robots: Jurors of the Future?

  1. Emma says:

I’m all for improving our ability to reach the right answer in trials, but as Erin points out, communicating the truth is more nuanced than stringing together words that form a “true” statement. However, I think even slightly more accurate results could be really helpful to investigators, and so might help us reach the right resolution. After all, we use plenty of investigative techniques (and evidence) that may well be less accurate.

  2. R.L. Florance says:

Very interesting post! First, I would have to say that the robots would have to get significantly better before they should actually be used. A 53% success rate for identifying false statements and 75% for identifying true statements simply wouldn’t cut it. However, if they actually got the robots to 99% on both, I don’t see why you wouldn’t want to use them. It could save lots of innocent people from going to jail and keep lots of guilty people off the streets. Of course, there are always worries about letting robots take over such a significant aspect of our society, but the benefits would be great.

  3. Avery VanPelt says:

    What a great post, Mary Fletcher! My immediate thought with this technology–to piggyback on Brooke’s post–is how would the Federal Rules of Evidence (and corresponding state rules) have to change were this technology to be widely adopted? So many of the assumptions underlying the FRE would have to be reevaluated. While I can’t imagine this technology could be adopted overnight, I nonetheless wonder at the headaches it could immediately create at both the state and federal levels, as many of the rules’ underlying assumptions would suddenly be made obsolete, or worse, false.

  4. Erin Frankrone says:

This immediately made me think of the following Seinfeld clip where George gives Jerry advice for his polygraph test: http://www.youtube.com/watch?v=vn_PSJsl0LQ. While George’s conviction that “it’s not a lie if you believe it” may involve mental gymnastics for many on the witness stand, it does underscore the reality that lying involves much more than the speaker’s words. I worry, therefore, that if robots follow only word patterns to detect false testimony, then human jurors will largely defer to “science” and ignore the myriad non-verbal considerations necessary for identifying honesty.

  5. Brooke McLeod says:

    Great piece! One of the concerns with this would be the bias that comes with programming the robot. For example, in evidence, test results that come from a machine programmed by a human are considered hearsay. Here, it seems like robots programmed by humans would have similar problems in terms of bias. How do you propose addressing this issue?

  6. Amanda Nguyen says:

    Interesting piece (I told you I’d be looking out for it!). Anyway, it sounds like these things are less reliable than lie detectors (which we already choose not to allow). It is a cool idea though — that testimony can be analyzed via software for truth. Obviously incorporating it into real life settings is far off but I am interested to see how the research develops.