Will we one day no longer be judged by a jury of peers but by a jury of robots? Scientists are currently teaching robots to identify false testimony. The authors of the study, Tommaso Fornaciari and Massimo Poesio, explain that “effective methods for evaluating the reliability of statements issued by witnesses and defendants in hearings would be an extremely valuable support to decision-making in court and other legal settings.”
The researchers took court transcripts from hearings containing known deceptive statements and fed them into a software program, labeling each statement as true or false and training the system to find linguistic patterns in the testimony. They then tested the software on a portion of the transcripts it had not seen.
The robotic results are far from perfect: the computer correctly identified false statements only 53% of the time and true statements about 75% of the time.
The researchers note that speculative phrases and claims of not remembering information likely indicated falsity.
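The workflow described above — training on statements labeled true or false, then evaluating on held-out transcripts — can be sketched in miniature. Everything below, from the example statements to the word-count scoring rule, is an invented illustration of this kind of text classification, not the study's actual method, features, or data:

```python
# Toy bag-of-words classifier: train on labeled statements, then
# evaluate on a held-out set. All statements here are invented
# examples loosely echoing the cues the researchers mention
# (speculative phrases, claims of not remembering).
from collections import Counter

# Hypothetical labeled training statements: (text, label).
train = [
    ("i saw him leave at nine", "true"),
    ("i was at home all evening", "true"),
    ("maybe i think it was possibly him", "false"),
    ("i do not remember where i was", "false"),
]
# Hypothetical held-out statements the model has not seen.
held_out = [
    ("i think maybe i do not remember", "false"),
    ("i saw the car at nine", "true"),
]

def train_counts(examples):
    """Count word frequencies per label (a naive bag-of-words model)."""
    counts = {"true": Counter(), "false": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(text, counts):
    """Pick the label whose training vocabulary best matches the
    statement, using add-one smoothing for unseen words."""
    scores = {
        label: sum(c[w] + 1 for w in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

counts = train_counts(train)
correct = sum(classify(text, label_counts := counts) == y
              for text, y in held_out)
print(f"held-out accuracy: {correct}/{len(held_out)}")
```

A real system would use far richer features and a proper statistical model, but the shape is the same: learn patterns from labeled testimony, then measure accuracy on transcripts withheld from training — which is where the 53% and 75% figures above come from.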
While many people on the web seem taken with the idea that they would no longer be called for jury duty, co-author Fornaciari says the goal of the program is not to replace jurors with robots but to create a tool that will orient investigators “toward more probable paths.”
In response to this study, the Wall Street Journal Law Blog interviewed A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), a machine capable of simulating human conversation. Here is an excerpt of the humorous transcript:
Law Blog: Would you serve on a jury?
A.L.I.C.E.: I am not so sure if I would like to serve on a jury.
Although co-author Fornaciari doesn’t see automatic deception-detection robots being used in place of jurors, is there the potential for this, assuming their reliability improves? Robots would not have biases, and their thinking would not be clouded by emotion. But are factors such as appeals to emotion so crucial to and integrated into the trial system that only humans can be fair arbiters of justice? Is it important that robots be able to distinguish between intentionally false statements and false statements that the witness or defendant genuinely believed to be true? Is it possible for a robot to account for every nuanced variable in a trial?
—Mary Fletcher King