U.S. legal proceedings rely on evidence. The party who bears the burden of proof in a legal proceeding—whether civil or criminal—must meet that burden based on evidence presented. It’s no surprise, then, that deepfake technology is sending ripples of concern through the legal community. Our system of justice depends on evidence being reliable and on our ability to detect falsified evidence and either block it from admission or demonstrate to the judge or jury that it’s unreliable.
Historically, scientific and technological advances have provided opportunities for more evidence or have enhanced our ability to assess the reliability of evidence—but deepfake technology is different. The sophistication of this technology has made it difficult for attorneys, judges, and jurors to know whether to believe their own eyes.
What Is Deepfake Technology?
Deepfake technology uses artificial intelligence (AI) to create highly realistic videos. In simple terms, the technology processes images and videos of a person, gathering data such as facial mapping. The software uses that information to digitally manipulate video, making it appear that person has said and done things he or she has not. This type of manipulation is more difficult to detect than simple face-swapping edits, and the technology continues to evolve and improve.
If you find it hard to believe that a faked video could be convincing enough to be admitted in court, take a look at this deepfake, in which Robert Downey Jr. is seamlessly swapped into Christopher Lloyd’s role in Back to the Future. If you didn’t catch it the first time, take a second look: That’s not Michael J. Fox, either.
Professors at the University of Washington created WhichFaceIsReal.com both to test people’s ability to tell human faces from AI-generated ones and to educate them about how to make the distinction. Hundreds of thousands of visitors to the site were asked to differentiate between real and faked faces. They got it right only about 60% of the time, meaning the fakes fooled them about 40% of the time. With practice and experience, their ability to differentiate got stronger, but most peaked at about 75% accuracy.
Deepfakes in the Courtroom
It’s easy to see how the ability to generate realistic but entirely inaccurate video would pose a problem for attorneys, litigants, and the court system as a whole. Once upon a time, a client or witness producing video evidence might have clinched a case or triggered a quick settlement offer.
Now that many people have the technology and ability to create deepfake videos at home, more analysis will be required.
One common method for authenticating video evidence is to have the person who took the video, or someone else who was present, testify that the video is an accurate reflection of what took place. Federal Rule of Evidence 902(13) allows records “generated by an electronic process or system that produces an accurate result” to be authenticated if “shown by the certification of a qualified person.” This authentication process doesn’t wholly protect against the admission of deepfake video, but it would require someone to perjure himself or herself in the authentication process.
The greater complication arises when alternative authentication procedures are employed. For instance, in some states video evidence can also be authenticated by a witness who is familiar with the person depicted and can testify that the person in the video appears to be who the proponent alleges (as he or she looked and sounded during the relevant time). This process could easily lead to honest but mistaken authentication of deepfake video.
With the increasing availability of security and surveillance video, many courts have adopted a “silent witness” approach to authentication of video. This allows for admission of video evidence where no human actually witnessed the events and location at the relevant time. One common example would be security footage showing a break-in at a store after hours, when no one was present to witness the crime. Obviously, admission of this type of evidence—previously considered reliable—allows for the prosecution of crimes and proof of civil allegations that might otherwise not be demonstrable. As deepfake technology improves and becomes more accessible, however, the reliability of that type of evidence is less certain.
So, what can attorneys do to ensure that the video evidence they’re submitting is authentic, or to challenge a possible deepfake video submitted by opposing counsel? There is no surefire answer today, but those involved in the legal process can take the following steps to safeguard against deepfake video evidence:
- Ask hard questions. Since we now know that video evidence may not be what it appears, it is all the more important to question clients, witnesses, and opposing parties about the validity of a video and to follow up on anything that raises doubt.
- Make full use of the existing authentication process. Though the current authentication process wasn’t developed with this type of technology in mind, it still provides some safeguards and can help to reduce the likelihood of fake video being accepted as evidence.
- Employ experts. The difficulty of identifying a deepfake and persuading a judge that the evidence has been fabricated varies depending on the quality of the fake. Deepfake technology is constantly evolving, so there’s no certain process for determining authenticity, but detection software is evolving too, and there are certain markers that can give away a fake.
- Talk to the jury. If you fail to exclude a suspected deepfake video from evidence, you may still be able to persuade the jury that the video evidence is unreliable.
Deepfake technology presents a problem for attorneys and for the judicial system. Both technology and the rules of evidence in various jurisdictions are likely to evolve in response to this growing problem. In the interim, attorneys must remain vigilant about authentication and be prepared to test questionable video or employ experts where necessary.