AI, Doctored Content and Fraud
There is much written on the Internet about Deep Fakes. For those readers who aren’t aware, Deep Fakes refers broadly to a set of technologies that allow someone to modify a video to make the person in it appear to say something that they did not in fact say.
The Deep Fake phenomenon is not restricted to video. There is also AI technology available that does a convincing job of impersonating someone’s voice. See the YouTube video below as an example.
The current concern appears to be largely around fake news. That is a reasonable concern, because many individuals don’t check the sources of the information they consume. That seems to be particularly the case when the source is their social media feed. The potential impact of fake news on the democratic process is widely discussed in the media and online.
We think there is also a significant fraud risk, but that part of the discussion doesn’t seem to garner the attention it deserves.
Over the years we have dealt with cases involving forged documents and doctored photos. In those cases, the document or image must be investigated for authenticity. A variety of techniques can be used to conduct that investigation, from interviewing other parties to the document, to open-source research, digital forensics and handwriting analysis. At the end of the day, however, all of this takes time and can add significant expense and confusion to a litigation.
Furthermore, what if the parties in the video are no longer available to be interviewed, such as in a dispute over an estate?
The impact of these artificial intelligence technologies on information dissemination and democracy is a concern, but we also believe they add a new dimension to forgery that will challenge investigators, litigators and the courts. Efforts are underway to detect deep fakes, but it is unclear as of the time of writing which technology will prove the most effective.