
As of February 2020, Internet users were uploading an average of 500 hours of new video content per minute on YouTube alone.

The large volume of online video presents an opportunity for the United States Government to enhance its situational awareness on a global scale. Determining the authenticity of video content can be an urgent priority, however, when a video pertains to national-security concerns. The House Intelligence Committee discussed at length the rising risks presented by deepfakes in a public hearing on June 13, 2019. Evolutionary improvements in video-generation methods are enabling relatively low-budget adversaries to use off-the-shelf machine-learning software to generate fake content with increasing scale and realism. A recent report estimated that more than 85,000 harmful deepfake videos had been detected as of December 2020, with the number doubling every six months since observations began in December 2018. In this blog post, I describe the technology underlying the creation and detection of deepfakes and assess current and future threat levels.

A deepfake is a media file (image, video, or speech, typically representing a human subject) that has been deceptively altered using deep neural networks (DNNs) to change a person’s identity. This alteration typically takes the form of a “faceswap,” in which the identity of a source subject is transferred onto a destination subject. The destination’s facial expressions and head movements remain the same, but the appearance in the video is that of the source.
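To make the faceswap idea concrete, the sketch below shows the shared-encoder, dual-decoder autoencoder design commonly used for this kind of identity transfer. It is a minimal illustration in PyTorch, not the method of any specific tool discussed here; the class names, image size (64x64 crops), and latent dimension are assumptions chosen for brevity.

```python
# Minimal sketch of a faceswap-style autoencoder (illustrative only).
# One shared encoder learns pose and expression; each decoder learns to
# render one identity. Swapping is done by decoding a destination frame
# with the source identity's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an aligned 64x64 face crop to a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

# During training, each decoder learns to reconstruct its own subject's
# faces from the shared latent space (reconstruction loss, not shown).
encoder = Encoder()
decoder_source = Decoder()       # trained on faces of the source identity
decoder_destination = Decoder()  # trained on faces of the destination identity

# At swap time, a destination frame is encoded (capturing its expression
# and head pose) but decoded with the *source* decoder, yielding the
# destination's motion with the source's appearance.
destination_frame = torch.rand(1, 3, 64, 64)  # placeholder aligned face crop
swapped_face = decoder_source(encoder(destination_frame))
```

Because the encoder is shared across both identities, it is pushed to capture identity-independent attributes such as pose, expression, and lighting, while each decoder specializes in rendering one person's appearance; this division of labor is what makes the swap possible.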
