Disguise and deception are permanent fixtures in history.
Claiming another person’s identity is a practice that stretches from ancient Rome to imperial Russia. Today’s technology, however, has introduced a bevy of tools that enhance and complicate duplicity online. One such tool has garnered significant interest from academia, industry, and the public in recent years, and especially in recent months: deepfakes. A portmanteau of “deep learning” and “fakes,” deepfakes are videos altered with the help of artificial intelligence (AI) algorithms, often to portray an individual saying words or performing actions of the deepfake creator’s choosing.
This paper explores the social and political ramifications of deepfakes. It documents their recent uses, surveys regulatory responses from both the private and public sectors, and maps the landscape of recommendations that have been made for their further regulation. By moving beyond a purely technical understanding of deepfakes and their effectiveness, this paper seeks to emphasize the value of non-regulatory responses to potentially malicious technologies. Deepfakes sit at the confluence of several consequential issues, including privacy, free speech, online identity, and the question of who is responsible for defending these values. Their intersection creates an invaluable, and perhaps unprecedented, space for examining how information is constructed and understood online. Finding a solution that mitigates deepfakes’ negative uses without hampering their legitimate ones would grant digital citizens a degree of freedom online that might be difficult to secure so effectively in any other environment.
Download: Seeing Is No Longer Believing: Deepfakes, Cheapfakes and the Limits of Deception (PDF)