AI Deepfakes Are Altering and Manipulating Our Reality

The manipulation and fabrication of digital images and videos are not new phenomena. However, recent advances in AI and deep neural networks have made creating face-swapping deepfakes faster and easier. Deepfakes are being used to intimidate, demean, harass, undermine, and destabilize. This sophisticated synthetic media poses a new and unique challenge in the broader battle against online disinformation and the weaponization of technology.

Image courtesy of Markus Spiske.

How Are Deepfakes Made?

Deepfakes use a type of neural network called an autoencoder. To create a face swap, you feed the algorithm thousands of face shots of two people. A shared encoder learns to reduce both faces to a compact set of common features (the encoding step), while a separate decoder is trained for each person to reconstruct that person's face from the compressed representation. Swapping the decoders then reconstructs the face for the "wrong" person: feeding person B's encoded frame through person A's decoder produces person A's face with person B's expression and orientation. To create a deepfake video, this process has to be executed on every frame.
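To make the shared-encoder idea concrete, here is a deliberately simplified sketch in NumPy. It uses plain linear maps instead of trained deep networks, and all the names, dimensions, and weights are illustrative assumptions rather than part of any real deepfake tool; the point is only to show how one encoder and two decoders combine to swap a face.

```python
import numpy as np

# Toy linear "autoencoder" illustrating the shared-encoder / two-decoder idea.
# Real systems train deep convolutional networks on thousands of face crops;
# here the weights are just random placeholders (assumption for illustration).

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8  # flattened face size and bottleneck size (both assumed)

W_enc = rng.normal(scale=0.1, size=(LATENT, DIM))    # shared encoder
W_dec_a = rng.normal(scale=0.1, size=(DIM, LATENT))  # decoder for person A
W_dec_b = rng.normal(scale=0.1, size=(DIM, LATENT))  # decoder for person B

def encode(face):
    # Compress a face to the common features both identities share.
    return W_enc @ face

def decode(latent, W_dec):
    # Reconstruct a face from the compressed representation.
    return W_dec @ latent

# The swap: encode a frame of person B, but decode it with person A's
# decoder, yielding A's identity with B's expression and orientation.
frame_b = rng.normal(size=DIM)
swapped = decode(encode(frame_b), W_dec_a)
```

In a real pipeline this swap would be repeated for every frame of the video, with the decoded face blended back into the original footage.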

Deepfake Hardware

Most deepfakes are created on consumer hardware, hence the risk of their proliferation. They do require high-end desktops with powerful graphics cards, but this kind of equipment is common in, for example, gaming PCs. Processing the images can take days or weeks, though that time can be cut to just hours by using cloud computing power. As few as 250 photos of a person can be enough to create a deepfake video.

The Deepfake Business

For now, the cost of deepfakes is measured in lost credibility and emotional harm rather than dollars. The most frequent targets are politicians and celebrities, although media companies that depend on reliability to maintain an audience can also be hurt by devalued stocks and boycotts. The technology is, however, particularly weaponized against women through severe violations of privacy.

Spotting Deepfakes

Poor-quality deepfakes are easy to spot. In addition to strange blinking (the majority of images fed to an algorithm like the one described above show people with their eyes open), the lip-syncing can be off and the skin tone patchy. There can also be flickering around the edges of the faces and detectable artifacts in strands of hair. Jewelry and teeth tend to be rendered less convincingly, and illumination is usually inconsistent, particularly the reflections on the iris.
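The blinking cue above can even be turned into a crude automated check. The sketch below is a toy heuristic, not a production detector: it assumes some upstream model already supplies a per-frame "eyes open" score (a hypothetical input), and simply flags clips whose blink rate is implausibly low, since early face-swap models trained mostly on open-eyed photos rarely learned to blink.

```python
# Toy heuristic (illustrative only): flag videos with an implausibly low
# blink rate. The per-frame eye_open_scores are assumed to come from some
# separate eye-state detector, which is not implemented here.

def blink_rate(eye_open_scores, fps=30, threshold=0.3):
    """Count blinks (open -> closed transitions) per minute of video."""
    blinks = 0
    was_open = True
    for score in eye_open_scores:
        is_open = score >= threshold
        if was_open and not is_open:
            blinks += 1
        was_open = is_open
    minutes = len(eye_open_scores) / fps / 60
    return blinks / minutes if minutes else 0.0

def looks_suspicious(eye_open_scores, min_blinks_per_minute=6):
    # People typically blink roughly 15-20 times per minute; a clip with
    # far fewer open->closed transitions deserves a closer look.
    return blink_rate(eye_open_scores) < min_blinks_per_minute
```

A single weak cue like this is easy to fool, which is why practical detectors combine many signals (edge flicker, lighting, iris reflections) rather than relying on any one of them.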

The Benefits of Deepfakes

Not all deepfakes are malicious. Some of them are entertaining and others are even helpful. For example, voice cloning can restore people’s voices when they lose them to disease. The same technology can also be used to improve the dubbing of films in foreign languages or, more controversially, resurrect dead actors.

The Legality of Deepfakes

Deepfakes are not illegal per se, but depending on the content they can infringe copyright, breach data protection law, or be defamatory. There is also the specific criminal offense of sharing sexual and private images without consent. Recently, California passed AB 730, which makes it illegal to create and distribute manipulated videos of political candidates in the run-up to an election. California's AB 602 also gives any resident the right to sue anyone who uses their image to create sexually explicit content without their consent.

Anthropologist & User Experience Designer. I write about science and technology. Robot whisperer. VR enthusiast. Gamer. @yisela_at
