How to Deepfake

Have you also been dreaming about changing history? Below is the Free Lunch Tutorial on how to do it. Good luck!

1. Find source material.

You don’t create the deepfake. An AI will. And the step after this one is training that AI for the job. Since the AI doesn’t know anything about the person you intend to fake, you need high-quality source material. Find a lot of high-resolution photos and videos of the real person you want to recreate. The process is much faster and easier if you train the AI on such detailed images.
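If you have good interview or newsreel footage, you can pull training stills straight from it. Below is a minimal sketch of that step, assuming OpenCV for Python; the file names, the resolution cutoff, and the every-tenth-frame sampling are placeholder choices, not fixed rules.

```python
# Sketch: pull still frames from source footage to build a training set.
# Assumes OpenCV (pip install opencv-python); file names are placeholders.
import os
import cv2

os.makedirs("dataset", exist_ok=True)
cap = cv2.VideoCapture("source_interview.mp4")
frame_idx = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Keep every 10th frame; consecutive frames are nearly identical
    # and add little new information for the AI.
    if frame_idx % 10 == 0:
        h, w = frame.shape[:2]
        # Skip low-resolution material: detailed, high-quality sources
        # make training faster and the result sharper.
        if min(h, w) >= 720:
            cv2.imwrite(f"dataset/frame_{saved:05d}.png", frame)
            saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} frames")
```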


2. Train AI.

GitHub.com has most of the software you need to train an AI. Train it for the technique you want to use, from face swapping to mouth manipulation. You can find tutorials on setting up this training, depending on the kind of deepfake you’re creating. If you’re new to such software-development platforms, it might take some effort to get started. Once you’re used to the interface, though, you’ll understand why GitHub is the world’s largest dev platform.

After you have the software and the source material, you need to train the AI. Feed it the source material. As you do, the computer analyzes each image and figures out the details. Like a sculptor, the AI needs to examine the subject in detail, patching together an understanding of that subject. Once done, the AI has what it needs to create a fake. Do this training right, with the right amount and quality of source material, and the AI can then mimic the original face or person in a very precise way.
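To make the sculptor metaphor concrete: the classic face-swap setup trains one shared encoder with a separate decoder per person, so the encoder learns faces in general and each decoder learns to rebuild one specific face. Below is a minimal PyTorch sketch of that idea, with tiny layers and random stand-in tensors where your face crops from step 1 would go; it illustrates the training loop, not a production model.

```python
# Sketch of the shared-encoder / per-identity-decoder idea behind
# face swapping. Assumes PyTorch; layer sizes and the random stand-in
# data are placeholders, not a working production model.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()   # learns to rebuild person A (e.g., an actor)
decoder_b = Decoder()   # learns to rebuild person B (the person to fake)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(1000):
    # Stand-ins for batches of aligned face crops from step 1.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)
    # Each decoder must reconstruct its own person from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap itself: encode person A, decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```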

With the JFK deepfake we created for this issue, we had source material showing JFK only from the front. Since we didn’t need to show him from any other angle, the knowledge the AI extracted from our sources was enough.

3. Generate sound.

One tricky part of a deepfake is authentic sound. Visual techniques are more developed than audio ones, but the human mind looks for connections and fills in the blanks. Sound designers have known this for over half a century, so they use unusual tools to create noises that sound authentic. A sound designer might twist leather to create a wooden-stairway creak for each of an actor’s steps, for instance. It sounds right, in part, because the brain wants the combination of image and sound to make sense.

The “filling in” the human brain provides means you don’t need to match sound perfectly. You need only to be close. Use either a voice actor or a synthesized voice to generate the vocals you need. Finding and using an actor is simpler. Lacking that option, you need to train another AI for the voice. That process is as complex as training an AI for visuals, if not more so. If you head down the cloned-voice road, you can find more tools on GitHub.

Key to this process is adding sound to reflect the environment depicted. Failing to do so is a common mistake. Your sound needs to match the room and the implied time of the fake. It can’t seem too crisp, so ambient noise is important. In an outdoor video, adding the sound of intermittent breezes might make the scene more realistic. Without this careful matching, the fake is more likely to come across as… fake.
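As a concrete example of the ambient-noise point, here is a small sketch that mixes low-level room tone under a voice track. It assumes numpy and the soundfile package, with both files mono at the same sample rate; the file names and the 0.1 mix level are placeholders to tune by ear.

```python
# Sketch: mix quiet ambience under a voice track so the audio matches
# the depicted room. Assumes numpy and soundfile (pip install soundfile);
# both files are assumed mono at the same sample rate, and the file
# names and mix level are placeholders.
import numpy as np
import soundfile as sf

voice, rate = sf.read("voice.wav")
ambience, amb_rate = sf.read("room_tone.wav")
assert rate == amb_rate, "resample first so the sample rates match"

# Loop or trim the ambience to the length of the voice track.
reps = int(np.ceil(len(voice) / len(ambience)))
ambience = np.tile(ambience, reps)[: len(voice)]

# Keep the ambience well below the voice; too crisp a track reads as
# fake, but so does ambience loud enough to call attention to itself.
mix = voice + 0.1 * ambience
mix = np.clip(mix, -1.0, 1.0)   # avoid clipping artifacts
sf.write("voice_with_room.wav", mix, rate)
```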


4. Match audio and image.

Let the AI listen to the audio and sync it with the video. Our perception of how sound matches images is forgiving, but poor lip-syncing is a quick way to spot a fake. So, it’s important to get this right. Slight mismatches can go unnoticed on a conscious level, but most observers are likely to sense something is wrong. That feeling of “this is odd” is a doorway to revealing the fake. Spend a lot of time making sure you have good sync.
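One practical way to hunt for good sync is to render a few candidates with slightly shifted audio and judge them side by side. The sketch below does that with the ffmpeg command-line tool’s -itsoffset option, which offsets the input that follows it; the file names and the offset values are placeholders.

```python
# Sketch: render several audio offsets, then judge sync by eye and ear.
# Assumes the ffmpeg CLI is installed; file names are placeholders.
import subprocess

def mux_with_offset(video, audio, offset_s, out):
    """Remux audio onto video, delaying the audio by offset_s seconds
    (a negative value pulls the audio earlier)."""
    subprocess.run([
        "ffmpeg", "-y",
        "-i", video,
        "-itsoffset", str(offset_s),   # applies to the *next* input
        "-i", audio,
        "-map", "0:v", "-map", "1:a",
        "-c:v", "copy",
        out,
    ], check=True)

# Try a small spread of offsets around zero.
for ms in (-80, -40, 0, 40, 80):
    mux_with_offset("fake.mp4", "voice_with_room.wav", ms / 1000,
                    f"sync_test_{ms:+d}ms.mp4")
```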

Since you’re likely to have spent a lot of time looking at your fake at this point, bring in other eyes and ears to help. Ask for honest opinions. Correct based on these opinions. These perspectives can give you good signs of how “real” your fake has become. Such evaluations should help you avoid any slide into that uncanny valley most people can sense.


5. Correct color and image.

A sharp image can look too good. Add grain or defects that mimic original material. (Depending on the period depicted, you might also add the sounds those defects cause.) When doing final compositing, unite the whole image. Mismatches between subject and background are big clues for revealing fakes. Create an alpha mask for your deepfake so you can adjust it without affecting the background. If that background is grainy or has visible compression, you need to match those flaws on the rendered fake, and you have to do so without adding an effect to the whole image. Adding graininess to the whole image, for example, means the background might end up with double the grain.
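Here is a small numpy/OpenCV sketch of that masked approach: grain is added to the face layer only, and the alpha mask keeps the effect off the already-grainy background. The file names and the grain strength are placeholders to match by eye.

```python
# Sketch: add grain inside the alpha mask only, so the background
# isn't grained twice. Assumes numpy and OpenCV; file names and the
# grain strength are placeholders.
import cv2
import numpy as np

background = cv2.imread("plate.png").astype(np.float32)
face = cv2.imread("rendered_face.png").astype(np.float32)
# Alpha mask: white where the rendered face goes, black elsewhere.
alpha = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE)
alpha = (alpha.astype(np.float32) / 255.0)[..., None]

# Grain matched to the plate, applied to the face layer only.
grain = np.random.normal(0.0, 6.0, face.shape).astype(np.float32)
face_grained = face + grain

# Standard alpha composite: the mask keeps the edit off the background.
comp = alpha * face_grained + (1.0 - alpha) * background
cv2.imwrite("composite.png", np.clip(comp, 0, 255).astype(np.uint8))
```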

Such flaws, again, might not be obvious. However, they do help create the feeling that something about the fake is off. Someone skilled in imaging might be able to spot these details. And such a person knows how to analyze the image to learn whether it’s a composite.


6. Get a second opinion.

Ask someone who wasn’t involved in making the deepfake at all, including in previous steps, to look at it. It’s too easy to convince yourself everything works. If you can persuade a fresh audience, your fake does its job.

We can’t stress this step enough. It’s better to hear that your deepfake looks off before you launch it, since you have only one shot. Make sure it’s as good as you can get it before releasing it. The magic is in the audience believing what they see. If people spot the fake right away, you lose that magic, and you’ll have to create a new fake to get it back. Don’t pay too much attention to the comments to measure your success, though. A percentage of observers will shout “fake” no matter what. Post a real video of a traffic light, and someone is bound to doubt it. Don’t become discouraged, especially if someone claiming a fake can’t point to a technical flaw. That’s just how the internet works.


7. Mark your work as fake.

Watermark your work or otherwise reveal that it’s a fake. Misleading people and spreading disinformation is for assholes. Don’t be one. Really. It’s even worse if your deepfake can or does do harm. Don’t let it. We’re all in this together, and we all share the duty of using technology responsibly. Long ago, Sweden called on its people to help fund national television. When asked what to do about people who refused to chip in, a girl suggested any evader should have a snail put on their eye. But harmful deepfakes are worse than shirking your part in the collective good. Their misuse is collective harm, like a toxic snail in everyone’s eyes.
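A simple way to do this is to burn a visible label into every frame, so it survives re-uploads and casual cropping. Below is a sketch using ffmpeg’s drawtext filter (your ffmpeg build needs libfreetype for it); the file names and label text are placeholders.

```python
# Sketch: burn a visible "synthetic" label into every frame. Assumes an
# ffmpeg build with the drawtext filter (libfreetype); file names and
# the label text are placeholders.
import subprocess

label = "SYNTHETIC MEDIA - CREATED WITH AI"
subprocess.run([
    "ffmpeg", "-y", "-i", "fake.mp4",
    # Semi-transparent text on a dark box along the bottom edge.
    "-vf", f"drawtext=text='{label}':x=10:y=h-th-10:"
           "fontsize=28:fontcolor=white@0.8:box=1:boxcolor=black@0.4",
    "-c:a", "copy",
    "marked_fake.mp4",
], check=True)
```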


MEMO 01 - JULY 2020
Copyright 2020 TFLC
Ideas for change