Sam and Shaan discuss deepfake technology — what it is, how it works, and where it’s going. They cover the bullying and political misinformation risks, the subreddit that made deepfakes notorious, and then explore the proposed solution: cryptographic seals baked into device cameras by Apple and Samsung to certify that footage hasn’t been edited.

Speakers: Sam Parr (host), Shaan Puri (host)

What Is a Deepfake [00:00:00]

Sam: This is also part of the deepfakes problem. If you don’t know what a deepfake is — a deepfake is basically where people use machine learning to fabricate video that looks real. You can make anything look real.

For example, what they do is they’ll take Obama — Obama’s talking — but I can just say any sentence. I could just be like, “Hey, my name’s Barack and I’m the worst president of all time,” and it’ll make it look like his lips are moving to my sentence. Because they basically feed in a bunch of footage of Obama talking normally, and the algorithm learns: okay, here’s how Obama’s face moves when he says certain things. Then I can input any audio and it’ll make it look like he’s saying it.

Right now the technology is where it looks a little off. But it’s not that far off. It’s just getting better month over month.

Shaan: So they just showed — a paper just came out with a new one, and it’s literally just taking a still photo. They took the Mona Lisa, and then you do the same thing — I can say a sentence and it will make the Mona Lisa say it.

Sam: Like the voice and everything?

Shaan: It’s pretty crazy that they can do that off a still photo. You don’t even need hours and hours of raw footage of the person anymore.

Sam: So I downloaded an app that does that with the Mona Lisa. I’ll tell you what it’s called — it’s really neat and I think everyone should download it. I paid five bucks for it. It’s called — no, I don’t remember the name of it.

Shaan: No worries.

Deepfakes as a Trust Problem [00:02:30]

Sam: What do you think about this whole deepfake thing? Because there’s a real problem here. Your sister’s a lawyer — she was a DA before, right?

Shaan: Yeah, and then she became a public defender in the Bronx.

Sam: So if you can’t trust what you see… We have photos that can be Photoshopped, audio that can be faked. There’s even that thing — I think it’s called Lyrebird — where you just train the system on your voice and you can type anything and it’ll say it in your voice. Or it’ll say it in Donald Trump’s voice.

So if you can’t believe what you see, what you hear, or what’s in a video — how does that mess up the world? Fake news is going to become a bigger problem.

Shaan: Yeah, it’s gonna be a problem. But why are we defaulting to that? Why are we only worrying about the bad stuff? Let’s think about all the amazing things. Like, there are gonna be fake actors. Your dance video — it’s going to make movies amazing. Brad Pitt won’t exist. It’ll be some AI-generated person. And I think that’s awesome.

Here’s a future world: you’re just looking at your computer, and instead of having to be an animator or a movie editor or an artist, you’re just going to be able to say things and it’ll create what you’re describing. You could say, “What if Donald Trump was talking to Barack Obama, and then Donald jumped in the air and said this?” — and it literally just animates that. That’s how far this is going to go. And that’s going to unlock the ability to create stuff for people who don’t have the technical skills to do it today.

Sam: I’m not too worried about the regulation side, because look at political ads now. Anyone could run an ad on Facebook or TV that says, “I’m Donald Trump and I hate Mexicans.” But there are rules that say that’s not allowed, and the ad has to say “I’m Donald Trump and I approve this message.” There are ways to regulate it.

Shaan: But man — if you run a political TV ad right now, it has to say who paid for it. The thing is, this doesn’t even have to be political. It can affect anyone — someone in high school could just bully somebody else.

How Deepfakes Got Famous [00:06:00]

Sam: I think deepfakes got famous because people were putting other people’s faces on porn actors. You could take a photo of someone and put it into a porno and it would look like that person was in it. People were doing this with celebrities, they were doing this with their classmates.

The subreddit for this got huge — it was r/deepfakes. That’s how it got popular. It was this incredible bullying tool. Forget political ads — it was just slander left and right.

Shaan: Did that subreddit get taken down?

Sam: I think it’s banned from Reddit now, because it was becoming a real problem. But that didn’t stop the actual problem — it stopped the subreddit where people were sharing it.

So I think there is going to be a massive problem for evidence. It’s going to become a massive problem for slander and spin. I could take a video of you and make it say that you hate a certain group of people — it’s not Trump — and people would believe it, because they’d see your face and say, “I saw him say that.”

Shaan: Yeah. Yeah, I don’t know.

The Cryptographic Seal Solution [00:09:00]

Sam: So I’ll tell you what some people are trying to do to combat this. There’s a whole bunch of programmers working on it — programmers in general have a high bar for truth. They want things to be true, logical, reliable. And when you take away evidence — which used to be a source of truth, like, “I have video of you doing that thing” — now it’s like, well, this video could be anything. This video could be fake.

So they’re trying to solve it. The theory is: anything you make to validate stuff, the con artists will always be one step ahead. There’s too big a payoff to be able to fake this stuff. It’s like a counterfeiter — it’s always a cat-and-mouse game, which is not a winning solution.

So the solution they believe in is that the phone makers themselves — the device makers — will need to put a cryptographic seal on video when it’s taken. It’s like a tamper-proof seal, like we have with medicine. If this seal is broken, that means the video’s been edited in some way.

At some point, people will only trust videos or photos that have this cryptographic seal on them that says, “This has not been edited” — because the seal gets applied on the device itself, the instant the footage is captured.
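The seal Sam describes can be sketched in a few lines. This is a hypothetical illustration, not any real device's implementation: a real phone would sign with a private key locked in secure hardware (like a Secure Enclave), and anyone could verify with the matching public key. Here an HMAC with an assumed device secret stands in for that asymmetric signature — all function names and the `DEVICE_SECRET` value are made up for the sketch.

```python
import hashlib
import hmac

# Stand-in for a key baked into camera hardware at manufacture.
# Assumption for this sketch: it never leaves the device.
DEVICE_SECRET = b"secret-baked-into-camera-hardware"

def seal_at_capture(video_bytes: bytes) -> str:
    """Runs on the device the instant footage is captured:
    hash the raw footage, then sign the hash."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(DEVICE_SECRET, digest, hashlib.sha256).hexdigest()

def verify_seal(video_bytes: bytes, seal: str) -> bool:
    """Runs by anyone checking the footage later: recompute the
    seal and compare. Any edit to the bytes breaks the match."""
    digest = hashlib.sha256(video_bytes).digest()
    expected = hmac.new(DEVICE_SECRET, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, seal)

original = b"raw sensor frames..."
seal = seal_at_capture(original)

print(verify_seal(original, seal))               # untouched footage: True
print(verify_seal(original + b"edit", seal))     # edited footage: False
```

Note the limitation of the HMAC stand-in: the verifier needs the same secret, which is why a real scheme would use public-key signatures — the phone holds the private key, and verification needs only the public one.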

Shaan: Who’s working on that?

Sam: The problem is a startup can’t do this. It’s actually Apple that needs to implement it. It’s actually Samsung that needs to implement it. And luckily Apple is a pretty privacy-conscious company — they know that if their tools are being used for evil, they usually actually do something about it. So hopefully Apple is working on this.

I have a friend who was doing a startup trying to do this and he kept running into this problem: the people who need to do this are all the camera makers. The security camera itself needs to do this. And that’s really the only decent solution.

Some technical people will say nothing is truly tamper-proof — a determined attacker can still find a way around it. But it raises the bar substantially, and for most footage, most of the time, it works.

Shaan: Exactly. I think that’s the way the world is going to work later. You’re gonna need to see some little icon — like the gluten-free icon, the organic icon — except this one says, “This is legitimate.” There’s going to be a legitimacy icon on any footage.

Sam: That’s the icon business. That’s a good business. JD Power.