Deep Fake: When Seeing is Not Believing

Deep fake images showing Donald Trump being arrested have already been created and distributed. The event never happened. The images are entirely synthetic, yet they appear real.

Those images, mind you, were made to show how far the technology has come. They were created intentionally by someone who wanted to inform and warn the public.

What about when it’s made to deceive?  

The recent preview of OpenAI's Sora has increased the urgency of this issue. Sora generates videos from text prompts, populated with synthetic humans that are indistinguishable from real ones. This dramatically expands the power of deep fakes, and it makes them fairly simple to create.

The picture above is not of a real person. It is an entirely synthetic image created by OpenAI's Sora.

What impact will deep fakes have on voters when anything can appear possible? Most Americans already hold a negative opinion of politicians. Once such videos have been seen, will they continue to influence voters subconsciously? Even after a voter learns a video was entirely fake, will it still leave an impression?

What deep fakes are being created right now of Joe Biden, Kamala Harris, and Nancy Pelosi? A robocall mimicking Joe Biden's voice was used in New Hampshire in January to discourage people from voting. It was created by a Democrat who said he was testing the system.

In the last weeks of the campaign, the truth itself may hardly matter. Claims, counterclaims, denials, and accusations will flow into a noise bubble where few things are certain.

Big tech companies have pledged to work proactively against this, and they will: they know it is in their interest to avoid the onerous regulation that might otherwise follow.

But how much can they control? The hardware can be rented or leased. Can those who rent out the hardware spot bad actors and refuse them access to the equipment? What would they look for? And once they know what to look for, how easily could a bad actor avoid detection? Could a salesperson be held liable, given how novel the technology is and how anonymous buyers can make themselves appear?

At the end of the month or quarter, would that hardware salesperson be judged by how much revenue they generated, or by how many deep fake operations they blocked?

Beyond the election, what will this mean for trust? When individuals can no longer trust what they see on a screen, how will that alter the way they see the world? With trust in institutions like the media near all-time lows, will it fall lower still?

What about social trust? One friend emails another a video of them, or posts it on Facebook. Is it real, altered, or entirely synthetic?

The stakes are high, and the technology is moving far faster than regulation. The technology is remarkable; the concerns, both short term and long term, are real.