Fakes Are Ruining the Internet

By now, you've probably seen “This Person Does Not Exist.”

It's a simple web interface that uses AI to generate human faces. It's also a tool routinely used to fool journalists into quoting sources who don't exist.

Meet “the team”

The photos above were created by a generative adversarial network (GAN). The red arrows indicate flaws in the rendering. Nevertheless, these “deepfake” personalities have been quoted in prominent media publications.

The screenshot above was taken from the masthead of a fast-growing affiliate website (i.e., a website that earns money by referring customers to make purchases on another website).

They seem like real people at first glance, but they aren't. Anyone familiar with generative adversarial networks can spot the “tells” that give the technology away: look, for example, at the distorted backgrounds and mismatched earrings.

These so-called “experts” might not exist, but the spammers behind them were able to fool reporters at publications like New York Magazine, Woman's Day, Business.com, Inverse, Reader's Digest, Lifehacker, The Simple Dollar, Score, Fatherly, Legal Zoom, Business News Daily and Cheapism.

More concerning still, these nonexistent people were quoted in stories on sensitive topics like parenting, mental health and COVID-19.

Help a reporter out

When a reporter is writing a story and needs a source they don't have, they will likely turn to Help a Reporter Out (HARO), a service that “connects journalists seeking expertise to include in their content with sources who have that expertise.”

It's no secret that search engine optimization (SEO) specialists use the service to build links to content that profits them, but the rise of “deepfake” technology has made it easier than ever to exploit reporters.

Now, shady SEOs hide behind fake photos and personas. The latest black-hat trend is to respond to HARO queries while posing as a person of whatever gender or ethnicity the journalist is seeking comment from.

To combat this fraud, newsrooms must quickly adopt new methods for verifying sources.

Trust, but verify

GAN renderings are realistic, and those retouched in Photoshop are nearly perfect, but they are not readily extensible (yet): it is hard to generate a second, consistent photo of the same fake person. The upshot: always ask for two photographs of your source.

Take those photographs and plug them into a reverse image lookup service like TinEye (or even Google Images). Have they appeared on the web before? Does the context make sense?
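If you vet a lot of pitches, this step is easy to script. Here's a minimal Python sketch that opens both lookups in your default browser; it assumes TinEye's and Google Images' publicly documented search-by-URL query patterns, and the headshot URLs are placeholders.

```python
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open reverse image lookups for a source's headshot in the default browser."""
    encoded = quote(image_url, safe="")
    # TinEye accepts an image URL as a query parameter.
    webbrowser.open(f"https://tineye.com/search?url={encoded}")
    # Google Images has long supported the same pattern via its searchbyimage endpoint
    # (newer versions may redirect to Google Lens).
    webbrowser.open(f"https://www.google.com/searchbyimage?image_url={encoded}")

# Placeholder URLs: run this once for each photo the source provides.
reverse_image_search("https://example.com/headshot-1.jpg")
reverse_image_search("https://example.com/headshot-2.jpg")
```

A headshot that appears nowhere else on the web, or only on brand-new marketing sites, is a warning sign.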

Ask for links to social media profiles. How long have the accounts been active? Do they tell a consistent story?
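Account age can also be checked programmatically. Below is a minimal sketch against Twitter's v2 users endpoint; it assumes you have a developer bearer token in a TWITTER_BEARER_TOKEN environment variable, and the username is hypothetical. Other networks expose similar creation-date fields.

```python
import os
from datetime import datetime, timezone

import requests

def account_age_days(username: str) -> int:
    """Return the age in days of a Twitter account, via the v2 users endpoint."""
    resp = requests.get(
        f"https://api.twitter.com/2/users/by/username/{username}",
        params={"user.fields": "created_at"},
        headers={"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    # created_at arrives as an ISO 8601 timestamp ending in "Z".
    created = datetime.fromisoformat(
        resp.json()["data"]["created_at"].replace("Z", "+00:00")
    )
    return (datetime.now(timezone.utc) - created).days

# A profile created last month that claims a decade of expertise is a red flag.
print(account_age_days("example_expert"))
```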

Deepfake technology will only become more advanced and more prevalent. Relying on Big Tech to solve this problem is like relying on Big Tobacco to cure cancer. It's up to each of us to fight for the truth.

End Note: The likeness of the woman who appears in the lead image was created by This Person Does Not Exist and then made to appear older using Adobe Photoshop's AI-powered Neural Filters.