Google isn’t actually tackling ‘fake news’ content on its ad network

Posted by Ginny Marvin

Why are my Google display campaigns running on “XYZ Hyperpartisan Site” with less-than-accurate or altogether false articles? That’s the polite version of a question I’ve heard in various forms over the past several weeks.

Isn’t Google taking steps against fake news on the Display Network? they ask. Why are sites that spread misinformation still able to earn ad revenue through Google’s AdSense publisher network? they wonder. I’ve heard these questions over and over again recently. In a nutshell, the answer comes down to semantics, namely the difference between “misrepresentation” and “misinformation.”

Last fall, Google earned a lot of press, including on this site, for updating its AdSense “Misrepresentative content” policy to ostensibly “take aim at fake news,” as The New York Times put it. In its most recent Bad Ads Report, Google said it kicked out 200 sites — out of some 2 million — from the network for violations including misrepresentation. There has been a trend to capitalize on hyperpartisanship — because people are clicking.

Google continues to profit from ads served on hundreds if not thousands of sites promoting propaganda, conspiracy theories, hoaxes and flat-out lies. Some are fairly well-known publishers; others popped up during the election cycle and appear to exist solely to earn money from ads.

Here’s what advertisers should understand about what Google’s “Misrepresentative content” policy means and doesn’t mean.

Expectations versus reality

The “Misrepresentative content” policy states:

“Users don’t want to be misled by the content they engage with online. For this reason, Google ads may not be placed on pages that misrepresent, misstate, or conceal information about [the publisher], [its] content or the primary purpose of [the] web property.”

Last fall, the general interpretation was that, as part of this update, Google would stop allowing ads to be served alongside fake news stories (i.e., misinformation). That was largely because Google stated in one of the policy examples that sites that were “deceptively presenting fake news articles as real” would be in violation. Satire was safe, misinformation was not, went the thinking.

But that’s not what Google meant. And, as Media Matters reported in January, Google quietly removed its reference to fake news at some point in late December or early January.

Google lists some examples of violations with every policy; these are not all-encompassing and are intended to help publishers understand the spirit of the policy. It’s also not uncommon for Google to tweak policy examples. The current examples for the Misrepresentative content policy no longer mention fake news at all.

The part about addressing “fake news” confused the issue — leading the press, advertisers and perhaps publishers to think Google was going after fake news content — so Google removed it, after collecting all that positive press and without issuing a correction.

What the policy really means

Marketing Land has confirmed with Google that the policy was never intended to address editorial veracity. Google doesn’t look at whether an article is true or not; it looks at whether the publisher is misrepresenting itself.

Google says the update was meant to address the proliferation of deceptive information online, not to get into editorial decisions about facts. The Misrepresentative content policy and others are purposely designed to be narrow. This is in part because its ad policies need to be defensible and enforcement needs to be intentional.

Some examples

Let’s take two of the most well-known fake news cases of last fall to better understand how this policy is applied: Macedonians who built sites filled with false hyper-partisan articles to rake in money from ads; and Pizzagate, the bizarre fake story involving the Clintons that proliferated across far right-wing sites and culminated in a real-life incident. Would these violate the policy?

YES: The Macedonian sites would be in violation, but not because the content was made up. Those sites would have fallen under the policy for concealing information about who the publishers really were.

NO: The Pizzagate stories wouldn’t fall under the policy just for being fake.

The policy hinges on the publisher, not the content itself. Google says the change provides the ability to go after bad sites that were impersonating or pretending to be affiliated with national and local news outlets. Many of these sites use a bait and switch to lure users in with sensationalized headlines that lead to content that’s actually promoting diet pills or some other product. As with the Macedonian example, a news site that presents itself as operating out of New York City but is in fact based in Europe would be in violation.

The policy has no bearing on whether publishers with extreme bias that are known to publish hoaxes, conspiracy theories, overt propaganda or disinformation can carry Google-sold and -served ads on their pages.

That’s, in part, why you’ll find hundreds of far-left-wing and far-right-wing sites like Breitbart.com, AnnCoulter.com, ClashDaily.com, TruthRevolt.org, LeftLiberal.com, LiberalMountain.com, LiberalPlug.com, Milo.Yiannopoulos.net, TheProudLiberal.org and TruthDivision.com on Google’s Display Network.

Some of these sites have been around for years. Many others, like “DonaldTrumpPotus45,” popped up during the election season.

The review process

Google has hundreds of manual reviewers for AdSense. When publishers sign up for AdSense, the sites undergo a manual review. In most cases, bad actors present sites that adhere to the policies in order to evade detection and then pivot after the sites have been approved.

For the Misrepresentation policy, there is a human review process. Teams are aided by technology that lets reviewers do things like look at an entire site’s domain, identify rings of sites operated by one owner, and evaluate multiple sites using similar tactics. Red flags are double-checked manually.
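
To make the “rings of sites” idea concrete, here is a rough sketch of the general technique: grouping domains by a shared identifier, such as the AdSense publisher ID embedded in their pages. This is purely illustrative and not Google’s actual tooling; the sample data and function name are assumptions.

# Hypothetical sketch: group domains by the AdSense publisher ID found in
# their HTML to surface "rings" of sites that appear to share one owner.
# Illustrative only; not Google's internal tooling.
import re
from collections import defaultdict

# AdSense ad code identifies the publisher with a "ca-pub-" client ID.
PUB_ID_RE = re.compile(r"ca-pub-\d{16}")

def group_by_publisher(pages):
    """pages: {domain: raw_html}. Returns {publisher_id: [domains]}
    for IDs that appear on more than one domain."""
    rings = defaultdict(list)
    for domain, html in pages.items():
        match = PUB_ID_RE.search(html)
        if match:
            rings[match.group(0)].append(domain)
    return {pub_id: domains for pub_id, domains in rings.items() if len(domains) > 1}

# Fabricated example data for illustration only.
sample = {
    "site-a.example": '<script data-ad-client="ca-pub-1234567890123456"></script>',
    "site-b.example": '<ins class="adsbygoogle" data-ad-client="ca-pub-1234567890123456"></ins>',
    "site-c.example": '<script data-ad-client="ca-pub-6543210987654321"></script>',
}
print(group_by_publisher(sample))
# -> {'ca-pub-1234567890123456': ['site-a.example', 'site-b.example']}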

Do these sites violate other policies?

Many of these sites are chock-full of ads, from Google and other networks, and most rely heavily on content recommendation networks. Content recommendation ads do count as ads under Google’s Valuable content policy, which covers the ratio of ads to editorial content. It’s not clear if Google also counts ads from other ad networks under this policy. We have asked and will update here when we hear back.

The most hyperpartisan of these sites are usually very clever and careful not to violate Google’s AdSense hate speech policy, which prohibits content that advocates against an individual, group or organization — often walking right up to the line between free speech and hate speech. Google says it keeps a close eye on many of these sites.

With the sheer volume of sites, videos and ads, the company also relies on users and advertisers to report policy violations. This is particularly true of content on YouTube, where the volume of new content uploaded every day is massive.

Brand safety questions

The argument over whether Google should be an arbiter of content is obviously a thorny one. Policing fake news is far from easy, as Danny Sullivan covered in “Why Google might not be able to stop ‘fake news’.” For now, Google is profiting from and providing a revenue source for publishers of this type of content. Google’s Misrepresentation policy is aimed at scammers, not ideologues and not opportunists who peddle propaganda and fake news for ad clicks.

From interest targeting to retargeting, there are many ways brands running campaigns on the Google Display Network can find their ads running on hyperpartisan sites without realizing it. For more, see our companion piece, “Brand safety: Avoiding fake & hyperpartisan news on the Google Display Network.”
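
For advertisers who want to audit where their display ads are actually running, one simple starting point is cross-checking a downloaded placement report against a domain blocklist. The sketch below assumes a CSV export with a “Placement” column holding the domain; the file name, column header and blocked domains are assumptions, not a documented export format.

# Rough sketch: flag placements from a Display Network placement report
# whose domain appears on a brand-safety blocklist. The report format is
# assumed (CSV with a "Placement" column); adjust to the actual export.
import csv

# Domains the brand has decided to exclude -- illustrative examples only.
BLOCKLIST = {"hyperpartisan-example.com", "hoax-news-example.net"}

def flagged_placements(report_path):
    """Yield report rows whose placement domain is on the blocklist."""
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            domain = row.get("Placement", "").strip().lower()
            if domain in BLOCKLIST:
                yield row

if __name__ == "__main__":
    for row in flagged_placements("placement_report.csv"):  # assumed file name
        print(row)

From there, any flagged domains could be added to a placement exclusion list in the campaign settings.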


About The Author

As Third Door Media’s paid media reporter, Ginny Marvin writes about paid online marketing topics including paid search, paid social, display and retargeting for Search Engine Land and Marketing Land. With more than 15 years of marketing experience, Ginny has held both in-house and agency management positions. She provides search marketing and demand generation advice for ecommerce companies and can be found on Twitter as @ginnymarvin.


 
