This Hidden Trick in Campaign Experiments Will Blow Your Mind

This is part 10 of 50 in 'The Science of PPC' blog series
The Science of PPC is a 50-part series by Andrew Goodman of Page Zero Media with new content published every Thursday. The goal of the series is to expose readers to enough insights, tips, tricks, and examinations of real-world methodologies to walk away with some feeling of mastery and confidence in their own powers of PPC analysis.

Remember Deep Blue? In 1997, IBM’s supercomputer beat world chess champion Garry Kasparov in a six-game match. Brute-force computing power was starting to reign supreme over human intelligence, at least in a competitive game with a large, but more or less finite, set of possible moves. Computing a single move that does at least as well as an opponent’s is one thing, but “thinking ten moves ahead” requires potentially enormous computing power (and, ideally, cunning – the ability to play tricks on and exploit tendencies in a known opponent, something chess computers at that point didn’t really have). That power – originally the exclusive purview of the most powerful supercomputers – can now be unleashed on your laptop, as Kasparov himself later noted.

More recently, a computer program called AlphaGo (from DeepMind, now owned by Google) was able to defeat a human champ in the exponentially more complex game of Go.

The trend seems clear: the bots are getting better.  They’re nearly always better than humans at large-scale, brute force computation. (That being said – computers aren’t a lot of things humans are, in case that isn’t already patently obvious. For example, unless you explain it in depth to a computer, it’ll be highly unlikely to grasp the notion of “black swan events” or “tail risk,” when these literally represent patterns or outcomes the computer has never seen before and hasn’t decided to go looking for in any type of simulation. Most humans are just as stupid as computers in that sense.)

Someday, we might be warranted in referring to newer approaches to machine “thinking” as artificial intelligence.

So it sounds like we all agree on one thing: we should continue to set up competitions between computers and humans in games of probability and strategy so we can understand the state of today’s technology, and in what contexts humans are outmatched.

Going Pound for Pound: Manual vs. Smart Bidding

In Part 9, I referred to an example of a Google Ads Campaign Experiment – now popular in our industry in part because Google heavily promotes it to advertisers – in which so-called manual bidding goes up against Google’s Smart Bidding system. If the human’s job is more or less to hit the target ROAS with as much transaction volume or revenue as possible given that target – with, perhaps, some ancillary objectives coming along for the ride – then it stands to reason that, given enough data to work with and modern machine-learning algorithms able to detect patterns that support that objective, the bot should win.

Here are the typical ground rules:

  1. Set up an Experiment group, and set its bid strategy to a Target ROAS bidding strategy.
  2. Set the control group to Manual CPC bidding (or Manual bidding with Enhanced CPC).
  3. Leave both groups alone for a month or two; watch the Experiments Dashboard to see which side is winning on your core metrics, such as total revenue and ROAS (a rough sketch of that scorekeeping follows below).
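The scorekeeping itself is simple arithmetic: ROAS is just conversion value divided by cost. Here’s a minimal sketch of that comparison, assuming you’ve exported daily cost and conversion value for each group. The file names and column names are hypothetical stand-ins for whatever your export actually contains.

```python
import csv

def group_totals(path):
    """Sum spend and conversion value across the test window."""
    cost = revenue = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cost += float(row["cost"])          # hypothetical column name
            revenue += float(row["conv_value"])  # hypothetical column name
    return cost, revenue

for label, path in [("control (manual)", "control.csv"),
                    ("experiment (tROAS)", "experiment.csv")]:
    cost, revenue = group_totals(path)
    roas = revenue / cost if cost else 0.0
    print(f"{label}: revenue={revenue:,.2f}  cost={cost:,.2f}  ROAS={roas:.2f}")
```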

Fair test, right?

Sure, about as fair as if – in the match between Kasparov and Deep Blue – Garry waltzed out of the room, to be replaced by a beginner. Or to take the analogy to its logical conclusion: a stuffed dummy. The black chess pieces wouldn’t be moving at all.

Or, you could let Deep Blue play happily away, and put Garry in shackles. Or feed him gin and tonics with a strong sleeping potion. Sweet dreams, human!

Yep – that would be a test rigged in favor of the computer. The computer gets to play to its heart’s content and the human gets to sit this one out.

Not only should the computer have to face a real human, and not a passive one who isn’t allowed to play, but in a professional field, the computer should be able to beat the best in the business. Companies don’t (or shouldn’t) go out looking for really crappy specialists in any challenging field. Let a PPC ninja face off against Deep GOOG.

Caveat Emptor

Sometimes a “fair” test is not fair.

Maybe a fairer test, then, would be to let the human optimize the Control campaign the entire time, responding to changing conditions, and dealing with essentially the same seasonality and shifting behavior patterns the bot is attempting to come to terms with (sorry, I anthropomorphized just there).

It’s probably fair, though, to stipulate what kinds of changes the human shouldn’t be allowed to make. Since the computer won’t add new ads and ad extensions, the human shouldn’t be allowed to do that. The human also shouldn’t be able to significantly alter campaign settings during the Experiment. But I believe the human’s role in campaign management while facing off against the bot should encompass a bit more than obvious bid adjustments. Since the bot’s approach to bidding can address probabilities governing virtually any aspect or characteristic of a potential user session (a keyword query that might generate an ad impression), its human opponent should be able to do the same, albeit using the limited levers at a human’s disposal. The person should be allowed to add negative keywords, refine the mix of keywords (assuming the refinement isn’t so dramatic that it leads to cannibalization), enter geographic exclusions, and alter bid adjustments of various kinds. Arguably, the human should also be allowed to tinker with audiences – even add them – and adjust bids on those audiences. (Why not? The computer does. We just don’t see the Change History of how it does that. Wouldn’t that be interesting.)

Why? Because Google’s systems have access to all the same probability data, and are potentially acting on it when making bidding decisions (remember, a bid can be set so low that eligibility for an ad impression drops to near zero on a given query or potential user session). Indeed, Google’s systems still get access to levers (user behavior patterns, audiences, browser, device, and other information) that don’t show up in the Google Ads interface for humans to tweak (nor are they allowable for bid adjustments through the API).
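To make those ground rules concrete, here’s a purely illustrative sketch of how the allowed and disallowed change types might be codified for the human operator. The category names are my own shorthand, not anything from the Google Ads interface or API.

```python
# Purely illustrative: codifying the proposed ground rules for the
# human operator during the Experiment. Category names are my own.
ALLOWED = {
    "bid_adjustment",          # device, geo, time-of-day, audience bid tweaks
    "negative_keyword",
    "keyword_mix_refinement",  # modest changes only – nothing that cannibalizes
    "geo_exclusion",
    "audience_targeting",      # arguably fair: the bot acts on audiences too
}

DISALLOWED = {
    "new_ad",                  # the bot won't write new ads, so neither may we
    "new_ad_extension",
    "campaign_setting",        # no significant settings changes mid-Experiment
}

def change_is_fair(change_type: str) -> bool:
    """True if a proposed change keeps the human-vs-bot test fair."""
    return change_type in ALLOWED and change_type not in DISALLOWED

print(change_is_fair("negative_keyword"))  # True
print(change_is_fair("new_ad"))            # False
```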

I’d like to thank Matt Van Wagner, Technical Editor for this Science of PPC series, for asking probing questions that led to the above two paragraphs. Matt will be reviewing a number of future parts of this series, as well. Heinous assumptions, conjectures, and conclusions remain mine alone.

Next time, when someone challenges you to sign off on a “true test” of some methodology or technology, consider that they may have inside knowledge on just how “true” that test is. Get used to saying: “I don’t accept the premise.”

Don’t leave your beverage unattended.

The Science of PPC will now take a much-needed break for the holidays. All the best to you and yours, and see you bright and early next year!

Read Part 11: A Focus Group of One: What Consumers Really Think of Your Google Ads

About the author

Andrew Goodman

Andrew Goodman is Founder & President of Page Zero Media. His accomplishments include writing the first-ever full-length book about Google AdWords, heading up this Google Ads Premier Partner agency, maintaining a string of 48 consecutive speaking engagements at Search Engine Strategies in North America, co-founding a startup called HomeStars, and wearing the dickens out of a lab coat at the SMX Advanced session called Mad Scientists. His active lifestyle requires increasingly elaborate bowls of yogurt. He works from the Toronto office as well as a home office in Fredericton, NB.
