Automation has always made sense for managing aspects of complex PPC accounts. Our industry has lived through many iterations of excessive manual machinations, followed by equally perverse third-party automations devised to relieve us of toil. Eventually, either the platform (Google Ads) comes up with an elegant native solution, or all parties settle on a common-sense approach to what used to be superfluous busywork and gamesmanship.
Take, for example, auction dynamics. Before Google AdWords ruled the roost, a PPC platform called Overture made advertisers’ bids visible. (Even if it hadn’t, some clever advertisers would have found ways to guess at other advertisers’ bids.) Advertisers became attuned to the scourge of “bid gaps”: paying more than necessary to outrank the next-highest-bidding advertiser in a given keyword auction, i.e. to appear higher on the page. At first, advertisers attempted to close these gaps by simply watching closely and adjusting bids as necessary. Talk about an unnecessary extra step!
More diabolical than watching out for bid gaps was the process of “bid jamming” – trying to drain a rival’s funds by making sure your bids were frequently just one penny below that advertiser’s bids – especially if they were bidding unreasonably high.
Third-party software soon came along to allow advertisers to automate their execution of these bid maneuvers. Some of the tricks were even more diabolical than “bid jamming.” One tool had a setting called “punish.” I have no idea what it did – I just hope it restricted its scope to PPC bidding.
Imagine if PPC advertising had been born in the 1980s. Depeche Mode would have written a song about it.
Automation to the Rescue?
In 2002, Google rolled out AdWords Select. Bids were no longer published, and bid gaps were automatically eliminated in the click pricing: since that day, you automatically pay just one cent more than the minimum needed to stay ahead of the next advertiser in the ranking. Rank (and today, eligibility) in the auction was determined by a formula incorporating Max Bid × CTR (relevance), so you couldn’t assume a higher rank automatically represented a higher bid.
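To make that concrete, here’s a toy version of that early auction. This is a simplified model (Ad Rank as Max Bid × CTR, second-price-style click pricing); the numbers and the exact formula are illustrative, not Google’s current auction:

```python
# Simplified illustration of the early AdWords-style auction (assumption:
# Ad Rank = Max Bid x CTR, and each winner pays just enough to stay ahead
# of the next advertiser, plus $0.01).
advertisers = [
    {"name": "A", "max_bid": 2.00, "ctr": 0.05},   # Ad Rank = 0.100
    {"name": "B", "max_bid": 3.00, "ctr": 0.02},   # Ad Rank = 0.060
    {"name": "C", "max_bid": 1.50, "ctr": 0.03},   # Ad Rank = 0.045
]

# Rank by Max Bid x CTR -- a higher bid alone doesn't guarantee a higher position.
ranked = sorted(advertisers, key=lambda a: a["max_bid"] * a["ctr"], reverse=True)

for winner, runner_up in zip(ranked, ranked[1:]):
    runner_up_rank = runner_up["max_bid"] * runner_up["ctr"]
    # Actual CPC: the minimum bid that still beats the next advertiser's rank, plus a penny.
    actual_cpc = runner_up_rank / winner["ctr"] + 0.01
    print(f'{winner["name"]} pays {min(actual_cpc, winner["max_bid"]):.2f} (max bid {winner["max_bid"]:.2f})')
```

Note how advertiser B, despite the highest Max Bid, ranks below A and pays well under its maximum: the bid gap is closed automatically, with no watching or jockeying required.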
Poof! Pointless bid jockeying eliminated. Focus could move on to every other thing. And there remained many other things.
Fast-forward to today’s more complex auction (Quality Score includes Expected CTR, Landing Page Experience, and Ad Relevance, for example; Sitelinks, device performance, etc. are also taken into account). Google, as a matter of course, rolls out native platform features that can automate many headaches out of existence, eliminating both pointless busywork and the need for awkward (and costly) third-party automations.
I don’t generally believe in grand-scale third-party automation solutions to help run Google Ads. Those companies spend heavily on R&D only to find themselves sitting on expensive, overwrought white elephants. I do believe in nimble tools that can fill certain gaps in Google’s offering. That’s the beauty of the API ecosystem: it offers choice. A nice compromise is something like Optmyzr: a Swiss-Army-knife kit of nimble PPC automation tools at a reasonable price, sans smarmy salespeople constantly wasting your time and disrupting your workflow. There are a number of other niche, specialized, and nimble tools in the marketplace; it just depends on your needs. An increasing number of advertisers also make use of Google Ads Scripts, or even create original, niche software that interacts with Google Ads via the API. (Our version of this at Page Zero is called Zerobot.)
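For a sense of scale, “niche software that talks to the API” can start very small. Here’s a minimal sketch using the official google-ads Python client; the customer ID, YAML path, and report fields are placeholders, not anything from a real account:

```python
from google.ads.googleads.client import GoogleAdsClient

# Credentials live in a local google-ads.yaml file (placeholder path).
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

# Pull basic campaign performance; a nimble in-house tool would act on rows like these.
query = """
    SELECT campaign.id, campaign.name, metrics.clicks, metrics.cost_micros
    FROM campaign
    WHERE segments.date DURING LAST_30_DAYS
"""

response = ga_service.search_stream(customer_id="1234567890", query=query)
for batch in response:
    for row in batch.results:
        print(row.campaign.name, row.metrics.clicks, row.metrics.cost_micros / 1e6)
```

Everything interesting happens downstream of a report like this, which is exactly why the logic you bolt onto it matters more than the plumbing.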
Smart Bidding
In the past, though, third-party machines have made serious errors. Page Zero has audited and taken over accounts where the very basic rule set wasn’t up to the job. The fact of the matter is, accurate bidding tends to live in a range: as long as you fall within it, the outcome (aggressiveness level, ad position, impression share, etc.) works out about the same. This isn’t the world of Flash Boys, where minuscule advantages add up to millions or tens of millions of dollars as the efficiencies are repeated.
Google’s Smart Bidding is undoubtedly better than many of those old third-party solutions. The machine learning often works well: mistakes get corrected and patterns get recognized. Google also has far more access to complex behavioral data, so its tools may be able to bid on each user session in a more sophisticated way, using something like regression analysis. (Whether it’s fair for Google and Google alone to be able to leverage these behavioral signals is a question someone else can take up with regulators.)
So Who’s Smarter?
Using bid rules or filters (which require some brain work, but not fingers-falling-off, eyes-falling-out drudgery), you can sweep through hundreds or thousands of keywords meeting certain parameters and bid them accordingly. You can also adjust seasonally (or for other reasons) quite nimbly when it comes to general aggressiveness levels. In fast-moving seasons, there is often too much at stake to cede control to a “still learning” algorithm based on promises and models. (Nassim Taleb has choice words for those who risk others’ money and health with blind acceptance of models: “Intellectual Yet Idiot” (IYI)).
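As a sketch of what such a rule-based sweep amounts to (the KeywordRow structure, thresholds, and target CPA below are hypothetical, not any real tool’s interface), the whole idea is a filter, a bid formula, and a seasonal multiplier you control directly:

```python
from dataclasses import dataclass

@dataclass
class KeywordRow:            # hypothetical report row, not a real API object
    keyword: str
    clicks: int
    cost: float
    conversions: int
    current_bid: float

def sweep(rows, min_clicks=100, target_cpa=40.0, seasonal_boost=1.0, max_step=0.25):
    """Apply a simple CPA-based bid rule to every keyword with enough data."""
    new_bids = {}
    for row in rows:
        if row.clicks < min_clicks or row.conversions == 0:
            continue                                   # not enough data: leave the bid alone
        cpa = row.cost / row.conversions
        # Move the bid toward the target CPA, capped at +/- max_step per sweep.
        adjustment = min(max(target_cpa / cpa, 1 - max_step), 1 + max_step)
        new_bids[row.keyword] = round(row.current_bid * adjustment * seasonal_boost, 2)
    return new_bids

# Seasonal aggressiveness is just another multiplier you set deliberately.
rows = [KeywordRow("blue widgets", 250, 900.0, 30, 1.20),
        KeywordRow("widgets", 800, 4_000.0, 50, 0.90)]
print(sweep(rows, seasonal_boost=1.10))
```

The logic is transparent, auditable, and instantly adjustable; nothing here requires a “learning period” before it respects a seasonal decision you’ve already made.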
Sure, I’ll agree that Smart Bidding has worked for us in specific cases, but it’s no panacea. When we are able to turn it on, yes, our powers of analysis will be freed up for other things. And as I mentioned earlier, “there are many other things.”
In fact, those other things are so important, and so impossible for the machines to initiate or “think about” the way people do, that you’re probably not going to have good outcomes with Smart Bidding at all unless you’re quite professional at planning and executing on them (indeed, much better than your competitors).
Here are a few:
- Account structure. How any bid strategy works depends on how you set things up. Bid strategies work on aggregates, not on tiny bits of data, since they aim for aggregate targets. Well, how big are the aggregates? How do you know if there would be any way to improve on the performance of a given ad group or campaign? What campaign types are being used? How many campaigns?
- Keywords. What is the mix of general and more specific keywords? When will you add more keywords or deploy negative keywords? How do you deal with the tug of war (in terms of being mapped to queries) among keywords using the available match types? Where do you set initial keyword bids, and what’s your plan to adjust them based on performance data? When it comes to keywords, I’ve noticed a quirk of performance in Smart Bidding. The ones you’d expect to perform weakly – your most diffuse, but still useful, keywords – might persistently underperform instead of normalizing to the target. A human manager would actively bring them in line, rather than slowly learning about them (or never learning, but continuing to run the experiment forever). How is this optimal? Let me suggest an explanation: when you’re hitting the aggregate target, say ROAS 7.5, Smart Bidding has “done its job.” There’s no rule that says every inefficient (or too efficient) segment needs further tweaking, and Google promises nothing with regard to volume. So Smart Bidding will hit the target satisfactorily, but not with maximum effort (see the arithmetic sketch after this list). To make Smart Bidding smarter, you might be advised to actually pause or delete some of those keywords, so automated bidding effort is funneled through more effective ones.
- Ad creative. At a minimum, it typically takes months of testing to arrive at ads that consistently outperform. That’s not only a function of dataset size, but also of the pace of creativity, collaboration, and behavioral feedback. We make inferences from other campaigns and accounts. Our spouse or child inspires us. A principle of Human-Computer Interaction (for example, “Reduce Cognitive Load”) leads to yet another testing parameter: try shorter ads, or more words at a Grade Six reading level. “Several months” of creative iteration is, in fact, optimistic in many cases. At some point, it might make sense to let the machines take over. But behavior and performance differ at different maturity levels of the human iteration process, so at what point should they take over? There may well be a point, but in the majority of accounts, what they’re “taking over” isn’t good enough yet (garbage in, garbage out) – regardless of whether a machine or a human is picking the winning ad versions.
- A multitude of settings. If you’re advanced, you might use a setting (in ad creative setup) that sends mobile traffic to a different landing page (one that you’ve optimized for mobile). The machines won’t tell you what strategies to come up with, obviously.
- Demographics (to take just one example). Personally, I find that it can be highly effective to incorporate demographic bid adjustments (such as gender) into bidding. Supposedly, the machines are doing this. But humans are better at it in many cases. Fred Vallaeys, in his excellent book Digital Marketing in an AI World, gives the example of an insurance account that should never advertise to anyone under 18, because they wouldn’t legally be able to buy insurance. The machines may waste a lot of money coming to that conclusion, and they’ll potentially never fully stop that type of traffic. (Fred’s example doesn’t exactly square with the settings in Google Ads, since “Under 18” isn’t an available category of user. Those users, I gather, are buried under “Unknown.” BTW, that might be a nice explanation of why “Unknown” performs terribly in some cases, but actually performs better in others, even though the reasons for “Unknown” may be myriad.)
- Audiences. Don’t even get me started. (I’ll cover Audiences in future sections.) Suffice it to say, some audiences perform extremely well, but that’s because they’re remarketing audiences. Black-box audience types like Smart Audience (Google loves names like this) may well be pulling in remarketing audiences in a nontransparent fashion. We prefer to have more control over how different audiences are handled, and over whether they’re remarketing audiences or not.
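To make the aggregate-target point from the keywords bullet concrete, here’s a toy calculation (the segment names and numbers are made up) showing how an account can sit right on a ROAS 7.5 target while a diffuse segment quietly drags along far below it:

```python
# Hypothetical segments sharing one Smart Bidding target (ROAS 7.5). Numbers are invented.
segments = {
    "exact-match brand terms":   {"cost": 1_000.0, "conv_value": 11_000.0},   # ROAS 11.0
    "specific product keywords": {"cost": 2_000.0, "conv_value": 16_000.0},   # ROAS 8.0
    "diffuse generic keywords":  {"cost": 1_000.0, "conv_value": 3_000.0},    # ROAS 3.0
}

total_cost = sum(s["cost"] for s in segments.values())
total_value = sum(s["conv_value"] for s in segments.values())
print(f"Blended ROAS: {total_value / total_cost:.2f}")   # 7.50 -- target met

for name, s in segments.items():
    print(f"  {name}: ROAS {s['conv_value'] / s['cost']:.2f}")
```

The aggregate target is satisfied, so the system has no particular incentive to keep squeezing the weak segment; deciding to prune or restructure it remains a human call.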
Machine Learning Helps, But Human Learning Still Prevails
Machine-learning-based bid strategies have been around for some time. They may well improve terrible, lazy campaign and ad group setups, and they might also do a slightly better job of bidding in cases where the setup is very professional.
That leaves you and me in a conundrum. At what point do we turn the automation on? Doesn’t it usually take a fair bit of iteration to reach the point where the automation can consistently lead to superior business outcomes?
This may be less of an issue in a realm like Shopping, where the sheer volume of individual bid decisions is relentless, and where “setup” involves the (really important) process of feeding all the relevant data into an (ideally large) product feed.
Perhaps that’s my point. Once some aspect of rote work is eliminated by machines, there are other things. Many other things. Things that 95%+ of account managers have no aptitude for, or interest in. Smart Bidding won’t fix that. Great accounts are going to do pretty well either way: with conscientious human effort around bid levels, or smart bid automation (if you watch it like a hawk for bias, cannibalization, etc.). But how do you get to a great account? How do you keep it great? That’s a non-trivial question.
Read Part 7: How Google Ads Exploits Cognition to Inflate CPCs