Does Smart Bidding Make Life Simpler? That’s a Definite Maybe

This is part 22 of 50 in 'The Science of PPC' blog series
The Science of PPC is a 50-part series by Andrew Goodman of Page Zero Media, with new content published every Thursday. The goal of the series is to give readers enough insights, tips, tricks, and examinations of real-world methodologies to walk away with some feeling of mastery and confidence in their own powers of PPC analysis.

Online advertising scales, if you let it. Once something scales, complexity threatens to drown us. That’s why we must use Occam’s Razor at times.

Recognizing this, Google has packaged Smart Bidding using shorthand terminology and easy implementation in a bid (pun unintentional) to reduce the number of decisions facing human account managers. That could help you. On the other hand, if you’re a more advanced practitioner, you may be excused for thinking that complexity doesn’t disappear just because you sweep it under the carpet. You might chafe at losing control, as I’ve discussed previously.

In previous parts of this series, I’ve looked at some of the complicated moving parts involved in bid automation. Most recently, I warned that Maximize Conversions and other “Maximize” settings among Google’s Smart Bidding options are tantamount to handing Google a blank check to make your daily budget into the only really actionable and meaningful KPI.

Target CPA and Target ROAS are the more mainstream Smart Bidding choices. How they work, in principle, is breathtakingly simple. In Search campaigns under Smart Bidding, the amount you bid (your Max CPC) for any given user session (a paid ad impression on a search engine results page, triggered by a query that maps to one of the keywords in your campaign) is your target value multiplied by the predicted conversion rate for that session. So if you’re eligible to show for a query that maps to your keyword +exotic +grass +seed, you’ve set a target CPA of $20, and the predicted conversion rate for that user session is 2.0%, then the math works as follows:

Target ($20) x expected conversion rate for the current user session (0.02) = your CPC bid (or more precisely, how much you’re charged for this click if it occurs… sorry for the curveball).

So, in this case, Smart Bidding will bid 40 cents… or more precisely, it will bid as if you’re likely to pay 40 cents by bidding slightly higher than 40 cents. Simple, right?
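
If it helps to see that arithmetic written out, here’s a minimal Python sketch. It only computes the expected charge per click; as noted above, the actual bid would sit slightly above it, and the real system runs this kind of prediction per auction with far more inputs.

```python
def tcpa_bid(target_cpa: float, predicted_cvr: float) -> float:
    """Expected charge per click under Target CPA: target x predicted conversion rate."""
    return target_cpa * predicted_cvr

# The exotic-grass-seed example from above: $20 target, 2.0% predicted conversion rate.
expected_charge = tcpa_bid(20.00, 0.02)
print(f"Expected cost of this click: ${expected_charge:.2f}")  # $0.40
```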

That’s what a rational human would do, as well… but the human might not have as much processing power and certainly not the ability to recognize multivariate patterns nearly instantly with each user session (then again, it’s worth asking yourself under what conditions humans do as well or better than the bots).

Accounting for user intent

You don’t get the click every time, of course. More often than not, that user will click on something else, or nothing at all.

In some cases, for very similar queries, the user might not be as likely to convert at this point in time. Assuming it has data consistent with a lower-intent user session, Smart Bidding will bid less aggressively and you’ll see less volume (sometimes the bid will be so low that your ad isn’t eligible to show for that session at all). The expected cost per acquisition should be identical for each user session, regardless. (It’s not clear that this is precisely how it works, of course. Google’s technology could be permitted to “dog it” and “do worse” some of the time as long as it’s satisfying your aggregate goal. And even then, it might simply stink at this for an extended period. You’re still paying.)

The actual CPC on these less-aggressively-bid sessions would be lower, much as it would be if you had manually bid down a device type, geographic location, or time of day. Reporting only provides aggregates, but in microcosm, the idea is that a lower-intent session gets a less aggressive bid and therefore a lower CPC (if it receives a click at all), and if the prediction is laser accurate, you’d still wind up paying around $20 per conversion for these less favorable, worse-converting clicks.

Simple!
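
To put numbers on the “identical expected CPA” idea from the last couple of paragraphs, here’s a toy comparison of a higher-intent and a lower-intent session. The 0.5% figure for the lower-intent session is invented for illustration.

```python
target_cpa = 20.00

for label, predicted_cvr in [("higher-intent session", 0.02), ("lower-intent session", 0.005)]:
    charge_per_click = target_cpa * predicted_cvr      # what the bid works out to
    expected_cpa = charge_per_click / predicted_cvr    # cost per click divided by conversions per click
    print(f"{label}: ~${charge_per_click:.2f} per click, ~${expected_cpa:.2f} per conversion")
```

The lower-intent session costs roughly a quarter as much per click, but the expected cost per conversion comes out at $20 either way, if the predictions hold.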

Target ROAS adds a wrinkle by factoring in the predicted Value/Conversion (also known as average order size or cart size) alongside the expected Conversion Rate, in order to arrive at an expected Return on Ad Spend for a given user session. So if you set your Target ROAS to 5.0 (500%), it works much the same as the Target CPA logic above, except that the system also learns to predict order sizes, so that ROAS rather than CPA becomes the targeted KPI.
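
As a sketch (again with invented numbers), the Target ROAS version of the arithmetic simply divides the expected value of the session by your ROAS target:

```python
def troas_bid(target_roas: float, predicted_cvr: float, predicted_order_value: float) -> float:
    """Expected charge per click that would hit the ROAS target if the predictions hold."""
    return (predicted_cvr * predicted_order_value) / target_roas

# Target ROAS of 5.0 (500%), 2% predicted conversion rate, $150 predicted order value.
print(f"${troas_bid(5.0, 0.02, 150.00):.2f} per click")  # $0.60
```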

All of the above is fairly straightforward.


The complexity, of course, comes in the methodology behind making an accurate prediction. In theory, the machine learning inherent in Smart Bidding should improve accuracy over time as the system gains confidence in the factors (including the keywords and queries themselves, along with your ads and landing pages reflecting different products and/or services – don’t forget those key drivers!) that seem to correlate with a conversion. I reviewed those factors in Part 6.

  • As always, more data is better. The machine learning Google promises to engage to bid accurately on every eligible user query should become more and more accurate with time. As it does so, it may unlock more volume as it gets better at hitting targets. That’s the way it works when humans run the show as well.
  • Hearsay from Google reps suggests, too, that you should try to keep settings stable. Raising or lowering your tCPA target by more than 5-10% (it’s hearsay, so I can’t tell you exactly how this works) is likely to lead to a “reset,” with some or all of the accumulated history lost. The system likes to deal with a predictable universe of potential user sessions, since there are so many of these. (A rough sketch of pacing target changes gradually appears after this list.)
  • In a more advanced vein, Smart Bidding will have varying degrees of success depending on the playing field you provide for it. Suitability of Smart Bidding in fact varies wildly depending on campaign characteristics. If a campaign has many disparate types of intent, it might perform more erratically than if it had a relatively consistent set of behaviors.
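
Since the 5-10% figure is hearsay, treat the following as a toy guardrail rather than a documented rule: a sketch of walking a target toward a new value in small steps instead of changing it all at once. The function and the 10% cap are my own invention.

```python
def step_toward_target(current: float, desired: float, max_step: float = 0.10) -> float:
    """Move a tCPA target toward a desired value in increments no larger than
    max_step (10% here), so the system isn't jolted by one big change."""
    cap = current * max_step
    change = max(-cap, min(cap, desired - current))
    return round(current + change, 2)

# Walking a $20 target toward $30 in <=10% moves: 22.0, 24.2, 26.62, 29.28, 30.0
target = 20.00
while abs(target - 30.00) > 0.01:
    target = step_toward_target(target, 30.00)
    print(target)
```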

Your overall financial performance would also almost surely be worse if your campaign structure is lazy and amalgamated, such that some very valuable acquisitions are treated the same as less valuable ones. You might want different targets for segments of the business based on profitability, shipping costs, lifetime value, inventory levels, etc. In lead-based businesses, or in Conversion Goal setups that focus on cart adds as opposed to (less frequent) sales, the conversion rates from one part of the sales funnel to the next could vary by product or by country. One size may not fit all.
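
For example, a rough way to avoid one-size-fits-all targets is to derive each segment’s target from its own economics. The segments, margins, and the “share of margin” constant below are all invented for illustration; the point is only that the resulting targets come out very different.

```python
# Illustrative only: derive a per-segment target CPA from what each segment can afford.
segments = {
    # segment: (average order value, gross margin, repeat-purchase multiplier)
    "premium_furniture": (1800.00, 0.35, 1.2),
    "accessories":       (125.00,  0.45, 1.1),
}

ACCEPTABLE_SHARE_OF_MARGIN = 0.5  # willing to spend half of the margin on acquisition

for name, (aov, margin, repeat) in segments.items():
    target_cpa = aov * margin * repeat * ACCEPTABLE_SHARE_OF_MARGIN
    print(f"{name}: target CPA ~ ${target_cpa:.2f}")
```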

So campaign structure matters.

Put Smart Bidding to the test

Remember, you’re going to lose control of almost every means of optimizing campaigns once you turn on Smart Bidding. You won’t be able to force a more even ad rotation, and you won’t be able to set keyword bids or bid adjustments for things like geography or demographics. You can control the broad outlines of your target geography, and you can employ negative keywords, but you can’t do much else.


I always recommend that advertisers test the efficacy of Smart Bidding periodically by setting up a Campaign Experiment pitting Smart Bidding against a different bid strategy such as Manual CPC. This isn’t foolproof (what if the Manual CPC campaign had been neglected in the past? You’d be testing automated effort against little to no human effort). Still, it’s a handy antidote to the “just trust us” approach that Google seems to prefer. Treat Google advice as interesting hearsay, but do your own research and testing.
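
When an experiment wraps up, it’s worth going beyond eyeballing the dashboard. Here’s a minimal sketch of a two-proportion z-test on conversion rates between the two arms; the click and conversion counts are made up, and in practice you’d also compare CPA and allow for conversion lag.

```python
from math import sqrt

def two_proportion_z(conv_a: int, clicks_a: int, conv_b: int, clicks_b: int) -> float:
    """Rough z-score for the difference in conversion rate between two campaign arms."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / se

# Manual CPC arm vs. Smart Bidding arm of a Campaign Experiment (invented data).
z = two_proportion_z(conv_a=90, clicks_a=4000, conv_b=120, clicks_b=4100)
print(f"z = {z:.2f}")  # |z| above ~1.96 suggests the difference isn't just noise
```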

Target ROAS receives mixed reviews from both Google reps and advertisers. It should work on larger volumes, but it may behave erratically in some cases. By all accounts, tCPA is more predictable.

I’ve long advised advertisers to be wary of “spiky” ROAS figures (high standard deviation is common) when gauging ad tests, for example. With a certain amount of data, the Conversion Rate of an ad version could be a more reliable criterion for long term success than ROAS. An ad version (especially one that is not vastly different from the other ads in rotation) might “get lucky” with a large order or two, skewing the test. The ad version might not have caused the large order size in this case, although you should test ways to shape ad copy to encourage larger order sizes if you can. It’s possible that tROAS might be fooled into foreshortening a full and patient test of all ad versions. (Don’t even get me started on Google’s bias to CTR, due to its direct impact on Google’s revenues.)
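
Here’s a quick illustration of that spikiness, with invented order values: one lucky $3,000 order swings the ad version’s ROAS by roughly a factor of seven, while its conversion rate barely moves.

```python
from statistics import stdev

spend = 500.00
clicks = 1000
order_values = [60, 75, 80, 55, 70, 65, 90, 3000]  # one lucky $3,000 order among ordinary ones

print(f"ROAS with the big order:   {sum(order_values) / spend:.2f}")       # ~6.99
print(f"ROAS without it:           {sum(order_values[:-1]) / spend:.2f}")  # ~0.99
print(f"CVR with / without it:     {len(order_values) / clicks:.1%} / {(len(order_values) - 1) / clicks:.1%}")
print(f"Std deviation of order values: {stdev(order_values):.0f}")
```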

Despite some shortcomings, ROAS is often pursued by our e-commerce clients as the main KPI, so Target ROAS would be a logical setting when it comes to setting up Smart Bidding. Nobody wants the bot to bid according to which user sessions can generate a bunch of $12 purchases, unless that’s actually your strategy.


In my view, tapping the types of targeting that might encourage substantially higher orders is a practice that may well be in its infancy. There should be plenty of deep predictors that would help us hold out for the chance (it seems a bit like a lottery ticket) that someone will order a beautiful bed and matching furniture for a total order of $3,000, as opposed to a cute accessory for $125. But does Google have access to them all? Does anyone? Certainly, this may vary subtly depending on the business.

You’ll hear a lot of speculation along those lines. The magic mystery strategy du jour at one point a few years ago seemed to home in on query length. Should we expect a great deal from searches with a greater number of words in the query? I think it’s safe to say you can’t build a business on such insights. There are some practical difficulties in adapting tactics to pursue that strategy (if true), such as using an inordinate number of long-query exact matches. In any case, technologies such as Google Shopping sort of take care of that type of thing. Some long queries will be well recognized by the system. If they’re truly a source of insight, then Smart Bidding (someday) might catch onto patterns like this.

You can improve the performance of Smart Bidding by giving it more diverse ad creative to work with. Unfortunately, when it comes to positioning and creative in keyword ads, a lot of account managers are just mailing it in. Google making available multivariate ad testing capabilities (RSA’s) is one thing; advertisers taking lots of care with designing the experiment seems to be quite another. Many of us are just grabbing handfuls of spaghetti and throwing it at the wall. And sometimes in the SERP, it really shows. This must be frustrating to consumers… or at the very least, a bit dreary.

What about big orders and bulk buying?

Back to the big-order puzzle. One interesting use case for Big Data here would be smartly predicting which buyers will buy things in bulk. Or buyers who aren’t end customers, but rather professionals or vendors of some sort. Perhaps it’s a “near wholesale” purchase by a small business, or by an influencer who is buying for customers and not themselves.
“Spiky” purchase patterns – for example, in scenarios where a typical homeowner might seek a set of door knobs worth $150 in total as a “one-off,” whereas a pro might buy $2,000 worth – can lend themselves to more accurate bidding in the hands of a bot, provided there are enough patterns to notice what’s different about the set of attributes typically associated with pro purchasers. (Some of this remains up to you, human: for example, in how you organize your campaigns, how you design messaging, and how you run remarketing and multi-channel marketing campaigns.)
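
If you have first-party order history, you can at least probe whether “pro-looking” purchases are predictable from attributes you actually observe. A toy sketch with scikit-learn; the features, the tiny dataset, and the assumption that these signals matter are all hypothetical, and none of this plugs directly into Smart Bidding.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per past order: [units_in_cart, ordered_during_business_hours, repeat_visitor]
X = np.array([
    [1, 0, 0], [2, 0, 1], [1, 1, 0], [12, 1, 1],
    [15, 1, 0], [1, 0, 0], [20, 1, 1], [2, 0, 0],
])
y = np.array([0, 0, 0, 1, 1, 0, 1, 0])  # 1 = bulk/"pro" purchase

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[10, 1, 1]])[0][1])  # rough probability a 10-unit daytime repeat visit looks "pro"
```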

A friend of mine is currently running a direct-to-consumer campaign that sells a specific type of high-tech equipment (I’ll refer to it as massage therapy for short). The product is resonating with some purchasers, but physical therapists can afford more and bigger units. The units are constantly in use, so they’re actually realizing a return on their investment – it isn’t just a consumer purchase. And for these professionals, of course, it’s a tax writeoff.

That phenomenon, of course, also applies to office chairs, chainsaws, and printers. 😊

In a better (if not perfect) world, the bots would see who these buyers were, and bid accordingly.

As I’ve mentioned before, Smart Bidding won’t make a bad account or campaign into a good one. It won’t write ad copy or decide on a whole range of factors that drive business success. In the right circumstances, it might save you some effort in bidding, and it might help you bid more accurately.


But Smart Bidding is of course single-minded. It doesn’t engage in the kind of gamesmanship that informs human approaches to dynamic, multiplayer scenarios. (Chess is a piece of cake for a computer. The real world isn’t.) For example, a human has the power to change their mind selectively and deal cagily with exceptions. The human can drive even harder for a bargain (bargain CPC’s, I mean) in pieces of a campaign where it might be possible to be extra frugal. The computer, by definition, is what Herbert Simon referred to as a “satisficer.” Especially because it works for Google, the bot will accept mediocre performance if it can get away with it. Unless you prove to me otherwise, there’s always a chance that Smart Bidding will be selectively “dogging it” to enhance Google’s profitability. If you can live with that, you might be able to use Smart Bidding to your advantage.

Some tricky phenomena to watch, as previously warned:

  • Cherry-picking: Bidding up excessively on certain audiences (especially Remarketing audiences), showing strong results based in part on “easier conversions.” There are some sophisticated means of checking on this, but I don’t believe Google is obligated to display Audience statistics consistently or accurately, so we really are dealing with a black box here. But you can monitor statistics (use Google Analytics too) related to any audiences you have explicitly set up and be suspicious of absurdly high CPC’s on this traffic (a rough sketch of such a check follows this list). Also, avoid commingling impressions and clicks from Display and Search, especially if remarketing audiences could be involved. (Set campaigns to either Search or Display.)
  • Cannibalization: Taking impressions, clicks, and conversions away from special-purpose campaigns (such as Brand or DSA) you’ve built around a technique to force higher ROI, lower CPC’s, etc., in order to “make its numbers” in the campaign in question. Your primary defence against this may be negative keywords. But you might end up back with Manual CPC if Smart Bidding isn’t behaving from this standpoint.
  • Match types now mean nothing: If you mix and match match types within ad groups, your metrics will look different at the Keyword level. The range of keywords that your ads are mapped to may shift, and it could be frustrating to watch. It might tempt you to pause keywords that aren’t performing as well. Blanket advice isn’t warranted here, as the patterns vary significantly from account to account. But personally, this is one aspect of Smart Bidding that often drives me back to Manual bidding: the fluid nature of how Smart Bidding prioritizes some keywords over others as compared with the previous manual approach. Who’s on first?
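
As flagged in the cherry-picking bullet above, one simple habit is to compare each audience’s average CPC against the campaign-wide average from a report export. A minimal sketch; the column names mirror what you might pull into a CSV from an audience report, not an official API.

```python
import csv

SUSPICION_MULTIPLE = 2.0  # flag audiences paying more than 2x the campaign-wide average CPC

def flag_expensive_audiences(path: str) -> None:
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    campaign_cpc = sum(float(r["cost"]) for r in rows) / sum(int(r["clicks"]) for r in rows)
    for r in rows:
        clicks = int(r["clicks"])
        if clicks and float(r["cost"]) / clicks > SUSPICION_MULTIPLE * campaign_cpc:
            print(f'Check audience "{r["audience"]}": '
                  f'CPC {float(r["cost"]) / clicks:.2f} vs campaign average {campaign_cpc:.2f}')

# flag_expensive_audiences("audience_report.csv")  # hypothetical export with audience, clicks, cost columns
```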

Target CPA and Target ROAS might drive amazing results for you. Certainly, being relieved of the effort of optimizing various elements of campaigns may be an advantage. You might consider designating some parts of an account for Smart Bidding experiments while working with others manually. That way, you’re staying sharp, building, learning, testing, and all of those fine human things.

Read Part 23: On Lifetime Value & the Overreach of Predictive Analytics

About the author

Andrew Goodman

Andrew Goodman is Founder & President of Page Zero Media. His accomplishments include writing the first-ever full-length book about Google AdWords, heading up this Google Ads Premier Partner agency, maintaining a string of 48 consecutive speaking engagements at Search Engine Strategies in North America, co-founding a startup called HomeStars, and wearing the dickens out of a lab coat at the SMX Advanced session called Mad Scientists. His active lifestyle requires increasingly elaborate bowls of yogurt. He works from the Toronto office as well as a home office in Fredericton, NB.
