Google’s recent decision to cease reporting on low-impression search queries – a minor nuisance, at first glance – appears to unleash more complexity than one might think. It has certainly produced strong opinions.
The long and the short of it is this: if, as an advertiser, you were accustomed to a report showing you something like 10,000 unique queries (mapped to the keywords you’re bidding on) in a campaign in a given month (you would pull that by selecting Keywords > Search Terms), you might now expect that list to shrink to only 4,000, or 3,000, or even 2,000. Credible sources across the industry confirm this typical scenario, and it syncs with our spot checks of a number of key accounts here at Page Zero. A reduction of 70% or more in query detail is more common than not. On the face of it, that is a dramatic loss of visibility.
This stands to reason, of course. To this day, 20-25% of search queries are entirely unique, never before searched, and possibly, never to be searched for again. It’s perhaps the most compelling example of the Long Tail phenomenon, which is why regulars in our industry began referring to head (very frequently searched, just a few exact search terms, representing a large spend), tail (very infrequently searched, very numerous, and potentially representing a large spend), and torso terms (in between head and tail; a longish list of exact searches of interest to marketers, not too hard to identify unless you’re lazy, important to cover and to optimize around, and again, potentially representing a healthy percentage of one’s spend).
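To make the head/torso/tail distinction concrete, here is a minimal sketch of how one might bucket an exported search-terms list by impression volume. The cutoffs and sample queries are illustrative assumptions only, not Google’s definitions or anyone’s official thresholds.

```python
# A minimal sketch of bucketing a search-terms export into head / torso / tail
# by impression volume. Cutoffs and sample queries are illustrative assumptions.

from collections import defaultdict

# Stand-in (query, monthly impressions) pairs; a real export would be far longer.
search_terms = [
    ("running shoes", 120_000),                            # head: few terms, huge volume
    ("best trail running shoes for overpronation", 650),   # torso: identifiable, worth optimizing
    ("do trail shoes stretch after a wet marathon", 1),     # tail: rare, maybe never seen again
]

def bucket(impressions: int) -> str:
    if impressions >= 10_000:
        return "head"
    if impressions >= 100:
        return "torso"
    return "tail"

by_bucket = defaultdict(list)
for query, impressions in search_terms:
    by_bucket[bucket(impressions)].append(query)

for name in ("head", "torso", "tail"):
    print(f"{name}: {by_bucket[name]}")
```

In a real account, the tail bucket dwarfs the other two in sheer count of terms, and that is precisely the slice the new reporting threshold hides.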
This loss of insight-spawning search query reporting is why analysts like Ginny Marvin have recently taken to calling the result “invisible spend.” Hmmm. That doesn’t sound fair. Indeed, over at Google’s competitor in the search space, Microsoft, Christi Olsen has recently thrown major shade at the big G for this move.
Distilled down to a composite complaint, the opinion of intermediate-to-advanced PPC practitioners and tool vendors runs roughly as follows:
- Google’s claim that the move has to do with user privacy is not to be trusted;
- It is a revenue grab (exactly how isn’t clear, but one mechanism could be that more low-quality query inventory gets sold, and at a higher average price, since it’s an auction);
- It is a way of confiscating valuable business intelligence from paying advertisers, stowing this in Google’s black box (advantage, Google);
- It means that advertisers will apply insights from Bing queries to their Google campaigns, rather than the other way around (disadvantage, Microsoft);
- It serves as another stick (not carrot) to herd advertisers onto Google’s automated bidding solutions and away from the option of having a human analyst make judgment calls on what consumer/prospect intent to pay for, and what not to pay for (or, as I emphasized last week, simply pay less for).
No doubt some of the above is true. But on reflection, my own response to the recent change makes more of the long-term context and less of the short-term revenue impact. The evolution of the Google Ads platform needs to be seen as a whole, which I attempt to do below.
Minor nuisance for some, major problems for others
What, if anything, changes in our workflow or in account performance in the wake of the move (as opposed to in our philosophical stance on transparency and data hoarding)? After all, we’re still seeing a lot of query data. Personally, I don’t scroll down to the 26th page of query data to decide what to negative out.
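Still, for those who do comb the report, the triage at stake looks roughly like the sketch below: flag queries that spend real money without converting as negative-keyword candidates. The column names, figures, and threshold are assumptions for illustration, not anyone’s actual workflow.

```python
# A minimal sketch of the triage a search terms report supports: flag queries
# that spent meaningfully but never converted as negative-keyword candidates.
# Column names, figures, and the threshold are assumptions for illustration.

import csv
import io

# Stand-in for a downloaded Search Terms export.
export = io.StringIO(
    "search_term,clicks,cost,conversions\n"
    "blue widgets,40,52.10,3\n"
    "free widgets diy,35,41.00,0\n"
    "widget jobs near me,12,9.80,0\n"
)

MIN_WASTED_COST = 10.00  # arbitrary threshold for "worth negating"

negative_candidates = [
    (row["search_term"], float(row["cost"]))
    for row in csv.DictReader(export)
    if int(row["conversions"]) == 0 and float(row["cost"]) >= MIN_WASTED_COST
]

# Queries that fall below the new impression threshold never reach this list,
# even though collectively they can waste just as many dollars.
print(negative_candidates)  # [('free widgets diy', 41.0)]
```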
For me and my colleagues, the decision is worrisome but perhaps not catastrophic. From what I’ve heard, though, it’s causing major problems for some advertisers.
I’ve recently had the opportunity to discuss the issue with two of the most highly advanced PPC experts on the planet. One remains anonymous. The other is Optmyzr Founding Partner Frederick Vallaeys. Fred is also an ex-Googler who served as Google’s AdWords Evangelist.
There isn’t any particular consensus across our three viewpoints. If you’ve ever talked PPC in depth with any of us, you won’t be surprised by that. We rarely disagree outright, but we do emphasize different aspects of similar problems.
Our anonymous analyst highlights a range of painful declines in advertiser performance that he can track as a result of the change. Take this scenario: what if a large swath of weird matches cropped up in an area vital to your big-budget spend in a highly idiosyncratic vertical? What if those matches were low-frequency and invisible? What if ROAS dropped by 35% instantly? The scenario isn’t hypothetical. Our analyst is stating it as fact, something that has happened in the past 30 days. Business owners and their service providers, given all the relevant query data, may have very little problem correcting such issues and thus maintaining profitability. Without it, they’re basically forced to take on board a large dose of non-converting traffic they cannot diagnose. The choice is either to accept lower volume (say, a sales decline of 20%) or to swallow a financial loss. And that can be a tough pill to swallow when it’s not due to competition or a weak business model but, rather, to inappropriate keyword/query matching at Google’s end.
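To see how those numbers hang together, here is a stylized back-of-envelope model. It is my own illustration, not the analyst’s actual account data, and it assumes a fixed share of spend now flows to matches that don’t convert and can no longer be seen or excluded precisely.

```python
# A stylized back-of-envelope model of the choice described above. All figures
# are illustrative assumptions, not a real account's numbers.

spend = 100_000.0      # illustrative monthly spend
roas_target = 4.0      # revenue per ad dollar when matching behaves
wasted_share = 0.35    # assumed share of spend going to invisible, non-converting matches

converting_revenue = spend * (1 - wasted_share) * roas_target

# Option A: swallow the loss -- keep spending, and ROAS degrades with the waste.
roas_now = converting_revenue / spend
print(f"Swallow the loss: ROAS {roas_target:.1f} -> {roas_now:.2f}")   # 4.0 -> 2.60

# Option B: accept lower volume -- tighten matching so aggressively that (assume)
# 20% of genuine sales are cut along with the junk traffic.
volume_sacrifice = 0.20
revenue_b = converting_revenue * (1 - volume_sacrifice)
spend_b = spend * (1 - wasted_share) * (1 - volume_sacrifice)
print(f"Accept lower volume: sales down {volume_sacrifice:.0%}, "
      f"ROAS back to {revenue_b / spend_b:.2f}")                        # back to 4.00
```

The point of the toy model is simply that neither branch restores the pre-change position: one gives up margin, the other gives up volume.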
In keeping with much of the tone of his book Digital Marketing in an AI World, Vallaeys asserts that advanced human analysts and campaign managers are becoming more valuable than ever. After all, much of the routine of search advertising in days gone by never got past intermediate-level difficulty, though you wouldn’t have been able to break that to the self-back-patters at conferences like SMX Advanced (my quip, not Fred’s). Today’s discipline is truly an advanced course, with layers and prerequisites. Dabblers and newbies will be hopeless with the many layers and building blocks that go into campaign outperformance, including today’s (Google-provided) automations, third-party tools, and various hacks. Beginners even struggle with reporting.
Consider campaign structure
Although it would be an oversimplification to say “I tend to side with Fred,” the hot take coming from many experienced practitioners is missing one key piece of perspective – a perspective that I, perhaps unwelcomely, tried to bring to a conference session or two. The subject matter: campaign structure. How do you build an account for best results? Departing from the common sense that ruled our industry in the 2002-2010 era, weird, proprietary, show-offy, complex, and extremely granular account structures had begun growing in popularity by 2013. Usually, in my view, these were based on a cognitive blind spot: the confidence that a motivated human being would have the capacity to go in and deal with all that mess – to control it. Nothing wrong with that in theory, but (as tool providers like Vallaeys take pains to remind us) it’s basically beyond human capacity to properly manage an unwieldy, hyper-granular campaign.
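A bit of purely illustrative arithmetic shows how quickly that capacity runs out; none of these counts describe a real account.

```python
# Purely illustrative arithmetic on why a hyper-granular build outruns human
# attention; the counts below are assumptions, not any real account's structure.

keywords = 1_000        # distinct keywords the advertiser wants covered
match_types = 3         # exact / phrase / broad broken out separately
geo_splits = 5          # campaigns duplicated per region
device_splits = 3       # desktop / mobile / tablet breakouts

ad_groups = keywords * match_types * geo_splits * device_splits
minutes_each_per_week = 1   # a single minute of weekly attention per ad group
hours_per_week = ad_groups * minutes_each_per_week / 60

print(f"{ad_groups:,} ad groups -> ~{hours_per_week:,.0f} analyst-hours per week "
      f"at one minute apiece")
```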
It goes without saying that in recent years, Google willfully broke that model (analogy: a farmer “breaks” a pack mule) by introducing a new wrinkle: flagrantly disregarding the rules around keyword match types. All of them! Some practitioners (what Vallaeys refers to as “anyone clinging to old PPC best practices”) merely redoubled their efforts to police search query reports to keep their meticulously overthought structures from breaking down.
An extreme example of the control-freak campaign style was the now-legendary SKAG (the single-keyword ad group – and yes, this technique really did get its own acronym!). I never “got” this trend. And never will.
From time to time, Google’s product evolutions provide us with a clear editorial and even epistemological stance on strategy, workflow, and scale in advertising. For example, Google rolled out an architecture that now goes by no particular name but was originally called enhanced campaigns. With that architecture, many of us no longer had to break out large numbers of campaigns to account for different geographies and device types. Google’s rebuke of unnecessarily overwrought workflows was heard loud and clear at that time.
Granularity and accountability were among the incredible traits of this kind of advertising that pulled us in when it was invented. Even as the sands shift under our feet, these haven’t gone away. But a much busier workday has grown up around the original routines of our industry, and Google thinks long and hard about how to relieve advertisers of unsustainable levels of complexity. As things stand today, I think it’s possible for streamlined workflows and sometimes freakish levels of control to coexist.
“Helping the little guy” or hurting the big one?
Related to that, our anonymous analyst raised a theme: the sometimes-facile Google bias towards “making things easier for the little guy.” It’s true: even advanced practitioners and midsized companies can get stretched for resources. And very small advertisers are hopelessly lost unless they can get away with spending less time on setting up and optimizing campaigns (and thus, for such resource-strapped advertisers, training-wheels variants of the platform may help more than they hurt).
Indeed, partly for reasons of business privacy and despite heavy costs, some enterprise-sized advertisers have turned to third parties like Adobe or Salesforce for analytics and budget allocation solutions, because they’re unhappy with Google’s and Facebook’s privacy overreaches (this according to an anonymous source at Salesforce). Could it be, then, that curtailing available query data is, in part, a directly anti-competitive move against large competitors in Google’s ecosystem? Those competitors and their customers (large enterprises) are, in this view, more concerned about consumer privacy than Google is. And large enterprises don’t want Google to see as much of their private data as it currently can. As such, Google’s move retaliates against directly competing tech companies (e.g., $CRM, $ADBE, and of course dozens of other large-cap, data-rich tech companies) and their clients in a way that isn’t immediately apparent. This doesn’t strike me as far-fetched speculation, given that Adobe and Salesforce alone reach nearly half of Google’s current market capitalization. It’s unthinkable to Google that they be surpassed by such players lurking in plain sight, so they’ve made moves to protect a formerly vulnerable flank.
Although many of us are coping fine, there is some major pain being felt right now by some advertisers. Some types of campaigns need query control more than others – B2B, for example. Shopping campaigns are rife with weird query matching, and it doesn’t seem fair to hide any of it, let alone a large swath of it. And this pales in comparison to the assault on transparency inherent in the newest campaign types, such as Smart Shopping. Knowledge of, and control over, both queries and placements is not only a matter of workflow; it’s a matter of integrity in these channels. So although I think we can continue to drive performance with less query data, just as Facebook advertisers may now have to do as that company slides away from granularity in Audience definition, it’s a worrisome trend. When it comes to search query reports, I join other advertisers in the plea to “put ‘em back the way they wuz.”
Read Part 28: Taxonomy is Sexy: The Power of Labels for PPC