In the many friendly sales conversations I’ve had with prospects over the years, I’ve occasionally been asked to cite three things that make our agency stand out from others. In my mind, I’m thinking: “Three? How about fifty? How long do you have?”
And now, I’ve just said that out loud.
It’s put-up-or-shut-up time.
So, how do real pros stand out from the wannabes?
I’ll reach into my gym bag of sports analogies early and often.
Have you noticed that something strange happens when a position player has to come in off the bench to pitch the 15th inning of a baseball game because he’s the only eligible player left to take the mound? He’s usually able to lob a few lame tosses over the plate and eventually get the other guys out.
At first glance, a player in uniform throwing a ball toward a batter looks more or less the same as a professional pitcher.
The level of skill – to say nothing of understanding – is nowhere close to the same, however. Leave the non-pitcher in for long enough, and they’ll be giving up walk after walk, hit after hit.
It’s the “looking pretty much the same” part that can fool us in digital marketing, as well. Everyone’s out there trying. A tiny minority are really nailing it.
What’s the difference?
In my experience, the difference between good and great PPC results (i.e. great account management leading to sustained success) comes down to the comprehensiveness and depth of the analysis. That comes from practicing one’s craft for years, collaborating with great colleagues, and working in real-world pressure situations. It also helps if some people on the team are creative in applying models and analogies from various industries or from their formal educations. You’re probably not going to pick that up reading the forums.
Great PPC professionals, ideally, are part of great teams, amplifying the effect.
Safety goggles not required
Google Ads is a testing lab. If I can help you find your way around that lab, I’ll have done my job. Or you might get fed up trying and hire our agency for full-service PPC campaign management. That’s my job, too. 🙂
I mean, there is even a feature called Campaign Experiments in Google Ads that is explicitly positioned as a kind of laboratory – the Experiment group in a campaign experiment is labeled with a cute little beaker icon!
But of course, the whole process of performance marketing is rife with experimentation. You’re tripping over various kinds of forensic evidence everywhere you turn inside the Google Ads platform. That’s what makes it so addictive to some of us, frankly.
It’s disappointing when, as experts, we’re asked to remain at the beginner level. You shouldn’t put up with this.
Look deeper
Recently, I looked over a keyword report sent by a helpful person at a large company that controls our universe. The “dump” was pretty boilerplate, and much of the implied advice (bid up or down) was misleading. It was most certainly incomplete, because keyword performance only makes sense as part of a more comprehensive, ad-group-level method of optimizing accounts. Blips occur in small datasets. Various anomalies and optimization opportunities are interrelated. Match types should be “stacked.” Broad keywords shouldn’t monopolize all the queries. There is seasonality. Did I mention blips? Randomness is a huge problem with any simplistic, shoot-from-the-hip report.
The other problem with the report in question was bias. It treated any tiny loss in impression share as a screaming opportunity to bid higher. Date ranges were a tad tight, leading to frequent overreactions to limited datasets. Further, no one bothered to check whether someone was already optimizing in their own way, using similar parameters.
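To make that concrete, here’s a minimal sketch, in Python, of one sanity check such a dump skips: refusing to recommend bid changes from samples too small to mean anything. The column names, thresholds, and DataFrame format are assumptions for illustration, not anything taken from the report in question.

```python
# A minimal sketch (hypothetical column names and thresholds) of a pre-filter
# to run over a keyword "dump" before anyone recommends bid changes.
import pandas as pd

MIN_CLICKS = 100   # below this, "conversion rate" is mostly noise
MIN_DAYS = 30      # tighter date ranges invite overreaction to blips

def rows_worth_acting_on(report: pd.DataFrame, days_in_range: int) -> pd.DataFrame:
    """Keep only keyword rows with enough data to support a bid decision."""
    if days_in_range < MIN_DAYS:
        raise ValueError("Date range too tight; widen it before drawing conclusions.")
    return report[report["clicks"] >= MIN_CLICKS]
```

Everything that gets filtered out isn’t necessarily fine; it simply hasn’t earned a verdict yet.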
Opinions about account management aren’t in short supply these days.
But oftentimes, that’s all they are, even though they appear in spreadsheet form and include “data.”
My goal is that this 50-part series will expose readers to enough odd (and some screamingly obvious) insights, tips, tricks, and counterintuitive deep dives into methodology that you will come away feeling some degree of mastery and confidence in your own powers of PPC analysis. You’ll feel like Bianca Andreescu staring across the net at Serena Williams, thinking, “How am I going to go toe-to-toe, match shot for shot, with the world’s best, when no one else can?” and then realizing there is no rule against using your superior skill to pound a forehand winner just a millimeter from the line.
Either that, or you’ll hire our agency.
Yes, this series is biased too. But I hope it helps you, entertains me, and gets us all paid in the end. It’s got to be more rewarding than just living surface-level all the time, surely.
Theories runneth over
If you follow digital marketing, you might notice a high noise-to-signal ratio: surface-level analysis passing for insight. Is it because everyone is selling something? Because everyone is too busy to slow down and discuss how features actually work? Because of a power imbalance that leaves marketers and trade-publication authors merely chasing Google’s or Facebook’s next announcement? The opaque language (“Smart,” “Responsive”) used to describe many complex features inside the ad platforms? It’s a bit of all of these.
On the SEO (“organic search”) side of the equation (which we won’t cover in this series at all), some of you may be aware of a very popular study of organic search ranking factors. The study is based on a survey of what many leading marketers believe are the top ranking factors in Google’s organic ranking algorithm. It’s one thing to ask survey participants if, to the best of their knowledge, they ate a bowl of cornflakes this week. But asking a professional search marketer what she thinks Google might be doing yields a pretty unreliable guide to what Google is actually doing.
Science is inherently hard, though. Often we don’t have full powers of observation. We can’t see all of the ocean floor without wrecking the ocean floor.
Some facts can be checked. When there is a screaming consensus around certain tendencies, myths tend to be easier to combat.
Take the example of how Daily Budgets work in Google Ads. I was the first to write about Google AdWords for wide consumption. In my hastily penned ebook, initially released in 2002, I implored readers to heed the following basic optimization approach: don’t use your daily budget to limit spend if your ROI is poor. Optimize by lowering bids instead (of course, still feel free to set budget caps as tightly as you would like for safety). This is basic math. It can be easily checked. It’s so important (and myths around this still so prevalent), it’s a frequent question in Google’s advertising certification exams nowadays.
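Here’s that arithmetic, sketched in Python with hypothetical numbers (the CPC, conversion rate, and conversion value are invented, and the conversion rate is assumed to hold roughly steady as bids drop, which is a simplification). A budget cap just buys fewer clicks at the same poor economics; a lower bid changes the economics of every click.

```python
# Hypothetical figures to illustrate the budget-cap vs. bid-cut arithmetic.
# Simplifying assumption: conversion rate stays roughly constant as bids drop.

def cost_per_conversion(avg_cpc: float, conv_rate: float) -> float:
    """Cost of one conversion = average cost per click / conversion rate."""
    return avg_cpc / conv_rate

revenue_per_conversion = 80.00  # what one conversion is worth to the business

# Current state: $2.00 average CPC at a 2% conversion rate -> $100 per conversion.
current_cpa = cost_per_conversion(avg_cpc=2.00, conv_rate=0.02)

# Option A: cap the daily budget. Each click still costs the same, so the cost
# per conversion is unchanged -- you just buy fewer unprofitable conversions.
capped_cpa = current_cpa

# Option B: lower the bid so average CPC falls to, say, $1.20 -> $60 per conversion.
lowered_cpa = cost_per_conversion(avg_cpc=1.20, conv_rate=0.02)

for label, cpa in [("budget cap", capped_cpa), ("lower bid", lowered_cpa)]:
    print(f"{label}: ${cpa:.0f} per conversion vs. ${revenue_per_conversion:.0f} in revenue")
```

With the cap, you’re still paying $100 to earn $80; with the lower bid, you’re paying $60 to earn $80, and the cap becomes a safety net rather than a crutch.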
(Fun fact: when we were talking about online advertising ROI back in 2002, Google Analytics certainly didn’t exist. Google AdWords didn’t have its own tracking pixel. Detailed performance measurement of Google AdWords keywords was only possible via third-party conversion tracking / analytics software. And even at that, the early architecture of Google AdWords made it difficult to track conversions down to the keyword level, so most of us just tracked at the ad group level, tagging the various ad versions within each group. It wasn’t too complicated. And it worked just fine.)
Have I mentioned that improper use of Daily Budgeting, and generally inefficient resource allocation, are typical low-hanging-fruit issues we discover when we audit a PPC account? In accounts spending, say, $40,000/month, it’s not uncommon for us to discover $10-15,000 of waste. (That’s egregious, basic-error waste, as opposed to us being picky and unfair just to try to win an account. I don’t believe in “ambush audits.” I respect others’ efforts when they’re professionals just doing their jobs, even when they’re competitors.) When we go to work on an account like that, our fee is actually paid for by the near-term improvement in results… and then some.
Say “no” to showboating
As in (possibly more important) fields like medicine, anthropology, and atmospheric sciences, lively debate is important for achieving breakthroughs. If this series stirs up some productive debates, so much the better. In my opinion, what we don’t want is bland consensus around questionable norms (single-keyword ad groups? oh my!).
But what we also don’t want is showboating. Imagine if an oncologist tried to “impress the boss” or “get noticed in the industry” by deviating inexplicably from the methodologies that offer the best tradeoff between risk and improved health.
That puts experts in a tricky spot. (Not as tricky as the cancer doctor’s, fortunately.) To improve over time, we must be open to experimentation and trying new things. But not all new things are better. Some turn out to be trivial as a means of improving performance.
Why don’t we start out by understanding how things actually work, feature-wise, supported by useful, actionable, and relevant data? From there, we’ll move on to some more advanced concepts. Sound good?
I’ll do this in no particular order. Some parts will be short. Others longer.
Let’s put “good” aside. Let’s go for great. To borrow an on-court cheer from Bianca (or was it Serena? They’re both great, so it scarcely matters): “COME ON!!!”
Read Part 2: Search Engine Advertising: A Brief History