Become Google's Trendsetter

How I Increased Organic Revenue By €850,000+ By Predicting A Keyword's Future Latent Intent


Difficulty: Expert

Category: SEO

Reading Time: 35 Minutes

This Will Change The Way You Look At Your Onsite SEO

Google will frequently - and often dramatically - change the ranking order and appearance of a search engine result page (SERP).

In this post, you'll see how I was able to predict the way that Google was about to change the makeup of their SERP for short-tail, commercial keywords, and capitalise on it in a big way.


This had nothing to do with seasonality. This was a fundamental shift in the perceived sentiment and latent intent for those terms.


It resulted in a client improving some of its most competitive, short-tail keywords' rankings from positions 11+ to 3 or better.


Which, in turn, saw YoY organic traffic increase by over 105%, and incremental, attributable organic revenue rise by 57%.


In real terms, that was an incremental ARR increase of €850,000+.


All without building a single backlink.


And you know me, I *love* building backlinks.

TL;DR

  • Google is constantly adjusting SERPs according to what it thinks is the current latent intent for the keyword
  • It does this by monitoring and measuring ongoing sentiment
  • This isn't limited to seasonality and brand - it applies to commercial, short-tail keywords as well
  • I've found several ways to identify which sentiment Google is most likely to take notice of
  • I have had success both in reacting to changing sentiment by adjusting my onsite SEO, and in predicting where future sentiment is going before anyone else - including Google
  • In doing so, my clients and personal sites have captured keywords' latent intent ahead of time, to the tune of €850,000 incremental organic ARR.
Don't Have Time To Read It All?

Yeah, this post is a biggie. If you fill out the form and sign up to the newsletter, you'll get the article sent to you as a PDF - plus access to part 2 before anyone else.

The Premise

All keywords have a user intent - but some intent is more important than others.

This post works on the assumption that Google will change the ranking order and appearance of a search engine result page (SERP) based on what it thinks a user will expect to see.


There is a decent amount of precedent around this, which has been nicely illustrated by the likes of JR Oakes and the team at Screaming Frog:

Source: @jroakes

What both JR and Dan are showing, over different time frames, is how a site's visibility for a particular keyword can go from nowhere, to highly visible, and back to nowhere in the space of a month.


In these particular examples, this is to do with seasonality (Christmas and Halloween). With the holiday season approaching, Google has given what it believes to be trusted domains more visibility, in order to better match the user intent.

In What Way Could The User Intent Change?

We need to consider that the keywords being used in the examples above are generic, short-tail keywords.


In e-commerce terms, these could be something like "loans", "sneakers" etc. - very high volume terms, but with a multitude of possible searcher intents. For example, someone searching for "sneakers" could be looking for new ones to buy, for images of sneakers, for news about brands and so on.


The primary intent is not explicitly defined in the search term.


For this reason, combined with their super-competitive nature, they're not usually the kind of keywords you want to deliberately optimise for.


Under normal conditions for these keyword types, Google will typically show a SERP that caters for as many potential user intents as possible (if Google itself cannot ascertain the primary intent).


That's why you'll often see results with featured snippets, news stories, images, and video all appearing on one mega SERP.

What Dan and JR's data argues, however, is that as we approach a key shopping season for a particular product (and/or associated brand), Google will give trusted e-commerce sites more visibility for these generic terms.


In this example, Google believes that more people are in the market to buy, and so will push more e-commerce stores to the top of its generic keyword SERPs, in order to match that intent.


Seasonality makes the primary intent a "purchasing" intent, and Google reacts to it.

Here's The Thing:
This Is *Not* Just About Seasonality

This section in 30 seconds or less:

Google isn't just reacting to when a product or industry is popular during the year.

It is setting the tone itself by interpreting prevailing sentiment.

Tesla's Turmoil Reveals How Google Is Constantly Monitoring Sentiment

Turns out libel and defamation isn't good for business.
Who knew?

At the time of writing, $TSLA had lost 23.9% of its value from its all-time high - a peak reached immediately after the market-manipulation speculation that Elon Musk was to take the company private at $420 per share (ayyyyy).


Since that market peak, the share price has tumbled amid several ambien-fuelled rants, including Musk accusing a British cave diver of being a "paedo guy" - something he now faces a lawsuit over.

But that's none of my business. 🙄


Suffice to say, things have not gone well for the company in the last few weeks. And if you were to Google "TSLA" (the NASDAQ code for the company), it looks like everyone is jumping on the bandwagon:

TSLA Example

Search taken from Google.com - US settings - 17th September 2018.
Share price chart removed to save space.

In this SERP, negative sentiment about the company and its stock is clearly prevailing: negative coverage features prominently at the top of the results page, with only 2 definitively positive results. What's also quite startling is the Seeking Alpha article promoted to position #2 of the "traditional" results.


I believe this is evidence that Google will adjust search results based on perceived user sentiment for other SERP categories too - it is not only limited to user intent changing with seasonality.

Is This Google Making A Choice - Or Is It Overwhelmed By Sheer Numbers?

A legitimate counter-argument to this claim would be that Google is just reacting to what it's seeing and crawling.


It's undeniable that there is a strong negative tone surrounding Tesla across the media, which will result in more negative articles, tweets, and videos being produced.


The counter-argument would be that this negative coverage outweighs the positive, and so Google is merely reflecting that.


I have to admit that was my first thought as well, but the data revealed something different.

Over the last 30 days (until 17th Sept), the number of articles with a positive sentiment about Tesla was greater than the number of articles with a negative sentiment.

I was able to extract URLs from Google News and Buzzsumo and collect all of the articles I could find about Tesla (36,321 to be precise) for the last 30 days, with the only filter being that they had "Tesla" in the title.


It seemed like a pretty tasty time to do the analysis, according to the data in Buzzsumo's yearly trend:

The blue bars reflect the number of articles written, while the line is the number of engagements (which we'll get back to). August was clearly peak Tesla-mania.


Next came the sentiment analysis. I used two tools here: Semantria and radian6.


Each has its advantages over the other for particular features, and no sentiment analysis tool is ever 100% correct, but by using both tools - which are among the best on the market - I could be reasonably confident I'd get accurate results.


With a bit of automation, I was able to get all of the 36,321 articles analysed.
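The pipeline itself is nothing exotic. As a rough, illustrative sketch - the real analysis used Semantria and radian6, not the toy word lists below, and every title here is hypothetical - the tally boiled down to something like:

```python
from collections import Counter

# Toy stand-in lexicons. The real analysis used Semantria and radian6;
# these word lists are purely illustrative.
POSITIVE = {"surge", "record", "beats", "soars", "growth"}
NEGATIVE = {"tanking", "crash", "crashing", "lawsuit", "rebellion"}

def classify(title: str) -> str:
    # Crude lexicon scoring: whichever list gets more hits wins.
    words = {w.strip(".,!?\"'").lower() for w in title.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def sentiment_summary(titles):
    # Filter to articles with "Tesla" in the title, then tally sentiment.
    relevant = [t for t in titles if "tesla" in t.lower()]
    return Counter(classify(t) for t in relevant)

articles = [
    "Tesla shares surge on record deliveries",
    "Tesla stock tanking after lawsuit news",
    "Tesla opens new showroom in Oslo",
    "Unrelated headline about fruit",
]
print(sentiment_summary(articles))
```

Swap the toy lexicon for a proper sentiment API and the title list for your Google News/Buzzsumo export, and you have the same shape of analysis at 36,321-article scale.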


Here are the results:

Semantria

Semantria Sentiment Summary

radian6

radian6 Sentiment Summary

With Semantria, neutral sentiment leads the way but then positive sentiment outweighs negative.


With radian6, "full-blown" negative sentiment articles outscore positive ones, but when combined with "somewhat negative/positive" articles, the results even out.  There is even the slightest of biases towards positive sentiment overall.


In any event, if Google was ascertaining sentiment from the sheer number of articles on a particular topic alone, it would not be adjusting its SERP to show a majority-negative sentiment - at least based on this data.

Google Appears To Give Sentiment "Weighting" Based On Multiple Factors

This section in 30 seconds or less:

Psst - say it with me now:

Just. 👏 Like. 👏 Backlinks. 👏

Backlinks, Traffic, Social Shares? Do They All Count Towards Sentiment "Weight"?

This is where I'd coin the term "SentimentRank"
You know...if I was "that" guy.

So, we have a situation where there are more "positive" articles about Tesla than there are "negative". Only marginally so, but importantly, there is definitely no clear majority towards "negative" coverage by numbers alone.


And yet the SERPs being returned are overwhelmingly negative.


Therefore, there has to be some reason, credence, and/or logic behind Google showing it that way.


I refuse to believe that Google would choose it to be this way "at random".


I wouldn't put it past them deliberately manipulating a SERP in order to push their own agenda and propaganda (which doubles as my favourite excuse as to why a site of mine doesn't rank: "it's a conspiracy against me"), but leaving it to chance?


No chance.


There has to be something more at stake, and when I looked at some easy to find metrics, something was very, very clear.


The pages that had more social shares, more backlinks, and more estimated traffic (Ahrefs/SEMRush estimates) were the pages with negative sentiment.


I know, I know - correlation does not equal causation.


However, in this example, it looks as though Google has given some merit behind those metrics.


Let's start with social shares:

Tesla Top Content With Weed

Uh..right, OK. Let's do that without the word "Weed":

Tesla Top Content Minus Weed

Oh for fuck's sake.

Even with all of this marijuanamania (like that's the worst thing he's done?) you can see how negative sentiment is prevailing: the stock is "tanking", the workers are rebelling, shares are described as "crashing" and so on.


Of the top 100 pages by social shares, my research suggests that 56% of articles had a negative sentiment - so in this case, even outnumbering "neutral" coverage.


When it came to backlinks - using both Ahrefs and Majestic - it was even more definitive.


Looking at the URL metrics only (i.e. how many linking root domains the article had), 62% of the top 100 articles by links had a negative sentiment.


And using the Ahrefs API once more, I was able to use the traffic estimate tool to find that 64% of the top 100 articles by traffic had a negative sentiment.
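If you want to run the same check on your own niche, the maths is trivial once you have a sentiment label and a metric per URL. A minimal sketch, with hypothetical sample rows standing in for the real Buzzsumo and Ahrefs exports:

```python
def negative_share(pages, metric, top_n=100):
    """Of the top_n pages ranked by `metric`, what fraction carry
    negative sentiment? This mirrors the 56%/62%/64% checks above."""
    ranked = sorted(pages, key=lambda p: p[metric], reverse=True)[:top_n]
    return sum(p["sentiment"] == "negative" for p in ranked) / len(ranked)

# Hypothetical sample rows - the real data came from Buzzsumo (shares)
# and the Ahrefs API (linking root domains, traffic estimates).
pages = [
    {"url": "a", "shares": 9000, "links": 120, "sentiment": "negative"},
    {"url": "b", "shares": 7500, "links": 40,  "sentiment": "negative"},
    {"url": "c", "shares": 3000, "links": 300, "sentiment": "positive"},
    {"url": "d", "shares": 1200, "links": 15,  "sentiment": "neutral"},
]
print(f"{negative_share(pages, 'shares', top_n=4):.0%} negative by shares")
```

Run it once per metric (shares, linking root domains, estimated traffic) and compare the percentages against the sentiment you see in the live SERP.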


Without distracting too much from what this article wants to cover, I will add this: in my findings so far - across 100s of keywords and multiple projects, which I openly admit is still a small sample size - the sentiment of the SERP matched the sentiment of the articles with the most shares, backlinks, and visibility every time.


In many ways, it's similar to what Google has described as the "strongest" kind of link building for some time now.


Earning links from numerous sources is, of course, beneficial - but earning links that are more likely to attract clicks and traffic (think the Reasonable Surfer patent) is something Google has championed for a long time now.


It's worth diving into on a grander scale - and that might be something I'll return to in a bigger case study in the future - but I am reasonably confident in saying:

It's not the sheer volume of sentiment being seen - it's what users are signalling as their preferred sentiment on a topic, leading Google to reflect this in their SERPs.

Won't This Result In A Minority Of Sites Dictating The Majority Of Sentiment?

A concern you might have here is that, if this is true, the websites with access to the largest social media followings - and those able to attract the most backlinks through whatever means - will be the ones dictating the perceived sentiment.


(Worth noting: neither the backlinks nor the social shares have to support the sentiment or the point being made in the article to be "effective". 1,000 backlinks saying "this article is BS" are still going to provide a benefit to it.)


And similarly, as large publishers can control the narrative, won't they be able to push their own agendas and incentives into what the perceived sentiment should be?


The answer to that is yes - and welcome to SEO.


Hell, welcome to media for the last 300 years.


Those at the top get to command visibility and fuck you if you want to interfere with that.


The biggest publishers will typically be able to get the most links, shares, and traffic - and so whatever sentiment they push is going to be weighted more favourably.


Just like if a big publisher enters a particular SERP, niche, or industry - for example, literally every major affiliate niche under the sun - if they do it right, they can dominate rankings. The New York Times didn't buy The Wirecutter for shits and giggles, you know?


...Usually.


They don't always get it right. And there are ways to beat them.


We'll come back to that.

OK, That's Brands And News -
How Does This Translate To Short-Tail, Commercial Keywords?

This section in 30 seconds or less:

We're starting to get into the meaty bit of the post now.

Feel Like Taking A Break?

Remember, you can fill out this form and sign up to the newsletter, and I'll send you the article as a PDF for you to read whenever - plus you'll get access to part 2 before anyone else.

Google Is Also Adjusting High-Value Keywords Based On Sentiment

If you thought this research had no money-making applications...think again.

Right, this is what we're really here for.


If you're wondering whether Google is up to this sort of behaviour for commercial keywords, short-tail keywords, and keywords you'd want to actually rank for...


The answer is a resounding yes.


The eagle-eyed observers out there might have spotted these tweets from master B2B marketer John-Henry Scherck back in March 2018:

JHT Tweet 1
JHT Tweet 2

Source: @JHTScherck

That tweet was 6 months ago, and I was already probably a good 6-12 months into this frame of mind, with active tests for clients. Seeing him tweet that flagship example around a year into this really gave me confidence that I was onto something.


Turns out, I was.


John-Henry's tweet highlights that, as we're probably creeping towards a recession in some major Western economies (US included), "credit cards for bad credit" was the #2 result for the general "credit cards" keyword.


The assumption here is that the general latent intent is that people with poor credit or financial hardship are looking at finance options, and that they make up the majority of searches for credit cards at this time.


If you go and do that search now (September 2018), you won't see that result in position #2 - which suggests that Google are always testing what should be shown.


However, what you'll see appear near the top for related keywords and what people also search for are:

Once again, we have a lot of visibility for terms that indicate that people are struggling with credit and affordability. One might even argue that students searching for credit cards suggests this as well - when you factor in the student debt crisis in the US.


Here's the important takeaway, as I see it:


Over the last year, my findings have led me to believe that it is not just branded and news-driven terms that are affected in this way, but short-tail and long-tail commercial keywords as well.


Not just around credit cards, but hundreds of commercial keywords.

And the judgement on what Google believes is the latent intent behind a term isn't limited to social media sentiment.

It could be market analysis, it could be earnings reports, it could be the latest weather forecast, it could be levels of crime data, it could be political unrest...


It's every possible bit of data out there.


Yes - Google is using thousands of data points to determine latent intent on short-tail, commercial keywords.


Now do you see why Google has released a specific search engine for datasets? They've been at this for years.


And you can take advantage of this while your competition sits on its arse.

How I Took A Financial Product & Services Website To The Top

This section in 30 seconds or less:

And before you ask: yes, I still work with this client.
So no, I can't reveal them. It sucks, I know, but come on, man. I gotta eat.

The Case Study: Ensuring Organic Search Improvement Was Solely Down To Intent Optimisation


Below I'm going to share an example from a financial product and service website, operating in a couple of different countries, where I put this theory to the test.


And it came up a winner.


I have since tested and implemented this for 2 other clients and one personal project, and I'm pleased to say that I've seen similar results. I've chosen to write up this case study because:

  1. It was the first
  2. It saw the biggest impact in terms of revenue
  3. I can't write them all up, or I'll be stuck in writer's purgatory for years

By reading into this sentiment data and making appropriate changes to the content silo, I was able to take pages ranking at the bottom of page 1/top of page 2 for short-tail terms, and get them to rank in the top 3.


Without a single link being built.


It almost pains me to type that.


Before I did all that, I wanted to make sure that this wasn't just down to your 'typical' onsite SEO. In other words, I wanted to rule out the possibility that merely updating the content and metadata on the page would be enough to propel the pages into the top of the SERPs.


My client for this test had a pretty under-optimised (multi-language) website to begin with. For people who have worked with B2B and B2C financial clients that are probably a bit too old-hand to be considered "FinTech" - you'll know what I mean.


It was even running Wix!

I'm kidding, of course. No self-respecting human being would use Wix on a website and a plague on both your houses if you even consider it.


I just wanted to check you were still reading.


In any case, I had to carry out the usual onsite tweaks first, because if I made some improvements there was a good chance there would be some initial uplift.


The other great variable in this test would, of course, be links. Fortunately, at least for this test, with the client being in the financial sector, and with a particularly stringent legal team, link building wasn't an option at the time.


Note: that's not to say that you can't do content marketing or link building for FCA, FCC, MiFID etc. regulated industries - you absolutely can.


However, in this instance, with regulation being reviewed and with the client already under scrutiny for a past indiscretion, the new legal team in place had locked down a lot of that work until they were happy to open it up again.

What Effect Did The Rudimentary Work Have?

Finding what onsite changes to make for the initial test was fairly straightforward.


I did a bit of reverse engineering on content that was ranking at the top of the SERPs, including not just the content itself, but the metadata, and the site architecture/silo setups.


What was missing would be familiar to many of you who conduct this work.


There was an absence of keywords (specific ones we were optimising for, plus LSI keywords) throughout the content on the page, internal linking was poor, and there was a lot of cannibalisation from a botched attempt at creating content clusters for their products and services.


The work resulted in some quick wins.

Onsite Ranking Improvement 1
Onsite Ranking Improvement 2
Onsite Ranking Improvement 3

A quick aside: When you're measuring the impact of your changes, take rankings/traffic screenshots etc. regularly, as they happen. Your reporting will be a lot more impactful if you can show your client or boss an improvement just when it happens, using a shorter window, rather than having to fit it into a 60/90-day window where it might not look as impressive.


The screenshots above show a 30-day view; the last of the onsite changes to be implemented provided the final push at roughly the same time.


This brought the majority of short-tail keywords (and several long-tail) to the bottom of page 1, and from there they consolidated their position for another 30 days.


By tightening up the onsite optimisation for the pages using 'traditional' methods, rankings improved by an average of around 3-4 positions.


A decent improvement; enough to suggest the site could improve further, while not enough to be considered groundbreaking.

Enter: Sentiment Analysis

Earlier in this article, I spoke about how I used radian6, Semantria, Google Trends and other tools to be able to detect the prevailing sentiment of content out there. By seeing what was emerging as the prevailing sentiment, I've witnessed Google change its SERPs accordingly to match the latent intent.


In some cases, that could be enough to give you a strategic advantage over your competition, give you enough time to optimise your organic landing pages to capture this sentiment, and reap the benefits.


However, in this particular industry, I thought this would be too reactionary.


By the time the sentiment went mainstream, my client - a respected brand in the space, but dwarfed in size and link profile - would be one of many making the adjustments, and thus be lost in the noise.

I had to find proactive ways of predicting the way sentiment was going, before it was widely written about and before it was being considered by Google.

I became a "hipster of latent intent" and I admit it killed me a little inside.


Here's where the industry obviously had its advantages. There's no shortage of data, instruments, signals or reports in the financial industry - nor analysts to make sense of them and subservient graduates to assist.


Following an initiative on their end to "Explore New Opportunities In The European Markets", otherwise known as "Project Fang" (no, seriously, that was what the project plan was called. It was apparently named after an employee but to this date, I've never seen, heard, met, or even emailed a Mr. or Mrs. Fang 🤷‍ ), an opportunity came to team up with the analyst team and scour the market as a whole.


I'm not saying this turned into some sort of Big Short reenactment and I'm the next Dr. Michael Burry (if I am, it's only in social awkwardness) - all of the information we found was achievable by a simpleton like me, along with a bit of guidance and a few Google dorks.


The breadth of our research, however, was definitely a bit beyond what you might consider to be typical "content analysis".


The data sets we looked at included, but were not limited to:

  • Purchasing Managers' Index for manufacturing
  • Purchasing Managers' Index for services
  • Capacity Utilisation
  • New Orders
  • Bankruptcies
  • Employment trends and records
  • Earning reports of relevant companies
  • Changes in Inventories
  • Analysis of Government GDP makeup
  • Market Fund composition and restructures in the last 18 months
  • 3-star reviews on Amazon within the last 90 days for relevant books
    (Seriously - do this. Do. This.)

And other stuff that made my copy of Excel weep, despite running 32GB of RAM.


I worked together with my client's analyst team on this. We shared our data findings, and eventually we saw a propensity towards a re-shaping of a popular financial product.


Which I know sounds dull, and not particularly groundbreaking, but it gets a bit more exciting when you factor in that it would be the first of its kind.


It was to be a completely new offering of this particular instrument.


The client worked on creating this new product, while I worked on how it could be marketed.


With it being a new product that would capitalise on this new movement in the market, there wasn't any existing search volume.


So Facebook, LinkedIn, and some programmatic marketing formed the basis of the campaign, in order to promote the product itself to relevant demographics, and to do some account based marketing.


But I didn't want to stop there. I saw this as the perfect opportunity to test this prevailing sentiment/latent intent theory.


If we were confident enough to create a new product for this movement, and saw this as the future of the industry - which, after some very tough questions, we thought we were - then why wouldn't we want to promote this on our main hub page, as the main selling point of their business?


So we did it.

So Here's What We Did

We're In The Home Stretch Now

The implementation, the results, and the ramifications are just around the corner. But if you're flagging, you can fill out this form and sign up to the newsletter, and I'll send you the article as a PDF for you to read whenever - plus you'll get access to part 2 before anyone else.

"You Know, If This Doesn't Work, We're Going To Look Like Bellends?"

In order to be different and better, you have to risk being different and worse.

I might be slightly romanticising the conversation the client CMO and I had, but it did go something along those lines.


I'll leave it to you to decide who uttered which line!


In the previous round of onsite optimisation, we'd put the site into a series of "Hub and Spoke" pages for each thematic silo.


If you're unfamiliar with the hub and spoke model, it's an evolution of the content silo approach, which has been brilliantly and succinctly described by Jimmy Daly:

Hub and Spoke Jimmy Daly

If you've never thought about setting up your silos in this way - please do so.
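If it helps to make the model concrete, here's a minimal sketch of the two linking rules a hub-and-spoke silo implies: the hub links out to every spoke, and every spoke links back to the hub. The URLs below are hypothetical.

```python
# Hypothetical URLs modelling one hub-and-spoke silo as an adjacency map.
silo = {
    "hub": "/financial-service/",
    "spokes": ["/financial-service/fees/", "/financial-service/how-it-works/"],
    "links": {  # page -> pages it links to internally
        "/financial-service/": [
            "/financial-service/fees/",
            "/financial-service/how-it-works/",
        ],
        "/financial-service/fees/": ["/financial-service/"],
        "/financial-service/how-it-works/": ["/financial-service/"],
    },
}

def silo_is_consistent(s) -> bool:
    """Check both hub-and-spoke linking rules hold for a silo."""
    hub, spokes, links = s["hub"], s["spokes"], s["links"]
    hub_links_all = all(sp in links.get(hub, []) for sp in spokes)
    spokes_link_back = all(hub in links.get(sp, []) for sp in spokes)
    return hub_links_all and spokes_link_back

print(silo_is_consistent(silo))
```

A crawl export (Screaming Frog or similar) gives you the real adjacency map; a check like this flags spokes that orphan themselves from their hub.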


You've already seen some of the modest results it earned this client earlier in the post, but I could cite hundreds of examples across a whole range of industries and website types - from publishers to e-commerce stores - where the effect has been far greater.


In relation to this case study, that hub in the middle describes the financial service that gave birth to this new product line.


From the first round of onsite SEO, it became very similar - in structure at least - to what was ranking near the top of the SERPs.


The content gap analysis made sure that the main keywords, long-tail and LSI keywords were mentioned in the content and metadata, and a page that catered for multiple user intents was the result.


This time, we were about to do something very different.


We edited the hub page so that the primary topic was this new product: who it was for, why it was needed, and what we were offering - all based on what we thought the future latent intent for the market would be (and not what it currently was).

How Radical Was The Change?

I appreciate it can be hard to get this across without sharing the client and the competition, or when you aren't as familiar with the market and industry.


To give you an idea, once the onsite changes were made, I ran a content analysis for the two largest-volume, short-tail keywords on this topic for all the sites in the top 10 positions, including my client's.


Basically, this analysis looked at whether the keyword was present and prominent on the pages.
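For the curious, here's a stripped-down sketch of that per-page check - presence, title usage, and how early the first mention lands. It assumes raw HTML in, and uses a crude regex in place of a proper parser; the sample page is hypothetical.

```python
import re

def keyword_prominence(html: str, keyword: str) -> dict:
    """Rough sketch of the per-page check: is the keyword present at all,
    is it in the <title>, and how early does the first mention land?"""
    text = re.sub(r"<[^>]+>", " ", html).lower()   # strip tags, crudely
    kw = keyword.lower()
    m = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    title = m.group(1).lower() if m else ""
    first = text.find(kw)
    return {
        "present": first != -1,
        "in_title": kw in title,
        # First mention as a fraction of the copy; lower = more prominent.
        "first_mention": first / max(len(text), 1) if first != -1 else None,
    }

page = ("<html><title>Best Widgets 2018</title>"
        "<body><h1>Widgets</h1><p>Our widgets are great.</p></body></html>")
print(keyword_prominence(page, "widgets"))
```

Run it over the top 10 ranking URLs for a keyword and the odd one out - the page that doesn't mention the term at all - stands out immediately.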


This is how that looked after the content changes were made:

Content Analysis KW 1

No prizes for guessing which was the client page in both SERPs!


By contrast, the client's new page had the product name, description, and what would become the recognisable phrase for the market added to the hub page's title, content, and so on.


No other page in the top 10 used this terminology, or anything remotely close to it.

We looked like freaks. Here were 9 pages talking about, for example, lemons - and then we were there saying stuff like "Bloody hell, these kumquats are good".

It wasn't just radical from an SEO perspective. The actual product itself targeted a demographic that had pretty much been ignored (but one we thought was emerging).


In addition, the product's pricing structure was significantly different - and not necessarily in a good way. It was by no means a drastically cheaper option, nor one that would make prospective buyers take a chance on price alone.

How Was This Sold To The Client?

Buy-in for this kind of stuff is never straightforward. I was helped by having a CMO who was willing to test and learn, but sometimes there's nothing more effective than the cold hard truth.


If you've improved a site's keyword rankings from #14 to #8 - you can be pretty pleased with yourself.


But the reality is, if you're ranking in position #8, you're still getting shit traffic.


Ultimately, that's what came through strongest in the pitch - are you happy copying the competition and being beaten on links, and accepting that fate?


Or should we take a risk, when we don't really have anything to lose?

A Risk Worth Taking - The Results Are In

This section in 30 seconds or less:

I love it when a plan comes together....eventually.

The Results

  • Ranking gains to Positions #1, #2, and #3 for short-tail keywords
  • Organic entries to the hub page up by 105%
  • Attributable incremental ARR of €850,000

Rankings are fine. Revenue is better.

However, to insinuate that I always had complete faith in the test would be A DOWNRIGHT FILTHY LIE.


I knew Google would take some time to analyse and evaluate the new page and the new product - and, of course, there had to be a waiting period for what we thought would be the new wave of sentiment to pick up and become more "mainstream".


Then there had to be the period of Google picking up on that sentiment and ascertaining that it was now, in fact, the primary latent intent for the targeted short-tail keywords.


And before that happened, we would probably drop a few positions on those short-tail terms because, unless our predictions came to fruition, our new hub page would be talking a lot of nonsense.


That's a lot of waiting. And a lot of thinking to yourself:


"Bollocks - is this going to even work?"


Here's the effect it had on organic traffic for the hub page, over a lengthy time period, complete with annotations and commentary:

That's a long time to wait and a lot of late-night emails asking if we should revert the change, and a lot of later-night moments spent, as my dear old Nana used to say, "shitting the metaphorical bed".


The surprising thing I found in this case study was that the long-tail keywords the page ranked for were not as badly affected - and when they were, they dropped nowhere near as fast as the main keywords.


And then when things started picking up and the main, short-tail keywords started ranking at the top of the SERPs, it took the longer-tail keywords more time to follow suit and rank there as well.


That suggested to me that this per-keyword analysis of latent intent, and of what the primary sentiment is, is staggered - it isn't applied at an even pace across all keywords.


Curiously, it's the bigger-volume terms that are tested on first - but I suppose those are the ones with more possible user intents and interpretations, given their short-tail nature.

And The Competitors Started Following Suit

Imitation is an artless flattery - there's nothing "sincere" about it.

Given the especially competitive nature of the financial industry, it didn't take long before my client's competitors were producing their own versions of the product designed to capture users with this new sentiment.


Inevitably, the respective landing pages for those keywords were adjusted as well.


Using the same content analysis as before, I analysed how often the phrase that we coined, which was to become the canonical product type for this area, was now being used on the top 10 sites, for the same short-tail keywords I analysed before.
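For anyone wanting to replicate that kind of content analysis, here's a minimal sketch of the idea: fetch each of the top-10 ranking pages and count how often the coined phrase appears in the visible text. The phrase and URLs are placeholders, and the tag-stripping is deliberately crude - this isn't the exact tooling I used, just the shape of it.

```python
# Sketch of a phrase-frequency check across the top-10 ranking pages.
# Assumes you've already collected the ranking URLs (e.g. from an
# incognito SERP scrape). PHRASE and the URL list are placeholders.
import re
import urllib.request

PHRASE = "your coined product phrase"  # hypothetical

def phrase_count(html: str, phrase: str) -> int:
    # Crudely strip tags, then count case-insensitive occurrences
    text = re.sub(r"<[^>]+>", " ", html)
    return len(re.findall(re.escape(phrase), text, flags=re.IGNORECASE))

def analyse(urls):
    # Map each ranking URL to how often the phrase appears on it
    results = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="ignore")
            results[url] = phrase_count(html, PHRASE)
        except OSError:
            results[url] = None  # page unreachable
    return results
```

Run that against the same SERP at intervals and you can watch the terminology spread through the competition over time.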


Remember how we were the odd ones out?

Content Analysis KW 3

Now the majority of the competition is using the terminology we pioneered - and even the phrases that we came up with.


That little asterisk is the client's ranking - something that it has maintained to date.


I've mentioned at various intervals how the client had a significantly inferior link profile to the competition.


Once the competitors in position 2 and 3 started editing their pages to capture this latent intent, I expected the client to be knocked off its perch.


However, it has remained there for several months.


I believe that there is some form of "First Mover Advantage" at play here. With the client having had the content and information on its page for months - much longer than the competition - I believe it has been identified as the authority, the definitive brand, in the space, and Google is rewarding the site because of that.

If you're going to take the punt and be the first to write for and cater to this sentiment, it looks like Google will reward you for it long after it becomes the latent intent.

The Bonus Round - Showing The Content To Googlebot, But Not The User

This section in 30 seconds or less:

Playing Up To Google - But Not The User

Before I begin this section, I'll say this: I would not recommend you take this approach in normal circumstances.


This was a further test of optimising for upcoming sentiment by attempting to remove even more variables.


We all know that Google likes to say there are about a gazillion (I believe that's the technical term) ranking factors for SEO.


Some potential factors that are gaining a growing amount of credence are a website's click-through rate (CTR) relative to its search position - and whether users then perform a fast and/or hard bounce back to the SERP to find a different result.


Basically, are users finding your result more interesting, and clicking it more than Google expects?


And when they get to your page, are they finding what they need, or do they think the page is crap and they return to the results to pick another site instead?


For the purposes of this test, I assumed that CTR is a positive ranking factor (making it better = higher rankings) and the bounce-back to SERP (be it 5, 10, 30 seconds - I believe this time window changes as the keyword changes) is a negative ranking factor.
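To make those two assumptions concrete, here's a toy scoring sketch. Every number in it is hypothetical - the expected-CTR-by-position curve and the bounce window are illustrative stand-ins, not figures from the test.

```python
# Toy illustration of the two assumed signals: CTR relative to an
# expected CTR for the position (positive factor), and a fast bounce
# back to the SERP (negative factor). All numbers are hypothetical.
EXPECTED_CTR = {1: 0.30, 2: 0.16, 3: 0.10, 4: 0.07, 5: 0.05}

def ctr_signal(position: int, observed_ctr: float) -> float:
    """Positive if the result is clicked more than expected for its slot."""
    return observed_ctr - EXPECTED_CTR.get(position, 0.02)

def is_hard_bounce(dwell_seconds: float, threshold: float = 10.0) -> bool:
    """True if the user returned to the SERP quickly (assumed window)."""
    return dwell_seconds < threshold
```

The point of the test below is that cloaking holds both of these constant for real users, isolating the content change itself.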

If you're messing around with the content on a landing page and making it significantly different, or targeting a significantly different audience, there's a good chance you'll affect these two metrics.

So, what if we didn't change the user experience at all?


On several pages on one of my own affiliate websites, I adjusted the content and layout of the page to try and capture a similar, emerging latent intent, based on some sentiment analysis I'd carried out.


But this time, I used cloaking to show the new content layout to Googlebot, but the original version to users.


The hypothesis here was that I would see a rankings boost and traffic go up, but metrics like bounce rate and time on page would remain the same because, well, to a user, nothing had changed.
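Mechanically, this kind of user-agent cloaking is very simple - which is part of why it's so easy to get caught. Here's an illustrative sketch of the serving logic, not my actual setup; note that robust bot detection verifies Googlebot via reverse DNS rather than trusting the User-Agent string, and that cloaking of any kind violates Google's guidelines.

```python
# Minimal sketch of user-agent cloaking: serve one page variant to
# Googlebot and another to everyone else. Illustrative only - the
# filenames are placeholders, and real Googlebot verification should
# use reverse DNS, since User-Agent strings are trivially spoofed.
GOOGLEBOT_TOKENS = ("Googlebot", "Googlebot-Image")

def select_variant(user_agent: str) -> str:
    """Return which page variant to serve for a given User-Agent."""
    if any(token in user_agent for token in GOOGLEBOT_TOKENS):
        return "new_layout.html"   # sentiment-optimised version
    return "original_layout.html"  # unchanged user experience
```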

Did I See Similar Results?

In a word, yes:

This affiliate site isn't a typical "Amazon Style" site, and has no seasonality peaks or troughs of note.


What was consistent across all the pages I changed was a gradual improvement in traffic, coming from improved short-tail keyword performance, while bounce rate and time on page did remain the same.


In an admittedly small sample size, this supports the idea that Google will adjust a site's organic visibility based on its ability to match sentiment and latent intent, and will do so independently of any improved "user metric ranking factors" that it might, in turn, generate.


But remember kids: cloaking is bad.


Incidentally, I did eventually remove the cloaking and made the revised content and structure visible to both users and bots. What I would have loved to have seen was a significant difference in user behaviour once the new pages were live - but the metrics in fact remained largely unchanged.


Alas, you can't have it all.

That Was Part One - But Wait, There's More

This section in 30 seconds or less:

We ain't done with this yet.

Trust Me - The Best Is Yet To Come

I really hope this has whetted your appetite.


When I was testing this out for the first time, I was incredibly invigorated. It felt like something new and exciting for SEO - something I hadn't experienced for a while.

Don't get me wrong; this is tough, time-consuming, and temperamental. But it's also relatively untouched.

If you're willing to put the graft in, you could earn a massive advantage over your competition. I'll go out on a limb and say very, very few people are capitalising on this right now.


I've described a couple of ways that you can be both reactive and proactive in reading sentiment, and use that to get your content ready for the upcoming latent intent for the keywords you want to rank for.


And this is one of those incredibly rare situations where Google are even giving you the tools to get the information you need.


But if you thought this was interesting, just wait until you see what's covered in part 2.


Not only will I be diving into more practical, actionable advice for your SEO tactics and how reading latent intent can get you results - we'll go beyond just SEO and look at applications outside of Search, and into your wallet.


Here's a little taste:

If Sentiment Can Alter SERPs, Can SERPs Alter Sentiment?

Over the last 18 months, I have seen dozens of examples where social media and organic search results haven't followed sentiment, but have dictated it.


Oftentimes this occurred with brands, and usually surrounded a breaking, negative news story that had come out of the blue.


And some of those brands are listed on the NASDAQ, FTSE, and S&P500...

Wouldn't it be easier if you *knew* where the sentiment was going?

To do that, surely you'd have to dictate it?


Surely a search engine couldn't be manipulated in such a way?


See you in Part 2.

Don't Miss It When It Comes

Sign up to the Raynernomics newsletter and get access to part 2 before anyone else

The Resources Mentioned In This Post

  1. JR Oakes - a great technical SEO to follow
  2. Screaming Frog - a brilliant SEO agency and creator of the famous SEO Spider
  3. Semantria - from Lexalytics
  4. radian6 - part of the Salesforce Social Studio
  5. Buzzsumo - a content marketing tool without parallel
  6. John-Henry Scherck - an unbelievably astute marketer
  7. Keywords Everywhere - a must-have, free tool in every SEO's arsenal
  8. Jimmy Daly - co-Founder of Animalz.co and a fantastic strategic mind

Hungry For More?

Read the Raynernomics Guides
Check Out The Guides
Read My Latest Posts
Read The Latest Posts
Hire Me
Let's Work Together!
If You Enjoyed That, Feel Free To Share It With Your Internet Pals:
Got something to add? Leave a comment below