Become *Everyone's* Trendsetter
How The Billion Dollar Business Of Persona Management Influences Sentiment,
How Memes Really Do Rule The World -
And What You Can Learn From It
How I Increased Organic Revenue By €850,000+ By Predicting Keywords' Future Latent Intent
Category: SEO & Social Media
Reading Time: 42 Minutes
Emotion is the hook. Facts are a side dish.
We're About To Go Deeper
This is part two of a series around customer sentiment and user intent. You can find part one over here.
Thank you for your patience. This post was a long time coming.
I think it's gone through about 5 or 6 different iterations, as well as dozens of drafts and re-drafts.
...I'd forgotten how crap I am at blogging.
Last time out, we looked at how Google might adjust the results it shows in its SERP - primarily in response to the searcher's shifting latent intent.
But what if you could go beyond Google? What if you could dictate the discourse and the sentiment of a topic, product, or brand - on virtually any medium of traffic you can think of?
In this post, we'll look at another way Google might ascertain intent - one which I, in my own hubris, completely forgot about and have saved for this second part. But we'll also look beyond that.
We'll look at the reverse - not how you can get ahead of Google and predict the upcoming latent intent of a search, but how Google has a history of, unwittingly, completely dictating the sentiment of a topic, and creating a new one altogether.
We'll also see how Google doesn't really give a shit whether it gets the sentiment - and the truth - spectacularly wrong. So long as it can make money from it.
And finally, we'll move onto the zenith of intent influencing - "social" media, persona management, and their digital marketing applications.
Ready for your daily dose of scepticism and self-loathing?
Don't Have Time To Read It All?
Yeah, this post is a biggie. If you fill out the form and sign up to the newsletter, you'll get the article sent to you as a PDF - plus early access to all my new content.
You trendy bastard, you.
Part 1: A Review - And What I Missed
Well, that went better than expected.
Part 1 of the post seemed to land pretty well. But, of course, it wasn't all plain sailing. Some criticisms included:
I mean, that's fair.
I can totally understand that the post can seem light on the details if I can't share the client and/or the keyword. It's just the nature of the beast in this case.
Hopefully, in upcoming guides/run-throughs etc, I'll be able to use my own sites as evidence and give a bit more substance.
One thing that was definitely a glaring omission, however, was talking about "clickstream" data.
Introducing: A Smarter Man Than Me
I had a great back and forth with AJ Kohn on this issue.
Hopefully, you've come across some of his writings and SEO analysis before, but if not, I'd highly recommend taking some time to read his Algorithm Analysis In The Age Of Embeddings article (Nov 2018).
It's a brilliant piece of work.
And, at the very least, it should help put to bed some of that ridiculous "analysis" some of the E-A-T gurus have been spouting.
Over in a Slack group, AJ offered the following feedback, and example, on how clickstream data could be at play in some of the SERP examples I mentioned:
The argument here (paraphrasing) is that Google, using the data it has, will chop and change the results until it starts to get "long clicks" (IE - useful results for users) on some of its results - then aim to get more of those results in the SERP.
I think it's an extremely valid point.
However, rather than paraphrase, I decided to reach out to the man himself. Here's what AJ had to say:
One of the by products of understanding language is understanding intent. Today you can watch as Google changes the relevance matching for a query class. And based on their measurement of success, they can determine whether that new relevance match was better or worse.
Did users mean 'repair manuals' or 'repair shops' when searching for 'Acura MDX Repair'? With neural embeddings you can determine how relevant those next words are from a corpus perspective. Google's edge is that they can test to see how that matches up with real world queries.
We're the hamsters in the maze.
This means Google might be optimising a local maxima instead of a global maxima. This is most likely to happen when Google believes that the query has been satisfied when, in fact, it may have just been abandoned or led to a sub par result.
The problem? Google doesn't measure satisfaction across sessions. So it may be optimising for our penchant for easy answers. There are a number of head terms where a secondary product is now the primary content. The reason? It's cheap or free and most people figure they can get something good enough.
Mind you, they probably get it, it sucks and they wind up going back and having to do something different. But en masse, enough people make that 'easy' decision, which could just be giving up, and Google just follows along.
Google doesn't care about a person. It cares about people.
I'd even go so far as to say that the clickstream data element has arguably the "strongest weighting" in this chicken/egg scenario right now.
I think Google is getting better at 'predicting' or 'reading' what the upcoming sentiment might be, given the massive datasets at its disposal, and will adjust SERPs accordingly. But it still relies, heavily, on clickstream data to do so as well.
That should have been mentioned more in my original post - although, hopefully, the main takeaway from that was the work you could put in to get *ahead* of that intent prediction and clickstream data (so when Google gets round to shaking up SERP, your new content is there front and centre to win the day).
Here's AJ again:
You often hear people complain that big brands are getting a break with Google. But the thing is, Google just follows the user - they follow the herd.
My background is in marketing and advertising. I know and believe in saturation marketing. So you better believe that users will gravitate toward what they know in a SERP. From a psychological perspective, humans crave cognitive ease.
So brands likely get a higher than average CTR than they traditionally would deserve. And they may produce a higher percentage of long clicks too. Because users trust them more. They could click and buy from that EMD site but hell, they know Target so they'll just get it there.
Once again, it shows how sentiment can be measured through click behaviour.
Now imagine that you see a SERP with negative reviews for that brand. Suddenly you have more to process. That's the opposite of cognitive ease. Things don't go well when that happens. As Steve Krug famously said, 'Don't Make Me Think'.
Users might abandon that SERP or they might gravitate toward another brand. The CTR advantage is likely gone. But we know CTR alone isn't the problem. However, that seed of doubt probably reduces long clicks too. Users are suddenly less likely to transact. They're hesitant to pull the trigger.
A big thank you to AJ for his time in providing the above feedback and quote.
A Reminder About The "Echo Chamber"
One other bit of feedback that came up in the Slack and in AJ's second contribution, which I thought was worth mentioning, was a comment Kevin Indig made about the risk of an "echo chamber".
This is something alluded to, albeit briefly, in the original post.
It's definitely a real risk.
When you look at how sites like Facebook, Twitter and Instagram are specifically programmed to create these inward facing communities, I don't see any reason why it would be different for Google and search engine results.
The evidence I've seen, in fact, suggests that this is already the case.
So What If We Can't Dictate To Google -
And Google Dictates To *Us*?
This is one of the big questions I wanted to explore in part 2 - is there precedent, or potential, for a Google SERP to dictate how a topic's sentiment flows?
How a company is perceived?
How a stock subsequently performs?
Turns out, this has already happened more than once.
And in a very high profile way.
What Happens When
The Roles Are Reversed
This section in 30 seconds or less:
Never forget - when you're dealing with Google and working on SEO...
Google is an algorithm. Algorithms can get it wrong, can be prejudicial, and can be exploited.
Fact Checking The "Fact Check" Feature
Everything that Google does is with the intent to scale.
Healthy scepticism was poured all over their much-vaunted "fact check" feature in Q1 2017. Launched as Google's solution to good old "Fake News" - it was, in truth, a half-arsed attempt from a company whose vested interests lie in letting Fake News run wild (I'll come back to that claim later).
This isn't a crack team of fact checkers sitting in a basement somewhere in Mountain View (despite what Google says). And it's certainly not an outsourced group of quality raters eating (pun intended) their way through news results.
It launched using that rigorously tested and reliable and totally-not-exploitable markup: Schema.
As Google put it, for publishers "...to be included in this feature, they must be using the Schema.org ClaimReview markup on the specific pages where they fact check public statements".
It would then supplement that with data it had courtesy of entities like the Duke Reporter’s Lab to determine which fact-checking organisations it can trust.
Who's on the trusted list? 🤷‍♀️
How are entities vetted? 🤷‍♀️
How does Google ensure QA standards are met? 🤷‍♀️
As with everything else in Google, it's a black box. But don't worry, it can scale and can't be abused.
Except for when it's accused of bias, when the algorithm erroneously linked a Washington Post fact check to a Daily Caller article, and when it was generally so shit that it was suspended less than a year later. (See: "Google suspends fact-checking feature over quality concerns" - Poynter).
The old Scale 'n' Fail.
Google has plenty of previous in this regard - with much more devastating effects.
Cast Your Mind Back To The Days When United Airlines Wasn't Dragging People Off Planes
I know, right? Was there ever such a time?
One of the more infamous moments in Google's history was the impact it had on the stock of United Airlines back in 2008.
The culprit, this time, was the "highly cited" feature in Google News.
Google, vaguely as always, describes this feature as:
"Highly Cited: the article that appears to be most frequently referenced by other articles in this story."
Now, what counts as "referenced" - is it when it's explicitly linked to?
When it's mentioned?
When it's tweeted about?
Does lexical co-occurrence come into play?
With this feature having been around for over 10 years now, we have a bit more data available to see which factors contribute to the tag. Based on my own studies, it is primarily links that determine whether the tag is seen or not.
Same as it ever was!
Just as with links in its main algorithm, Google is a lot better at knowing which links are "good links" to take into account than it was in 2008 - although it's not perfect.
And it used to be a lot worse. Just ask United.
$1 Billion Wiped Off The Charts Faster Than You Can Say "But, Will It Scale?"
On September 7, 2008, the South Florida Sun-Sentinel pushed an article that talked about the imminent bankruptcy of United Airlines.
The web crawlers used by Google News to find articles came across the story and catapulted it to prominence, resulting in a one billion dollar market loss for United Airlines.
There was just one problem. United wasn't going bust.
The article was published in 2002.
It was a truly remarkable fuck up - which was covered brilliantly by the New York Times as the story was breaking. The NYT described the cause of the issue:
Source: New York Times
If this sort of incident can happen with Google News, then is it likely that it can happen with the fact check system - or any other system, for that matter - especially if it's automated, just as the rest of the aggregator is?
Google News might not be as primitive as it once was, and isn't circulating stories from the global financial crash in today's news cycle. But if the algorithm continues to get stuff wrong - that means it can be manipulated.
The Problem Of Scale Goes Well Beyond Google News - Local Search Is (Still) A Hotbed Of Crap
Lawyer SEO aficionado and fellow lover of all-things-links, Gyi Tsakalakis, knows a thing or two about encountering spam at scale.
He recently referred me to two very interesting links.
One was a post from Joe Youngblood on how, to put it simply, keyword stuffing (of all things) was proving to be hugely effective in earning local map visibility.
The second link was also pretty wild. It's a thread (which is still ongoing) that documents millions of 4-star, no-comment ratings being dropped across businesses - from profiles with around 64,000 reviews each.
Talk about a power user.
But the reviews are sticking - and working - despite this obvious manipulation.
This is another set of examples of Google's algorithm getting it spectacularly wrong - and more to the point, the extent to which some people will attempt to exploit it as a result.
Google's such a behemoth that things will always slip through the cracks. Whichever way you look at it, however, the impact this could have on your website, and business, could be huge.
How can you measure that?
How To Measure The Impact Of Google's Fallibility
This section in 30 seconds or less:
Stock price is one way -
but how can you measure it for *your* business?
Setup Your Own
Automated Sentiment Checker
Google and its algorithms have been, and currently are being, gamed to promote fake reviews, fake coverage and other disinformation. All in the pursuit of influencing purchasing decisions and sentiment.
Logic dictates that if there are more negative reviews and news stories about your brand, company, or product, which are visible to searchers and other people with a purchasing intent, it's going to affect your ability to convert.
I think everyone can take that as gospel - but a common complaint I've heard, and certainly experienced when I was in-house and also when consulting, is that it's hard to get a boss, exec, or ops committee to take early or preventative action.
And that comes from a lack of data, as well as not understanding the impact a shift in brand sentiment has - IE, how much "more" negative sentiment there is, and how it correlates with rankings, traffic, CTR and conversions.
I had a recent example where one of my small Shopify brands was plodding along nicely - until I saw a drop in sales and organic traffic.
Organic rankings hadn't changed, there was no change in how the SERP was presented (eg - a featured snippet appeared), there was no new PPC activity, and seasonality wasn't a factor (at least it hadn't been the previous year).
The traffic and sales drop came from the traffic entering via the homepage - so I immediately thought it was brand related.
Sure enough, when searching for "Brand" and "Brand Reviews" type variants, I saw a couple of results like this:
I hadn't set up rank tracking or ORM tracking for this brand (shame on me), so I had to go into Ahrefs to get a rough idea of when those sites got those rankings.
Sure enough, I saw a clear correlation between those negative reviews gaining visibility, and my CTR and traffic dropping off for brand (homepage) terms.
Search Console is great for showing that data - but if you want to kick it up a notch, take a look at Ontail. It makes the GSC reports a lot easier to navigate and filter.
Using the app, I could filter the CTR report to only look at the homepage, which resulted in this:
"bUT wHeRe'S tHe Y aXiS?"
Note: if you're waiting for me to say the reviews were unjustified, and to launch into a Dumas-type revenge - I'm sorry to disappoint. Frankly, the reviews were accurate.
Over the next couple of months, I subsequently measured the ongoing sentiment, plus the types of questions asked about the brand, on Reddit, Quora, and YouTube.
In an admittedly small sample size of 220 results, I saw the number of comments/topics etc that carried a negative sentiment go from ~2% to over 12%.
Now, this might not have been a result of the negative reviews being more visible, but because of the issues that (justifiably) led to the reviews.
However, what was interesting to me was that the complaints mentioned in the 220 results were often different to those mentioned in the reviews that were now ranking - and in fact concerned things the brand had previously been praised for.
Suddenly, the pricing for our products was "too high" and the products themselves seemed "cheaply made", whereas the aforementioned reviews primarily focused on shipping times and customer service.
A number of the complaints on Reddit were accompanied by the "I've been thinking this for a while" type spiel, which I thought was interesting.
This could well have been a competitor playing out their evil plan, but my gut feeling was that the visibility of the negative reviews on organic served as a catalyst for more negative content to be posted on all other mediums.
That then snowballed into a bigger hit on my organic CTR.
While dicking around and trying to find these correlations, it dawned on me that this analysis can be systematised, automated, and could result in a great bit of evidence to get business owners to act and respond faster.
Let's talk about how you can do that.
I'll show you how you can automate your data collection, and then present it in an easy-to-digest dashboard.
Get Ready To Collect Your Data And Make Your Case
Before The Defamation Takes Place
I'm not trying to make rhyming marketing slogans I swear.
Get Your Baseline
Sticking all this data into Google Data Studio seems like a no-brainer for me. It allows you to import timestamped data from Search Console directly in there, while the import function from Google Sheets will pretty much take care of the rest.
Again though, I have to profess my love for Ontail. One feature in the tool allows you to create your designated "landing pages" - IE those you're pushing to rank. It then provides all of the GSC metrics (clicks, impressions, CTR etc) segmented just for that URL. It's a little bit quicker, simpler, and prettier than doing it in GSC and Data Studio.
In any case, you'll want to get your core metrics here (clicks, impressions, CTR) for all of your core organic pages with a decent time range - 30 days is OK, but 60+ is better.
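Once you've pulled those core metrics, establishing the baseline is simple aggregation. Here's a minimal sketch - the row shape loosely mimics a `searchanalytics.query` response row from the Search Console API, but the numbers are entirely made up:

```python
def ctr_baseline(rows):
    """Aggregate clicks/impressions into a baseline CTR.

    Each row mimics the shape of a Search Console API row
    ('clicks', 'impressions'); the figures here are hypothetical.
    """
    clicks = sum(row["clicks"] for row in rows)
    impressions = sum(row["impressions"] for row in rows)
    return {
        "clicks": clicks,
        "impressions": impressions,
        "ctr": clicks / impressions if impressions else 0.0,
    }

# Two (hypothetical) days of homepage data
rows = [
    {"clicks": 120, "impressions": 1500},
    {"clicks": 90, "impressions": 1400},
]
print(ctr_baseline(rows))  # CTR ≈ 7.2%
```

Run that over your 60+ day export per landing page, and you've got the "before" number every later comparison hangs off.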
You'll then want to extract the current brand/product reviews on 3rd party sites, plus other brand mentions.
We can't use Google Alerts here, or other tools, as they don't work retrospectively. So we'll need to do a bit of scraping.
The first stage should be straightforward: you'll want to do some brand searches and review searches ("brand reviews", "brand report", "brand vs competitor" etc.) and save all of the URLs that mention your brand or product in any way.
But you shouldn't stop there. Take a look at other sites that could be mentioning your brand - especially those that do not depend on organic rankings for people to see them.
The obvious candidates are Reddit and Quora - but if you know there is a popular industry forum or website in your niche, then the following method should be applied to those sites too.
Conduct a "site:" search on these websites, and limit the results to the last month.
For industry forums and for sites like Reddit, limiting by content in the last month is important, as only the freshest content will still be generating clicks - and therefore exerting any influence over your brand's sentiment.
Depending on the size of your brand, you might get hundreds of results, and you'll want a way of scraping all of the URLs quickly. If you don't have a tool like Scrapebox or Screaming Frog to hand, here's how you could do that.
Append the "&num=100" parameter to the end of the URL generated from your last search (without the quotation marks).
Then add the bookmarklet in this still-valuable-after-all-these-years post from Chris Ainsworth, which describes how you can get the results out of a SERP quickly.
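If you'd rather script it than use the bookmarklet, here's a rough, stdlib-only offline equivalent that pulls result URLs out of a saved SERP HTML file. The `/url?q=` wrapping reflects how Google has historically formatted result links - treat that selector logic as an assumption to verify against the live markup:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse, parse_qs

class ResultLinkParser(HTMLParser):
    """Collects result URLs from a saved SERP HTML file."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        # Google has historically wrapped result links as /url?q=<target>&...
        # - check this still holds before relying on it.
        if href.startswith("/url?"):
            target = parse_qs(urlparse(href).query).get("q", [""])[0]
            if target.startswith("http"):
                self.urls.append(target)

# A toy saved-SERP snippet, purely for illustration
html = '<a href="/url?q=https://example.com/review&sa=U">Example review</a>'
p = ResultLinkParser()
p.feed(html)
print(p.urls)  # ['https://example.com/review']
```

Save the page from your browser, feed the file's contents in, and you've got your URL list without any paid tooling.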
Add that to your master list of URLs, and now it's time to assess their sentiment.
Again, depending on the size of your brand and the number of results you might need to go through, you could do this manually/have a VA do it, or use a tool like Rapidminer or Semantria to do the donkey work for you.
Ultimately, you should end up with your typical CTR for that page, plus the current % breakdown of your 3rd party coverage and their sentiment - be it positive, negative, or neutral. This is your base.
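To make that positive/negative/neutral breakdown concrete, here's a deliberately naive, lexicon-based sketch - a toy stand-in for the VA or Semantria step, with made-up word lists:

```python
import re

# Deliberately naive word lists - a real pass would use a VA or a
# proper tool (Semantria, RapidMiner); this only shows the shape of
# the breakdown you want to end up with.
POSITIVE = {"great", "love", "fast", "recommend", "excellent"}
NEGATIVE = {"slow", "refund", "broken", "scam", "terrible", "cheap"}

def classify(text):
    words = set(re.findall(r"[a-z']+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def breakdown(mentions):
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for mention in mentions:
        counts[classify(mention)] += 1
    total = len(mentions) or 1
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

mentions = [
    "love the product, fast shipping",
    "asked for a refund, customer service was terrible",
    "arrived on tuesday",
]
print(breakdown(mentions))  # {'positive': 33.3, 'negative': 33.3, 'neutral': 33.3}
```

However you produce it, that percentage split is the baseline you'll compare every new batch of mentions against.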
Create Alerts For All New Brand Mentions, Questions, And Reviews
From here on, you'll want to analyse brand mentions/reviews as they come in via the same sources.
For pages that might rank in Google, traditional rank trackers won't work - as you don't know which website might write about you next.
That's where a tool like SerpWoo comes in. SerpWoo will monitor the top 20 results for any keyword you give it - and allow you to setup alerts if any new page enters the top 5, 10 or 20, depending on the criteria you set.
If you set this up for your brand and core "brand review" terms, you'll get an instant alert to check any new ranking URL's sentiment, giving you the exact cause of any influence on your baseline CTR, clicks and impressions.
For your Reddits, Quoras, and Niche Sites - Google Alerts should work just fine.
The key is to limit the search to each site specifically on separate alerts, like so:
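As a sketch, those per-site queries can be generated programmatically - the brand name and site list below are placeholders:

```python
def alert_queries(brand, sites=("reddit.com", "quora.com")):
    """One site-limited query per source, so each Google Alert fires
    only for mentions on that site. Brand and sites are placeholders."""
    return [f'site:{site} "{brand}"' for site in sites]

print(alert_queries("AcmeWidgets"))
# ['site:reddit.com "AcmeWidgets"', 'site:quora.com "AcmeWidgets"']
```

Paste each generated query into its own alert, and add any niche forums you identified earlier to the site list.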
As the results come in, grab the URL, analyse the sentiment, and update your master list. Make sure you timestamp each URL analysis to ensure you can correlate it with any CTR fluctuations. Here's how one of my reports looks in Google Sheets:
You should compare recent activity with the ongoing trend, and see whether there's anything exceptional happening - IE, more negative reviews, more positive reviews etc.
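With timestamped rows in place, quantifying that relationship is a short, stdlib-only function. The weekly figures below are hypothetical, purely to illustrate the shape of the check:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation - no dependencies beyond the
    standard library."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly figures: newly ranked negative mentions vs.
# brand-term CTR (%) from the timestamped master sheet.
negative_mentions = [0, 1, 1, 3, 4, 5]
brand_ctr = [31.2, 30.8, 30.1, 27.5, 25.9, 24.2]

print(round(pearson(negative_mentions, brand_ctr), 2))  # strongly negative
```

A strongly negative coefficient won't prove causation on its own, but it's exactly the kind of number that gets an ops committee to take the sentiment shift seriously.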
For automation lovers, I believe there's even a Zapier integration with Semantria.
Making The Report Work For You
As a simple ORM (Online Reputation Management) setup, this is pretty nice.
I have a working copy myself, which pulls in data from Google Sheets, GSC, Ontail, and some ranking data too. If there's enough interest, I'll see about releasing a template to the mailing list.
I know, right - blatant blackmail to get you to sign up.
You'll now have instant alerts monitoring Google and other sites that mention your brand, and give you a system (that can be automated) to tell you whether the sentiment is positive or negative.
This will also allow you to see if an influx of reviews has an effect on your CTR, traffic etc. and give you that information in real time - giving you plenty of ammunition to come up with a strategy to amplify or dampen the sentiment shift and impress your boss.
And finally, it will give you a better appreciation of the impact Google can have on your traffic and leads even when your own rankings don't change. And should any negative coverage be fake or libellous, you'll get that information immediately, allowing you to get in touch with the publisher, or The Goog itself, and see about getting it removed.
That is, of course, if Google listens.
Unfortunately, your cries may fall on deaf ears.
Does Google Actually Give A Shit?
This section in 30 seconds or less:
Turns out Google doesn't hate SEOs after all - provided they run AdSense.
Fancy Getting That Template?
If there's decent interest, I'll send out the Data Studio template to my mailing list - so don't miss out. And remember, you can fill out this form and sign up to the newsletter, and I'll send you the article as a PDF for you to read whenever you like.
Google Makes Millions Off Of Fake News
Ben Gomes, VP of Engineering at Google, made a big song and dance about how Google would go out of its way to "demote" and "remove the incentive" for websites to publish "low-quality" and "misleading" content, in order to generate revenue.
Source: "Our latest quality improvements for Search" Apr 25, 2017
So what happened?
Er, well...the opposite.
A fascinating study by the Campaign For Accountability has found that Google is making money off fake and misleading stories more than ever.
You might be asking - "Is Tom just including this to shit on Google?"
No. Well, yes - but that's not the only reason.
If you're an SEO, or any kind of digital marketer for that matter, you *need* to know the nature of the beast that you're dealing with.
Google isn't this friendly Uncle that'll help you out and nudge you in the right direction.
They're not even that Uncle Knobhead that you try and avoid at family gatherings.
They're a monopoly whose primary purpose is to generate revenue for themselves - with a "fuck the consequences" type attitude.
They won't give you anything. As far as they're concerned, they don't owe you anything.
If you think Google is going to help you out the next time you're in a bind - don't be so naive.
You have to take matters into your own hands - and that sentiment checker is just one piece of the puzzle.
All of which begs the question:
If Google is responsible for swaying elections, or causing businesses to lose millions because of a mistake in its automation, can it be forced to make these entities whole again? The answer is, almost unequivocally, no.
To put it another way mate - fuck 'em.
But if you think that's bad - we've only just begun.
Squelching Through The Cesspit That Is Twitter
This section in 30 seconds or less:
Facebook's "influence" and subsequent notoriety has been covered to death.
What about its crazy cousin?
How Influencing Sentiment On Twitter Has Evolved
Remember Cynk? Didn't Think So.
It was only a few years ago when, on Twitter, you could create a few thousand "price action" bots and have your very own Stratton Oakmont going on.
You know - if you were an absolute shithead.
That was the case with Cynk, which managed to achieve a $5 billion market cap, despite having the quite serious handicaps of zero assets, zero proprietary technology, and only one employee.
Basically - like 99% of crypto projects out there, but back in 2014.
Whether this was a couple of rogues with bots, or a group effort, it was an early sign of things to come.
Nowadays, if you want to discredit a person or company, pump a stock or, you know, influence an election, you need to be a lot more coordinated.
Let's start with some Failure Therapy.
The Game Has "Evolved" - And Not Everyone Gets It Right.
Communism Is When You Buy Everything From The State. Capitalism Is When You Buy Everything From Amazon.
Amazon - you might be assimilating yourselves into our daily lives at an alarming rate, but your social media "influence" attempts are hilariously bad.
After facing criticism for, among other things that make Amazon seem like a cartoon villain, providing its underpaid staff poor working conditions, "Amazon Ambassadors" showed up in force on Twitter.
Quick to refute claims from The Guardian, among others, that they can't use the bathroom unless they ask (what is this, The Shawshank Redemption?), heroes like "Misty" stepped in.
Dat Ratio Tho
Well, shut me up, Misty.
In fact, reading through "Misty's" tweets, I sometimes struggle to decide whether it really is just a terrible attempt to change sentiment and discourse, or a brilliantly put-together piece of satire.
The thing is, Misty and her army...this will never work.
Nowadays, in the Twitter echo chamber, you'll never change the discourse by simply "arguing back".
Even if the claims and opinions were true (they aren't) and Amazon was a caring and considerate employer (they aren't), once the group-think has made up its mind, merely tweeting rebuttals is swimming against the tide. Of shit.
But When Companies *Do* Get It Right, It Can Be Spectacularly Effective
This section in 30 seconds or less:
So, How Do They Do It?
If, like me, you're a crafty and morally dubious digital marketer, you'll be wondering just how companies get this right, and how lucrative it can be.
But if you're flagging, you can fill out this form and sign up to the newsletter, and I'll send you the article as a PDF for you to read whenever you like.
It's Not Just Persona Management - It's Also A Healthy Dose Of Subterfuge
What's The German Word For "When All This Scary Market Technology Actually Originated From Government Projects?"
The most successful attempts to influence sentiment and discourse over Twitter resemble a two-fold attack.
The first is to create online personas, over months and years, that can withstand a great deal of scrutiny in a way that a typical Twitter bot could not. These groups of personas, amongst their regular behaviour, will 'react' to news and events immediately in order to shape the discourse.
The second way is a lot more "cloak and dagger".
It involves using these personas to infiltrate existing groups - incels, activists, etc. - and attempting to influence from within.
It isn't just a case of "Hey, we should say or do this - that's the way to do it."
It's oftentimes the opposite - to go in there and make people sabotage themselves.
It can be a lot easier to get people to spectacularly fail, discredit themselves, and destroy their own platform that "goes against your brand" than it is to go in and actively persuade people to think otherwise.
How Personas Are Created -
And How They Thrive
When it comes to "inventing" tech, the US Government has a few notable credits to its name - the internet (kinda) and TOR are chief among them.
Well, how's this for a feel-good story to add to that list?
The founding technology that the biggest PR and social media firms use to manage personas and influence Twitter's major discourse was a direct descendant of the tech used in America's operations against Al-Qaeda in the Middle East.
The United States Central Command (Centcom) worked with private corporations to develop 'sock puppet' software, with the aim of creating fake online identities to spread pro-American propaganda, and "communicate critical messages and to counter the propaganda of our adversaries" (source).
This software became the Godfather of the persona management technology being used en masse, versus the masses, today.
As a quick aside, want to know which company Centcom awarded the $2.76million contract to?
Who, at the time of writing, had an expired SSL certificate.
And who, up until mid 2018, was offering B2C, public VPN services, competing with the likes of ExpressVPN, Hide My Ass et al.
I Feel Sorry For The Person Who Was Using A VPN Service That Lets Its SSL Expire And Is Funded By The US Government.
Good luck with that.
This persona management software has evolved so that, in its current form, it lets a single operator maintain conversations across multiple made-up characters online.
The software facilitates language translation, and even maintains a particular cadence across conversations.
Those "characters", or personas, are not just created fresh for any campaign. They're typically bought from online marketplaces, "aged", and then given a biography, a history.
Sounds a bit like a William Gibson novel, doesn't it? A work of fiction, perhaps?
The truth is: this is a global marketplace.
From Corporations To Politicians - via Russia
I'm not even going to touch the whole Russia-US Election-Trump thing.
Just no. I have to keep my sanity.
However, there's plenty of evidence of politicians and campaign groups in other countries using persona management in the same way.
Mexico being a prime example.
Erin Gallagher is a self-described multi-media artist who creates "Data Visualizations & Social Media Analysis of Twitter Networks".
In this astonishing Twitter thread, she details the lengths Mexican officials have gone to in using persona management to sway public sentiment - and backs it up with evidence.
Something that flew under the radar because, as she puts it: "People aren't interested in those bots because Americans don't care about news from Mexico (or anywhere outside of USA)".
Sorry Jared - looks like you have competition.
Within that thread, Erin references a Buzzfeed report that visited "The King of Fake News" in Mexico - a company called Victory Lab.
While rumours of Victory Lab's influence, specifically, may have been greatly exaggerated (what, you expected Buzzfeed - a company that was literally created as a clickbait farm - to fact check?), there will undoubtedly be serious agencies doing this - across the globe.
One insight from that video I found particularly interesting was Victory Lab's preference to create several smaller websites to "share its news" - rather than creating 1 or 2 mega sites.
A Big Site Becomes A Brand.
And A Brand Always Has A Big Target On Its Back.
The "larger" one of these fake media outlets becomes, the greater the risk that a "real" media website will look into it, do a bit of digging, and expose it for what it is.
Keeping a lower profile, and across several websites, would allow the discourse and sentiment to achieve a greater reach.
And often, that reach is provided by the "real" media outlets themselves.
"Real" media websites will, frequently, and without fact-checking or doing due diligence, cite "fake" outlets and amplify their message for them.
More websites means more "sources" for them to use.
Which is such a sad indictment of today's news cycle. Our news is so event-driven now - no story is ever incremental; nothing is an issue that develops over time.
It's all reactionary.
And so the ability to manufacture events and spread the message or reaction is what news has become. That, as we have repeatedly seen, is exploitable to a huge degree.
Bringing It Back To Google News Corruption
Look - if politicians are looking to take advantage of this bastard-child software of the US military for their own gain - you can be damn sure that companies and corporations are doing the exact same thing.
That is your competition.
With an army of personas - be it Twitter characters or websites - you can then infiltrate Google News as well.
You spread the message around enough so that it gets picked up by a major publisher, which is featured on Google News.
You then make the story "highly cited" and "trending" by linking to it and amplifying the story with your sock puppet sites and personas.
My tests have shown that the "Highly Cited" tag is awarded when an article is linked to a lot, even if those sites are not in Google News themselves.
But the odds are, because our news cycle is this event-driven mess, other major publishers that are featured in Google News will begin to pick up the story - because they don't want to be seen as out of the loop.
...In theory, of course. *Cough*.
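The amplification loop above can be expressed as a toy simulation. To be clear: every number in this sketch is invented - Google publishes no "Highly Cited" threshold, and this is not its actual logic - it just models the claimed dynamic of puppet links seeding a story until real publishers start picking it up.

```python
import random

random.seed(42)  # reproducible run

# All numbers below are invented for illustration; Google publishes no such thresholds.
HIGHLY_CITED_THRESHOLD = 50   # hypothetical link count at which a story looks "highly cited"
REAL_PUBLISHERS = 30          # pool of genuine outlets that might pick the story up

def run_campaign(puppet_sites: int, rounds: int) -> int:
    """Toy model: sock-puppet links seed the story, then organic pickup compounds."""
    links = 0
    for _ in range(rounds):
        links += puppet_sites                  # each puppet site links once per round
        pickup_chance = min(links / 200, 0.5)  # more links -> more likely a real outlet bites
        for _ in range(REAL_PUBLISHERS):
            if random.random() < pickup_chance:
                links += 1                     # a real publisher cites the story
    return links

total = run_campaign(puppet_sites=10, rounds=5)
print(f"{total} links; 'highly cited'? {total >= HIGHLY_CITED_THRESHOLD}")
```

The design point the model captures: the puppet network only has to do the early lifting. Once the fear-of-missing-the-story pickup kicks in, the "real" outlets finish the job for free.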
Memes, Not-So-Glorious Memes
This section in 30 seconds or less:
Serious warning: some of the upcoming sources can be difficult to read.
Learning From The Language Of The Saboteurs
When The Great Political Orators Of The Day Are 4chan Users, In /Pol/, Creating Fake Personas, Posting Memes, And Shaping Political Discourse - You Know Things Are... Well, Fucked.
Emotion is the hook. Facts are a side dish.
Following the 2016 US Presidential Election, a series of Pastebin leaks were posted that show the depth of the operations going on behind the scenes.
At the heart of all that...Memes.
Who doesn't love a meme? A warm, cuddly, harmless meme. Everyone loves them. One of my favourite SEO accounts just posts Lord Of The Rings GIFs to contextualise day to day SEO life.
So you probably won't thank me when I tell you memes were at the heart of spreading lies, hate, and divisions throughout 2016.
This Pastebin leak makes for some pretty brutal reading, while also giving insight as to how people are using Twitter to effectively influence sentiment and discourse.
It plays up to some extremely emotive, stereotypical beliefs users will have, and exploits them to the fullest.
It gives a play-by-play on how users should set up their personas so as not to look "too obvious".
Erin Gallagher put together a Medium series that describes the staggering lengths 4chan and other users went to in order to influence the 2016 election.
It's scarcely believable - but you look at who the president is now and how, for lack of a better term, fucking crazy 2018 was, and it then starts to make sense.
A prime example of this activity has been preserved - almost "perfectly" - in a Twitter account abandoned after the election: @nubianawakening.
It's the epitome of what the pastebin and 4chan leaks talk about producing:
- It has a decent sized following, but it's not too big to draw unwanted attention
- It hooks into emotive messaging religiously, often using crudely put together memes
- It aims to discredit, stain, and lambaste one side of the discourse - rather than attempt to promote the other side.
"Bernie, Jill, then Trump. No Hillary."
This is the kind of messaging on social media that proves to be most effective, when influencing sentiment.
One side is denounced so heavily, so mercilessly, that it becomes "bad", "evil", and "red" - so that literally any alternative, in comparison, will be seen as "good", "innocent" and "blue".
It Was Never About Being Pro-Trump - It Was About Being So Vehemently Anti-Hillary That An Alternative *Had* To Be Found.
And the key to success? Using memes.
In other Pastebin leaks, there are content marketing plans more detailed than many I've seen in big corporate environments.
I won't quote the "content angles" here, because it's absolutely brutal reading. But go see it for yourself via the link above, and start to put the election year into context once you've read it.
As difficult, and frankly devastating, as it is to read, the political movements then and what we've seen since - Brexit, Le Pen, The Caravan, 5 Star Movement, Cambridge Analytica, Soros and anti-Semitism - show just how effective this emotive messaging has been. And will continue to be.
Social media is the swamp in which this kind of messaging can flourish.
Is there any reason to think that this same kind of messaging wouldn't be as effective in a digital marketing campaign?
I can't think of any.
This kind of stuff isn't going to make it into your Facebook ad copy (although, with their level of governance lately - or lack thereof - it would probably get approved).
But the ability to play on people's emotions, and to whittle away at the credibility of the competition in what seems like a harmless, meme-based approach...
...I know for a fact that companies are using persona management for this exact purpose right now.
Our Pals At The Government Are At It Again
This section in 30 seconds or less:
Learn how a US Government-funded project ego-baited activists into breaking the law by leaking information that wasn't even accurate.
Taking Discourse Down From The Inside
Team Themis was a consortium of HBGary, Palantir, and Berico, set up to provide offensive intelligence capabilities to private clients.
Among its many operations was infiltrating its operatives into activist groups and other organisations its clients didn't care for.
The goal here was not to publicly discredit the organisation, but to infiltrate it to change its discourse from within.
In fact, it went further than that. It was to provide deliberately false "information" and "leaks" to the organisation that they would make public - which would almost immediately be ridiculed as fake and inaccurate, thus tarnishing the legitimacy and message of said organisation.
Oh, and just for good measure, Team Themis would then tip-off the authorities and alert them about the leaking of classified information (ignoring the fact said information was fake), and threaten to charge them with the crime - unless they cease & desist.
And they would have got away with it too if it wasn't for that meddling Barrett Brown.
So Er...What Was The Marketing Application For This One?
Put it this way: I'm not saying that getting a VA to infiltrate a competitor, spy on their JIRA workflow, and sabotage the code development in their user stories is going to be an effective (OR LEGAL) use of your time.
But what about Facebook or Slack groups made for your target demographic?
Are you going to be the plucky, friendly, but-gets-on-everyone's-nerves "customer representative", hooking people up with free trials?
Or are you going to drop a persona or two in there - whose "growing influence" in the group can start recommending providers and services that just so happen to be affiliated with you, while throwing shade at the competition in an irreverent, "meme-ified" sort of way?
But Don't Ask Me - I'm Just A Digital Marketer
This section in 30 seconds or less:
A marketer who takes months to finish his guide!
That Wasn't Exactly Light Reading.
Marketing is about challenging yourself.
I don't mean "challenging yourself" to become a 4chan trolling, meme posting, absolute shit of a human being on Twitter.
But "challenging yourself" to look at the wider applications of the marketing mediums we work in - and look at the marketing implications that can come from that.
That's what I've attempted to do in this 2 part series. I think part 1 achieved that - I'm curious to see how part 2 will go down.
I've tried to give an honest - and at times it's been brutally honest - assessment of the landscape, and how marketing is going to continue to evolve.
You Can't Rely On Monopolies To "Do The Right Thing". You Have To Seize The Initiative For Yourself.
Do you use this information to protect yourself, your website, your brand, from these kinds of nefarious action?
Or do you look at carrying out some of this guerrilla marketing for yourself?
That choice will come down to you and what you're comfortable with. But I'll tell you this:
For the most part, Google, Facebook, Twitter -
they won't give a fuck as long as they get paid.
Make sure you get paid too.
Don't Miss It When It Comes
Sign up to the Raynernomics newsletter
and get access to the next guide before anyone else