John Mueller was left squirming for 45 minutes in front of a packed audience on the main stage of auditorium 1 at Brighton’s biannual SEO conference on April 12th, 2019.

And that was a good thing. Shortly after the interview finished, SEMrush posted a graphic of ‘John Mueller Q&A takeaways’ (find it at the bottom of this post) that was as unaware and sparse as their keyword volume database. They missed the point by a country mile. It’s what Mueller didn’t say that was remarkable.

John Mueller is Google’s public webmaster liaison, based in Switzerland. He offers support to SEOs and developers in open-access forums such as Webmaster Hangouts.

Recording of the live stream of John Mueller and Hannah Smith’s April 2019 Q&A keynote (full transcript linked at the end of this article)

Deeper takeaways from the Brighton 2019 keynote

Mueller is, by and large, a huge support to the community: a well-meaning guy who genuinely wants to help webmasters align the functionality and usability of their websites with their ability to generate organic traffic from Google, by making them crawlable and indexable.

The last time Mueller conducted a Q&A at Brighton – April 2018 – it was with Aleyda Solis (@aleyda), international SEO consultant, speaker and author, who in 2018 won European Search Personality of the Year.

Her 2018 keynote was as tepid and tired as the beer from the DeepCrawl bar. I won’t go into detail, but there are certain members of the SEO community who don’t like to rock the boat with Google.

They tend to be the ‘white hat only’ preachers who are always on the public speaking circuit and amass strong personal brands. They’re generally in the Moz and SEL bubble of influence and act as an echo chamber for the sort of ‘make your content awesome’ sentiment that Mueller frequently espouses.

When Kelvin introduced the keynote in 2019, he said the question master was someone who always made him feel comfortable to talk to, and I feared the worst. I needn’t have worried: the format of Hannah Smith’s 2019 Q&A was leaps and bounds ahead of Solis’ 2018 attempt.

This isn’t a blog post summarising everything discussed during the keynote. I just want to outline the finer details of what Mueller said (and didn’t say) as a way of unpicking what’s going on at Google.

Google Using User Behaviour As A Signal

Smith asked Mueller directly whether Google uses user data as part of its search algorithm.

Hannah: Surely at that point, John, you would start using signals from users, right? You would start looking at which results are clicked through most frequently, would you start looking at stuff like that at that point?


John: I don’t think we would use that for direct ranking like that. We use signals like that to analyze the algorithms in general, because across a million different search queries we can figure out like which one tends to be more correct or not, depending on where people click. But for one specific query for like a handful of pages, it can go in so many different directions. It’s really-

It’s been a long-held theory (led most publicly by Rand Fishkin, but by others as well) that Google uses data from user behaviour, such as clicks and bounce rate, as one of the ranking signals in its algorithm.

Google engineers have publicly denied this charge on numerous occasions, saying that user signals are only used in an evaluative sense, to measure the quality of the algorithm.

User behaviour, they argue, is too noisy a signal and too easily spammed to be used directly.
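Taken at face value, the official line describes something like offline A/B evaluation. Here’s a minimal sketch of what that ‘evaluative’ use of clicks could look like – comparing two candidate algorithms by the mean reciprocal rank of clicks, aggregated over many queries. The function names, the metric, and the log format are all my assumptions, not anything Google has confirmed.

```python
from collections import defaultdict

def evaluate_algorithms(click_logs):
    """click_logs: iterable of (query, algo_id, clicked_rank) tuples."""
    totals = defaultdict(lambda: [0.0, 0])  # algo_id -> [reciprocal-rank sum, count]
    for _query, algo_id, clicked_rank in click_logs:
        totals[algo_id][0] += 1.0 / clicked_rank  # clicks near rank 1 score higher
        totals[algo_id][1] += 1
    # Mean reciprocal rank per algorithm: noisy for any one query,
    # but stable across "a million different search queries".
    return {algo: rr / n for algo, (rr, n) in totals.items()}

logs = [("best laptops", "algo_a", 1), ("best laptops", "algo_b", 4),
        ("seo tools", "algo_a", 2), ("seo tools", "algo_b", 1)]
print(evaluate_algorithms(logs))  # {'algo_a': 0.75, 'algo_b': 0.625}
```

The aggregation is the point Mueller himself makes: for one query the signal “can go in so many different directions”, but across millions of queries it stabilises enough to judge which algorithm “tends to be more correct”.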

John Mueller’s answer to Smith’s question was “I don’t think we would use that for direct ranking like that.” What does think mean, in this context? Either you’re using user behaviour as a quality signal or you aren’t… you’re a Google engineer – surely there isn’t much for you to think about. “I don’t think” reeks of either dishonesty or ignorance, or perhaps a mixture of the two.

The subtext reads like: “I know we use user signals but I’m not allowed to say. And anyway, I haven’t been officially told we use user data. You know, maybe we aren’t.”

Google engineers have publicly been very clear when previously questioned on the same matter – why wasn’t Mueller able to provide the same clarity in Brighton? Is it because he’s aware that Google’s position on the matter is no longer valid, reasonable, or even close to the truth?

Earlier in the day I watched a talk by Michelle Wilding, Head of SEO & Content at The Telegraph.

In one of her slides she asserted that websites ‘pogo-sticking between positions 1, 2 and 3’ was related to certain patterns of user behaviour. I questioned her on the point afterwards and she was very cagey (‘I don’t know if you’re trying to slip me up, but go and read a few case studies’), but I imagine someone in her position would have some sort of data, or would have run experiments, to validate these theories.

Slide from Michelle Wilding’s BrightonSEO April 2019 deck:
https://image.slidesharecdn.com/brightonseo-uxbestfriendmwbslideshare2-190412091609/95/brightonseo-why-ux-is-seos-best-friend-12th-april-2019-12-638.jpg?cb=1555061513

The practice of SEO has changed direction: the user now comes first and foremost. Google cares deeply about providing quality experiences for searchers. UX is therefore more important than ever – and so is collaborating with your UX/CRO department.

I also recently read an article on SEOBook that argues it would be possible for Google to evaluate and discern between different types of user behaviour, to ensure the data is as clean as possible.

And, since Google is tracking the behavior of end users on their own website, anomalous behavior is easier to track than it is tracking something across the broader web where signals are more indirect. Google can take advantage of their wide distribution of Chrome & Android where users are regularly logged into Google & pervasively tracked to place more weight on users where they had credit card data, a long account history with regular normal search behavior, heavy Gmail users, etc.
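To make that argument concrete, here’s a hypothetical sketch of the kind of trust-weighting SEOBook describes – discounting click signals from accounts that look cheaply minted or bot-like, and up-weighting accounts with long, normal histories. The specific factors and multipliers are illustrative guesses on my part; nothing here is a known Google mechanism.

```python
def user_trust_weight(account):
    """Weight a user's click signals by how hard their profile is to fake."""
    weight = 1.0
    if account.get("account_age_days", 0) < 90:
        weight *= 0.2   # fresh accounts are cheap for spammers to mint
    if account.get("has_payment_data"):
        weight *= 1.5   # credit card data is hard to fabricate at scale
    if account.get("searches_per_day", 0) > 500:
        weight *= 0.1   # bot-like query volume
    return weight

def weighted_click_rate(clicks):
    """clicks: list of (account_dict, clicked_bool) pairs for one result."""
    norm = sum(user_trust_weight(a) for a, _ in clicks)
    hits = sum(user_trust_weight(a) for a, c in clicks if c)
    return hits / norm if norm else 0.0
```

Under a scheme like this, a click farm of thousands of fresh accounts would barely move the needle, while a handful of long-lived, logged-in Chrome users would.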

I watched a talk at Brighton in April 2017 by Malcolm Slade, who suggested that backlink data alone wasn’t enough to account for how Google ranked sites for commercial queries on page one – that there had to be some alternative factor no one was considering. Slade called it ‘brand’ and encouraged SEOs to continue with ATL efforts and brand marketing, as well as encouraging users to search for brands in Google.

Essentially: if backlinks are such an important part of Google’s algorithm, and backlinks are such a highly spammable signal, why not use user data too?

Surely user data is just as reliable (and as spammable) as backlinks. Remember: Google has access to huge swathes of data via its own website, its own browser and its own mobile operating system, as well as personal accounts (Gmail), to track and monitor its users with their permission. Strictly speaking, it would be much easier for Google to make sense of user data sampled within its own walled garden than it would be to use backlinks, which are exchanged for cash under the table about as regularly as auditorium 1 speaker deals at Brighton.

Perhaps it’s true that Google doesn’t use user data in its core algorithm. But consider the following model:

Google constantly evaluates SERPs via user data (for example, CTR) and uses the patterns it learns to make algorithmic tweaks designed to surface the most popular results at the top of the SERP. The tweaks are then released to the live algorithm, ready to be re-tested.

Google says it makes ‘thousands’ of updates per year – two to three per day.

Granted, many of these updates are design, UX and functionality changes.

Perhaps another reason for the high number of updates is a constant, iterative feedback loop between Google’s machine-learning systems and the live results, such that the index is constantly being improved and updated.

This would make sense, and it would also make the comments by Google engineers technically true (user data doesn’t feed into the live algorithm), even if it isn’t exactly in the spirit of how their algorithm really works.
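As a thought experiment, the loop I’m hypothesising can be simulated in a few lines. Everything below is illustrative – toy data, invented weights – but notice that the live ranking function never reads click data, yet click data decides which algorithm variant goes live.

```python
import random

def rank(pages, weights):
    # The "live algorithm": ranks on document features only, never clicks.
    return sorted(pages, key=lambda p: sum(w * p[f] for f, w in weights.items()),
                  reverse=True)

def observed_ctr(ranking):
    # Stand-in for real user behaviour: users click relevant results near the top.
    return sum(p["relevance"] / (i + 1) for i, p in enumerate(ranking))

pages = [{"relevance": random.random(), "links": random.random()} for _ in range(20)]
live = {"relevance": 0.5, "links": 0.5}

for _ in range(1000):  # "thousands of updates per year, 2-3 per day"
    candidate = {f: max(0.0, w + random.uniform(-0.05, 0.05)) for f, w in live.items()}
    # Clicks are used evaluatively, to choose between algorithm variants...
    if observed_ctr(rank(pages, candidate)) > observed_ctr(rank(pages, live)):
        live = candidate  # ...and the winning tweak ships to the live algorithm.

print(live)  # the weight on 'relevance' drifts upward, steered entirely by clicks
```

In a setup like this, “user data doesn’t feed into the live algorithm” is literally true on every individual query, while the system as a whole is being optimised by nothing but user data.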

Google’s Link Graph (are links still a signal?)

Another point that made me seriously reconsider Google’s approach to using user data was Mueller’s response to the holes developing in the link graph as large, authoritative sites apply a blanket nofollow to all external links (a practice that is becoming more commonplace).

Hannah: So many major UK news sites – examples include The Daily Mail and The Mirror – are now nofollowing all external links, even those placed editorially. So I don’t just mean they’re nofollowing, like, advertorial. They’re doing that sitewide. Plus we’re also seeing other news sites like The Sun, The Independent, The Telegraph going nofollow section by section, it would seem. So in some sections you can get a followed link if a journalist writes about something and they will link, and most of the time they then nofollow. How does this impact the link graph for Google?

John: Okay. On the one hand, this is definitely something that does affect our link graph. It does affect our ability to pick up especially new and fresh content.

Asked how UK publishers moving en masse to blanket-nofollowing external links would affect Google’s link graph, Mueller didn’t have a clear answer. A change like this would leave a big hole in the link graph – we know that links are one of Google’s top three ranking signals, and John knows this better than most.

He seemed more concerned about Google’s ability to discover new content if large sites refuse to link out than about any potential effect on the link graph. This suggests either: a) Google no longer respects the nofollow attribute, or b) Google uses other signals that are more important than backlinks (i.e. user behaviour).

If you replaced external links with user behaviour, the only remaining use of backlinks (albeit a fairly significant one) would be the discoverability of content.
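A rough sketch of what ‘discovery-only’ link handling might look like: every link feeds the crawl queue, but none of them feed a ranking score. The data structures, and the choice to queue even nofollowed URLs for discovery, are my assumptions, not Google’s documented behaviour.

```python
from collections import deque

crawl_queue = deque()      # discovered URLs awaiting a crawl
link_graph_edges = []      # edges that would drive ranking under the old model

def process_link(source_url, target_url, rel_nofollow):
    crawl_queue.append(target_url)  # every link aids discovery, nofollow or not
    if not rel_nofollow:
        # Followed links still enter the link graph, but under this
        # hypothesis the graph drives crawl priority, not rank.
        link_graph_edges.append((source_url, target_url))

process_link("themirror.example/story", "smallblog.example/post", rel_nofollow=True)
```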

Imagine for a second that Google no longer uses backlinks as a major ranking signal (Gary Illyes has said on numerous occasions, “you’re paying way too much attention to links”).

Perhaps, instead, Google uses backlinks as a way to discover new content and to prioritise crawl budget: URLs with one or two external links would only be crawled once or twice a week, while URLs with hundreds of links would be crawled daily.
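Under that model, a crawl scheduler might look something like the sketch below, using the rough tiers above (the middle tier is my own invented gradation, and all the numbers are illustrative):

```python
from datetime import timedelta

def crawl_interval(inbound_link_count):
    """Map inbound link count to crawl frequency (illustrative tiers only)."""
    if inbound_link_count >= 100:
        return timedelta(days=1)   # hundreds of links: crawl daily
    if inbound_link_count >= 10:
        return timedelta(days=3)   # hypothetical middle tier
    return timedelta(days=7)       # one or two links: roughly weekly

schedule = {url: crawl_interval(n)
            for url, n in [("example.com/new-post", 2),
                           ("example.com/viral-guide", 250)]}
```

Links would steer discovery and freshness without ever touching the scoring function.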

In this case, backlinks would still be a vital signal (you want Google to discover and crawl your pages regularly), but they wouldn’t contribute to how well those pages rank. Ranking would instead be determined by complex content analysis, the relationships between entities in documents and data in the Knowledge Graph, site speed, layout, and signals describing how users interact with your content.

It makes sense, and intuitively feels similar to how the current SEO landscape operates (large core algorithmic updates aside).

Google ‘accidentally’ deleting 4% of its index. Was it an algorithm update gone wrong?

On the topic of large algorithm updates, Smith asked Mueller for further clarity on the recent (April 2019) indexing issue, in which Google ‘accidentally-on-purpose’ deleted a huge section of its index – an issue some in the SEO industry theorised could have been an algorithm update gone sour.

Mueller’s reply was the passé Google PR response: data error.

I just want to make this clear: I don’t have a problem with John. But when you’re essentially the PR mouthpiece for a company that bleeds billions of marketing dollars from its creators by scraping their content and serving it to users alongside paid advertising, you’ve got to do a little bit better than shrug your shoulders and mumble through a sheepish grin: “It’s kinda, like, sorta, um… I don’t kinda, um, know… you know?”

Congrats to Hannah Smith (of Verve Search, Mueller’s interviewer for the 2019 keynote Q&A session) for asking the tough questions. Even if John shouldn’t be held accountable for Google’s corporate behaviour (to be clear: he shouldn’t), someone at Google should be. Questions like the ones Hannah asked, in such an important public SEO forum, are essential for the health and growth of the industry. Far better than the snooze-fest Aleyda Solis pushed out a year ago, as she and John grinned at one another through moonshine eyes for a nauseating half hour of giggly in-jokes.

Full transcript here, courtesy of ViperChill.

SEMrush’s ‘John Mueller Q&A takeaways’ graphic, referenced at the top of this post:
https://pbs.twimg.com/media/D3-VtYZWwAINxsz.jpg:large

Agree, disagree? I’d love to hear your thoughts. Email: [email protected]


Dylan owns and runs SEOYates.