Machine Intelligence and disambiguation

In the process of making machines as intelligent as humans, Google and a few others started using #algorithms (mostly LSTMs) that track what a user has searched for in the past. This began over a decade ago, when they started using past user inputs to personalise the search results they recommend.

To this they added algorithms for explicit and implicit intent. Explicit intent means the user has queried a particular word intending to learn more or to transact. Implicit intent means, for example, that if you type a cuisine on a mobile device, #Google detects that you are on the go and probably looking for a nearby restaurant serving that cuisine.


This means that only if a website mentions certain words or concepts in detail will it rank high for that user.

This change spawned the whole #contentmarketing industry, which involves building answers to the questions a typical user would ask. The effort became too resource intensive: big spenders can afford to throw money at it, whereas smaller organisations quickly started disappearing from the organic search results.

Without having to rewrite your web content, there is still a way to help your site rank higher on Google (for certain queries).

For example, let’s say you have a news snippet on your blog about Barack Obama. Given the size of the post, you can’t add endless (usually obvious) information about him in the snippet. Typically, Google ranks your post up only for his name or for words semantically close to “Obama”. Unfortunately, with so few words to work with, how will you rank up?

There is a way to tackle this problem, called #structureddata augmentation. By augmenting your content with Schema.org or DBpedia markup, you provide Google with additional information about the string “Obama” used in your post. So if a user searches for “former President from Hawaii” or “husband of Michelle Obama”, the chances of your small snippet showing up high in the SERP are very good.
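To make this concrete, here is a minimal sketch of what such markup could look like for the Obama snippet, using a schema.org Person entity in JSON-LD form. The property values shown are illustrative, not taken from any actual page; on a real site the generated object would be embedded in the page's HTML head or body.

```python
import json

# Illustrative schema.org Person markup for the hypothetical "Obama"
# news snippet. Each property (birthPlace, spouse, sameAs) gives the
# search engine an extra fact to disambiguate and match the entity.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Barack Obama",
    "jobTitle": "44th President of the United States",
    "birthPlace": {"@type": "Place", "name": "Honolulu, Hawaii"},
    "spouse": {"@type": "Person", "name": "Michelle Obama"},
    "sameAs": [
        "https://en.wikipedia.org/wiki/Barack_Obama",
        "https://dbpedia.org/resource/Barack_Obama",
    ],
}

# Wrap the JSON-LD object in the script tag Google expects to find
# when crawling the page.
markup = (
    '<script type="application/ld+json">\n'
    + json.dumps(person, indent=2)
    + "\n</script>"
)
print(markup)
```

With this in place, the page is no longer just a string match for “Obama”: queries such as “husband of Michelle Obama” can be resolved to the same entity through the `spouse` and `sameAs` properties.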

At luminate.ai, we use curated massive datasets, learning algorithms and predictive models to automatically augment contextually relevant words in your post with structured data. Using this method, we have seen organic traffic rise by over 20% above the baseline in a short period of time.

Structured data also helps Google show your page in the SERP as a rich snippet, include it in the Google Knowledge Graph, or even surface it in voice search results.

2019-01-30T12:40:42+00:00