I don’t know about you, but I get more and more frustrated each time I use Google or one of the other search engines to find information on a specific subject. I type in a keyword, and a large proportion of the pages returned are totally irrelevant, or only slightly relevant, to what I want.
The search engine providers spend a huge amount of time and resources trying to ensure that the pages returned are relevant to the keywords used. This process has improved a great deal over the last few years, but it is still far from perfect.
The problem that search engine providers face would simply disappear if the keywords on every page accurately reflected the content! Of course, that is almost impossible to achieve unless each page is manually checked against its keywords by someone who understands the content.
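To make the idea of checking keywords against content concrete, here is a minimal sketch of what an automated check might look like. This is purely illustrative, not a description of any existing system or of the technology proposed later in this post: the function name and the 0.5 threshold are hypothetical, and the test is a naive term-overlap check rather than real understanding of the content.

```python
# Purely illustrative: a naive automated check of a page's declared
# keywords against its body text. Names and threshold are hypothetical.
import re
from collections import Counter

def keyword_relevance(keywords, page_text, min_score=0.5):
    """Return the fraction of declared keywords whose every word
    appears in the page body, and whether that fraction passes."""
    words = Counter(re.findall(r"[a-z']+", page_text.lower()))
    hits = [kw for kw in keywords
            if all(words[w] > 0 for w in kw.lower().split())]
    score = len(hits) / len(keywords) if keywords else 0.0
    return score, score >= min_score

# A page tagged "python" and "tutorial" whose body mentions neither
score, ok = keyword_relevance(
    ["python", "tutorial"],
    "Buy cheap watches online today. Great deals on watches.",
)
print(f"relevance={score:.2f}, acceptable={ok}")  # relevance=0.00, acceptable=False
```

Even this toy test shows why the problem is hard: a spam page can trivially stuff its body with the declared keywords and pass, so matching words is not at all the same as understanding content.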
Two years ago I decided to start out on a new mission to see if it was possible to resolve this problem in some way. After six months of research I suddenly had one of those ‘light bulb’ moments when I realised that I was approaching the problem from the wrong angle.
I was assuming that the existing concept of a search engine, where a user enters a search term and the engine uses that term to return appropriate pages, was the only approach and would have to remain the central pivot.
I had forgotten the very things I identified earlier: that the problem would go away if the keywords always accurately reflected the content of the page, and if keywords that were not relevant were simply not allowed.
Of course, everyone reading this will probably say, “Well, that is common sense”, and read on without really considering what the statement actually means.
I asked myself:
What would happen if all the keywords accurately reflected the content of every page being searched?
Would it be possible to achieve such an objective, and if so, could it be done at a cost that was bearable?
After consideration I decided that it would not be possible to achieve that objective with the existing pages, as Google alone processes over one trillion pages of information. Of course, as time moves forward that will become two, three, five and even ten trillion pages, making the problem even harder to resolve.
Is there an alternative answer?
Well, I believe there is. We may not be able to resolve the existing problem easily or in the short term (if at all), but we can introduce new technology that all but guarantees keywords always reflect the content, and quality content at that … and, better still, we can do that at little or no additional cost.
I have a good plan, but I don’t have the money to fund it. I’d be happy to give away the lion’s share of the business opportunity to anyone who sees the potential and can successfully develop the software needed to achieve these results, or to any individual or business that would like to fund it.
Call me … but only if you are interested!
+++