Today I was searching for news on a very specific topic, starting from one web page I’d found. I was having difficulty getting results because they seemed to depend on combinations of words I just couldn’t nail down; each combination kept returning the wrong hits. Clearly Google’s ranking system couldn’t hack it, for some reason.
I thought to myself, “If only I could just say to Google, ‘Look, Google, I’m interested in this page. Now go away and find stuff related to it for me, why don’t you.’” And then I realised: wouldn’t that be an amazing feature? If somehow a search engine could scan a page, or a set of pages, work out the semantic threads in it, and then do the same for other pages and cross-reference? I don’t think we’re talking meta-text here. We’re talking context.
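A crude stand-in for that “find me pages like this one” idea can be sketched with plain TF-IDF and cosine similarity — nothing truly semantic, just overlapping word weights, but it captures the shape of the feature. This is a minimal sketch, not anyone’s actual implementation; all function names here are hypothetical.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a real system would add stemming, stop-words, etc.
    return re.findall(r"[a-z']+", text.lower())

def tfidf_vectors(docs):
    # docs: list of raw text strings -> list of {term: weight} dicts
    tokenized = [tokenize(d) for d in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # Smoothed IDF: terms appearing in every document get weight ~0.
        vec = {t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
               for t, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def most_similar(query_doc, candidate_docs):
    # Rank candidate pages by similarity to the query page;
    # returns candidate indices, best match first.
    vecs = tfidf_vectors([query_doc] + candidate_docs)
    query_vec, rest = vecs[0], vecs[1:]
    scored = sorted(enumerate(rest),
                    key=lambda iv: cosine(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored]
```

So `most_similar(page_text, crawled_pages)` would rank a crawl by resemblance to the page you handed it. Real “context” would need far more than shared vocabulary, of course, which is exactly the gap the post is complaining about.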
Perhaps this will come a step closer when vertical search engines take off – I mean really take off, so that they supplant Wikipedia as the first Google hit for a term. I don’t consider the current candidates any ‘smarter’ than their horizontal cousins: for example, researchindex.com seems more a portal (with annoying pop-under ads and a set-your-homepage feature – despite my link, you’re better off not going there), and Jumbo doesn’t seem to know anything about Eniac.
If anyone does invent this, don’t forget, it was my idea first. It’s not a search engine either. It’s a semantic motor – smarter and faster. And it’s got to be called CON_text. Trademark.