Indeed, this latest development, as cool as it is, is a continuation of a long string of innovation the sector has seen since the birth of the web. Wolfram Alpha is likely to fill one niche area of the market, while other innovations step up to solve other problems. That may still leave Google holding the bulk of the market, but at the very least it's encouraging that so many are continuing to develop new ideas despite such a near monopoly.
How search works
Every search engine is a bit different, much to the annoyance of search engine optimisers the world over. That said, they all work along the same basic lines: the engine sends out a robot, or spider, across the web to follow each and every link. The service then indexes what it finds, with different engines storing different bits. Google, for example, stores everything found in the source of the page, while others just look at what's displayed.
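The crawl-then-index idea above can be sketched in a few lines. This is a toy inverted index over a made-up, in-memory "web" (the URLs and text are invented for illustration); a real spider would fetch pages over HTTP and follow their links, but the core data structure, mapping each word to the set of pages that contain it, is the same:

```python
from collections import defaultdict

# A stand-in for crawled pages: URL -> page text (hypothetical examples).
pages = {
    "example.com/a": "search engines crawl the web",
    "example.com/b": "an index maps words to pages",
    "example.com/c": "crawl then index then search",
}

def build_index(pages):
    """Build an inverted index: word -> set of URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

index = build_index(pages)
print(sorted(index["crawl"]))  # pages that mention "crawl"
```

Looking up a query word is then a simple dictionary access, which is why engines can answer searches without re-reading the web each time.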
When a user searches all these indexed pages, relevancy comes into play. Google's PageRank is the most famous ranking algorithm, but every engine has its own way of finding the best results.
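The intuition behind PageRank is that a page is important if important pages link to it. Below is a minimal sketch of that idea using the classic power-iteration method; the tiny link graph is invented, and this is an illustration of the published concept, not Google's production algorithm:

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively distribute each page's rank across its outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline rank (the damping term)...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest to the pages it links to, split evenly.
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
# "c" ends up ranked highest: every other page links to it.
```

The repeated redistribution converges to a stable score per page, which the engine can then combine with keyword matching to order results.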
But it's not just the web that's searched. Wolfram Alpha is promising to return answers, not just documents, from across the web, while many other search services look beyond the web, or limit themselves to specific niche areas of it. Consider The Pirate Bay: the now-notorious site offers a search of BitTorrent files, the specific file type that's key to its users. Or consider Google Maps, which searches for anything that can be geographically pinpointed. Even services like Twitter are changing the way search works, by letting us search what people are tweeting about.
Before the web
Quite simply, the world didn't need search in the early days. In the beginning, all web servers were listed on a CERN website that was edited by a man then known simply as Tim Berners-Lee (those were the days before he was knighted).
Once that list became unwieldy, a search tool dubbed Archie (that's 'Archive' without the 'v') let users search a database of file listings gathered from FTP servers. Soon after, Gopher's rise led to a pair of new search tools, comically dubbed Veronica and Jughead, which searched using file names and menu titles.
It wasn't until 1993 that the first robot came about. It was called the World Wide Web Wanderer, but it was for measuring the web, not searching it.
Modern search kicked off in December of that year with JumpStation, which used a robot to crawl the web, indexing it to make it searchable: the three key aspects of modern, or at least current, search. JumpStation was limited to page titles, but another system called WebCrawler took it a step further the next year, managing to search full text.
Lycos kicked off the money making in 1994. The Carnegie Mellon project not only robotically indexed every word on a page for searching, but it was also used by the public and went commercial.
But it had competition. Among the pack that emerged over the next few years was Excite, Magellan and Infoseek, in addition to Altavista and Yahoo. Perhaps surprisingly now, at the time Yahoo didn't search via full pages and keywords, but instead used a web directory system.
By 1996, dominant browser Netscape was struggling to keep things fair, so for a fee of $5 million apiece it let search engines buy the featured search spot on the Netscape page in rotation.
But the search engine market was set for a shakeup. In that same year, Larry Page and Sergey Brin teamed up at Stanford University to develop a search engine based on relevancy, initially dubbing it BackRub. Two years later, Google was incorporated as a company with investment of $1 million; a year later, they had $25 million to play with.
In 2000, the market as we know it now started to take shape. The dot com bust took down some, but Google's PageRank bumped it into the limelight, and it started offering advertising that year.