Valued at over $110 billion, topping Apple as the world's most valuable brand, Google is not only the biggest search engine in the world but also one of the most influential tech companies in existence, with a wide range of online services, the most popular mobile operating system, a cutting-edge fiber-optic infrastructure, and a fleet of self-driving cars, just to name a handful of Google's initiatives. Since its IPO on August 19, 2004, Google's original offering price of $85 a share has increased more than 11-fold, to around $940 as of May 2017.
Even though the online landscape is vastly different now compared to 1996, which is when Larry Page and Sergey Brin began building the search engine that would become Google as Ph.D. students at Stanford University, the company still stands behind its original slogan, “Don’t be evil.”
But how exactly did Google get to where it is today, and how did people search for things on the web before they could use Google? Let’s explore the story behind the iconic minimalistic search page.
Search Engines Before Google
In 1987, when asked to connect the School of Computer Science at McGill University to the Internet, Peter Deutsch, Alan Emtage, and Bill Heelan thought it would be great to allow people to find specific files on FTP sites easily. They created Archie, a tool for indexing FTP archives, and released it in 1990. Archie is now widely recognized as the first internet search engine. Back then, there were so few files on the internet that people didn’t need any other tools apart from the Unix grep command to search the listings created by Archie.
It wasn’t until 1995 that the first engine with support for natural language queries was released. Its name was AltaVista, and it became the most-used search engine before it was overthrown by Google. AltaVista and other search engines at the time, such as Yahoo! Search, Lycos, or LookSmart, relied on an entirely different method for delivering search results compared to modern search engines.
These early search engines relied on databases of textual keywords, which were often populated manually by netizens and human editors. When a user made a search, an early search engine would compare the search term with an extensive database of terms and present the user with the most relevant matches.
The relevancy was usually determined based on similarity, which resulted in many misleading results. Quite often, the topmost search result wouldn’t be the official website of a company or product but an online retailer selling the company’s products. The internet needed a better solution, one that could keep up with the rapid growth of the web and provide users from around the world with accurate search results. This solution came to life with the registration of the domain google.com on September 15, 1997.
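The keyword-matching approach described above can be sketched in a few lines of Python. This is a toy illustration, not how any particular engine actually worked: the directory entries are invented, and real engines used far more elaborate term databases. It does, however, show why a retailer stuffed with relevant keywords could outrank an official site when ranking depends on term similarity alone.

```python
def keyword_search(query, directory):
    """Rank sites purely by overlap between query words and
    editor-supplied keywords -- a toy model of pre-Google search.

    directory: dict mapping a site URL to its set of keywords.
    """
    query_words = set(query.lower().split())
    results = []
    for url, keywords in directory.items():
        overlap = len(query_words & keywords)
        if overlap:
            results.append((overlap, url))
    # No notion of a site's importance or authority -- only term
    # similarity -- which is why results were often misleading.
    return [url for overlap, url in sorted(results, reverse=True)]

# Hypothetical directory: a keyword-heavy retailer vs. the official site.
directory = {
    "acme.example.com": {"acme", "widgets", "official"},
    "shop.example.com": {"acme", "widgets", "buy", "cheap", "sale"},
}
print(keyword_search("buy acme widgets", directory))
```

Here the retailer matches three query words against the official site's two, so it lands on top of the results.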
Google Enters the Stage
To understand what made Google so different from other search engines at the time, it’s important to understand Google’s main ranking algorithm, PageRank (PR). According to Google, “PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.”
If this method of evaluating the relevancy of websites looks familiar, that’s because it was inspired by how academics evaluate publications. Academic papers are considered to be of value when they cite eminent authors and are, in turn, cited by acclaimed academics and journals.
Larry Page, after whom PageRank is named, created a web crawler, which started exploring the web in March 1996. The crawler builds a complex web of links by following link after link. PageRank then assigns each page a numerical value based on its importance. The number of links that lead to and from a page, and the values of the pages behind them, are the primary factors in how high or low a website ranks on Google’s search engine results page (SERP).
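The idea can be made concrete with a minimal sketch of the classic PageRank computation via power iteration. The link graph, damping factor, and iteration count below are illustrative assumptions, not Google's actual parameters (the real system has evolved far beyond this simple formulation).

```python
def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank scores by power iteration.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank, with ranks summing to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline rank (the "random surfer"
        # occasionally jumps to an arbitrary page).
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if outgoing:
                # A page shares its rank equally among its outgoing links.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C both link to B, B links back to A.
toy_web = {"A": ["B"], "B": ["A"], "C": ["B"]}
ranks = pagerank(toy_web)
# B receives the most inbound links, so it ends up with the highest rank;
# C receives none and keeps only the baseline rank.
```

Note how importance propagates: A ranks above C not because it has more inbound links in raw count, but because its single link comes from the highly ranked B, exactly the citation-style logic described above.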
The World Ruled by Google
In 1999, Google was handling 500,000 searches a day; as of 2017, it processes over 40,000 search queries every second, which is over 3.5 billion searches per day. Since its early days, the number of ranking factors has grown dramatically, with many SEO experts talking about over 200 factors that help Google decide how to rank a site.
Considering that over 77 percent of people around the world use Google as their preferred search engine, every business and website owner should strive to please Google’s ranking algorithms. While there are many shady tactics floating around the web that describe how to trick Google into ranking a site higher than it deserves, their usage should be avoided at all costs.
Not only are these so-called black hat techniques not guaranteed to work, but they can lead to a drastic penalty and even an outright ban from Google. Instead, it’s a much better idea to stick with white hat techniques recommended by Google.
White hat techniques can be grouped into several broad categories: domain factors, page-level factors, site-level factors, backlink factors, user interaction factors, special algorithm rules, social signals, broad signals, on-site web spam factors, and off-site web spam factors.
Any modern website that wants to rank high on Google must be mobile-friendly, optimized for speed and security, feature unique content that keeps visitors engaged, use relevant keywords, take advantage of schema microformats and local searches, have an established social media presence, and much more.
Because only the most experienced web development agencies and those who hire them can develop websites that meet all the ranking criteria, Google is greatly influencing who gets noticed on the web and who is destined to fade into the background.
In a world ruled by Google, anyone with an online presence needs to have at least some understanding of how it works. While Google’s search engine algorithms may seem incomprehensibly complex—and they are, to some extent—the basic principle behind them is simple: Google wants to highlight the most valuable websites.