How a Search Engine Works

Edited Nov 13, 2013

Google's Search Engine
Search engines are the most popular way to find information in the world.

On the outside, search engines are pretty simple websites: a search box with a submit button. On the inside, however, they hide some of the most complex and advanced software and hardware setups in the world.

Google receives several hundred million queries every day and needs to ensure that everyone who searches with Google receives quality, relevant links for every query they type in, and at lightning speed. Few people understand the complexities of a search engine as advanced as Google's or Bing's.

Why we need to know how a search engine works

It might not seem important to know how a search engine works, but when you have a blog or a website that you want to rank well, it can be to your advantage to learn everything you can about search engines, especially Google. Learning search engine optimization (SEO) becomes easier once you see that a search engine is not just a page on the web.

When you know how a car works, you can make better use of what it offers you as the driver; the same goes for search engines.

A typical search engine is made up of three parts: the front end, the back end and the part that collects information about websites.

Front end

Go to Google's website and you will find a search form where you type in your query and press the search button.

Google has developed an algorithm that compares your search query to its internal database and retrieves the links that match your query as closely as possible. Google then directs your browser to the search engine results page (SERP), where it displays all the relevant links it could find in its database.

The front end of a search engine is the part a user sees and uses to query the database and view the retrieved information.
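
To make this concrete, here is a minimal sketch in Python of what a front end does with a query: look each search term up in an index and return the pages that match best. This is not Google's actual code; the index, URLs and the crude scoring are made up purely for illustration.

def search(query, inverted_index):
    """Return URLs ranked by how many of the query's terms they match."""
    terms = query.lower().split()
    scores = {}
    for term in terms:
        for url in inverted_index.get(term, []):
            scores[url] = scores.get(url, 0) + 1
    # Pages matching more terms come first: a very crude SERP ordering.
    return sorted(scores, key=scores.get, reverse=True)

# Toy index for illustration (the URLs are hypothetical):
index = {
    "search": ["https://example.com/how-search-works", "https://example.org/seo"],
    "engine": ["https://example.com/how-search-works"],
}
print(search("search engine", index))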

Back end

The back end of a search engine is the software that uses algorithms to collect information. The algorithm indexes the information it collects into its database, which can then be accessed by the front end. Information is stored using keywords, phrases, links to and from the webpage, the URL (Uniform Resource Locator) and the code that was used to create the webpage.
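
One simple way to picture that database is as an inverted index: a table mapping each keyword to the pages it appears on. The Python sketch below builds such an index; it only handles keywords (a real back end also stores phrases, link data and page code), and the URL used is hypothetical.

import re
from collections import defaultdict

def index_page(url, text, inverted_index):
    """Add every distinct word on the page to the index, keyed by keyword."""
    for word in set(re.findall(r"[a-z0-9]+", text.lower())):
        inverted_index[word].add(url)

inverted_index = defaultdict(set)
index_page("https://example.com/how-search-works",
           "How a search engine works: crawling, indexing and ranking.",
           inverted_index)
print(inverted_index["indexing"])  # {'https://example.com/how-search-works'}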

Collecting information

Spiders, crawlers and robots are small programs that crawl the web from one URL to the next, collecting as much information from each webpage as they can. These small programs collect information based on the search engine's algorithm and then send it back to the search engine, where it is indexed into the database.
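
The Python sketch below shows the basic loop such a program follows, using only the standard library: fetch a page, hand its content off for indexing, collect its links and visit them in turn. A real crawler is far more sophisticated (it respects robots.txt, rate limits itself, handles duplicates and much more), and the starting URL here is hypothetical.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    to_visit, seen = [start_url], set()
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except (OSError, ValueError):
            continue  # skip pages that fail to load
        # At this point a real crawler would send the page's text and
        # metadata back to the search engine for indexing.
        collector = LinkCollector()
        collector.feed(html)
        to_visit.extend(urljoin(url, link) for link in collector.links)
    return seen

print(crawl("https://example.com/"))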

The information in the index is updated on a regular basis to keep search results fresh.

