A search engine is a tool that helps us retrieve information from the World Wide Web.
Search engines rely on automated programs called ‘spiders’ (also called crawlers or robots) to traverse the World Wide Web, following hyperlinks (linked text) from web page to web page. These spiders collect and catalogue data from each web page and store the information in a database called a ‘search index’.
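To picture how a spider follows hyperlinks from page to page, here is a minimal sketch in Python. The `PAGES` dictionary is a made-up stand-in for the real web (a real spider would fetch and parse live HTML over HTTP); the crawl itself is just a breadth-first walk that avoids revisiting pages it has already seen.

```python
from collections import deque

# Toy "web": each URL maps to the hyperlinks found on that page.
# In a real spider these links would come from fetched HTML, not a dict.
PAGES = {
    "/home": ["/about", "/blog"],
    "/about": ["/home"],
    "/blog": ["/blog/post-1", "/home"],
    "/blog/post-1": ["/blog"],
}

def crawl(start):
    """Traverse pages breadth-first, following hyperlinks page to page."""
    seen = {start}
    queue = deque([start])
    visited = []
    while queue:
        url = queue.popleft()
        visited.append(url)  # a real spider would catalogue the page here
        for link in PAGES.get(url, []):
            if link not in seen:  # skip pages already queued or crawled
                seen.add(link)
                queue.append(link)
    return visited

print(crawl("/home"))  # → ['/home', '/about', '/blog', '/blog/post-1']
```

The `seen` set is what keeps the spider from looping forever, since web pages routinely link back to each other.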
The major search engines have their own spiders and build their own search indexes, which are regularly revised to keep the information accurate and up to date. Did you know the individual crawlers actually have names?! Google’s web crawler is called ‘Googlebot’ and Yahoo’s web crawler is called ‘Slurp’.
Data collected and stored in a search index provides an overview of a web page. The page can then be quickly and accurately matched to a relevant search query and included in the search engine results page (SERP).
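A quick way to see why an index makes matching fast is a toy ‘inverted index’ in Python: each word points to the set of pages containing it, so answering a query is just intersecting a few sets rather than re-reading every page. The page texts below are invented examples, and real search indexes are far more sophisticated, but the idea is the same.

```python
# Toy pages: URL -> the text a spider collected from that page.
DOCS = {
    "/home": "welcome to our site about search engines",
    "/blog": "how search engine spiders crawl the web",
    "/about": "we write about the web",
}

def build_index(docs):
    """Map each word to the set of pages that contain it."""
    index = {}
    for url, text in docs.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Return pages containing every word of the query."""
    results = None
    for word in query.split():
        pages = index.get(word, set())
        results = pages if results is None else results & pages
    return sorted(results or [])

index = build_index(DOCS)
print(search(index, "search web"))  # → ['/blog']
```

Because the lookup happens in the pre-built index, the query never touches the pages themselves, which is what lets a search engine answer in a fraction of a second.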
The goal of a search engine is to provide the most relevant match to each search query in as little time as possible.
The goal of a web page owner is to make sure their web page is matched to every relevant search query and included in the search results page when it should be. Search engine optimisation (SEO) helps to make this happen.