An index is a large, computer-generated record of every web page a search engine has successfully discovered and accessed across the web. This index is built by web crawlers and is actively used by search engines to quickly retrieve relevant pages when users type in their queries.

Search engines rely on these indexes to help users find the information they are looking for. When a user enters a search query, the engine scans its index to locate pages that match the request and presents those results almost instantly.
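The lookup described above is commonly implemented as an inverted index: a mapping from each term to the set of pages that contain it. The sketch below is a minimal, hypothetical illustration of that idea; the page names and contents are invented, and real search engines layer tokenization, ranking, and far larger data structures on top.

```python
from collections import defaultdict

# Tiny invented corpus standing in for crawled pages.
pages = {
    "page-a": "guide to baking bread at home",
    "page-b": "bread flour types explained",
    "page-c": "home coffee brewing guide",
}

# Build the inverted index: term -> set of page IDs containing that term.
index = defaultdict(set)
for page_id, text in pages.items():
    for term in text.split():
        index[term].add(page_id)

def search(query):
    """Return the pages that contain every term in the query."""
    terms = query.split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results
```

Because the index maps terms directly to pages, answering a query is a handful of set lookups rather than a scan of every page, which is what makes near-instant results possible.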

Search engine bots, commonly called ‘crawlers’ or ‘spiders,’ build these indexes using complex algorithms that analyze the content of web pages. These bots examine every part of a page, including its text, images, and embedded links. The information collected for a search engine’s index is then used to determine that page’s ranking position in the search results. If a page cannot be crawled for any reason, it will not be entered into the index and, consequently, cannot be ranked.
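The crawl-then-index process can be sketched as a simple breadth-first traversal: fetch a page, store its content, and queue any links it contains. In this hypothetical sketch, `fetch()` is a stand-in that returns canned HTML instead of making a real HTTP request; a production crawler would also respect robots.txt, rate limits, and canonicalization. Note how a page that cannot be fetched never enters the index, matching the point above.

```python
from html.parser import HTMLParser

def fetch(url):
    # Hypothetical stand-in for an HTTP GET against a tiny two-page site.
    site = {
        "/": '<a href="/about">About</a> Welcome to the homepage',
        "/about": "All about this site",
    }
    return site.get(url)  # None models a page that cannot be crawled

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags while parsing HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start_url):
    """Breadth-first crawl; returns a mapping of URL -> page content."""
    index, queue, seen = {}, [start_url], {start_url}
    while queue:
        url = queue.pop(0)
        html = fetch(url)
        if html is None:      # uncrawlable page: never indexed, never ranked
            continue
        index[url] = html     # store content for later analysis and ranking
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index
```

Running `crawl("/")` discovers `/about` through the homepage's link, illustrating how crawlers expand the index by following embedded links.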