Your Ultimate Checklist for Choosing a Bulk Index Checker

When it comes to optimizing your SEO efforts, having the right tools in your arsenal is crucial. One such tool is a bulk index checker, which can save you loads of time and help you effectively monitor the indexing status of various URLs on your website. But with so many options out there, how do you choose the best one for your needs? Here's your ultimate checklist to help you make an informed decision!


Firstly, consider the accuracy of the tool. The main purpose of a bulk index checker is to determine whether your pages have been indexed by search engines like Google. Therefore, it's important that the results provided by the tool are accurate and reliable. You wouldn't want to make decisions based on incorrect data, right?


Next, you should look at the speed of the tool. Time is money, especially in the fast-paced world of digital marketing. A good bulk index checker should be able to process large batches of URLs quickly, without compromising on the accuracy of its results. Slow tools can hinder your productivity and delay your SEO strategies.


Another important factor is ease of use. The tool should have a user-friendly interface that is easy to navigate even for beginners. Complicated tools can be frustrating and may require additional time to learn, which can slow down your overall workflow. Look for a tool with straightforward instructions and supportive customer service!


Price is also a crucial consideration. While it's tempting to go for free tools, they often come with limitations that might affect their usefulness. Paid tools generally offer more features and better performance, but it's important to ensure that the cost fits within your budget. Compare different tools and check if they offer a free trial period to test their capabilities before committing to a purchase.


Lastly, check for additional features that can add value to your SEO efforts. Some bulk index checkers come with extra functionalities like reporting features, integration capabilities with other SEO tools, and updates on changes in index status. These features can provide deeper insights and enhance your SEO strategy.


Remember, choosing the right bulk index checker is vital for your website's SEO success! Take your time to evaluate each tool against these criteria. And don't forget to read reviews and ask for recommendations from other SEO professionals. They can offer valuable insights from their experiences, helping you make a better choice.


Choosing the right tool might seem daunting, but with this checklist, you're well on your way to finding a bulk index checker that fits your needs perfectly!

Happy hunting!

Read more here also:

Search engine indexing

Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing.

Popular search engines focus on the full-text indexing of online, natural language documents.[1] Media types such as pictures, video, audio,[2] and graphics[3] are also searchable.

Meta search engines reuse the indices of other services and do not store a local index whereas cache-based search engines permanently store the index along with the corpus. Unlike full-text indices, partial-text services restrict the depth indexed to reduce index size. Larger services typically perform indexing at a predetermined time interval due to the required time and processing costs, while agent-based search engines index in real time.

Indexing


The purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query. Without an index, the search engine would scan every document in the corpus, which would require considerable time and computing power. For example, while an index of 10,000 documents can be queried within milliseconds, a sequential scan of every word in 10,000 large documents could take hours. The additional computer storage required to store the index, as well as the considerable increase in the time required for an update to take place, are traded off for the time saved during information retrieval.

Index design factors


Major factors in designing a search engine's architecture include:

Merge factors
How data enters the index, or how words or subject features are added to the index during text corpus traversal, and whether multiple indexers can work asynchronously. The indexer must first check whether it is updating old content or adding new content. Traversal typically correlates to the data collection policy. Search engine index merging is similar in concept to the SQL Merge command and other merge algorithms.[4]
Storage techniques
How to store the index data, that is, whether information should be data compressed or filtered.
Index size
How much computer storage is required to support the index.
Lookup speed
How quickly a word can be found in the inverted index. The speed of finding an entry in a data structure, compared with how quickly it can be updated or removed, is a central focus of computer science.
Maintenance
How the index is maintained over time.[5]
Fault tolerance
How important it is for the service to be reliable. Issues include dealing with index corruption, determining whether bad data can be treated in isolation, dealing with bad hardware, partitioning, and schemes such as hash-based or composite partitioning,[6] as well as replication.

Index data structures


Search engine architectures vary in the way indexing is performed and in methods of index storage to meet the various design factors.

Suffix tree
Figuratively structured like a tree, supports linear time lookup. Built by storing the suffixes of words. The suffix tree is a type of trie. Tries support extendible hashing, which is important for search engine indexing.[7] Used for searching for patterns in DNA sequences and clustering. A major drawback is that storing a word in the tree may require space beyond that required to store the word itself.[8] An alternate representation is a suffix array, which is considered to require less virtual memory and supports data compression such as the BWT algorithm.
Inverted index
Stores a list of occurrences of each atomic search criterion,[9] typically in the form of a hash table or binary tree.[10][11]
Citation index
Stores citations or hyperlinks between documents to support citation analysis, a subject of bibliometrics.
n-gram index
Stores sequences of length n of data to support other types of retrieval or text mining (a sketch follows this list).[12]
Document-term matrix
Used in latent semantic analysis, stores the occurrences of words in documents in a two-dimensional sparse matrix.
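
As a concrete illustration of the n-gram index mentioned above, the following sketch (Python, with a toy corpus and made-up document IDs, not taken from any particular engine) maps each word bigram to the set of documents containing it.

```python
from collections import defaultdict

def word_ngrams(tokens, n=2):
    """Yield consecutive word n-grams from a list of tokens."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def build_ngram_index(docs, n=2):
    """Map each n-gram to the set of document IDs in which it occurs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for gram in word_ngrams(text.lower().split(), n):
            index[gram].add(doc_id)
    return index

docs = {1: "the cow says moo", 2: "the cat and the hat"}
index = build_ngram_index(docs)
print(index[("the", "cow")])  # {1}
```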

Challenges in parallelism


A major challenge in the design of search engines is the management of serial computing processes. There are many opportunities for race conditions and coherent faults. For example, a new document is added to the corpus and the index must be updated, but the index simultaneously needs to continue responding to search queries. This is a collision between two competing tasks. Consider that authors are producers of information, and a web crawler is the consumer of this information, grabbing the text and storing it in a cache (or corpus). The forward index is the consumer of the information produced by the corpus, and the inverted index is the consumer of information produced by the forward index. This is commonly referred to as a producer-consumer model. The indexer is the producer of searchable information and users are the consumers that need to search. The challenge is magnified when working with distributed storage and distributed processing. In an effort to scale with larger amounts of indexed information, the search engine's architecture may involve distributed computing, where the search engine consists of several machines operating in unison. This increases the possibilities for incoherency and makes it more difficult to maintain a fully synchronized, distributed, parallel architecture.[13]
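
The producer-consumer relationship described above can be sketched with a thread-safe queue; the single crawler thread and single indexer thread here are stand-ins for illustration only, not a distributed architecture.

```python
import queue
import threading

pages = queue.Queue()

def crawler(urls):
    """Producer: fetches (here, fakes) page text and puts it on the queue."""
    for url in urls:
        pages.put((url, f"text of {url}"))
    pages.put(None)  # sentinel: no more work

def indexer(inverted_index):
    """Consumer: tokenizes queued pages and updates the inverted index."""
    while True:
        item = pages.get()
        if item is None:
            break
        url, text = item
        for word in text.lower().split():
            inverted_index.setdefault(word, set()).add(url)

inverted = {}
producer = threading.Thread(target=crawler, args=(["a.html", "b.html"],))
consumer = threading.Thread(target=indexer, args=(inverted,))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(sorted(inverted)[:3])
```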

Inverted indices


Many search engines incorporate an inverted index when evaluating a search query to quickly locate documents containing the words in a query and then rank these documents by relevance. Because the inverted index stores a list of the documents containing each word, the search engine can use direct access to find the documents associated with each word in the query in order to retrieve the matching documents quickly. The following is a simplified illustration of an inverted index:

Inverted index
Word Documents
the Document 1, Document 3, Document 4, Document 5, Document 7
cow Document 2, Document 3, Document 4
says Document 5
moo Document 7

This index can only determine whether a word exists within a particular document, since it stores no information regarding the frequency and position of the word; it is therefore considered to be a Boolean index. Such an index determines which documents match a query but does not rank matched documents. In some designs the index includes additional information such as the frequency of each word in each document or the positions of a word in each document.[14] Position information enables the search algorithm to identify word proximity to support searching for phrases; frequency can be used to help in ranking the relevance of documents to the query. Such topics are the central research focus of information retrieval.

The inverted index is a sparse matrix, since not all words are present in each document. To reduce computer storage memory requirements, it is stored differently from a two dimensional array. The index is similar to the term document matrices employed by latent semantic analysis. The inverted index can be considered a form of a hash table. In some cases the index is a form of a binary tree, which requires additional storage but may reduce the lookup time. In larger indices the architecture is typically a distributed hash table.[15]
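
A hash-table-backed Boolean inverted index like the one illustrated above can be sketched in a few lines of Python; the document names and contents follow the example table loosely, and this is an illustration rather than a production design.

```python
from collections import defaultdict

documents = {
    "Document 1": "the cow says moo",
    "Document 2": "the cat and the hat",
}

def build_inverted_index(documents):
    """Map each word to the set of documents containing it (a Boolean index)."""
    index = defaultdict(set)
    for name, text in documents.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

index = build_inverted_index(documents)
print(sorted(index["the"]))  # ['Document 1', 'Document 2']
```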

Implementation of Phrase Search Using an Inverted Index


For phrase searching, a specialized form of an inverted index called a positional index is used. A positional index not only stores the ID of the document containing the token but also the exact position(s) of the token within the document in the postings list. The occurrences of the phrase specified in the query are retrieved by navigating these postings lists and identifying the positions at which the desired terms occur in the expected order (the same as the order in the phrase). So if we are searching for an occurrence of the phrase "First Witch", we would:

  1. Retrieve the postings list for "first" and "witch"
  2. Identify the first time that "witch" occurs after "first"
  3. Check that this occurrence is immediately after the occurrence of "first".
  4. If not, continue to the next occurrence of "first".

The postings lists can be navigated using a binary search in order to minimize the time complexity of this procedure.[16]
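
A minimal positional index and phrase lookup along the lines of the steps above might look like the following sketch; for simplicity it checks consecutive positions with a linear scan rather than the binary search mentioned above, and the documents are invented.

```python
from collections import defaultdict

def build_positional_index(documents):
    """Map each term to a postings dict of {doc_id: [positions]}."""
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in documents.items():
        for position, word in enumerate(text.lower().split()):
            index[word][doc_id].append(position)
    return index

def phrase_search(index, phrase):
    """Return doc_ids in which the phrase terms occur at consecutive positions."""
    terms = phrase.lower().split()
    postings = [index[term] for term in terms]
    candidates = set(postings[0])
    for plist in postings[1:]:
        candidates &= set(plist)
    hits = set()
    for doc_id in candidates:
        for start in postings[0][doc_id]:
            if all(start + offset in postings[offset][doc_id]
                   for offset in range(1, len(terms))):
                hits.add(doc_id)
                break
    return hits

docs = {1: "enter the first witch", 2: "the witch was there first"}
index = build_positional_index(docs)
print(phrase_search(index, "first witch"))  # {1}
```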

Index merging


The inverted index is filled via a merge or rebuild. A rebuild is similar to a merge but first deletes the contents of the inverted index. The architecture may be designed to support incremental indexing,[17] where a merge identifies the document or documents to be added or updated and then parses each document into words. For technical accuracy, a merge conflates newly indexed documents, typically residing in virtual memory, with the index cache residing on one or more computer hard drives.

After parsing, the indexer adds the referenced document to the document list for the appropriate words. In a larger search engine, the process of finding each word in the inverted index (in order to report that it occurred within a document) may be too time consuming, and so this process is commonly split up into two parts, the development of a forward index and a process which sorts the contents of the forward index into the inverted index. The inverted index is so named because it is an inversion of the forward index.

The forward index


The forward index stores a list of words for each document. The following is a simplified form of the forward index:

Forward Index
Document Words
Document 1 the,cow,says,moo
Document 2 the,cat,and,the,hat
Document 3 the,dish,ran,away,with,the,spoon

The rationale behind developing a forward index is that as documents are parsed, it is better to intermediately store the words per document. The delineation enables asynchronous system processing, which partially circumvents the inverted index update bottleneck.[18] The forward index is sorted to transform it to an inverted index. The forward index is essentially a list of pairs consisting of a document and a word, collated by the document. Converting the forward index to an inverted index is only a matter of sorting the pairs by the words. In this regard, the inverted index is a word-sorted forward index.
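
The sorting step described above can be shown directly: flattening the forward index into (word, document) pairs and sorting by word yields the inverted index. The document names follow the example tables, and this is only a sketch.

```python
from itertools import groupby

forward_index = {
    "Document 1": ["the", "cow", "says", "moo"],
    "Document 2": ["the", "cat", "and", "the", "hat"],
}

# Flatten to (word, document) pairs and sort by word: a word-sorted
# forward index is essentially the inverted index.
pairs = sorted((word, doc) for doc, words in forward_index.items() for word in words)

inverted_index = {
    word: sorted({doc for _, doc in group})
    for word, group in groupby(pairs, key=lambda pair: pair[0])
}
print(inverted_index["the"])  # ['Document 1', 'Document 2']
```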

Compression


Generating or maintaining a large-scale search engine index represents a significant storage and processing challenge. Many search engines utilize a form of compression to reduce the size of the indices on disk.[19] Consider, for example, a full-text Internet search engine indexing an estimated 2 billion web pages, each containing an average of 250 five-character words stored at 1 byte per character.

Given this scenario, an uncompressed index (assuming a non-conflated, simple, index) for 2 billion web pages would need to store 500 billion word entries. At 1 byte per character, or 5 bytes per word, this would require 2500 gigabytes of storage space alone.[citation needed] This space requirement may be even larger for a fault-tolerant distributed storage architecture. Depending on the compression technique chosen, the index can be reduced to a fraction of this size. The tradeoff is the time and processing power required to perform compression and decompression.[citation needed]
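
The figure above can be reproduced with simple arithmetic; the 250 words-per-page average used below is an assumption consistent with the stated totals (2 billion pages, 500 billion word entries), not a measured value.

```python
pages = 2_000_000_000        # estimated web pages in the corpus
words_per_page = 250         # assumed average, so pages * words = 500 billion entries
bytes_per_word = 5           # 5 characters per word at 1 byte per character

word_entries = pages * words_per_page                   # 500,000,000,000 entries
storage_gigabytes = word_entries * bytes_per_word / 1e9
print(word_entries, storage_gigabytes)                  # 500000000000 2500.0
```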

Notably, large scale search engine designs incorporate the cost of storage as well as the costs of electricity to power the storage. Thus compression is a measure of cost.[citation needed]

Document parsing


Document parsing breaks apart the components (words) of a document or other form of media for insertion into the forward and inverted indices. The words found are called tokens, and so, in the context of search engine indexing and natural language processing, parsing is more commonly referred to as tokenization. It is also sometimes called word boundary disambiguation, tagging, text segmentation, content analysis, text analysis, text mining, concordance generation, speech segmentation, lexing, or lexical analysis. The terms 'indexing', 'parsing', and 'tokenization' are used interchangeably in corporate slang.

Natural language processing is the subject of continuous research and technological improvement. Tokenization presents many challenges in extracting the necessary information from documents for indexing to support quality searching. Tokenization for indexing involves multiple technologies, the implementation of which are commonly kept as corporate secrets.[citation needed]

Challenges in natural language processing

Word boundary ambiguity
Native English speakers may at first consider tokenization to be a straightforward task, but this is not the case with designing a multilingual indexer. In digital form, the texts of other languages such as Chinese or Japanese represent a greater challenge, as words are not clearly delineated by whitespace. The goal during tokenization is to identify words for which users will search. Language-specific logic is employed to properly identify the boundaries of words, which is often the rationale for designing a parser for each language supported (or for groups of languages with similar boundary markers and syntax).
Language ambiguity
To assist with properly ranking matching documents, many search engines collect additional information about each word, such as its language or lexical category (part of speech). These techniques are language-dependent, as the syntax varies among languages. Documents do not always clearly identify the language of the document or represent it accurately. In tokenizing the document, some search engines attempt to automatically identify the language of the document.
Diverse file formats
In order to correctly identify which bytes of a document represent characters, the file format must be correctly handled. Search engines that support multiple file formats must be able to correctly open and access the document and be able to tokenize the characters of the document.
Faulty storage
The quality of the natural language data may not always be perfect. An unspecified number of documents, particularly on the Internet, do not closely obey proper file protocol. Binary characters may be mistakenly encoded into various parts of a document. Without recognition of these characters and appropriate handling, the index quality or indexer performance could degrade.

Tokenization


Unlike literate humans, computers do not understand the structure of a natural language document and cannot automatically recognize words and sentences. To a computer, a document is only a sequence of bytes. Computers do not 'know' that a space character separates words in a document. Instead, humans must program the computer to identify what constitutes an individual or distinct word referred to as a token. Such a program is commonly called a tokenizer or parser or lexer. Many search engines, as well as other natural language processing software, incorporate specialized programs for parsing, such as YACC or Lex.

During tokenization, the parser identifies sequences of characters that represent words and other elements, such as punctuation, which are represented by numeric codes, some of which are non-printing control characters. The parser can also identify entities such as email addresses, phone numbers, and URLs. When identifying each token, several characteristics may be stored, such as the token's case (upper, lower, mixed, proper), language or encoding, lexical category (part of speech, like 'noun' or 'verb'), position, sentence number, sentence position, length, and line number.
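
A toy tokenizer along these lines, using a regular expression rather than a generated lexer like Lex, might record a few of the characteristics listed above; the token pattern and the attributes chosen here are illustrative assumptions.

```python
import re

# Illustrative token pattern: words, optionally joined by '.' or '@' so that
# e-mail addresses survive as single tokens.
TOKEN_PATTERN = re.compile(r"\w+(?:[.@]\w+)*")

def tokenize(text):
    """Yield (token, token_position, character_offset, case) for a document."""
    for position, match in enumerate(TOKEN_PATTERN.finditer(text)):
        token = match.group()
        if token.isupper():
            case = "upper"
        elif token.islower():
            case = "lower"
        elif token.istitle():
            case = "proper"
        else:
            case = "mixed"
        yield token, position, match.start(), case

for item in tokenize("The cow says moo. Contact cow@example.com"):
    print(item)
```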

Language recognition


If the search engine supports multiple languages, a common initial step during tokenization is to identify each document's language; many of the subsequent steps are language dependent (such as stemming and part of speech tagging). Language recognition is the process by which a computer program attempts to automatically identify, or categorize, the language of a document. Other names for language recognition include language classification, language analysis, language identification, and language tagging. Automated language recognition is the subject of ongoing research in natural language processing. Finding which language the words belong to may involve the use of a language recognition chart.

Format analysis


If the search engine supports multiple document formats, documents must be prepared for tokenization. The challenge is that many document formats contain formatting information in addition to textual content. For example, HTML documents contain HTML tags, which specify formatting information such as new line starts, bold emphasis, and font size or style. If the search engine were to ignore the difference between content and 'markup', extraneous information would be included in the index, leading to poor search results. Format analysis is the identification and handling of the formatting content embedded within documents which controls the way the document is rendered on a computer screen or interpreted by a software program. Format analysis is also referred to as structure analysis, format parsing, tag stripping, format stripping, text normalization, text cleaning and text preparation. The challenge of format analysis is further complicated by the intricacies of various file formats. Certain file formats are proprietary with very little information disclosed, while others are well documented. Common, well-documented file formats that many search engines support include HTML, plain text, PDF, XML, and common office document formats.
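
As a very rough sketch of tag stripping for HTML using only the Python standard library (real format analysis is considerably more involved and must also deal with hidden or scripted content):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text and skip the contents of <script> and <style> tags."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

    def text(self):
        return " ".join(" ".join(self.parts).split())

parser = TextExtractor()
parser.feed("<html><body><p>The <b>cow</b> says moo.</p>"
            "<script>var x = 1;</script></body></html>")
print(parser.text())  # The cow says moo.
```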

Options for dealing with various formats include using a publicly available commercial parsing tool that is offered by the organization which developed, maintains, or owns the format, and writing a custom parser.

Some search engines support inspection of files that are stored in a compressed or encrypted file format. When working with a compressed format, the indexer first decompresses the document; this step may result in one or more files, each of which must be indexed separately. Commonly supported compressed file formats include:

  • ZIP - Zip archive file
  • RAR - Roshal ARchive file
  • CAB - Microsoft Windows Cabinet File
  • Gzip - File compressed with gzip
  • BZIP - File compressed using bzip2
  • Tape ARchive (TAR), Unix archive file, not (itself) compressed
  • TAR.Z, TAR.GZ or TAR.BZ2 - Unix archive files compressed with Compress, GZIP or BZIP2

Format analysis can involve quality improvement methods to avoid including 'bad information' in the index. Content producers can manipulate the formatting information to include additional content. Examples of abusing document formatting for spamdexing:

  • Including hundreds or thousands of words in a section that is hidden from view on the computer screen, but visible to the indexer, by use of formatting (e.g. hidden "div" tag in HTML, which may incorporate the use of CSS or JavaScript to do so).
  • Setting the foreground font color of words to the same as the background color, making words hidden on the computer screen to a person viewing the document, but not hidden to the indexer.

Section recognition


Some search engines incorporate section recognition, the identification of major parts of a document, prior to tokenization. Not all the documents in a corpus read like a well-written book, divided into organized chapters and pages. Many documents on the web, such as newsletters and corporate reports, contain erroneous content and side-sections that do not contain primary material (that which the document is about). For example, articles on the Wikipedia website display a side menu with links to other web pages. Some file formats, like HTML or PDF, allow for content to be displayed in columns. Even though the content is displayed, or rendered, in different areas of the view, the raw markup content may store this information sequentially. Words that appear sequentially in the raw source content are indexed sequentially, even though these sentences and paragraphs are rendered in different parts of the computer screen. If search engines index this content as if it were normal content, the quality of the index and search quality may be degraded due to the mixed content and improper word proximity. Two primary problems are noted:

  • Content in different sections is treated as related in the index when in reality it is not
  • Organizational side bar content is included in the index, but the side bar content does not contribute to the meaning of the document, and the index is filled with a poor representation of its documents.

Section analysis may require the search engine to implement the rendering logic of each document, essentially an abstract representation of the actual document, and then index the representation instead. For example, some content on the Internet is rendered via JavaScript. If the search engine does not render the page and evaluate the JavaScript within the page, it would not 'see' this content in the same way and would index the document incorrectly. Given that some search engines do not bother with rendering issues, many web page designers avoid displaying content via JavaScript or use the noscript tag to ensure that the web page is indexed properly. At the same time, this fact can also be exploited to cause the search engine indexer to 'see' different content than the viewer.

HTML priority system


Indexing often has to recognize HTML tags in order to organize priority. Assigning low or high priority to markup such as strong and link tags can help optimize the ranking order, but the presence of such tags at the beginning of the text does not by itself prove the surrounding content relevant. Some indexers, such as Google and Bing, take steps to ensure that the search engine does not treat large blocks of text as a relevant source merely because of strongly emphasized markup.[22]

Meta tag indexing


Meta tag indexing plays an important role in organizing and categorizing web content. Specific documents often contain embedded meta information such as author, keywords, description, and language. For HTML pages, the meta tag contains keywords which are also included in the index. Earlier Internet search engine technology would only index the keywords in the meta tags for the forward index; the full document would not be parsed. At that time full-text indexing was not as well established, nor was computer hardware able to support such technology. The design of the HTML markup language initially included support for meta tags for the very purpose of being properly and easily indexed, without requiring tokenization.[23]
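
For an HTML page, that embedded meta information might look like the following (all values are placeholders):

```html
<head>
  <title>Example Page</title>
  <meta name="description" content="A short human-readable summary of the page.">
  <meta name="keywords" content="indexing, meta tags, example">
  <meta name="author" content="Jane Doe">
</head>
```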

As the Internet grew through the 1990s, many brick-and-mortar corporations went 'online' and established corporate websites. The keywords used to describe webpages (many of which were corporate-oriented webpages similar to product brochures) changed from descriptive to marketing-oriented keywords designed to drive sales by placing the webpage high in the search results for specific search queries. The fact that these keywords were subjectively specified was leading to spamdexing, which drove many search engines to adopt full-text indexing technologies in the 1990s. Search engine designers and companies could only place so many 'marketing keywords' into the content of a webpage before draining it of all interesting and useful information. Given that conflict of interest with the business goal of designing user-oriented websites which were 'sticky', the customer lifetime value equation was changed to incorporate more useful content into the website in hopes of retaining the visitor. In this sense, full-text indexing was more objective and increased the quality of search engine results, as it was one more step away from subjective control of search engine result placement, which in turn furthered research of full-text indexing technologies.[citation needed]

In desktop search, many solutions incorporate meta tags to provide a way for authors to further customize how the search engine will index content from various files that is not evident from the file content. Desktop search is more under the control of the user, while Internet search engines must focus more on the full text index.[citation needed]


References

  1. ^ Clarke, C., Cormack, G.: Dynamic Inverted Indexes for a Distributed Full-Text Retrieval System. TechRep MT-95-01, University of Waterloo, February 1995.
  2. ^ "An Industrial-Strength Audio Search Algorithm" (PDF). Archived (PDF) from the original on 2006-05-12. Retrieved 2014-01-07.
  3. ^ Charles E. Jacobs, Adam Finkelstein, David H. Salesin. Fast Multiresolution Image Querying. Department of Computer Science and Engineering, University of Washington. 1995. Verified Dec 2006
  4. ^ Brown, E.W.: Execution Performance Issues in Full-Text Information Retrieval. Computer Science Department, University of Massachusetts Amherst, Technical Report 95-81, October 1995.
  5. ^ Cutting, D., Pedersen, J.: Optimizations for dynamic inverted index maintenance. Proceedings of SIGIR, 405-411, 1990.
  6. ^ Linear Hash Partitioning. MySQL 5.1 Reference Manual. Verified Dec 2006
  7. ^ trie, Dictionary of Algorithms and Data Structures, U.S. National Institute of Standards and Technology.
  8. ^ Gusfield, Dan (1999) [1997]. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. US: Cambridge University Press. ISBN 0-521-58519-8..
  9. ^ Black, Paul E., inverted index, Dictionary of Algorithms and Data Structures, U.S. National Institute of Standards and Technology Oct 2006. Verified Dec 2006.
  10. ^ C. C. Foster, Information retrieval: information storage and retrieval using AVL trees, Proceedings of the 1965 20th national conference, p.192-205, August 24–26, 1965, Cleveland, Ohio, United States
  11. ^ Landauer, W. I.: The balanced tree and its utilization in information retrieval. IEEE Trans. on Electronic Computers, Vol. EC-12, No. 6, December 1963.
  12. ^ Google Ngram Datasets Archived 2013-09-29 at the Wayback Machine for sale at LDC Catalog
  13. ^ Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified Data Processing on Large Clusters. Google, Inc. OSDI. 2004.
  14. ^ Grossman, Frieder, Goharian. IR Basics of Inverted Index. 2002. Verified Aug 2011.
  15. ^ Tang, Hunqiang. Dwarkadas, Sandhya. "Hybrid Global Local Indexing for Efficient Peer to Peer Information Retrieval". University of Rochester. Pg 1. http://www.cs.rochester.edu/u/sandhya/papers/nsdi04.ps
  16. ^ Büttcher, Stefan; Clarke, Charles L. A.; Cormack, Gordon V. (2016). Information retrieval: implementing and evaluating search engines (First MIT Press paperback ed.). Cambridge, Massachusetts London, England: The MIT Press. ISBN 978-0-262-52887-0.
  17. ^ Tomasic, A., et al.: Incremental Updates of Inverted Lists for Text Document Retrieval. Short Version of Stanford University Computer Science Technical Note STAN-CS-TN-93-1, December, 1993.
  18. ^ Sergey Brin and Lawrence Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine. Stanford University. 1998. Verified Dec 2006.
  19. ^ H.S. Heaps. Storage analysis of a compression coding for a document database. 1NFOR, I0(i):47-61, February 1972.
  20. ^ The Unicode Standard - Frequently Asked Questions. Verified Dec 2006.
  21. ^ Storage estimates. Verified Dec 2006.
  22. ^ Google Webmaster Tools, "Hypertext Markup Language 5", Conference for SEO January 2012.
  23. ^ Berners-Lee, T., "Hypertext Markup Language - 2.0", RFC 1866, Network Working Group, November 1995.

Further reading

  • R. Bayer and E. McCreight. Organization and maintenance of large ordered indices. Acta Informatica, 173-189, 1972.
  • Donald E. Knuth. The Art of Computer Programming, volume 1 (3rd ed.): fundamental algorithms, Addison Wesley Longman Publishing Co. Redwood City, CA, 1997.
  • Donald E. Knuth. The art of computer programming, volume 3: (2nd ed.) sorting and searching, Addison Wesley Longman Publishing Co. Redwood City, CA, 1998.
  • Gerald Salton. Automatic text processing, Addison-Wesley Longman Publishing Co., Inc., Boston, MA, 1988.
  • Gerard Salton. Michael J. McGill, Introduction to Modern Information Retrieval, McGraw-Hill, Inc., New York, NY, 1986.
  • Gerard Salton. Lesk, M.E.: Computer evaluation of indexing and text processing. Journal of the ACM. January 1968.
  • Gerard Salton. The SMART Retrieval System - Experiments in Automatic Document Processing. Prentice Hall Inc., Englewood Cliffs, 1971.
  • Gerard Salton. The Transformation, Analysis, and Retrieval of Information by Computer, Addison-Wesley, Reading, Mass., 1989.
  • Baeza-Yates, R., Ribeiro-Neto, B.: Modern Information Retrieval. Chapter 8. ACM Press 1999.
  • G. K. Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley, 1949.
  • Adelson-Velskii, G.M., Landis, E. M.: An information organization algorithm. DANSSSR, 146, 263-266 (1962).
  • Edward H. Sussenguth Jr., Use of tree structures for processing files, Communications of the ACM, v.6 n.5, p. 272-279, May 1963
  • Harman, D.K., et al.: Inverted files. In Information Retrieval: Data Structures and Algorithms, Prentice-Hall, pp 28–43, 1992.
  • Lim, L., et al.: Characterizing Web Document Change, LNCS 2118, 133–146, 2001.
  • Lim, L., et al.: Dynamic Maintenance of Web Indexes Using Landmarks. Proc. of the 12th W3 Conference, 2003.
  • Moffat, A., Zobel, J.: Self-Indexing Inverted Files for Fast Text Retrieval. ACM TIS, 349–379, October 1996, Volume 14, Number 4.
  • Mehlhorn, K.: Data Structures and Efficient Algorithms, Springer Verlag, EATCS Monographs, 1984.
  • Mehlhorn, K., Overmars, M.H.: Optimal Dynamization of Decomposable Searching Problems. IPL 12, 93–98, 1981.
  • Mehlhorn, K.: Lower Bounds on the Efficiency of Transforming Static Data Structures into Dynamic Data Structures. Math. Systems Theory 15, 1–16, 1981.
  • Koster, M.: ALIWEB: Archie-Like indexing in the Web. Computer Networks and ISDN Systems, Vol. 27, No. 2 (1994) 175-182 (also see Proc. First Int'l World Wide Web Conf., Elsevier Science, Amsterdam, 1994, pp. 175–182)
  • Serge Abiteboul and Victor Vianu. Queries and Computation on the Web. Proceedings of the International Conference on Database Theory. Delphi, Greece 1997.
  • Ian H Witten, Alistair Moffat, and Timothy C. Bell. Managing Gigabytes: Compressing and Indexing Documents and Images. New York: Van Nostrand Reinhold, 1994.
  • A. Emtage and P. Deutsch, "Archie--An Electronic Directory Service for the Internet." Proc. Usenix Winter 1992 Tech. Conf., Usenix Assoc., Berkeley, Calif., 1992, pp. 93–110.
  • M. Gray, World Wide Web Wanderer.
  • D. Cutting and J. Pedersen. "Optimizations for Dynamic Inverted Index Maintenance." Proceedings of the 13th International Conference on Research and Development in Information Retrieval, pp. 405–411, September 1990.
  • Stefan Büttcher, Charles L. A. Clarke, and Gordon V. Cormack. Information Retrieval: Implementing and Evaluating Search Engines Archived 2020-10-05 at the Wayback Machine. MIT Press, Cambridge, Mass., 2010.
Search engine optimization

Search engine optimization (SEO) is the practice of improving the visibility and performance of websites and web pages in search engine results pages (SERPs).[1] It focuses on increasing the quantity and quality of traffic from unpaid (organic) search results rather than paid advertising.[2] SEO applies to multiple search formats, including web, image, video, news, academic, and vertical search engines, as well as AI-assisted search interfaces.

SEO is commonly used as part of a broader digital marketing strategy and involves optimizing technical infrastructure, content relevance, and authority signals to improve rankings for user queries.[3] The objective of SEO is to attract users who are actively searching for information, products, or services, thereby supporting brand visibility, user engagement, and conversions.[4]

History


Webmasters and content providers began optimizing websites for search engines in the mid-1990s as the first search engines were cataloging the early Web. Search engine users would query the URL of a page, and then receive information found on the page, if it existed in the search engine's index.

ALIWEB and the earliest versions of search engines required website developers to manually upload website index files in order to be searchable, and they largely did not utilize any form of ranking algorithm for user queries.[1] Automated web crawlers later emerged and were used to proactively discover and index websites. This led website developers to optimize their websites' search signals, including the use of meta tags, to achieve greater visibility in search results.

According to a 2004 article by former industry analyst and current Google employee Danny Sullivan, the phrase "search engine optimization" came into use in 1997. Sullivan credits SEO practitioner Bruce Clay as one of the first people to popularize the term.[5]

In some cases, early search algorithms weighted particular HTML attributes in ways that could be leveraged by web content providers to manipulate their search rankings.[6] As early as 1997, search engine providers began adjusting their algorithms to prevent these actions.[3] Eventually, search engines would incorporate more meaningful measures of page purpose, including the more recent development of semantic search.[7]

Some search engines frequently sponsor SEO conferences, webchats, and seminars. Major search engines provide information and guidelines to help with website optimization.[8][4] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[2] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and track the web pages' index status.

In 2015, it was reported that Google was developing and promoting mobile search as a key feature within future products, resulting in brands and marketers shifting toward mobile-first experiences.[9]

Relationship between SEO and large language models


In the 2020s, the rise of generative AI tools such as ChatGPT, Claude, Perplexity, and Gemini gave rise to discussion around a concept variously referred to as generative engine optimization, answer engine optimization, large language model optimization, or artificial intelligence optimization.[citation needed]

This approach focuses on optimizing content for inclusion in AI-generated answers provided by large language models (LLMs).[citation needed] This shift has led digital marketers to discuss content formats, authority signals, and how structured data is presented to make content more "promotable".[10]

Relationship between Google and SEO industry


In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[11] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random web surfer.
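
The random-surfer idea can be sketched as a small power-iteration routine; the damping factor and the toy link graph below are illustrative assumptions, not parameters from the original paper's experiments.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]}. Returns an approximate rank per page."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page in pages:
            targets = links.get(page, [])
            if not targets:
                # A dangling page spreads its rank evenly over all pages.
                for other in pages:
                    new_rank[other] += damping * rank[page] / len(pages)
            else:
                for target in targets:
                    new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(graph))
```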

Page and Brin founded Google in 1998.[12] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[13] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link-building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focus on exchanging, buying, and selling links, often on a massive scale. Some of these schemes involved the creation of thousands of sites for the sole purpose of link spamming.[14]

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation.[15] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization and have shared their personal opinions.[16] Patents related to search engines can provide information to better understand search engines.[17] In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users.[18]

In 2007, Google announced a campaign against paid links that transfer PageRank.[19] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollow links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[20] As a result of this change, the usage of nofollow led to evaporation of PageRank. In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the usage of iframes, Flash, and JavaScript.[21] As a response, in 2019, Google changed its position and started using these tags as hints, as a way to better understand how to appropriately analyze and use links within their systems.[22]

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[23] On June 8, 2010 a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make content show up on Google more quickly. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[24] Google Instant, a real-time search feature, was introduced in late 2010 to deliver more timely and relevant results. Historically, site administrators often spent months or years optimizing websites to improve their search rankings. With the rise of social media platforms and blogs, major search engines adjusted their algorithms to enable fresh content to rank more quickly in search results.[25]

Google has implemented numerous algorithm updates to improve search quality, including Panda (2011) for content quality, Penguin (2012) for link spam, Hummingbird (2013) for natural language processing, and BERT (2019) for query understanding. These updates reflect the ongoing evolution of search technology and Google's efforts to combat spam while improving user experience.

On May 20, 2025, Google announced that AI Mode would be released to all US users. AI Mode uses what Google calls a "query fan-out technique" which breaks down the search query into multiple sub-topics which generates additional search queries for the user.[26]

Methods


Getting indexed

A simple illustration of the PageRank algorithm; percentages show the perceived importance of each page.

The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search engine-indexed pages do not need to be submitted because they are found automatically. The Yahoo! Directory and DMOZ, two major directories which closed in 2014 and 2017 respectively, both required manual submission and human editorial review.[27] Google offers Google Search Console, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links[28] in addition to their URL submission console.[29] Yahoo! formerly operated a paid submission service that guaranteed to crawl for a cost per click;[30] however, this practice was discontinued in 2009. Nevertheless, SEO tools such as Semrush enable analysis of both paid and organic traffic by providing insights into cost per click and keyword performance.[31]
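
A minimal XML Sitemap following the sitemaps.org protocol might look like this (the URL and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```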

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by search engines. The distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[32]

Mobile devices are used for the majority of Google searches.[33] In November 2016, Google announced a major change to the way they are crawling websites and started to make their index mobile-first, which means the mobile version of a given website becomes the starting point for what Google includes in their index.[34] In May 2019, Google updated the rendering engine of their crawler to be the latest version of Chromium (74 at the time of the announcement). Google indicated that they would regularly update the Chromium rendering engine to the latest version.[35] In December 2019, Google began updating the User-Agent string of their crawler to reflect the latest Chrome version used by their rendering service. The delay was to allow webmasters time to update their code that responded to particular bot User-Agent strings. Google ran evaluations and felt confident the impact would be minor.[36]

Preventing crawling


To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex"> ). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish to crawl. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[37]

In 2020, Google sunsetted the standard (and open-sourced their code) and now treats it as a hint rather than a directive. To adequately ensure that pages are not indexed, a page-level robots meta tag should be included.[38]
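
A simple robots.txt along these lines, paired with the page-level meta tag mentioned above, might look like the following (the paths are placeholders):

```text
# https://www.example.com/robots.txt
User-agent: *
Disallow: /cart/
Disallow: /search/
```

Pages that must stay out of the index regardless of crawling would additionally carry the meta tag shown earlier, <meta name="robots" content="noindex">.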

Increasing prominence


A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to important pages may improve its visibility. Page design makes users trust a site and want to stay once they find it. When people bounce off a site, it counts against the site and affects its credibility.[39]

Writing content that includes frequently searched keyword phrases so as to be relevant to a wide variety of search queries will tend to increase traffic. Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL canonicalization of web pages accessible via multiple URLs, using the canonical link element[40] or via 301 redirects can help make sure links to different versions of the URL all count towards the page's link popularity score. These are known as incoming links, which point to the URL and can count towards the page link's popularity score, impacting the credibility of a website.[39]
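
For example, duplicate URL variants can point to a single preferred version with the canonical link element (the URLs below are hypothetical):

```html
<!-- Served on https://www.example.com/shoes?color=red and other variants -->
<link rel="canonical" href="https://www.example.com/shoes">
```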

White hat versus black hat techniques

Common white-hat methods of search engine optimization

SEO techniques can be classified into two broad categories: techniques that search engine companies recommend ("white hat"),[41] and those techniques of which search engines do not approve ("black hat"). Search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods and the practitioners who employ them as either white hat SEO or black hat SEO.[42] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[43]

An SEO technique is considered a white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[8][4][44] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the online algorithms, rather than attempting to trick the algorithm from its intended purpose. White hat SEO has been compared to web development that promotes accessibility,[45] although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines or involve deception. One black hat technique uses hidden text, either as text colored similar to the background, in an invisible div, or positioned off-screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking. Another category sometimes used is grey hat SEO. This is in between the black hat and white hat approaches, where the methods employed avoid the site being penalized but do not act in producing the best content for users. Grey hat SEO is entirely focused on improving search engine rankings.[citation needed]

Search engines may penalize sites they discover using black or grey hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for the use of deceptive practices.[46] Both companies subsequently apologized, fixed the offending pages, and were restored to Google's search engine results page.[47]

Companies that employ black hat techniques or other spammy tactics can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[48] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[49] Google's Matt Cutts later confirmed that Google had banned Traffic Power and some of its clients.[50]

As marketing strategy


SEO is one approach within digital marketing, alongside other strategies such as pay-per-click advertising and social media marketing. Search engine marketing (SEM) is the practice of designing, running, and optimizing search engine ad campaigns. Its difference from SEO is most simply depicted as the difference between paid and unpaid priority ranking in search results. SEM focuses on prominence more so than relevance; website developers should regard SEM with the utmost importance with consideration to visibility, as most users navigate to the primary listings of their search results.[51] A successful Internet marketing campaign may also depend upon building high-quality web pages to engage and persuade internet users, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[52][53]

In November 2015, Google released a full 160-page version of its Search Quality Rating Guidelines to the public,[54] which revealed a shift in their focus towards "usefulness" and mobile local search. In recent years the mobile market has exploded, overtaking the use of desktops, as shown by StatCounter in October 2016, when they analyzed 2.5 million websites and found that 51.3% of the pages were loaded by a mobile device.[55] Google has been one of the companies taking advantage of the popularity of mobile usage by encouraging websites to use its Google Search Console and Mobile-Friendly Test, which allow companies to measure their website against the search engine results and determine how user-friendly their websites are. The closer the keywords are placed together, the more their ranking will improve based on those key terms.[39]

SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantee and uncertainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[56] Search engines can change their algorithms, impacting a website's search engine ranking, possibly resulting in a serious loss of traffic. According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[57] Industry analysts note that websites may face risks from algorithm changes that can significantly impact organic traffic. In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.

International markets and SEO


Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. Google has maintained dominant market share in most regions, with varying percentages by market.[58] In markets outside the United States, Google's share is often larger, and data showed Google was the dominant search engine worldwide as of 2007.[59] As of 2006, Google had an 85–90% market share in Germany.[60] As of March 2024, Google still had a significant market share of 89.85% in Germany.[61] As of March 2024, Google's market share in the UK was 93.61%.[62]

Successful search engine optimization (SEO) for international markets requires more than just translating web pages. It may also involve registering a domain name with a country-code top-level domain (ccTLD) or a relevant top-level domain (TLD) for the target market, choosing web hosting with a local IP address or server, and using a Content Delivery Network (CDN) to improve website speed and performance globally. It is also important to understand the local culture so that the content feels relevant to the audience. This includes conducting keyword research for each market, using hreflang tags to target the right languages, and building local backlinks. However, the core SEO principles—such as creating high-quality content, improving user experience, and building links—remain the same, regardless of language or region.[60]
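
The hreflang annotations mentioned above might look like the following (hypothetical URLs and language/region codes):

```html
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="de-de" href="https://www.example.com/de/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```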

Regional search engines have a strong presence in specific markets:

  • China: Baidu leads the market, controlling about 70 to 80% market share.[63]
  • South Korea: Since the end of 2021, Naver, a domestic web portal, has gained prominence in the country.[64][65]
  • Russia: Yandex is the leading search engine in Russia. As of December 2023, it accounted for at least 63.8% of the market share.[66]

Multilingual SEO


By the early 2000s, businesses recognized that the web and search engines could help them reach global audiences. As a result, the need for multilingual SEO emerged.[67] In the early years of international SEO development, simple translation was seen as sufficient. However, over time, it became clear that localization and transcreation—adapting content to local language, culture, and emotional resonance—were more effective than basic translation.[68]

Legal precedents

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[69][70]

In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%. On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[71][72]


References

  1. ^ a b Hendy, Carl (July 3, 2024). "The History of Search Engines". audits. Retrieved September 17, 2025.
  2. ^ a b "Sitemaps". Archived from the original on June 22, 2023. Retrieved July 4, 2012.
  3. ^ a b Laurie J. Flynn (November 11, 1996). "Desperately Seeking Surfers". New York Times. Archived from the original on October 30, 2007. Retrieved May 9, 2007.
  4. ^ a b c "Bing Webmaster Guidelines". bing.com. Archived from the original on September 9, 2014. Retrieved September 11, 2014.
  5. ^ Danny Sullivan (June 14, 2004). "Who Invented the Term "Search Engine Optimization"?". Search Engine Watch. Archived from the original on April 23, 2010. Retrieved May 14, 2007. See Google groups thread Archived June 17, 2013, at the Wayback Machine.
  6. ^ "What Is Black-Hat SEO? Definition, Techniques, and Why To Avoid It". SEO.com. Retrieved September 17, 2025.
  7. ^ "What is semantic search, and how does it work?". Google Cloud. Retrieved September 17, 2025.
  8. ^ a b "Google's Guidelines on Site Design". Archived from the original on January 9, 2009. Retrieved April 18, 2007.
  9. ^ ""By the Data: For Consumers, Mobile is the Internet" Google for Entrepreneurs Startup Grind September 20, 2015". Archived from the original on January 6, 2016. Retrieved January 8, 2016.
  10. ^ "What is generative engine optimization (GEO)?". July 29, 2024. Archived from the original on July 29, 2024. Retrieved July 30, 2025.
  11. ^ Brin, Sergey & Page, Larry (1998). "The Anatomy of a Large-Scale Hypertextual Web Search Engine". Proceedings of the seventh international conference on World Wide Web. pp. 107–117. Archived from the original on October 10, 2006. Retrieved May 8, 2007.
  12. ^ "Co-founders of Google - Google's co-founders may not have the name recognition of say, Bill Gates, but give them time: Google hasn't been around nearly as long as Microsoft". Entrepreneur. October 15, 2008. Archived from the original on May 31, 2014. Retrieved May 30, 2014.
  13. ^ Thompson, Bill (December 19, 2003). "Is Google good for you?". BBC News. Archived from the original on January 25, 2009. Retrieved May 16, 2007.
  14. ^ Zoltan Gyongyi & Hector Garcia-Molina (2005). "Link Spam Alliances" (PDF). Proceedings of the 31st VLDB Conference, Trondheim, Norway. Archived (PDF) from the original on June 12, 2007. Retrieved May 9, 2007.
  15. ^ Hansell, Saul (June 3, 2007). "Google Keeps Tweaking Its Search Engine". New York Times. Archived from the original on November 10, 2017. Retrieved June 6, 2007.
  16. ^ Sullivan, Danny (September 29, 2005). "Rundown On Search Ranking Factors". Search Engine Watch. Archived from the original on May 28, 2007. Retrieved May 8, 2007.
  17. ^ Christine Churchill (November 23, 2005). "Understanding Search Engine Patents". Search Engine Watch. Archived from the original on February 7, 2007. Retrieved May 8, 2007.
  18. ^ "Google Personalized Search Leaves Google Labs". searchenginewatch.com. Search Engine Watch. Archived from the original on January 25, 2009. Retrieved September 5, 2009.
  19. ^ "8 Things We Learned About Google PageRank". www.searchenginejournal.com. October 25, 2007. Archived from the original on August 19, 2009. Retrieved August 17, 2009.
  20. ^ "PageRank sculpting". Matt Cutts. Archived from the original on January 6, 2010. Retrieved January 12, 2010.
  21. ^ "Google Loses "Backwards Compatibility" On Paid Link Blocking & PageRank Sculpting". searchengineland.com. June 3, 2009. Archived from the original on August 14, 2009. Retrieved August 17, 2009.
  22. ^ Sullivan, Danny; Illyes, Gary. "Evolving "nofollow" – new ways to identify the nature of links | Google Search Central Blog". Google for Developers. Retrieved March 19, 2026.
  23. ^ "Personalized Search for everyone". Archived from the original on December 8, 2009. Retrieved December 14, 2009.
  24. ^ "Our new search index: Caffeine". Google: Official Blog. Archived from the original on June 18, 2010. Retrieved May 10, 2014.
  25. ^ "Relevance Meets Real-Time Web". Google Blog. Archived from the original on April 7, 2019. Retrieved January 4, 2010.
  26. ^ "AI in Search: Going beyond information to intelligence". blog.google.com. May 20, 2025. Retrieved June 23, 2025.
  27. ^ "Submitting To Directories: Yahoo & The Open Directory". Search Engine Watch. March 12, 2007. Archived from the original on May 19, 2007. Retrieved May 15, 2007.
  28. ^ "What is a Sitemap file and why should I have one?". Archived from the original on July 1, 2007. Retrieved March 19, 2007.
  29. ^ "Search Console - Crawl URL". Archived from the original on August 14, 2022. Retrieved December 18, 2015.
  30. ^ Sullivan, Danny (March 12, 2007). "Submitting To Search Crawlers: Google, Yahoo, Ask & Microsoft's Live Search". Search Engine Watch. Archived from the original on May 10, 2007. Retrieved May 15, 2007.
  31. ^ Erdmann, Anett; Arilla, Ramón; Ponzoa, José M. (May 2022). "Search engine optimization: The long-term strategy of keyword choice". Journal of Business Research. 144: 650–662. doi:10.1016/j.jbusres.2022.01.065.
  32. ^ Cho, J.; Garcia-Molina, H.; Page, L. (1998). "Efficient crawling through URL ordering". Seventh International World-Wide Web Conference. Brisbane, Australia: Stanford InfoLab Publication Server. Archived from the original on July 14, 2019. Retrieved May 9, 2007.
  33. ^ "Mobile-first Index". Archived from the original on February 22, 2019. Retrieved March 19, 2018.
  34. ^ Phan, Doantam (November 4, 2016). "Mobile-first Indexing". Official Google Webmaster Central Blog. Archived from the original on February 22, 2019. Retrieved January 16, 2019.
  35. ^ "The new evergreen Googlebot". Official Google Webmaster Central Blog. Archived from the original on November 6, 2020. Retrieved March 2, 2020.
  36. ^ "Updating the user agent of Googlebot". Official Google Webmaster Central Blog. Archived from the original on March 2, 2020. Retrieved March 2, 2020.
  37. ^ "Newspapers Amok! New York Times Spamming Google? LA Times Hijacking Cars.com?". Search Engine Land. May 8, 2007. Archived from the original on December 26, 2008. Retrieved May 9, 2007.
  38. ^ Jill Kocher Brown (February 24, 2020). "Google Downgrades Nofollow Directive. Now What?". Practical Ecommerce. Archived from the original on January 25, 2021. Retrieved February 11, 2021.
  39. ^ a b c Morey, Sean (2008). The Digital Writer. Fountainhead Press. pp. 171–187.
  40. ^ "Bing – Partnering to help solve duplicate content issues – Webmaster Blog – Bing Community". www.bing.com. February 12, 2009. Archived from the original on June 7, 2014. Retrieved October 30, 2009.
  41. ^ Aryani, Diah; Setiawan, Aji; Tjahjono, Budi (May 31, 2023). "Comparative Analysis Of On-Page And Off-Page White Hat Search Engine Optimization (SEO) Techniques On Website Popularity". International Journal of Science, Technology & Management. 4 (3): 527–533. doi:10.46729/ijstm.v4i3.815.
  42. ^ Andrew Goodman. "Search Engine Showdown: Black hats vs. White hats at SES". SearchEngineWatch. Archived from the original on February 22, 2007. Retrieved May 9, 2007.
  43. ^ Jill Whalen (November 16, 2004). "Black Hat/White Hat Search Engine Optimization". searchengineguide.com. Archived from the original on November 17, 2004. Retrieved May 9, 2007.
  44. ^ "What's an SEO? Does Google recommend working with companies that offer to make my site Google-friendly?". Archived from the original on April 16, 2006. Retrieved April 18, 2007.
  45. ^ Andy Hagans (November 8, 2005). "High Accessibility Is Effective Search Engine Optimization". A List Apart. Archived from the original on May 4, 2007. Retrieved May 9, 2007.
  46. ^ Matt Cutts (February 4, 2006). "Ramping up on international webspam". mattcutts.com/blog. Archived from the original on June 29, 2012. Retrieved May 9, 2007.
  47. ^ Matt Cutts (February 7, 2006). "Recent reinclusions". mattcutts.com/blog. Archived from the original on May 22, 2007. Retrieved May 9, 2007.
  48. ^ David Kesmodel (September 22, 2005). "Sites Get Dropped by Search Engines After Trying to 'Optimize' Rankings". Wall Street Journal. Archived from the original on August 4, 2020. Retrieved July 30, 2008.
  49. ^ Adam L. Penenberg (September 8, 2005). "Legal Showdown in Search Fracas". Wired Magazine. Archived from the original on March 4, 2016. Retrieved August 11, 2016.
  50. ^ Matt Cutts (February 2, 2006). "Confirming a penalty". mattcutts.com/blog. Archived from the original on June 26, 2012. Retrieved May 9, 2007.
  51. ^ Tapan, Panda (2013). "Search Engine Marketing: Does the Knowledge Discovery Process Help Online Retailers?". IUP Journal of Knowledge Management. 11 (3): 56–66. ProQuest 1430517207.
  52. ^ Melissa Burdon (March 13, 2007). "The Battle Between Search Engine Optimization and Conversion: Who Wins?". Grok.com. Archived from the original on March 15, 2008. Retrieved April 10, 2017.
  53. ^ "SEO Tips and Marketing Strategies". Archived from the original on October 30, 2022. Retrieved October 30, 2022.
  54. ^ ""Search Quality Evaluator Guidelines" How Search Works November 12, 2015" (PDF). Archived (PDF) from the original on March 29, 2019. Retrieved January 11, 2016.
  55. ^ Titcomb, James (November 2016). "Mobile web usage overtakes desktop for first time". The Telegraph. Archived from the original on January 10, 2022. Retrieved March 17, 2018.
  56. ^ Andy Greenberg (April 30, 2007). "Condemned To Google Hell". Forbes. Archived from the original on May 2, 2007. Retrieved May 9, 2007.
  57. ^ Matt McGee (September 21, 2011). "Schmidt's testimony reveals how Google tests algorithm changes". Archived from the original on January 17, 2012. Retrieved January 4, 2012.
  58. ^ Graham, Jefferson (August 26, 2003). "The search engine that could". USA Today. Archived from the original on May 17, 2007. Retrieved May 15, 2007.
  59. ^ Greg Jarboe (February 22, 2007). "Stats Show Google Dominates the International Search Landscape". Search Engine Watch. Archived from the original on May 23, 2011. Retrieved May 15, 2007.
  60. ^ a b Mike Grehan (April 3, 2006). "Search Engine Optimizing for Europe". Click. Archived from the original on November 6, 2010. Retrieved May 14, 2007.
  61. ^ "Germany search engine market share 2024". Statista. Retrieved January 6, 2025.
  62. ^ "UK search engines market share 2024". Statista. Retrieved January 6, 2025.
  63. ^ "China search engines market share 2024". Statista. Retrieved January 6, 2025.
  64. ^ "Topic: Search engines in South Korea". Statista. Retrieved January 6, 2025.
  65. ^ "South Korea: main service used to search for information 2024". Statista. Retrieved January 6, 2025.
  66. ^ "Most popular search engines in Russia 2023". Statista. Retrieved January 6, 2025.
  67. ^ Bento, Fabio; Tagliabue, Marco; Lorenzo, Flora (July 24, 2020). "Organizational Silos: A Scoping Review Informed by a Behavioral Perspective on Systems and Networks". Societies. 10 (3): 56. doi:10.3390/soc10030056. hdl:10642/8939.
  68. ^ "SEO Starter Guide: The Basics | Google Search Central | Documentation". Google for Developers. Retrieved January 13, 2025.
  69. ^ "Search King, Inc. v. Google Technology, Inc., CIV-02-1457-M" (PDF). docstoc.com. May 27, 2003. Archived from the original on May 27, 2008. Retrieved May 23, 2008.
  70. ^ Stefanie Olsen (May 30, 2003). "Judge dismisses suit against Google". CNET. Archived from the original on December 1, 2010. Retrieved May 10, 2007.
  71. ^ "Technology & Marketing Law Blog: KinderStart v. Google Dismissed—With Sanctions Against KinderStart's Counsel". blog.ericgoldman.org. March 20, 2007. Archived from the original on May 11, 2008. Retrieved June 23, 2008.
  72. ^ "Technology & Marketing Law Blog: Google Sued Over Rankings—KinderStart.com v. Google". blog.ericgoldman.org. Archived from the original on June 22, 2008. Retrieved June 23, 2008.