Complete Definitive Guide to Search Engine Optimization

Crawler-based search engines
They “crawl” or “spider” the web, and people then search through what they have found. If you change your website, crawler-based search engines will eventually find those changes, which can affect how you’re listed. Page titles, body text, and other elements all play a role.
Human-powered directories
You submit a short description of your entire site to the directory, or the editors write one for the sites they review. A search looks for matches only in those submitted descriptions.
Changing your website has no effect on your listing. Techniques that are useful for improving a search engine listing have nothing to do with improving a directory listing. The only exception is that a good site with good content is more likely to be reviewed for free than a poor one.
Parts of a crawler-based search engine
Crawler-based search engines have three main elements. The first is a spider, also called a crawler. The spider visits a web page, reads it, and then follows links to other pages within the site. This is what it means when someone refers to a site being “spidered” or “crawled”. The spider returns to the site regularly, such as every month or two, to look for changes.
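To make the idea concrete, here is a minimal sketch of such a spider in Python. The start URL, the page limit, and the regex-based link extraction are all simplifications invented for this example; real spiders are far more sophisticated.

```python
# Minimal crawler sketch: fetch a page, extract links, and follow them.
# The start URL and page limit are illustrative, not from the article.
import re
import urllib.request
from collections import deque

def crawl(start_url, max_pages=10):
    seen, queue, pages = set(), deque([start_url]), {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that cannot be fetched
        pages[url] = html
        # Follow absolute links found on the page, just as a spider would.
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            queue.append(link)
    return pages
```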
An index, sometimes called a catalog, is like a huge book containing a copy of every web page the spider finds. If a web page changes, this book is updated with the new information.
Sometimes it can take a while for new pages, or changes the spider finds, to be added to the index. So a web page may have been “spidered” but not yet “indexed”. Until it is indexed – added to the index – it is not available to people searching with the search engine.
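One rough way to picture the index is as an inverted index mapping each word to the pages that contain it. The sketch below assumes a pages dictionary like the one produced by the crawler sketch above; the tag-stripping is deliberately crude.

```python
# Build a toy "catalog": an inverted index from each word to the pages containing it.
import re
from collections import defaultdict

def build_index(pages):
    index = defaultdict(set)                           # word -> set of URLs
    for url, html in pages.items():
        text = re.sub(r"<[^>]+>", " ", html).lower()   # strip tags crudely
        for word in re.findall(r"[a-z]+", text):
            index[word].add(url)
    return index

# Re-running build_index after a page changes is the analogue of the
# "book" being updated with new information.
```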
The third element is the search engine software itself. This is the program that sifts through the millions of pages recorded in the index to find matches to a search and ranks them in order of what it believes is most relevant.
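Continuing the toy example, the search software might be pictured as a function that looks up each query word in the index and orders the matching pages by a very naive relevance score. The scoring here is deliberately simplistic and only illustrates the mechanics; it assumes the index and pages built above.

```python
# Toy search: find pages containing every query word, rank by how often the words appear.
def search(query, index, pages):
    words = query.lower().split()
    # A page must contain every query word to count as a match.
    candidates = set.intersection(*(index.get(w, set()) for w in words)) if words else set()
    def score(url):
        text = pages[url].lower()
        return sum(text.count(w) for w in words)
    return sorted(candidates, key=score, reverse=True)
```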
Major Search Engines: Same but different
All crawler-based search engines have the basic elements described above, but they are tuned differently, which is why the same search in different search engines often returns different results. Now let’s take a closer look at how a crawler-based search engine evaluates the pages it collects.
How search engines rank websites
Search for anything using your favorite crawler-based search engine. Almost instantly, the search engine sorts through the millions of pages it knows about and offers you the ones related to your topic. The matches are even sorted so that the most relevant ones come first.
Of course, search engines don’t always get it right. Irrelevant pages slip through, and sometimes it can take a little longer to find what you’re looking for. But overall, search engines do an amazing job.
As WebCrawler founder Brian Pinkerton says, “Imagine walking up to a librarian and saying, ‘travel.’ They will look at you with a blank face.”
OK – a librarian isn’t really going to stare at you with a blank face. Instead, they will ask you questions to better understand what you’re looking for.
Unfortunately, search engines don’t have the ability to ask a few questions to focus the search, as librarians can. They also cannot rely on judgment and past experience to rank web pages the way humans can.
So how do crawler-based search engines determine relevance when faced with hundreds of millions of web pages to sort through? They follow a set of rules known as an algorithm. Exactly how a particular search engine’s algorithm works is a closely guarded trade secret.
Location, location, location… and frequency
One of the main rules in the ranking algorithm involves the placement and frequency of keywords on a web page. Call it the location/frequency method for short.
Remember the librarian mentioned above? They need to find books that match your request for “travel”, so it makes sense for them to look first at books with travel in the title. Search engines work the same way. Pages with the search terms in their HTML title tag are often considered more relevant to the topic than others.
Search engines will also check whether the search keywords appear at the top of the web page, such as in the headline or the first few paragraphs of the text. They assume that any page relevant to the topic will mention these words right from the start.
Frequency is another major factor in how search engines determine relevance. A search engine analyzes how often keywords appear in relation to other words on a web page; pages where the keywords appear more frequently are often considered more relevant than other web pages.
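As a rough illustration of the location/frequency method, the sketch below scores a page for a single keyword by rewarding a title match, an appearance near the top of the page, and overall frequency. The weights are invented for the example and do not reflect any real engine.

```python
import re

def location_frequency_score(html, keyword):
    """Toy relevance score based on location and frequency of a keyword.
    The weights (10, 5, 1) are arbitrary illustration values."""
    text = re.sub(r"<[^>]+>", " ", html).lower()
    keyword = keyword.lower()
    score = 0
    title = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    if title and keyword in title.group(1).lower():
        score += 10                      # location: keyword in the HTML title tag
    if keyword in text[:500]:
        score += 5                       # location: keyword near the top of the page
    score += text.count(keyword)         # frequency: how often the keyword appears
    return score
```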
Spices in the recipe
Now it’s time to qualify the location/frequency method described above. All the major search engines follow it to some extent, in the same way cooks may follow a standard chili recipe. But cooks like to add their own secret ingredients. In the same way, search engines add their own spice to the location/frequency method. Nobody does it exactly the same way, which is one of the reasons why the same search in different search engines produces different results.
First, some search engines index more websites than others. Some search engines also index websites more frequently than others. As a result, no two search engines have exactly the same collection of web pages to search. This naturally creates differences when comparing their results.
Search engines can also penalize sites or exclude them from the index if they detect search engine “spam”. An example is when a word is repeated hundreds of times on a page to increase the frequency and move the page higher in the listings. Search engines track common spamming methods in a variety of ways, including tracking complaints from their users.
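A naive way to picture such a spam check is a simple frequency threshold: if one word accounts for an implausibly large share of a page’s text, flag the page. The 10% threshold below is purely illustrative and not a figure from any search engine.

```python
import re
from collections import Counter

def looks_like_keyword_stuffing(html, max_share=0.10):
    """Flag a page if any single word exceeds max_share of all words.
    The 10% threshold is purely illustrative."""
    words = re.findall(r"[a-z]+", re.sub(r"<[^>]+>", " ", html).lower())
    if not words:
        return False
    _, most_common_count = Counter(words).most_common(1)[0]
    return most_common_count / len(words) > max_share
```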
Off-page SEO factors
Crawler-based search engines now have a lot of experience with webmasters constantly rewriting their websites in an effort to get better rankings. For this reason, all major search engines now also use “off-page” ranking criteria.
Off-page factors are those that webmasters cannot easily control. The main one is link analysis. By analyzing how pages link to each other, the search engine can determine what the page is about and whether the page is considered “important” and therefore deserves a ranking boost. In addition, sophisticated techniques are used to filter out webmasters’ attempts to create “artificial” links to boost their rankings.
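The details of real link analysis are secret, but its general flavor can be sketched with a simplified PageRank-style iteration over a small link graph. The damping factor and iteration count below are common textbook values, not figures from this article.

```python
def simple_pagerank(links, damping=0.85, iterations=20):
    """links: dict mapping each page to the list of pages it links to.
    Returns an 'importance' score per page (simplified PageRank)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = rank[page] / len(outlinks)
            for target in outlinks:
                if target in new_rank:
                    new_rank[target] += damping * share
        rank = new_rank
    return rank

# Example: page "a" is linked to by both "b" and "c", so it ends up "important".
print(simple_pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))
```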
Another off-page factor is click-through measurement. In short, this means that a search engine can track which result someone chooses for a particular search, and ultimately demote high-ranking pages that don’t attract clicks while promoting lower-ranking pages that do attract visitors. As with link analysis, systems are used to compensate for artificial clicks generated by overzealous webmasters.
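A hedged sketch of the idea: record which result a user picks for a given query, then nudge a page’s score up or down based on its click-through rate. The class, its method names, and the adjustment factor are all invented for illustration.

```python
from collections import defaultdict

class ClickFeedback:
    """Toy click-through tracker: boosts pages that earn clicks for a query,
    demotes pages shown often but never clicked. Factors are illustrative."""
    def __init__(self):
        self.impressions = defaultdict(int)   # (query, url) -> times shown
        self.clicks = defaultdict(int)        # (query, url) -> times clicked

    def record(self, query, shown_urls, clicked_url):
        for url in shown_urls:
            self.impressions[(query, url)] += 1
        if clicked_url:
            self.clicks[(query, clicked_url)] += 1

    def adjust(self, query, url, base_score):
        shown = self.impressions[(query, url)]
        if shown == 0:
            return base_score
        ctr = self.clicks[(query, url)] / shown
        return base_score * (0.5 + ctr)       # never-clicked pages drift downward
```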
Search Engine Ranking Tips
A search engine query will often return thousands or even millions of matching web pages. In many cases, only the 10 “most relevant” matches will appear on the first page.
Everyone who runs a website naturally wants to be in the “top ten” results. Being listed 11th or lower means that many people may miss your website.
The tips below will help you get closer to that goal, both for keywords you think are important and for phrases you might not even anticipate.
For example, let’s say you have a page dedicated to stamp collecting. Whenever someone types “stamp collecting”, you want your site to be in the top ten results. Then those are your target keywords for that page.
Each page on your website will have different target keywords that reflect the content of the page. For example, let’s say you have another page about stamp history. The target keywords for that page would then be “stamp history”.
Your target keywords should always be phrases of at least two words. Usually, far too many pages will be relevant to a single word such as “stamps”. That competition means your chances of success are lower. Don’t waste time fighting against the odds. Choose phrases of two or more words and you will have a much better chance of success.
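As a small aid when applying this advice, one could list the two-word phrases that actually appear on a page and check whether the intended target phrase is among the most common ones. This helper is an illustration only, not a tool mentioned in the article.

```python
import re
from collections import Counter

def two_word_phrases(text, top_n=5):
    """Return the most common two-word phrases on a page, as candidate
    target keyword phrases. Purely illustrative."""
    words = re.findall(r"[a-z]+", text.lower())
    pairs = [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]
    return Counter(pairs).most_common(top_n)

print(two_word_phrases("Stamp collecting is fun. Stamp collecting guides and stamp history."))
```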