Site search can produce pages targeted at phrases you would like to rank for in search engines. It is tempting to make those pages SEO-friendly and link to them so that they get indexed. That isn't a good idea: if you do, Google could penalize your entire site.
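If you want to keep internal search result pages out of the index, the usual approach is to mark them as non-indexable rather than optimizing them. As a minimal sketch, assuming your search results are rendered from a template whose `<head>` you can edit, you could add a robots meta tag to that template:

```html
<!-- Sketch: on the internal search results template only, ask crawlers not to
     index the page while still letting them follow any links on it. -->
<meta name="robots" content="noindex, follow">
```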
If you run a crawler against your own site, it will generally crawl all your pages and then give you a report. It is tempting to think that Googlebot works the same way, but it doesn't. Googlebot doesn't crawl your entire site, wait for a while, and then come back and crawl your entire site again.
Some website owners put a lot of stock in their Alexa rank. They hope that improving their Alexa rank will help their SEO. That just isn't the case.
Googlebot can now fully render pages and sites that are built with JavaScript. It is tempting to think that Google can now index all JavaScript sites, but that is not the case. You need to know how search engine crawlers index JavaScript sites because there are quite a few common pitfalls when trying to build a search engine friendly site in JavaScript.
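One of the most common pitfalls is navigation that exists only as JavaScript event handlers. Crawlers discover URLs by following real `<a href>` links, so a sketch of the problem and the fix might look like this (the /products URL and loadPage() function are hypothetical placeholders):

```html
<!-- Hard for crawlers: there is no URL here for a crawler to follow. -->
<span onclick="loadPage('/products')">Products</span>

<!-- Crawler-friendly: a real link with an href; JavaScript can still
     intercept the click for client-side navigation. -->
<a href="/products">Products</a>
```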
If you create a page on your website (like /some-page.html), then it is just one page, even if URL parameters are added (like /some-page.html?foo=bar), right? The myth is that search engines see both of those URLs as a single page.
In reality, search engines have to treat URLs with parameters as if they could be completely separate pages. Some sites rely on URL parameters to show different content (e.g. /show?page=some-page and /show?page=other-page).
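When the parameterized URLs really do show the same content, a common mitigation is to declare which version you prefer with a canonical link element. A minimal sketch, assuming /some-page.html on example.com is the version you want indexed:

```html
<!-- Sketch: tells search engines that parameterized variants of this page
     (tracking codes, sort orders, etc.) should consolidate to this URL. -->
<link rel="canonical" href="https://www.example.com/some-page.html">
```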
It is tempting to think that you can have Google forget everything it knows about your site and start over. However, Google has a very long memory. There is no way to reset everything that Google knows about your site.
Webmasters live in fear of the "duplicate content penalty." The myth is that having two URLs on your site that show the same content is an SEO disaster that will cause the rankings of your entire site to plummet.
I have heard several people say that they think your sitemap controls which pages from your website are on Google. In reality, XML sitemaps have little to do with which pages Google chooses to index. It is very common for Google to index pages that are not included in an XML sitemap.
If you disallow a page in robots.txt, Google may choose to index the page anyway. Google claims to honor robots.txt, so how is that possible?
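The short answer is that robots.txt controls crawling, not indexing. A disallow rule like the sketch below (the /private-page.html path is a made-up example) stops Googlebot from fetching the page, but if other sites link to that URL, Google can still index the URL itself; and because the page is never fetched, Google will never see a noindex tag on it.

```text
# Sketch of a robots.txt disallow rule: this blocks crawling of the URL,
# but it does not guarantee the URL stays out of Google's index.
User-agent: *
Disallow: /private-page.html
```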
Many SEO guides suggest creating XML sitemaps. They either say or imply that sitemaps are needed to get Google to index your site and get good rankings. XML sitemaps do have some uses for SEO (a minimal example is sketched after this list), but:
- XML sitemaps won't influence Google rankings.
- Google rarely chooses to index a URL that it only finds via an XML sitemap.
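For reference, a sitemap is just an XML list of URLs. A minimal sketch (the example.com URLs are placeholders) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sitemap sketch; replace the placeholder URLs with your own pages. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/some-page.html</loc>
  </url>
</urlset>
```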