Site search can produce pages targeted at phrases for which you would like to rank in search engines. It is tempting to make those pages SEO friendly and link to them so that they get indexed. That isn't a good idea: if you do, Google could penalize your entire site.
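The usual safeguard is to keep crawlers out of your internal search results entirely. A minimal sketch of the relevant robots.txt rule, assuming your search results live under a /search path (adjust to your own URL structure):

```
# robots.txt at the site root: block all crawlers from internal search results
User-agent: *
Disallow: /search
```

Keep in mind that robots.txt only blocks crawling; search results pages that are already indexed may also need a noindex robots meta tag before you block them.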
If you run a crawler against your own site, it will generally crawl all your pages and then give you a report. It is tempting to think that Googlebot works the same way, but it doesn't. Googlebot doesn't crawl your entire site, wait for a while, and then come back and crawl your entire site again.
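To make the contrast concrete, here is a minimal sketch in Python of the kind of exhaustive, single-pass crawl that a site-audit tool performs. The names and the stdlib-only approach are illustrative, not any particular tool's implementation:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def crawl_whole_site(start_url):
    """Breadth-first crawl of one host: fetch every page once, then report.
    This single exhaustive pass is what audit tools do -- and what Googlebot
    does NOT do; it revisits individual URLs on its own schedule."""
    host = urlparse(start_url).netloc
    queue, seen = deque([start_url]), {start_url}
    while queue:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip pages that fail to fetch
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href).split("#")[0]
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen  # the "report": every URL found on the site

# crawl_whole_site("https://example.com/") would visit every linked page once.
```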
If you create a page on your website (like /some-page.html), then it is just one page, even if URL parameters are added (like /some-page.html?foo=bar), right? The myth is that search engines see both of those URLs as a single page.
In reality, search engines have to treat URLs with parameters as if they could be completely separate pages. Some sites rely on URL parameters to show different content (e.g. ?page=2 on a paginated listing).
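When a parameter does not actually change the content, the standard remedy is a canonical link element, which asks search engines to consolidate the parameterized variants onto one URL. A minimal sketch, reusing the hypothetical /some-page.html from above:

```html
<!-- In the <head> of /some-page.html and of any parameterized variant
     such as /some-page.html?foo=bar (example.com is a placeholder): -->
<link rel="canonical" href="https://www.example.com/some-page.html">
```

Note that Google treats rel="canonical" as a strong hint rather than a directive; it can still choose a different URL as the canonical version.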
It is tempting to think that you can have Google forget everything it knows about your site and start over. However, Google has a very long memory: there is no way to reset everything it knows about your site.
Webmasters live in fear of the "duplicate content penalty." The myth is that having two URLs on your site that show the same content (for example, /page and /page/ serving the same HTML) is an SEO disaster that will cause the rankings of your entire site to plummet.
I have heard several people say that your sitemap controls which pages from your website appear in Google. In reality, XML sitemaps have little to do with which pages Google chooses to index. It is very common for Google to index pages that are not included in an XML sitemap.
Many SEO guides suggest creating XML sitemaps. They either say or imply that sitemaps are needed to get Google to index your site and to rank well. XML sitemaps do have some uses for SEO (a minimal example of the format follows this list), but:
- XML sitemaps won't influence Google rankings.
- Google rarely chooses to index a URL that it only finds via an XML sitemap.
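Used appropriately, a sitemap is just a machine-readable list of URLs. For reference, here is a minimal sketch of the format (the URLs and dates are placeholders; the full protocol is documented at sitemaps.org):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want to tell search engines about -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/some-page.html</loc>
  </url>
</urlset>
```

Even with a sitemap like this submitted, Google still decides for itself which of those URLs are worth crawling and indexing.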