Stephen Ostermiller's Blog

SEO Myth: Now that Googlebot renders JavaScript, all Angular and React sites get crawled

Googlebot can now fully render pages and sites that are built with JavaScript. It is tempting to think that Google can now index all JavaScript sites, but that is not the case. You need to know how search engine crawlers index JavaScript sites because there are quite a few common pitfalls when trying to build a search engine friendly site in JavaScript.

Slower index speed

The first thing to know is that Google is much slower to crawl and index JavaScript-powered sites than sites built with plain HTML. Google has said that it has worked to render sites much more quickly, but webmasters still observe that JavaScript sites can take extra weeks to get crawled and indexed.

The single page application problem

The biggest pitfall is building a single page site. If your site only has one page, search engines will only index a single page. It is impossible to fit a significant amount of content onto a single page. If you have one very large page, search engines are not likely to index all the content on it, nor will it be possible to rank for most of the relevant keywords. It is much easier to rank for lots of keywords in search engines when you create lots of pages.

Single page application frameworks are not limited to building single page sites. In such frameworks, "single page" means that users load your web application only once, on the first page load. Clicking to other pages within it changes only the main content, without forcing them to reload the entire web application.

To create multiple pages under a single page application framework:

  • Assign each chunk of content to its own URL.
  • As users navigate the site, use pushState to change the URL they see, even if they are not loading whole new pages.
  • Ensure that starting the web application at a deep URL initially shows the content associated with that URL.
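The three steps above can be sketched as a tiny client-side router. This is a minimal illustration, not a framework's actual API; the routes, content, and element selector are hypothetical:

```javascript
// Each chunk of content gets its own URL path (hypothetical routes).
const routes = {
  '/': 'Welcome to the home page.',
  '/about': 'About this site.',
  '/contact': 'Contact information.',
};

// Resolve a path to its content, so starting at a deep URL shows the
// content associated with that URL.
function contentFor(path) {
  return routes[path] || 'Page not found.';
}

// Browser-only wiring (guarded so the sketch also runs outside a browser).
if (typeof document !== 'undefined') {
  // Call this when the user navigates within the app.
  function navigate(path) {
    history.pushState({}, '', path); // change the URL the user sees
    document.querySelector('main').textContent = contentFor(path);
  }
  // Initial load: render whatever deep URL was requested.
  document.querySelector('main').textContent = contentFor(location.pathname);
  // Keep the back/forward buttons working.
  window.addEventListener('popstate', () => {
    document.querySelector('main').textContent = contentFor(location.pathname);
  });
}
```

The key design point is that `contentFor` is consulted both on the initial page load and on every navigation, so a visitor (or Googlebot) arriving directly at `/about` sees the same content as a user who clicked their way there.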

Use links

To make your website crawlable, your JavaScript needs to render <a href="/foo.html"> links. When it crawls, Googlebot:

  1. Downloads the page
  2. Downloads all supporting JS and CSS
  3. Renders the page
    1. Indexes the keywords shown in the rendered page
  4. Scans the document object model (DOM) for <a> links to other pages so that it knows what to crawl next.

If you don't use links, Googlebot won't be able to find the other pages on your site. It is easy to make the mistake of using some other type of clickable HTML element for navigation. However, Googlebot doesn't click on anything. It won't try clicking every element to see if the URL changes. You need to use anchor (<a>) elements for Googlebot.
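To illustrate (the URLs and labels here are hypothetical), the same navigation can be rendered either way, but only the anchor version is something Googlebot can follow:

```javascript
// Crawlable: build a real anchor element with an href Googlebot can follow.
// This helper just constructs the markup string.
function crawlableLink(href, text) {
  return `<a href="${href}">${text}</a>`;
}

// Browser-only wiring (guarded so the sketch also runs outside a browser).
if (typeof document !== 'undefined') {
  // Good: this link appears in the rendered DOM, so Googlebot finds it.
  document.body.insertAdjacentHTML(
    'beforeend',
    crawlableLink('/products.html', 'Products')
  );

  // Bad: Googlebot never clicks, so this navigation is invisible to it.
  const div = document.createElement('div');
  div.textContent = 'Products';
  div.addEventListener('click', () => { location.href = '/products.html'; });
  document.body.appendChild(div);
}
```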

When users click on links, you probably don't want them to load full new pages. Instead, intercept the clicks: have your JavaScript update the content within the current page, set the URL using pushState, and call event.preventDefault() (or return false) in the link's event handler. This simulates loading a new page without causing your web app to reload.
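A sketch of that interception, under the assumption that the app provides its own renderer (renderContentFor below is a hypothetical app-specific function, not a real API):

```javascript
// Pure check so external links are left to load normally.
function isSameOrigin(linkOrigin, pageOrigin) {
  return linkOrigin === pageOrigin;
}

// Browser-only wiring (guarded so the sketch also runs outside a browser).
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a');
    if (!link || !isSameOrigin(link.origin, location.origin)) return;
    event.preventDefault();               // stop the full page load
    history.pushState({}, '', link.href); // update the URL the user sees
    renderContentFor(link.pathname);      // hypothetical app-specific renderer
  });
}
```

Because the markup still contains ordinary <a href> links, Googlebot can follow them even though human visitors never trigger a full page load.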

Bots don't interact

Googlebot only sees what is rendered by the page's onload event. Once the initial page load has rendered, Googlebot indexes what it finds on the page at that point. Googlebot doesn't click, scroll, or otherwise interact with the page. Any content that is shown only when a user clicks on something, scrolls the page, or performs any other interaction won't be indexed. You need to ensure that all the words you want indexed are visible when the page loads, without any user interaction at all.
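One common way to satisfy this, sketched below with hypothetical tab names and content: render every tab's text into the DOM at load time and let clicks merely toggle visibility, rather than fetching tab content only after a click.

```javascript
// Hypothetical tab data; in a real app this would come from your content store.
const tabs = {
  specs: 'Full product specifications text.',
  reviews: 'Customer reviews text.',
};

// Build markup for every tab up front. Hidden tabs are still present in the
// DOM at load time, so their words exist in the rendered page; only the
// active tab is visible.
function renderTabs(tabData, activeId) {
  return Object.entries(tabData)
    .map(([id, text]) =>
      `<section id="${id}"${id === activeId ? '' : ' hidden'}>${text}</section>`)
    .join('');
}

// Browser-only wiring (guarded so the sketch also runs outside a browser).
if (typeof document !== 'undefined') {
  document.querySelector('main').innerHTML = renderTabs(tabs, 'specs');
}
```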


This article was written as part of a series about SEO myths.
