Why Your Webpage Isn’t Being Indexed – Common Causes

It’s a bad feeling: you put in long hours working on a web page, adding keywords, minifying code and handling all the other SEO essentials, only to discover that your page isn’t just missing from the first page of search results – it hasn’t been indexed at all.

Every SEO professional has dealt with this frustration at one time or another, so you’re far from the first. Pages fail to get indexed all the time – usually on purpose, but sometimes by accident.

In this piece, we’re going to dive into some of the common reasons your page or pages aren’t showing up in Google search – and just as importantly, how to fix them. But there’s one thing you need to do first:

Make Sure the Page is Actually Not Indexed

There’s limited real estate on the first page of Google search results, which means even the world’s best SEO experts occasionally come up short. Before you dive into the technical reasons why your page isn’t listed, first make sure it’s not just lost in the netherworld of page 2 or beyond. 

The easiest way to do this is with a site search using the URL of the page – for example, “site:MySite.com/MissingPage.” If Google returns results, the page has been indexed; it just needs more SEO love to rank better.
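If you’re not sure of the exact path, you can also run the search against the whole domain (MySite.com here is the same hypothetical domain as above):

    site:MySite.com/MissingPage    checks one specific URL
    site:MySite.com                lists every page Google has indexed on the domain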

Common Reasons Pages Aren’t Indexed

1. Your Site Isn’t Responsive

Anticipating the increasing dominance of smartphones and tablets for search, Google began rolling out mobile-first indexing in 2018. This means it treats the mobile version of a site as the primary one. As a result, if your site isn’t responsive (i.e., not mobile-friendly), your pages are going to be penalized.

The Fix: If your site is built on WordPress or a similar platform, you should have an option to automatically generate a responsive version of your site. If you don’t, there are numerous plugins that will handle it for you.
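Under the hood, a responsive page needs a viewport meta tag plus CSS that adapts to the screen width. A minimal sketch of what a theme or plugin generates for you (the class names and breakpoint are illustrative, not from any particular theme):

    <!-- In the page's <head>: tells mobile browsers not to render a zoomed-out desktop view -->
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <style>
      .content { width: 100%; max-width: 960px; margin: 0 auto; }
      /* Simplify the layout on screens narrower than 768px */
      @media (max-width: 768px) {
        .sidebar { display: none; }
      }
    </style>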

2. Your Site Loads Too Slowly 

Every site is assigned a crawl budget. If your site takes too long to load – whether because of server speed, bloated code, redirect loops or any number of other factors – it might exhaust its crawl budget before Google’s spiders can discover and index every page.

The Fix: Use Google PageSpeed Insights to find the root of your problems, then take steps to fix them.
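One of the most common PageSpeed Insights recommendations is eliminating render-blocking scripts. A minimal sketch of the change (the script path is hypothetical):

    <!-- Render-blocking: the browser stops parsing the page until this downloads and runs -->
    <script src="/js/analytics.js"></script>

    <!-- Non-blocking: downloads in parallel and runs after the HTML is parsed -->
    <script defer src="/js/analytics.js"></script>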

3. Your Content is Thin

If Google determines the content of your page is too sparse or that you’re not providing enough useful information, it may choose not to index it. 

The Fix: Make sure every page on your site is dedicated to a keyword and properly optimized. It should contain information that is relevant to search queries and provide visitors with value. 

4. You Didn’t Add All Your Domain Properties to GSC

If you have more than one version of your domain, you need to make sure each has been added and verified in Google Search Console. This includes HTTP and HTTPS versions, as well as URLs with and without a www (e.g., MySite.com and www.MySite.com).

The Fix: Go to GSC, add the domain(s) and verify your ownership.
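For the example domain above, that means four separate URL-prefix properties – or, alternatively, a single GSC “Domain” property (verified via a DNS record), which covers all of these variants at once:

    http://MySite.com
    https://MySite.com
    http://www.MySite.com
    https://www.MySite.com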

5. You’re Not Being Crawled

If you have a page or pages that aren’t being indexed, the usual culprit is that something is preventing web crawlers from discovering them. There are several reasons why this can happen, and luckily most of them have easy fixes.

Errors in Robots.txt 

The robots.txt file lives at the root of your site. It tells search spiders which URLs they can access, and which are off-limits.

The Fix: Open your robots.txt file in a text editor and verify that the page or pages in question aren’t accidentally disallowed. If you find one or more that are, remove the rule or change the “Disallow” directive to “Allow.”
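A sketch of what an accidental block might look like, reusing the hypothetical /MissingPage URL from earlier:

    User-agent: *
    Disallow: /private/      # intentional: keeps this section out of search
    Disallow: /MissingPage   # accidental: delete this line...
    Allow: /MissingPage      # ...or add an explicit Allow to override it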

Noindex Tags

If Google encounters a noindex tag in a page’s HTML, it will not include that page in its index. This is usually reserved for thank-you pages and other content that’s not suitable for search engines, but occasionally the tags are left in accidentally during site refreshes.

The Fix: Get rid of the noindex tag on any page that you want included in search results. 
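Here’s what to look for. The tag sits in the page’s <head>; note that the same directive can also be sent as an HTTP response header, in which case it won’t appear in the HTML at all:

    <!-- In the page's <head> – remove this to make the page indexable: -->
    <meta name="robots" content="noindex">

    <!-- The equivalent HTTP response header (not visible in the HTML): -->
    X-Robots-Tag: noindex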

Site Architecture Problems

Search engines prefer sites with clean, easy-to-follow architecture. This not only makes it easy for spiders to follow links and discover pages, but it also minimizes the chance of isolated “orphan” pages, for which there is no linked path.

The Fix: Organize your website in a logical, hierarchical manner. Make sure every page is linked to from at least one other page.
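One easy way to guarantee a linked path to every page is a breadcrumb trail that mirrors your hierarchy. A minimal sketch (the URLs are hypothetical):

    <nav>
      <a href="/">Home</a> &gt;
      <a href="/guides/">Guides</a> &gt;
      <a href="/guides/indexing/">Indexing</a>
    </nav>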

Nofollow Tags

If your webpage includes nofollow instructions in the HTML, links on the page will not be followed by crawlers. This is obviously a problem for pages that don’t have any other links pointing at them.

The Fix: Perform a site audit to find inadvertent nofollow links. Turn any relevant “nofollow” links into normal, followed (“dofollow”) links by removing the nofollow attribute, as shown below.
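Nofollow can be applied page-wide or per link. A sketch of both forms and the fix (the URL is hypothetical):

    <!-- Page-wide: crawlers ignore every link on the page -->
    <meta name="robots" content="nofollow">

    <!-- Per link: crawlers ignore only this link -->
    <a rel="nofollow" href="/important-page/">Important page</a>

    <!-- The fix: drop the attribute (there is no literal "dofollow" value in HTML) -->
    <a href="/important-page/">Important page</a>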

Access Restrictions

If you have pages gated behind a subscription or paywall, Google bots may not be able to access them, which means they won’t be included in search results. 

The Fix: Evaluate your gated pages and keep restrictions only where they make sense.
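For pages that must stay gated, Google documents structured-data markup for paywalled content that tells Googlebot the gating is legitimate rather than cloaking. A simplified sketch (the .paywall CSS selector is a hypothetical example; use whatever selector wraps your gated section):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "isAccessibleForFree": false,
      "hasPart": {
        "@type": "WebPageElement",
        "isAccessibleForFree": false,
        "cssSelector": ".paywall"
      }
    }
    </script>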

Eliminate Crawling and Indexing Problems for Good

The best way to ensure you don’t encounter any of the problems discussed here is to audit your website regularly to find and fix issues. Done manually, however, this can be a time-consuming task.

Luckily, there’s a better way. 

Evisio automatically scans your site on a schedule, looking for things like crawlability and indexability problems – and a whole lot more. When it finds things that are hurting your ranking in search results (or opportunities you can capitalize on), it gives you step-by-step instructions for taking care of them.

And that’s just one small piece of evisio, the platform built to streamline everything SEO. See it for yourself. Request your free trial today.
