Ways to fix the “Blocked by robots.txt” error in Google Search Console
“Blocked by robots.txt” indicates that Google didn't crawl your URL because you blocked it with a Disallow directive in robots.txt. It also means that the URL wasn't indexed.
You may also see a related warning in Google Search Console: “Indexed, though blocked by robots.txt”. That message means Google indexed a URL even though your robots.txt file blocked it from being crawled.
Google flags these URLs because it isn't sure whether you actually want them indexed. As Google puts it:
“A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google.”
Google doesn't index every page you submit, and some pages may not be indexed for various reasons, including the "Blocked by robots.txt" error. So what should you do if you see this status in Google Search Console? Before getting into the solution, let us first understand what the error means.
What causes this GSC error message to appear?
The “Blocked by robots.txt” error means that your website’s robots.txt file is blocking Googlebot from crawling the page. In other words, Google is trying to access the page but is being prevented by the robots.txt file.
This can happen for several reasons, but the most common is a misconfigured robots.txt file. For example, you may have accidentally blocked Googlebot from the page, or included a Disallow directive in robots.txt that matches more URLs than you intended.
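As an illustration (the rule below is a placeholder, not taken from any real site), a single overly broad Disallow is enough to keep Googlebot away from every page; rules like this are sometimes left over from a staging setup:

```
# Hypothetical robots.txt: the wildcard user-agent plus "Disallow: /"
# blocks all compliant crawlers, including Googlebot, from the entire site.
User-agent: *
Disallow: /
```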
Confirm that a page is blocked by robots.txt
Before fixing anything, confirm that robots.txt really is the cause. You can use Indexly's Page Inspection tool or Google's URL Inspection tool to check. Here are the steps to follow (a quick scripted check is also sketched after the list):
- Open the URL Inspection tool.
- Inspect the URL of the affected page (the one reported in Search Console or shown in the Google search result).
- In the inspection results, check the reported indexing status.
- If it says Blocked by robots.txt, then you've confirmed the problem.
- Move to the next section to fix it.
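If you prefer to double-check outside Search Console before moving on, the sketch below uses Python's standard-library robotparser against your live robots.txt. The URLs are placeholders, and the parser applies generic robots.txt matching rather than every Google-specific nuance, so treat it as a quick sanity check.

```python
# Quick local check: does robots.txt block Googlebot from a given URL?
# Both URLs are placeholders; substitute your own.
from urllib import robotparser

ROBOTS_URL = "https://www.example.com/robots.txt"
PAGE_URL = "https://www.example.com/blog/my-post/"

parser = robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse the live robots.txt

if parser.can_fetch("Googlebot", PAGE_URL):
    print("Allowed: robots.txt does not block Googlebot for this URL.")
else:
    print("Blocked: a Disallow rule applies to this URL for Googlebot.")
```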
How to fix the “Blocked by robots.txt” issue
- Use a robots.txt validator to find the rule that is blocking your page and to locate your robots.txt file.
- Fix or remove the rule:
- If you are using a website hosting service (for example, if your site is on Wix, Joomla, or Drupal), search your provider's documentation to learn how to unblock the page or site for Google.
- If you can modify your robots.txt file directly, remove the offending rule or update it so that it follows valid robots.txt syntax.
- For example, a rule like the one shown below prevents Googlebot from crawling all of your blog pages, which also keeps them out of Google's index.
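The snippet below is an illustrative reconstruction that assumes the blog lives under a /blog/ path; the actual rule on your site may use a different path or user-agent.

```
# Hypothetical robots.txt rule: blocks every crawler from all URLs under /blog/
User-agent: *
Disallow: /blog/
```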
Remove or correct the offending entries in robots.txt and save the file. That's it! Once Googlebot recrawls the page, it will be able to access it, and the “Blocked by robots.txt” error will clear. You can speed this up by clicking Validate Fix on the report in Google Search Console.
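If you want to unblock only specific pages while keeping the rest of a directory blocked, Google's crawler also honors Allow rules, with the most specific matching path winning. The paths below are placeholders for illustration:

```
# Hypothetical: keep /blog/ blocked in general, but allow one important post.
User-agent: *
Disallow: /blog/
Allow: /blog/my-important-post/
```

Keep in mind that robots.txt only controls crawling; if the goal is to keep a page out of search results entirely, a noindex directive on a crawlable page is the right tool, as Google's own note above points out.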
How to Prevent the Error From Happening Again
To prevent the “Blocked by robots.txt” error from happening again, we recommend reviewing your website's robots.txt file on a regular basis.
This helps ensure that every directive is accurate and that no important pages are accidentally blocked from being crawled by Googlebot. A small automated check, like the sketch below, can catch such regressions before Google reports them.
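One low-effort way to do this is a script that runs after every robots.txt change (or on a schedule) and confirms that your most important URLs are still crawlable. The sketch below is a minimal example built on Python's standard-library robotparser; the domain and URL list are placeholders you would replace with your own.

```python
# Minimal robots.txt regression check: flag any must-index URL that becomes blocked.
# Domain and URLs below are placeholders.
from urllib import robotparser

ROBOTS_URL = "https://www.example.com/robots.txt"
MUST_STAY_CRAWLABLE = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/pricing/",
]

parser = robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse the live robots.txt

for url in MUST_STAY_CRAWLABLE:
    status = "ok" if parser.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{status:7} {url}")
```

Treat any BLOCKED line as a regression to fix before Google recrawls the site.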
I hope you liked this article. 😄
Supercharge your SEO with Indexly
It usually takes a few weeks for Googlebot to crawl and index your website's pages. However, Indexly can simplify this process by automatically checking your sitemaps, finding new pages, and submitting them to Google Search Console.
This cuts manual effort and errors and significantly shortens the time it takes to get indexed. Once your pages are indexed, they can start ranking in search results and driving organic traffic.