Ways to fix the “Blocked by robots.txt” error in Google Search Console
“Blocked by robots.txt” indicates that Google didn’t crawl your URL because you blocked it with a Disallow directive in robots.txt, and that the URL wasn’t indexed as a result.
You may also see the related warning “Indexed, though blocked by robots.txt” in Google Search Console. That message means Google indexed a URL even though your robots.txt file blocked it from being crawled.
Google shows a warning for these URLs because it’s not sure whether you want them indexed. As per Google,
“A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google.”
Google doesn't index every page you submit, and some pages may not be indexed for various reasons, including the "Blocked by robots.txt" error. So, what should you do if you see this status in Google Search Console? Before getting into the solution, let us first understand what the error means.
What causes this GSC error message to appear?
The “Blocked by robots.txt” error means that your website’s robots.txt file is blocking Googlebot from crawling the page. In other words, Google is trying to access the page but is being prevented by the robots.txt file.
This can happen for multiple reasons, but the most common one is that the robots.txt file is not configured correctly: a Disallow directive may have been added by mistake, or a rule written for another part of the site may be matching this page as well.
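A common case is a blanket rule left over from development or staging. The snippet below is a hypothetical robots.txt in which a single Disallow line blocks every URL on the site for every crawler:
User-agent: *
Disallow: /
Even narrower rules can cause the same error if their path prefix happens to match pages you want indexed.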
Confirm that a page is blocked by robots.txt
Before fixing the issue, first confirm it. You can use Indexly's Page Inspection tool to check whether the issue persists. Here are the steps to follow (a quick programmatic cross-check is also sketched after the list):
- Open the URL Inspection tool.
- Inspect the URL shown for the page in the Google search result.
- In the inspection results, check the value shown in the Status section.
- If it says Blocked by robots.txt, then you've confirmed the problem.
- Move to the next section to fix it.
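If you prefer to double-check outside of Search Console, here is a minimal Python sketch using the standard library's robots.txt parser. It assumes your robots.txt sits at the site root; the page URL is a hypothetical placeholder, and Python's parser does not implement Google's wildcard extensions, so treat the result as a rough cross-check rather than proof.

```python
from urllib.robotparser import RobotFileParser

page_url = "https://yourdomain.com/blog/some-post/"  # hypothetical page to test

parser = RobotFileParser()
parser.set_url("https://yourdomain.com/robots.txt")  # robots.txt assumed at the site root
parser.read()  # fetches and parses the live file

if parser.can_fetch("Googlebot", page_url):
    print("robots.txt allows Googlebot to crawl this URL")
else:
    print("robots.txt blocks Googlebot for this URL")
```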

How to fix the 'Blocked by robots.txt' issue
1. Understand What’s Being Blocked
Go to Google Search Console:
- Navigate to Pages > Why pages aren’t indexed > Blocked by robots.txt.
- Click the affected URL.
- Use the URL Inspection Tool to confirm the block.
2. Check Your robots.txt File
Visit: https://yourdomain.com/robots.txt
Look for lines like:
User-agent: *
Disallow: /path/
If the blocked URL falls under a Disallow rule, Googlebot won’t crawl it.
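To spot which line is responsible, you can also dump the Disallow rules that match your page's path. This is a rough sketch (hypothetical URL, plain prefix matching, ignoring user-agent groups and Google's * and $ wildcards) that simply points you at candidate rules to review:

```python
from urllib.parse import urlsplit
from urllib.request import urlopen

page_url = "https://yourdomain.com/blog/some-post/"  # hypothetical URL to investigate
parts = urlsplit(page_url)

# Fetch the site's robots.txt from the root of the same host.
with urlopen(f"{parts.scheme}://{parts.netloc}/robots.txt") as resp:
    lines = resp.read().decode("utf-8", errors="replace").splitlines()

# Print every Disallow rule whose path is a prefix of the page's path.
for line in lines:
    rule = line.strip()
    if rule.lower().startswith("disallow:"):
        path = rule.split(":", 1)[1].strip()
        if path and parts.path.startswith(path):
            print("Possible match:", rule)
```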
3. Fix the Rule in robots.txt
Option A: Allow Googlebot to Crawl
If the URL should be indexed, modify or remove the blocking rule:
# Old rule (blocking)
Disallow: /blog/
# New rule (unblocking)
Allow: /blog/
Or delete the Disallow line entirely if it’s unnecessary.
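If you only want part of a blocked section crawled, you can combine Allow and Disallow rules; Google follows the most specific (longest) matching path. The paths below are hypothetical:
User-agent: *
Allow: /blog/
Disallow: /blog/drafts/
Here /blog/drafts/ stays blocked while the rest of /blog/ is crawlable.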
Option B: Keep the Page Out of the Index
If you don’t want the page to show up in Google at all:
- Use a noindex meta tag on the page instead of blocking it via robots.txt.
- If you block the page in robots.txt, Google can’t crawl it, so it can’t see the noindex tag!
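To keep a page out of the index, let Googlebot crawl it and add the standard robots meta tag to the page's head (for non-HTML files such as PDFs, the equivalent X-Robots-Tag: noindex HTTP response header does the same job):
<meta name="robots" content="noindex">
Once Google has recrawled the page and seen the tag, it will drop the page from the index.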
4. Test the Fix
Use the robots.txt Tester in Google Search Console, or the robots.txt report (under Settings) if the Tester is no longer available in your account:
- Go to Legacy tools > robots.txt Tester.
- Enter the blocked URL.
- Confirm it’s now crawlable.
5. Resubmit the URL
After updating robots.txt:
- Go to Indexly
- Click “Index”
How to Prevent the Error From Happening Again
To prevent the “Blocked by robots.txt” error from happening again, we recommend reviewing your website’s robots.txt file on a regular basis.
This will help to ensure that all directives are accurate and that no pages are accidentally blocked from being crawled by Googlebot.
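If you want something more automated than a manual review, a small scheduled check (cron or CI) can warn you when an important URL becomes blocked. This is a rough sketch: the domain and URL list are hypothetical placeholders, and Python's parser only approximates Google's rule matching.

```python
import sys
from urllib.robotparser import RobotFileParser

# Hypothetical URLs that should always stay crawlable.
MUST_STAY_CRAWLABLE = [
    "https://yourdomain.com/",
    "https://yourdomain.com/blog/",
    "https://yourdomain.com/pricing/",
]

parser = RobotFileParser()
parser.set_url("https://yourdomain.com/robots.txt")
parser.read()

blocked = [url for url in MUST_STAY_CRAWLABLE if not parser.can_fetch("Googlebot", url)]
if blocked:
    print("Blocked by robots.txt:")
    for url in blocked:
        print("  " + url)
    sys.exit(1)  # non-zero exit so cron/CI flags the regression

print("All monitored URLs are crawlable.")
```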
I hope you liked this article. 😄