Robots.txt Tester & Validator
Test if your URLs are blocked by robots.txt. Validate syntax, find errors, and ensure search engines can crawl your important pages.
Paste or fetch a robots.txt to validate
Directive reference:
- User-agent: Specifies which crawler the rules apply to
- Disallow: Blocks access to specified paths
- Allow: Allows access (overrides Disallow)
- Sitemap: Location of the XML sitemap
- Crawl-delay: Seconds between requests (not supported by Google)
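As a sketch of how these directives fit together, here is a small made-up file read with Python's standard-library robots.txt parser (urllib.robotparser). The file contents and the example.com URLs are placeholders for illustration, not this tool's implementation.

```python
# Example only: a made-up robots.txt read with Python's standard-library parser.
from urllib.robotparser import RobotFileParser

SAMPLE = """\
User-agent: *
Allow: /private/help/
Disallow: /private/
Crawl-delay: 10
Sitemap: https://www.example.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(SAMPLE.splitlines())

# Allow is listed before the broader Disallow, so the result is the same under
# Python's first-match rule and Google's longest-match rule.
print(rp.can_fetch("*", "https://www.example.com/private/help/faq.html"))  # True
print(rp.can_fetch("*", "https://www.example.com/private/account.html"))   # False
print(rp.crawl_delay("*"))  # 10 (Google ignores Crawl-delay, per the note above)
print(rp.site_maps())       # ['https://www.example.com/sitemap.xml'] (Python 3.8+)
```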
Frequently Asked Questions
What is Robots.txt Validation?
Robots.txt validation ensures your file follows the correct syntax and will be properly interpreted by search engine crawlers. A malformed robots.txt can accidentally block important pages from being indexed or allow crawlers access to private areas.
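As a hedged illustration of how a malformed file can quietly expose private areas, the sketch below feeds Python's standard-library parser a file whose first rule appears before any User-agent line; rules outside a user-agent group are typically discarded. The paths are placeholders, and this is not how the tester validates files internally.

```python
# Assumed example: a rule placed before any User-agent line belongs to no
# group, so parsers typically discard it without any warning.
from urllib.robotparser import RobotFileParser

BROKEN = """\
Disallow: /admin/
User-agent: *
Disallow: /tmp/
"""

rp = RobotFileParser()
rp.parse(BROKEN.splitlines())

# The orphaned /admin/ rule is silently dropped, so the "blocked" area is open:
print(rp.can_fetch("Googlebot", "/admin/settings"))  # True  (unexpectedly allowed)
print(rp.can_fetch("Googlebot", "/tmp/cache"))       # False (this rule has a group)
```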
How to Test Your Robots.txt
To test your robots.txt:
1) Paste your robots.txt content or fetch it from your website
2) Enter the URL path you want to test (e.g., /products/)
3) Select the User-Agent you want to test against (Googlebot, GPTBot, etc.)
4) Click 'Test URL' to see if the path is allowed or blocked
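If you prefer running the same check from a script, the snippet below mirrors those four steps with urllib.robotparser. The domain, path, and user agent are placeholders, and Python's user-agent matching is looser than Google's, so treat the result as an approximation rather than a definitive verdict.

```python
# Placeholder domain, path, and user agent; adjust to your own site.
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://www.example.com/robots.txt"   # step 1: fetch the live file
URL_TO_TEST = "https://www.example.com/products/"   # step 2: the URL path to test
USER_AGENT = "Googlebot"                            # step 3: the crawler to test as

rp = RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()  # downloads and parses robots.txt

# step 4: report whether the path is allowed or blocked for that crawler
if rp.can_fetch(USER_AGENT, URL_TO_TEST):
    print(f"{USER_AGENT} is allowed to crawl {URL_TO_TEST}")
else:
    print(f"{USER_AGENT} is blocked from {URL_TO_TEST}")
```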
Common Robots.txt Errors
- Missing User-agent directive before Allow/Disallow rules
- Using relative URLs in Sitemap directive instead of absolute URLs
- Forgetting the leading slash in paths (use /admin/ not admin/)
- Blocking CSS and JavaScript files that Google needs to render pages
- Using Crawl-delay with Google (not supported; use Search Console instead). See the checker sketch below for catching these issues.
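A simple checker for several of these issues is sketched below. It is an assumption-laden illustration (the directive list, messages, and the lint_robots helper are invented for this example), not the validation logic this tool uses.

```python
# Illustrative only: a tiny linter for the errors listed above.
from urllib.parse import urlparse

KNOWN_DIRECTIVES = {"user-agent", "disallow", "allow", "sitemap", "crawl-delay"}

def lint_robots(text: str) -> list[str]:
    problems = []
    seen_user_agent = False
    for lineno, raw in enumerate(text.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()

        if key not in KNOWN_DIRECTIVES:
            problems.append(f"line {lineno}: unknown directive '{key}'")
        elif key == "user-agent":
            seen_user_agent = True
        elif key in ("allow", "disallow"):
            if not seen_user_agent:
                problems.append(f"line {lineno}: {key} appears before any User-agent")
            if value and not value.startswith(("/", "*")):
                problems.append(f"line {lineno}: path '{value}' is missing its leading slash")
        elif key == "sitemap" and not urlparse(value).scheme:
            problems.append(f"line {lineno}: Sitemap should be an absolute URL")
        elif key == "crawl-delay":
            problems.append(f"line {lineno}: Crawl-delay is not supported by Google")
    return problems

BAD = "Disallow: admin/\nUser-agent: *\nSitemap: /sitemap.xml\nCrawl-delay: 5"
for problem in lint_robots(BAD):
    print(problem)
```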