Test if URLs are blocked by your robots.txt file. Validate syntax, find issues, and ensure search engines can crawl your important pages.
The robots.txt file is a text file that tells search engine crawlers which pages or sections of your website they can or cannot request. It's located at the root of your website (e.g., example.com/robots.txt) and is the first file crawlers look for.
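A minimal robots.txt illustrating the directives described below (the paths and sitemap URL are placeholders, not recommendations):

```
User-agent: *
Disallow: /admin/
Allow: /public/

Sitemap: https://example.com/sitemap.xml
```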
User-agent: *
    Applies rules to all bots. Use specific names like Googlebot for targeted rules.
Disallow: /admin/
    Blocks crawlers from accessing paths starting with /admin/.
Allow: /public/
    Explicitly allows access. Useful to override a broader Disallow rule.
Sitemap: https://...
    Points crawlers to your XML sitemap for better discovery.
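You can test these rules yourself without any external tool. This sketch uses Python's standard-library urllib.robotparser to check whether specific URLs would be blocked; the rules and URLs are illustrative examples, not from any real site:

```python
from urllib import robotparser

# Example rules matching the directives above (illustrative only)
rules = """\
User-agent: *
Disallow: /admin/
Allow: /public/
"""

# Parse the rules from a string; use set_url() + read() to fetch a live file
rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# can_fetch(user_agent, url) returns True if the URL is crawlable
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False: blocked
print(rp.can_fetch("*", "https://example.com/public/page"))     # True: allowed
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True: no rule matches
```

Note that can_fetch reflects Python's parser, which follows the original robots.txt convention; some crawlers (e.g. Googlebot) use longest-match precedence between Allow and Disallow, so edge cases can differ.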