How to Use
1. Use a preset (Allow All, Block All Bots, Block AI Bots, WordPress Default) or build from scratch.
2. Add one or more User-agent rule blocks.
3. Set Disallow and Allow paths for each agent.
4. Optionally add a Crawl-delay, Sitemap URL, and Host.
5. Copy the generated robots.txt or download it as a file.
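For example, a configuration with a single rule block for all bots, a crawl delay, and a sitemap produces output along these lines (the paths and URLs below are placeholders):

```
User-agent: *
Disallow: /private/
Allow: /private/public-page.html
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
Host: example.com
```

Note that Crawl-delay and Host are non-standard directives: Google ignores Crawl-delay, and Host is recognized mainly by Yandex.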
About robots.txt Generator
The robots.txt Generator creates standards-compliant robots.txt files for any website. Configure crawling rules for all bots or target specific user agents like Googlebot, Bingbot, GPTBot, and ChatGPT-User.
Includes one-click presets to block AI training bots (GPTBot, ChatGPT-User, Google-Extended, CCBot, anthropic-ai), set up a WordPress default configuration, or allow/block all crawlers. Add Sitemap URLs, Host directives, and Crawl-delay values as needed.
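As a rough sketch, the WordPress Default preset mirrors the rules WordPress serves in its own virtual robots.txt; the exact preset output may differ, and the sitemap URL here is a placeholder:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://example.com/wp-sitemap.xml
```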
All generation runs locally in your browser. Copy the output or download it as a robots.txt file ready to upload to your server's root directory. No data is transmitted to any external service.
Frequently Asked Questions
What is robots.txt?
robots.txt is a file at the root of a website that tells search engine crawlers and other bots which pages or files they can or cannot access.
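A minimal robots.txt that allows all crawling looks like this; an empty Disallow value means nothing is blocked:

```
User-agent: *
Disallow:
```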
Where should I place robots.txt?
At the root of your domain, e.g. https://example.com/robots.txt. Most web servers serve it automatically when the file is placed in the site's document root (the public web folder).
Does Disallow: / block all crawlers?
It instructs bots not to crawl any page on the site, but only compliant bots honor it. Malicious scrapers may ignore the file entirely.
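The complete block-all file is just two lines:

```
User-agent: *
Disallow: /
```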
Can I block AI training bots?
Yes. Use the "Block AI Bots" preset to add rules blocking GPTBot, ChatGPT-User, Google-Extended, CCBot, and anthropic-ai.
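The preset adds one rule block per bot, so the output is equivalent to something like:

```
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: anthropic-ai
Disallow: /
```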
Does robots.txt guarantee pages won't be indexed?
No. robots.txt tells compliant crawlers not to access pages, but a blocked page can still be indexed if other sites link to it. To keep a page out of search results, use a noindex meta tag or an X-Robots-Tag response header, and allow crawlers to access the page so they can see the directive.
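For example, adding this standard tag inside a page's <head> asks compliant search engines not to index it:

```
<meta name="robots" content="noindex">
```

The same directive can be sent as an X-Robots-Tag HTTP response header, which also works for non-HTML files such as PDFs.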