I have found that Bing, Yahoo, DuckDuckGo, Yandex, and Google report crawl errors when using the default robots.txt: their bots will not crawl the path '/' or any sub-paths. I agree that the current robots.txt should work and correctly implements the specification; in practice, however, it does not. In my experience, explicitly permitting the root path by adding the directive 'Allow: /' resolves the issue. More details can be found in a blog post about the issue here: https://www.dfoley.ie/blog/starting-with-the-indieweb
robots.txt:
User-agent: *
Disallow: /backup/
Disallow: /bin/
Disallow: /cache/
Disallow: /grav/
Disallow: /logs/
Disallow: /system/
Disallow: /vendor/
Disallow: /user/
Allow: /user/pages/
Allow: /user/themes/
Allow: /user/images/
Allow: /
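A minimal way to sanity-check these rules against the specification is Python's standard urllib.robotparser; the sample URL paths below are hypothetical, not taken from any real site. One caveat: the stdlib parser resolves conflicts first-match-wins, while some crawlers (Google, for example) use longest-match precedence, so a path like /user/pages/ can be judged differently by different crawlers. The paths tested below agree under both interpretations.

from urllib.robotparser import RobotFileParser

# The robots.txt above, inlined so the check is self-contained.
rules = """\
User-agent: *
Disallow: /backup/
Disallow: /bin/
Disallow: /cache/
Disallow: /grav/
Disallow: /logs/
Disallow: /system/
Disallow: /vendor/
Disallow: /user/
Allow: /user/pages/
Allow: /user/themes/
Allow: /user/images/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# With 'Allow: /' present, the root and ordinary pages are fetchable.
print(parser.can_fetch("*", "/"))           # True
print(parser.can_fetch("*", "/blog/post"))  # True (hypothetical page)

# Private Grav directories remain blocked.
print(parser.can_fetch("*", "/cache/index.html"))         # False
print(parser.can_fetch("*", "/user/config/system.yaml"))  # False

This only shows that the rules parse as intended under a spec-compliant reader; it cannot reproduce the crawler behavior described above, which is exactly why the explicit 'Allow: /' is worth adding even though the file is already correct per the specification.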