Mirror of https://github.com/getgrav/grav.git (synced 2025-10-26 07:56:07 +01:00)
Update robots.txt (#2632)
I have found that Bing/Yahoo/DuckDuckGo, Yandex, and Google report crawl errors when using the default robots.txt: their bots will not crawl the path '/' or any sub-paths. The current robots.txt should work and properly implements the specification, but in practice it does not. In my experience, explicitly permitting the path '/' by adding the directive Allow: / resolves the issue. More details can be found in a blog post about the issue here: https://www.dfoley.ie/blog/starting-with-the-indieweb
Committed by: Andy Miller
Parent: 20b9ca56fa
Commit: ed87faad92
@@ -10,3 +10,4 @@ Disallow: /user/
 Allow: /user/pages/
 Allow: /user/themes/
 Allow: /user/images/
+Allow: /
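
For reference, a sketch of what the full robots.txt could look like after this change. Only the hunk above is confirmed by this commit; the Disallow directives preceding the /user/ rule are assumptions based on a typical Grav install and may differ from the actual file.

# Assumed Grav defaults (not shown in this commit's hunk)
User-agent: *
Disallow: /backup/
Disallow: /bin/
Disallow: /cache/
Disallow: /grav/
Disallow: /logs/
Disallow: /system/
Disallow: /vendor/
Disallow: /user/
Allow: /user/pages/
Allow: /user/themes/
Allow: /user/images/
# Added by this commit: explicitly allow the site root for crawlers
# that otherwise report '/' and all sub-paths as disallowed
Allow: /

Once deployed, the change can be sanity-checked by fetching the site's /robots.txt and confirming the Allow: / line is present, or by running the file through a robots.txt testing tool for the affected crawlers.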