From ed87faad92829b4218f91a3c81990f1fad23a7f5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Daith=C3=AD=20Se=C3=A1n=20=C3=93=20Foghl=C3=BA?=
Date: Mon, 19 Aug 2019 03:22:33 +1000
Subject: [PATCH] Update robots.txt (#2632)

I have found that Bing/Yahoo/DuckDuckGo, Yandex, and Google report crawl
errors when using the default robots.txt. Specifically, their bots will not
crawl the path '/' or any sub-paths. I agree that the current robots.txt
should work and that it properly implements the specification; however, it
still does not work in practice. In my experience, explicitly permitting the
path '/' by adding the directive Allow: / resolves the issue. More details
can be found in a blog post about the issue here:
https://www.dfoley.ie/blog/starting-with-the-indieweb
---
 robots.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/robots.txt b/robots.txt
index 41ea9dafe..96406e45e 100644
--- a/robots.txt
+++ b/robots.txt
@@ -10,3 +10,4 @@ Disallow: /user/
 Allow: /user/pages/
 Allow: /user/themes/
 Allow: /user/images/
+Allow: /
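
Note (not part of the patch): a minimal sketch using Python's standard
urllib.robotparser to show how a spec-compliant parser treats the site root
before and after this change. The rule set below is assumed to mirror the
hunk touched by the patch plus a leading "User-agent: *" line; the rest of
Grav's robots.txt is omitted.

    # Illustrative check with the Python standard library only.
    from urllib.robotparser import RobotFileParser

    def root_is_crawlable(rules: str) -> bool:
        parser = RobotFileParser()
        parser.parse(rules.splitlines())
        # Ask whether any user agent may fetch the site root.
        return parser.can_fetch("*", "/")

    BEFORE = """\
    User-agent: *
    Disallow: /user/
    Allow: /user/pages/
    Allow: /user/themes/
    Allow: /user/images/
    """

    AFTER = BEFORE + "Allow: /\n"

    # A spec-compliant parser already permits "/" with the old file, since no
    # rule matches the root; the explicit "Allow: /" states that permission
    # outright for crawlers that reportedly refuse to index without it.
    print(root_is_crawlable(BEFORE))  # True
    print(root_is_crawlable(AFTER))   # True

Both checks print True under Python's parser, which matches the commit
message: the original file should already work per the specification, and the
added directive only makes the permission explicit for the crawlers that
reported errors.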