I have found that Bing/Yahoo/DuckDuckGo, Yandex, and Google report crawl errors when using the default robots.txt: their bots will not crawl the path '/' or any sub-paths. I agree that the current robots.txt should work and correctly implements the specification; in practice, however, it does not.
In my experience, explicitly permitting the path '/' by adding the directive `Allow: /` resolves the issue.
More details can be found in a blog post about the issue here: https://www.dfoley.ie/blog/starting-with-the-indieweb
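As a minimal sketch, the fix is a single added line in the existing `User-agent` block; the `Disallow: /user/` rule shown is the one referenced below, and the rest of the default file stays as-is:

```
User-agent: *
# Explicitly permit the root; some crawlers appear to require this
# even though it should already be implied by the spec
Allow: /
Disallow: /user/
```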
When using "Fetch and Render as Google" in Google Search console it will report "partial" rendering due to the blocked images in /user/images directory which is blocked because of the Disallow /user/ rule. Proposing this small change as it improves google rendering of the page.
This prevents search engines from crawling and indexing unnecessary files and directories. The /user/plugins/ directory may also need to be added to the Allow list if plugins serve frontend-accessible assets. This has been tested at a basic level with Google Fetch and Render.
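If plugin assets do need to be crawlable, one option is to allow the whole directory; a narrower alternative, shown second, uses the `*`/`$` wildcards that Google and Bing support (other crawlers may ignore them). Both are sketches, not tested beyond the basic Fetch and Render check mentioned above:

```
# Broad: expose everything under /user/plugins/
Allow: /user/plugins/

# Narrower: expose only stylesheets and scripts site-wide
Allow: /*.css$
Allow: /*.js$
```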