Just to confirm what I said elsewhere, this site doesn't have any manual spam actions or anything like that. It's just a matter of Google trying to pick the correct canonical URL when you're serving a lot of different variants (www, non-www, http, https). If you make things more consistent, I think Google will stabilize on your preferred URL pretty quickly.
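If you want to sanity-check that yourself, a small script like the one below can show where each variant actually ends up. This is just a sketch: it assumes the third-party "requests" library is installed, and example.com / PREFERRED are placeholders for your own site and preferred URL.

    import requests

    # Placeholder for the one URL you want Google to settle on.
    PREFERRED = "https://www.example.com/"

    variants = [
        "http://example.com/",
        "https://example.com/",
        "http://www.example.com/",
        "https://www.example.com/",
    ]

    for url in variants:
        # Follow the full redirect chain, roughly like a crawler would.
        resp = requests.get(url, allow_redirects=True, timeout=10)
        verdict = "OK" if resp.url == PREFERRED else "INCONSISTENT"
        print(f"{url} -> {resp.url} ({resp.status_code}) {verdict}")

If every variant ends up on the same preferred URL (ideally via a single 301), you've done your part.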
And now I know what "nerfed" means. :)
But it's different for 5xx HTTP errors on the robots.txt file. As Googlebot is currently configured, it will halt all crawling of the site if the robots.txt file returns a 5xx status code. This crawling block continues until Googlebot sees an acceptable status code for robots.txt fetches (HTTP 200 or 404).

In summary: if for any reason we cannot reach the robots.txt file due to an error (e.g., a firewall blocking Googlebot, or a 5xx error code when fetching it), Googlebot stops its crawling, and that's reported in Webmaster Tools as a crawl error. The Help Center article above is about the error message shown in Webmaster Tools.
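To make that behavior concrete, here's a rough sketch of the decision in Python, standard library only. It models the documented behavior described above, not Googlebot's actual implementation, and example.com is a placeholder:

    import urllib.request
    import urllib.error

    def can_crawl(host):
        # Return True if crawling may proceed, per the robots.txt rules above.
        try:
            resp = urllib.request.urlopen(f"https://{host}/robots.txt", timeout=10)
            status = resp.status
        except urllib.error.HTTPError as e:
            status = e.code    # non-2xx responses arrive as HTTPError
        except urllib.error.URLError:
            return False       # unreachable (e.g., a firewall): treat as blocked

        if status in (200, 404):
            return True        # acceptable: crawl normally
        if 500 <= status < 600:
            return False       # 5xx: halt all crawling until it clears
        return False           # be conservative about anything else

    print(can_crawl("example.com"))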
Given that you said you did not see errors being reported, that suggests something else was going on. If you need more help, our forums are a great place to ask.
Funny thing was, I tried resubmitting the main page in GWT and all the traffic came back almost instantly.
Would "http:www.apple.com", "https:www.apple.com", "http:apple.com" and "https:apple.com" be treated by Google as four completely different and separate sites also to be ranked in isolation of each other? Why?
Many sites, for example, give users their own "name.whatever.com" subdomains. In those cases, treating the sites as the same doesn't make any sense.
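As a quick illustration (nothing Google-specific, just the Python standard library): each scheme + hostname pair is a distinct origin, so the four URLs from the question map to four different keys:

    from urllib.parse import urlsplit

    urls = [
        "http://www.apple.com",
        "https://www.apple.com",
        "http://apple.com",
        "https://apple.com",
    ]

    # Each (scheme, hostname) pair is treated as its own origin.
    origins = {(urlsplit(u).scheme, urlsplit(u).hostname) for u in urls}
    for origin in sorted(origins):
        print(origin)

    print(len(origins), "distinct origins")  # -> 4 distinct origins

Whether Google folds some of them together depends on your redirects and canonical signals, but they start out as separate hosts.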
Second, in Webmaster Tools, should we always have both the www and non-www versions set up so we can do the "change of address"? For example, if www.mysite.com is my preferred URL, do I also need to add mysite.com to Webmaster Tools and set its change of address to point to www.mysite.com?