I just wanted to make sure you were adamant that those three possibilities are equally probable. To reiterate:
> AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
Do you hold that each of these things, being logically consistent, has an equal chance of being the case 5 years from now?
OK, well, you obviously seem to be having a bad time about something in your life right now, so I won't continue, other than to note that the comment that started this said
>There’s a significant difference between predicting what it will specifically look like, and predicting sets of possibilities it won’t look like
which I took to mean that there are probability distributions over what will happen. It seemed to be your assertion that there aren't, and that a number of things, only one of which seemed especially probable, were all equally probable. I'm glad to learn you don't think this, as it seems totally crazy, especially for someone praising LLMs, which after all spend their time making millions of little choices based on probability.