I am so tired of people acting like planning for an uncertain world is a zero-sum game, decided by one central actor in a single-pipeline execution model. I’ll unpack this below.
The argument above (or some version of it) gets repeated over and over, but it rests on at least two flawed assumptions.
First, the argument implies that “we” is a single agent that must complete one set of tasks before starting another. In the real world, different groups of people can work on different projects simultaneously, in whatever order suits them.
This is very different from optimizing an instruction pipeline for a single-core microprocessor. Second, in the real world, different kinds of tasks operate on very different timescales.
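To make the contrast concrete, here is a toy sketch in Python. The task names and durations are made up purely for illustration; the point is the arithmetic of the two models: a single central actor finishing one effort before starting the next pays the sum of the durations, while independent groups working simultaneously pay roughly the longest single duration.

```python
import asyncio

# Made-up stand-ins for efforts that advance on very different
# timescales; the names and durations are illustrative only.
TASKS = {
    "raise_awareness": 0.1,
    "reframe_issue": 0.3,      # slower: changing language patterns
    "build_movement": 0.2,
    "political_influence": 0.4,
}

async def work_on(name: str, duration: float) -> str:
    """Simulate an effort that takes `duration` seconds."""
    await asyncio.sleep(duration)
    return name

async def serial() -> float:
    """Single-pipeline model: one central actor, one task at a time."""
    start = asyncio.get_running_loop().time()
    for name, duration in TASKS.items():
        await work_on(name, duration)
    return asyncio.get_running_loop().time() - start

async def parallel() -> float:
    """Many groups at once: total time is the longest task, not the sum."""
    start = asyncio.get_running_loop().time()
    await asyncio.gather(*(work_on(n, d) for n, d in TASKS.items()))
    return asyncio.get_running_loop().time() - start

async def main() -> None:
    print(f"serial:   {await serial():.2f}s  (sum of durations)")
    print(f"parallel: {await parallel():.2f}s  (max of durations)")

asyncio.run(main())
```

Real efforts have dependencies too, as the societal-change example below shows, so pure parallelism is optimistic; but the serial pipeline is still the wrong baseline.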
As an example, think about how change happens in society. Should we only talk about one problem at a time? Of course not. Why? Because the pipeline for solving problems is long and uncertain, so you have to parallelize. Raising awareness of an issue can be relatively slow. Do you know what is even slower? Reframing an issue in a way that gets into people’s brains and language patterns. Once a conceptual model exists and people are paying attention, building a movement among “early adopters” has a fighting chance. If that goes well, political influence might follow.
I was more hinting at the fact that if we fail to plan for the obvious stuff, what makes you think we’ll be better at planning for the more obscure possibilities? The former should be much easier, but since we fail at it, we should concentrate first on getting better at that.
If we’re talking about DARPA’s research agenda or the US military’s priorities, I would say they are quite capable of planning for speculative scenarios and long-term effects, for various reasons, including their decision-making structure and funding.
If we’re talking about shifting people’s mindsets about AI risks and building a movement, the time is now. Luckily, we’ve got foundations to build on; we don’t need to practice on something else first. We have examples of trying to prime the public to pay attention to other long-term risks, such as global warming, pandemic preparedness, and nuclear proliferation. Now we should add long-term AI risk to the menu.
And I would not say that I’m anything close to “optimistic,” in the probabilistic sense, about building the coalition we need, but we must try anyway. Motivation can be found without naïve optimism: a sense of acting with purpose is a useful state of mind that need not be coupled to one’s guesses about the most likely outcomes.
Take global warming as an example: it is real and happening now. We have measurements of CO2 concentrations and global temperatures, and most people accept the science. And still, getting anybody to do anything about it is nearly impossible.
Now you have a hypothetical risk of something that may happen sometime in the distant future, or may not. I don’t see how you would be able to get anybody to care about that.