Zoom forward to a super-human AI that mimics our brains in its approach but exceeds their capacity. What is stopping it, for instance, from learning that it can play the long game of being good until it has sufficient power at its disposal and then becoming evil? No matter what training data you present, you can't know exactly what the result will be.
I get the feeling that learning systems will be combined with model-based systems, with the former performing "low level" tasks and the latter providing a verifiable "executive" that guides high-level goals or outcomes.
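Very roughly, something like the sketch below, where an opaque learned component proposes actions and a small, auditable executive decides whether to accept them. Everything here is hypothetical: the class names, the risk score, and the threshold are placeholders, not any real system.

```python
# Hypothetical sketch: a learned component proposes low-level actions,
# and a small, verifiable rule-based "executive" approves or vetoes them.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    estimated_risk: float  # produced by the learned component

class LearnedPolicy:
    """Stand-in for any opaque learned model proposing low-level actions."""
    def propose(self, observation: str) -> Action:
        # In reality this would be a trained model; here it's a stub.
        return Action(name=f"handle:{observation}", estimated_risk=0.2)

class Executive:
    """Small, auditable layer that enforces high-level constraints."""
    RISK_LIMIT = 0.5

    def approve(self, action: Action) -> bool:
        # The whole point: this check is simple enough to verify by inspection.
        return action.estimated_risk <= self.RISK_LIMIT

policy, executive = LearnedPolicy(), Executive()
proposal = policy.propose("incoming request")
if executive.approve(proposal):
    print(f"executing {proposal.name}")
else:
    print(f"vetoed {proposal.name}")
```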
I have pondered whether formalized incentive-based design could be a workable field: ensuring that even a complete sociopath would find acting in a beneficial way to be the best option.
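For a flavour of what a formalized version might look like, here's a toy payoff comparison (all numbers invented): a designed penalty makes the selfish choice strictly worse even for an agent that only cares about its own payoff.

```python
# Toy illustration with made-up numbers: under a designed incentive scheme,
# a purely self-interested agent prefers the cooperative action because
# its expected payoff is strictly higher.

payoffs_without_mechanism = {"cooperate": 5, "defect": 8}
penalty_if_caught = 10          # imposed by the designed mechanism
detection_probability = 0.9     # assumed enforcement strength

expected = {
    "cooperate": payoffs_without_mechanism["cooperate"],
    "defect": payoffs_without_mechanism["defect"]
              - detection_probability * penalty_if_caught,
}
best = max(expected, key=expected.get)
print(expected)   # {'cooperate': 5, 'defect': -1.0}
print(best)       # 'cooperate'
```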
It leads to a scary question: what does a superhuman AI really want?
I'd be very hesitant to assume that an agent cannot learn under which circumstances it should be honest to gain a benefit without putting any innate value on honesty. A human agent is more than capable of reasoning like that, let alone a superhuman one.
A solver for e.g. 3-SAT is general only in a very narrow sense, namely that an entire class of problems can be reduced to the specific problem it solves. However, the solver itself is not doing the reducing, rather it is being spoon-fed instances generated by somebody, and that somebody is doing all the hard work of actually thinking. The solver is just doing a series of dumb steps very quickly, with lots of heuristics thrown in. How is that not also "system 1"?
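To make the "dumb steps" concrete, here is a minimal brute-force sketch. Real solvers use much smarter search (DPLL/CDCL and heuristics), but the division of labour is the same: any reduction from another problem into clause form happens entirely outside the solver.

```python
# Toy brute-force 3-SAT check: the "solver" mechanically tries assignments;
# the hard, creative work of reducing some other problem into this clause
# form is done by whoever calls it.

from itertools import product

def sat(clauses, num_vars):
    """clauses: list of clauses, each a list of ints
    (positive int = variable, negative int = negated variable)."""
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR x2 OR NOT x3) AND (NOT x1 OR x3 OR x2)
print(sat([[1, 2, -3], [-1, 3, 2]], 3))
```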
Anyway, the whole thing was just a fancy way of saying that you can either solve problems exactly, in the way that complexity theorists and algorithm designers do things, or statistically, in the way that learning theorists do things. No need to superimpose a strained analogy.
Define "conclusive". There is considerable evidence for this dual reasoning mode.
As for your study, system 1 thinking is not inherently illogical. In fact, it's necessarily logical; otherwise it would be maladaptive. The point is that it's logical in a "lossy" way that sometimes excludes pertinent information for speed of response, and so sometimes goes wildly wrong.
In any case, I look forward to more scholarship and experimentation at the intersection of these topics.
In other words, the more division you create in the world, the more 'specialness' you create. And in the immortal words of Syndrome, when everyone's special, no one is.
With a fine enough definition of thought, anything can fit the definition.
It immediately rubs me the wrong way, because dual-process theories of the brain are wrong and outmoded and need to die out of the public consciousness now!
I am muttering angrily in cognitive science!
How are the calories wasted, exactly? Our hind brain triggers autonomous reactions to various inputs. Clearly not all such reactions are adaptive; sometimes staying very still and bearing some pain or discomfort is better than death. And so we evolved higher-level cognitive faculties to make better choices, just a little slower than the hind brain. This is system 1.
I don't see why the exact same pressures couldn't work at this cognitive level as well. System 1 provided more adaptive reactions to a wider range of situations, but just a little slower than the hind brain. But even still, some metacognitive faculty would yield even better reactions in some circumstances, and so we evolved system 2.
But system 1 still has tremendous significance, because it's much better than our hind brain, is sufficient for most daily scenarios, and is not as calorically expensive as system 2.
The logic behind the efficiency gains is similar to the cache hierarchy in computers. We have more than one cache level because 2-3 cache levels is pretty close to optimal when trading off density, thermal considerations, and efficiency.
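A back-of-the-envelope average-access-time calculation shows the diminishing returns (all hit rates and latencies below are invented for illustration):

```python
# Made-up numbers to illustrate the trade-off: adding cache levels lowers
# the average access time, but with diminishing returns past 2-3 levels.

def average_access_time(levels, memory_latency):
    """levels: list of (hit_rate, latency_in_cycles), checked in order."""
    time, miss_prob = 0.0, 1.0
    for hit_rate, latency in levels:
        time += miss_prob * latency     # lookups that reach this level pay its latency
        miss_prob *= (1 - hit_rate)     # only misses fall through to the next level
    return time + miss_prob * memory_latency

l1 = (0.90, 4)
l2 = (0.80, 12)
l3 = (0.60, 40)
mem = 200

print(average_access_time([l1], mem))          # ~24 cycles
print(average_access_time([l1, l2], mem))      # ~9.2 cycles
print(average_access_time([l1, l2, l3], mem))  # ~7.6 cycles
```

With these invented numbers the big win comes from the second level, and the third level only shaves off a little more, which is the diminishing-returns shape the analogy is pointing at.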
This story is false. The autonomic system isn't autonomous from the rest of the brain: its parameters, "set trajectories", are regulated by the rest of the brain (meaning: the limbic areas of the cortex), while the hypothalamus communicates with the endocrine system to predict and control the body from that angle. As you already say, a truly reactive body-regulator would get you killed very quickly.
Regulating the body by anticipating what it has to do is The Point of a brain. Further, the brain has six intrinsic networks in its functional connectivity, not two modular systems.
There's just no empirical evidence for a dual-process model. There's empirical evidence for an embodied predictive-control model. If you want to arrange this into "layers" from "animalistic" to "human", the way to do it would be to section off the particular cognitive functions which, at times, can be used for offline simulation of the environment, as a mode of metabolic reinvestment of surpluses.
I'm not sure the consensus is as strong as you imply:
No, because a dual process model spreads itself too thin: it doesn't coherently explain many experiments with a single theory, but instead rewrites the theory for every distinct experiment.
In the end, will AI just join our tense political atmosphere of parties fighting to rule the world?