Didn't know this story was originally published in Nature. I read it in his excellent short-story collection Exhalation and loved it, as I did many others in that collection.
Chiang is like Kafka and Borges in that he writes plain prose that blows your mind.
> Futures is a venue for very short stories or ‘vignettes’ of between 850 and 950 words. The subject is typically near-future, hard SF, although this can be interpreted liberally.
Warning. The SCP wiki is a memetic attentional infohazard designed by [REDACTED] to destabilise and capture the productivity of curious and/or creative individuals and channel it towards a currently unknown goal. Visitors are reminded to monitor their usage carefully. Early warning signs of contamination by the SCP wiki include: procrastination with regard to less interesting work; signing up for an account in order to "write a fun log entry" (particularly in respect of SCPs 914 and 682); difficulty sleeping "because you're not sure if this should be Euclid or Keter"; referring to the Doctor but qualifying this as "Gears or Kondraki. Not the British one"; sketching humanoid statues composed of rebar, concrete and Krylon spray paint. If more than ten (10) tabs containing pages of the SCP wiki are open simultaneously then the browser should be closed immediately by any available third party, the victim removed from the vicinity of all networked computing devices, and instructed to get back to work before the acute fascination leads to a loss of executive function, livelihood, marriage etc.
The method by which the favourite gets into the list is not discoverable. Why would anyone think that clicking on the time a submission was made would be the route to adding a favourite?
And the list of favourites would be much more discoverable if it were simply a link next to your name on the page. After all, there is plenty of space at the top right of every page on HN.
To me this short story doesn't seem that mind-bending or provocative. Doesn't everything hinge on the premise that you can send information back in time? Essentially this seems to be a simplified presentation of Newcomb's paradox [1], or perhaps some other temporal paradox.
The present story simply proposes a straightforward way out of any such paradox: there isn't any free will at all, hence there's no way to alter the future. However, since as far as we know there's no way to send information back in time, this doesn't actually say much about the real world or the actual existence of free will.
For those of you unfamiliar with Ted Chiang, he is a very highly regarded science fiction writer who specializes in short-form work. His novella "Story of Your Life" was the basis for the 2016 film Arrival, which I loved dearly; it's definitely worth watching if you haven't seen it already.
I feel like this doesn't have quite the same implications... the fact that our brain is thinking thoughts before we become conscious of them is coherent, and doesn't conflict with free will... and in fact, it kind of makes sense: if thoughts are a physical process, it makes sense that the physical process would have to start before it becomes conscious to you.
You'd be even more surprised to learn uBlock can block GDPR banners and works for Safari (and of course Firefox on macOS). The blocking has to be enabled: in the "uBlock preferences (dashboard) >> Filter lists >> Annoyances" check all the boxes.
Possibly uses Asimov's Thiotimoline as its mechanism, I hear the Soviets had a 77,000 cell chronobattery running the prediction up to 12 days in advance.
> Most people agree these arguments are irrefutable, but no one ever really accepts the conclusion. The experience of having free will is too powerful for an argument to overrule.
It would be equally convincing, while likewise offering nothing to back it up:

> Most people agree these arguments are irrefutable, but no one ever really accepts the conclusion. The experience of the reassurance that determinism offers¹ is too powerful for an argument to overrule.

¹ Don't blame me; the universe determined that things should be arranged in a way that makes me not responsible for anything that is wrong in this world. Also, it's the universe's determined nature that anything positive should be credited to me.
My current opinion/impression/illusion (you name it) is that free will is an undecidable topic. As you cannot step outside of your own experience, you can only pick between models of the outside world. But for this very reason, you can't "objectively" decide what does or does not cause your decision to believe in free will or determinism. Deterministic people will say that free-will believers are determined in such a way that forces them to believe, and free-will believers will say that people with a deterministic mindset freely chose to believe in a deterministic model.
The issue at hand is not what the external world provides as data, but how we interpret it. That is, how we decide to opt for this or that interpretation. Whether we opt for a model which includes free decisions or only completely bounded decisions cannot be evaluated before a model is selected.
I strongly agree that the problem is undecidable. The axiom is that the machine that predicts the future exists, so we can use that as a lever. Build another machine that predicts the predictor (an antipredictor). Then you can test it by observing the antipredictor. If the antipredictor lights up, you don't press the button (because you want to fool the predictor), but then the future in which you pressed the button doesn't exist (because the antipredictor lighting up automatically means you must have pressed it!). Essentially, creating new devices that predict the future might as well create different systems from which you won't be able to observe other systems. In fact the whole dispute is pointless: even if our lives are deterministic, you won't be able to see it. In our world we can only use heuristics to predict the future of systems; complete simulation of the future is impossible, because we would need to account for everything (and use that everything to compute everything). You can only build such machines / make such observations when you are outside of the system.
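The antipredictor loop above can be put in code. Here's a toy sketch (my own model, not anything from the story) that treats one trial as three boolean facts and searches exhaustively for a self-consistent history under the stated rules:

```python
# Toy model of the antipredictor argument. Assumptions (mine, for
# illustration): the predictor's light is on iff the button ends up
# pressed, the antipredictor perfectly predicts the predictor's light,
# and the user presses only when the antipredictor stays dark.
from itertools import product

consistent_histories = []
for pressed, light, anti_light in product([False, True], repeat=3):
    rules_hold = (
        light == pressed                  # predictor: flash iff the press happens
        and anti_light == light           # antipredictor: mirrors the predictor
        and pressed == (not anti_light)   # user: press only to fool the predictor
    )
    if rules_hold:
        consistent_histories.append((pressed, light, anti_light))

print(consistent_histories)  # → [] (no self-consistent history exists)
```

The empty result is the paradox in miniature: the three rules cannot all hold at once, which is why (as the comment argues) such a device pair could never be observed working from inside the system.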
We can devise another thought experiment. Take a 2D world in which 2D creatures live. For them, 3D space is space-time; for us it's 4D. Then 4D creatures can determine whether our world is deterministic or not, but they cannot determine if their own space-time is deterministic; only 5D creatures can say that. It goes on to infinity: if at least one space-time is non-deterministic, then the whole system is non-deterministic. The fact that this experiment recurses to infinity makes the subject undecidable.
I think it's decidable, as long as we agree on our definitions and assumptions.
So for example let's assume that free will is the ability of an agent to do otherwise. I don't think that view of free will is compatible with agents being non-random consistent beings. For a decision to be mine it must be determined by my memories, preferences, personality, skills, experiences, etc. The decision must be determined by my state, or the decision does not come from me. But if the decision comes from my state, then given my current state I could not do otherwise.
To me that's just a trivial statement that I am a consistent being with characteristics that persist over time, and that I am responsible for my deliberate, considered actions. If I, this person and my current state, do not determine my actions in a straightforward cause and effect way, how can I be responsible for them? So I certainly hope my actions are determined.
> I think it's decidable, as long as we agree on our definitions and assumptions.
I guess you mean if I accept your definitions, assumptions, and their underlying premises, which will most likely lead us to conclude that your point of view is more reasonable, don't you? :)
That is, yes, once we agree on the scope of every underlying concept and the rules to play with them, chances are far better that we agree on the conclusions. But that first part is actually a big part of what makes communication so difficult, isn't it?
Oh absolutely, that's exactly my point. Even my example definition has ambiguities.
I defined an agent as being a non-random consistent being, but what does consistent mean exactly? What I meant was that I have reasons for my actions. After all, if I can give reasons for my actions, and I'm not deluded, surely my reasons determine my actions? Take that deterministic relationship away in the name of some abstruse philosophical concept of 'free will' being the ability to do otherwise, and why did I perform that action exactly?
Of course dualists have a very different conception of a human actor. They deny that we are 'mere' mechanisms and hold that there must be more to humans and their minds than the merely physical, whatever that 'more' is. How exactly that gets round the problem I really don't know; I've never read a convincing account of it. You're right, we need to dig into these assumptions. What is an actor? What is a consistent being? In dualism, what is this dual other thing that isn't physical, and what role does it play in decisions? That is the real question for dualism. Free will, or the lack of it, is merely a consequence.
We are complicated pieces of biochemical machines. Atoms, molecules, cells interacting with each other. No magic, nothing special. Just nice, repeating patterns in infinity.
I didn't realize at first that this was a story, because my mind jumped to a high-school physics experiment.
Specifically, visual information travels to my brain faster than nerve signals from the rest of my body. So I would click or flip the light switch and see the light before feeling the touch.
Suppose that instead the device did work by analysing the user's brain state and predicts whether they will press the button or not. The device can also take into account the influence of the light flashing, or not, on the user's decision because it has full information about the state of the universe and the laws of physics. What happens then? Please think about it before reading my solution below, I'd be interested to see what independent conclusions other people come to.
I suspect that in that case, the device wouldn't work reliably. There's no way for it to influence the person's decision to press the button or not simply by choosing to illuminate the light or not even with full information. That's simply not a powerful enough input into the person's decision making process, even if that decision making process is perfectly deterministic and we ignore things like quantum indeterminacy. However I don't know how to prove or test that logically.
Suppose we built a simple machine to press the button on the predictor. It would have a way for pressing the button, and a way for reading out the LED light. It would have one simple rule:
"If the light has not been on in the last second, press the button"
You mean, stop and not press the button? In this case, the light never appears. The switch must be pressed eventually in the future so that the circuit sends the message back in time. If you find out a way to "trick" the system by never pressing the switch, it never sends the message in the first place.
The mechanism doesn't work by predicting whether you will press the button in 1 second. It works after you have actually pressed the button!
My point is that this device proves more than the non-existence of free will. It requires the inability to react to a visual stimulus within a 1-second timeframe.
I mean, I can construct a robot which does the same: it moves its arm towards the button and only stops when either the button is pressed or the light comes on.
You say you can construct such a robot, but in fact you can't. That's because if you could, it would cause a paradox with the predictor. So you can't (though it may be fun watching you try).
My point exactly: the existence of the Predictor proves more than a lack of free will. Sure, there's no free will, so we are automatons that respond to stimuli in a deterministic way.
But there are also some strong restrictions on the set of reactions that are allowed, for example none of them can lead to a functioning counter-predictor robot.
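The robot-vs-predictor exchange above can be sketched as a minimal consistency check (my own toy model, assuming a perfect predictor whose light flashes exactly when a press is coming):

```python
# Robot rule from the thread: press iff the light has not flashed.
# Predictor rule (assumed perfect): the light flashes iff a press is coming.
def timeline_is_consistent(robot_presses: bool) -> bool:
    light_flashed = robot_presses        # predictor flashes iff the press happens
    rule_says_press = not light_flashed  # robot presses only if no flash was seen
    return robot_presses == rule_says_press

print(timeline_is_consistent(True))   # → False
print(timeline_is_consistent(False))  # → False
```

Neither candidate timeline satisfies both rules, which is the formal version of "you can't build that robot": the robot's reaction rule is exactly the kind of response the Predictor's existence forbids.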
I would like the story much better if they had a lower percentage of people falling into the catatonic state. A third of people using the device going catatonic is too many. Two problems.
1) If I created a device that caused a third of its users to go catatonic, my device would be pulled from the market. I believe tobacco products kill about 1/3 of the people who use them, and they're highly regulated. The difference, of course, is that tobacco kills after years and years of usage. The implication in the story (both from the wording and from the fact that the message comes from a year in the future) is that the device causes you to go catatonic within weeks or months. Societies protect the lives of their members, or else they're replaced by societies that do a better job as they die out from their failure. Hearing about an unregulated machine with a 1/3 chance of catatonia breaks too much of my suspension of disbelief.
2) If such a device existed, I don't believe that it would cause a 1/3 catatonic rate. Two things here.
2.1) At least in my experience, people tend to learn and act by intuition and compartmentalization. High functioning individuals don't seem to have necessarily better intuitions, they are just much better at compartmentalizing the inconsistent aspects of their life such that one area doesn't contaminate another. High level performance in complex fields requires internalizing many incompatible frameworks. If you can't keep them separate then you'll let your intuition from one sphere cause you to fail in another.
Of course I could be totally wrong, but it feels to me like this is how the real world works. Which is why the story breaks my suspension of disbelief when they give the 1/3 catatonic rate.
Most people would play with the device and have really heated philosophical debates. And then they would go on with the rest of their life like they didn't have a future prediction machine.
The only people who should really be affected by this device are people who are either already having serious problems handling modern life due to really horrible compartmentalization skills, or people whose entire cognitive functioning is structural. Of course, modern society is really complex. Like, there's a reason why the logical AI techniques from the 70s are being overtaken by the statistical models of machine learning. There's just no way that 1/3 of people operate in a structural way such that knowing there's no free will destroys their ability to function.
2.2) And of course the second thing here is that we already have a lot of soul crushing aspects of modern life that destroy our illusion of self agency. Do you have a dead end job that you can't escape? Do you have toxic family members? Do you have a terminal disease? Did an unexpected accident cause you to lose someone close to you or permanently damage your health or destroy property that you can't afford to replace? Do national events cause drama, distress, and chaos? So many things happen in our lives that tell us that we do not have control. If the human condition doesn't cause a 1/3 catatonic rate, then a 1 second future prediction shouldn't either.
EDIT: So the gut punch of the story is the end where the messenger indicates that free will doesn't exist in the long term (ie the messenger doesn't really want to send the message, but has to because free will doesn't exist). A 1 second lack of free will isn't scary, but a whole year lack of free will feels worse. Or at least it's supposed to.
But the fact of the matter is that I would rather be on a beach someplace relaxing instead of working for the next several decades. But I've got no choice. I'm already handling a lack of free will on the order of decades. Everyone is.
So I get what the author is trying to do and I think it's almost effective. But I have a hard time being emotionally sympathetic to the messenger in the story. Yeah, you were forced to write a message you knew about a year in advance. I'm forced to pay a mortgage for the next couple of decades. I'm having a hard time feeling sorry for you because it feels like I've got bigger problems.
Interesting. However, I disagree that people would lose motivation if such a machine existed. They'd realize that the rational part of the brain is used mostly for rationalization and is not always actively engaged in real-time decision-making... but that's about it. They'd then beat the machine with a little practice, regardless of how it is implemented.
>The heart of each Predictor is a circuit with a negative time delay — it sends a signal back in time.
If a Predictor were hooked up to a Geiger counter instead of a button, the light would flash exactly one second before the counter clicked. It would flash even before the decay occurred.
It doesn't mean that you don't have free will, it just means that during a one-second loop, you are constrained to logically consistent choices. You can't choose to fool yourself about your future behavior during that one second.
If it's real time travel, then that wouldn't show that free will doesn't exist. It would just show that there are different timelines, and the universe is forked every time a decision is made.
- http://www.multivax.com/last_question.html
- http://www.galactanet.com/oneoff/theegg_mod.html
- https://www.tor.com/2010/08/05/divided-by-infinity/