The esoteric design reminds me of the Connection Machine blog post from the other day, "to communicate to people that this was the first of a new generation of computers, unlike any machine they had seen before."
I'm curious what they do with these prototypes once they're obsoleted, which happens in a matter of months. Are the parts so expensive that they tear the machine down to reuse them? Or will the machines go on tour and stand in glass cases to intrigue the next generation of engineers? I know it had a tremendous effect on me to stand in front of a hand-wired Lisp machine at the MIT Museum.
So happy to be able to find a picture of the wire-wrap inside:
One was seeing their Cray. I forget which specific model it was. It was gray and mauve, and had the fountain with the logo on the unit that pumped the coolant. Monsanto had a dedicated computer room for it with glass walls so you could see it. The overall effect was to make a very strong impression that this was something very special.
Another thing that stuck in my mind was seeing their bio labs. These were long concrete hallways dug halfway into the ground, I assume to make climate control easier. They had row upon row of corn plants under artificial light. These were the labs that developed Roundup Ready seed. I had no idea at the time the significance of what I was seeing or how contentious it would be now.
Last thing I'll mention is when we were walking outside and he pointed out the separate building the CEO et al. worked out of. It was literally a bunker with earthen berms and such around it. My grandpa bragged that it was built to be bombproof in case terrorists attacked the CEO. At the time I was somewhat mystified as to why anyone would bomb the CEO of a chemical company. I certainly understand why now.
But anyhow, it was a cool experience and seeing that Cray probably helped inspire my interest in learning to program later.
Random other thing I'll mention is an email exchange from a mailing list back in the late '90s that focused on APL-style languages. Someone told the story of how, back in Cray's glory days, they worked in a lab doing interactive APL programming on a Cray machine. I can only imagine what that must have felt like at the time: typing arcane, terse expressions into a prompt that would execute them shockingly fast.
As for APL, I haven't really gotten past an orientation in the language, but it has held a total mystique for me since I saw this video from circa 1975 walking through the language with a Selectric teletype acting as a REPL. It totally flipped my understanding of computer history; I had assumed it was all punchcard programming back in the old black-and-white days xD (I was born in 1990, for reference, and am trying to catch up with what happened before me.)
 (30min) https://www.youtube.com/watch?v=_DTpQ4Kk2wA
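If you're curious what that terseness looks like, here's a rough NumPy approximation of a few classic APL one-liners (my own illustrative sketch, not from the video; real APL is even more compact):

```python
# Rough NumPy approximations of a few classic APL one-liners.
import numpy as np

x = np.array([3, 1, 4, 1, 5, 9, 2, 6])

# APL: (+/x) ÷ ≢x    -- sum-reduce divided by tally, i.e. the mean
mean = x.sum() / x.size

# APL: ⍳10           -- the index generator, 1 through 10
iota = np.arange(1, 11)

# APL: iota ∘.× iota -- outer product, i.e. a multiplication table
table = np.multiply.outer(iota, iota)

print(mean)          # 3.875
print(table[2, 3])   # 12 (3rd row times 4th column)
```

Each of those APL expressions is just a handful of glyphs typed at a keyboard, which is a big part of why an interactive session on a Cray must have felt like magic.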
Bring back the impractical, space-eating circular design! I don't care about space efficiency. It's supposed to look cool.
And they do have a snazzy coloured endcap...
Well, a bunch of racks with green LEDs is more practical, cheap and functional, I guess.
I imagine there have been multiple cycles through the spectrum since then…
I think tuning the band gap with a new material was difficult back then.
1. It was on loan to the Computer History Museum from Myhrvold and returned to him in 2016. It's unclear whether he re-loaned it or whether he's busily calculating the values of polynomials with it.² The Computer History Museum website makes it sound like it's currently on display, but I can find no news stories about it going back to the museum.
2. Just kidding about him calculating polynomials—it's (I think) on display in the lobby of Intellectual Ventures.
Maybe in 30 ~ 50 years or so.
Related, the DEC PDPs certainly look stylish!
I wonder whether, if we measure it more precisely, we'll get something closer to 1.4142135623730950488...
Liquid Nitrogen with pumping: 40 K for a few thousand dollars.
Run of the mill Liquid Helium: 4 K for tens to hundreds of thousands of dollars.
But for these devices you need ~15 mK, which is reachable only if you mix two different isotopes of helium and pump the mixture into vacuum. Such a device runs $1M and up.
And the insides of that device are in vacuum (actually, air freezing into ice on top of the chip can be a problem). The brass is basically the heat conductor between the chip and the cold side of your pumped He mixture (which is *not* just sloshing inside the whole body of the cryostat where the chips are).
Another reason you do not want the He sloshing around is that you will be opening this up to make changes to the device, and you do not want all the extremely expensive He3 (the special isotope you need for the mixture) to be lost.
1 - https://en.wikipedia.org/wiki/Dilution_refrigerator
Or, look at literature on error rates for people performing simple repetitive tasks.
That said, the whole hovering-in-free-space thing was perhaps a bit over the top in the show.
They mostly hang like that because most dilution units are designed so that the coldest part is the lowest, and they're generally orientation-sensitive. You want easy access to the bottom part, so you just put the plates in descending order of temperature and hang the thing from the ceiling.
I searched, but couldn't find it.
Also - when LISP was invented (1958) - what was the state of computers at the time? Doing some research, it seems direct keyboard input to computers had only become available two years prior. It seems like languages were decades ahead of hardware.
I guess I'm having trouble fathoming how languages were so far ahead while computers were seemingly VERY primitive.
Are there any articles on the process for how LISP was designed and implemented??
My intuition is that back then getting run time on computers was so scarce that the best programmers and mathematicians spent a great deal of brain time considering exactly what their software should be. If you only get one run a day, if that, you're gonna do your best to make it count. Today we're often in the opposite situation, where it can be entirely rational to burn incredible amounts of computation in the absolute sense to save brain time.
I have a much bigger emotional conflict when contrasting that with the current state of mainstream programming languages, which are only just beginning to tread onto territory like algebraic data types and pattern matching that ML paved almost 50 years ago. Is there any hope for true dependent typing to become popular before 2040?
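For concreteness, here's roughly what I mean, sketched with Python 3.10's structural pattern matching (the names and example are mine; ML and its descendants express this far more directly):

```python
# An ML-style algebraic data type plus pattern matching, approximated
# with Python 3.10+ dataclasses and match/case.
from dataclasses import dataclass

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class Mul:
    left: "Expr"
    right: "Expr"

Expr = Lit | Add | Mul  # the "sum type": an Expr is exactly one of these

def eval_expr(e: Expr) -> int:
    # Pattern matching destructures each variant, as in ML's `case ... of`.
    match e:
        case Lit(value=v):
            return v
        case Add(left=l, right=r):
            return eval_expr(l) + eval_expr(r)
        case Mul(left=l, right=r):
            return eval_expr(l) * eval_expr(r)

print(eval_expr(Add(Lit(1), Mul(Lit(2), Lit(3)))))  # prints 7
```

And ML had this, with compiler-checked exhaustiveness on top, back in the 1970s.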
As for languages being ahead of the hardware, you might read up on Charles Babbage and Ada Lovelace, the latter a mathematician who translated problems into machine instructions for a machine that wouldn't be built for a hundred years. Babbage's design worked, but he spent all the money the Royal Society was willing to give trying to improve the tolerances on his logical clockwork. But anyway, back to John McCarthy's paper, last page:
APPENDIX - HUMOROUS ANECDOTE
The demonstration was also one of the first to use closed circuit TV in order to spare the spectators the museum feet consequent on crowding around a terminal waiting for something to happen. Thus they were on the fourth floor, and I was in the first floor computer room exercising LISP and speaking into a microphone. The problem chosen was to determine whether a first order differential equation of the form M dx + N dy was exact by testing whether ∂M/∂y = ∂N/∂x, which also involved some primitive algebraic simplification. Everything was going well, if slowly, when suddenly the Flexowriter began to type (at ten characters per second) “THE GARBAGE COLLECTOR HAS BEEN CALLED. SOME INTERESTING STATISTICS ARE AS FOLLOWS:” and on and on and on.
The garbage collector was quite new at the time, we were rather proud of it and curious about it, and our normal output was on a line printer, so it printed a full page every time it was called giving how many words were marked and how many were collected and the size of list space, etc. During a previous rehearsal, the garbage collector hadn’t been called, but we had not refreshed the LISP core image, so we ran out of free storage during the demonstration.
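(Incidentally, the exactness test from that demo is a one-liner with today's tools. A hypothetical sympy re-creation, with example coefficients of my own choosing:)

```python
# The 1960 demo's test: M dx + N dy is exact iff dM/dy == dN/dx.
import sympy as sp

x, y = sp.symbols("x y")
M = 2 * x * y           # example coefficients, chosen so the form is exact
N = x**2 + sp.cos(y)

is_exact = sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0
print(is_exact)  # True: d(2xy)/dy = 2x = d(x^2 + cos y)/dx
```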
 Jacquard's Web by James Essinger is the book you want to read for more.
- Nuclear fusion (Helion, ZAP, TAE, Tokamak Energy, CFS, Wendelstein).
- Self-driving cars.
- New types of nuclear fission reactors.
- Spaceflight (SpaceX Starship).
- Supersonic airplanes (Boom).
- Solid state batteries.
- Quantum computing.
- CPUs and GPUs on sub-5nm nodes.
- CRISPR-based therapies.
- Longevity research.
I'm semi-optimistic about space flight and longevity. I think Starship will fly, but I wouldn't be surprised if some of its most ambitious specs get dialed back a bit. I'll be somewhat (but not totally) surprised if the "chopsticks" idea works.
We will probably see aging-reversal to some limited extent within 10-20 years, but the effect will probably be more to extend "health span" than add that much to life span. (I'll take it.)
I'll add one not on the list: the use of deep learning to discover theories in areas like physics and math that have not occurred to humans and maybe are not capable of being found by ordinary human cognition.
Wildcard, but plausible: detection of a strong extrasolar biosphere candidate using JWST or another next-generation telescope. Detection would be based on albedo absorption spectra, so we wouldn't know for sure. Talk of an interstellar fly-by probe would start pretty quickly.
I wouldn't list sub-5nm as "far out." We will almost definitely get sub-5nm. AFAIK 3nm is in the pipeline. Sub-1nm is "far out" and may or may not happen.
Hololens 2 has shown that it isn't so easy to advance the field.
I don't think an Apple device is forthcoming or likely to leapfrog.
Facebook, Apple and others are releasing their first AR glasses then.
I think longevity research is a path to stagnation, and ultimately counter-productive, and should cease. If science advances one funeral at a time, then longevity is counterproductive for all other progress.
There are reasons to worry about the ethics of longevity research (especially if the benefits of it are not justly shared), but I don't think you can justify withholding life-improving medical treatment from people just because you want to help science by letting people die early.
That sort of thinking is how we get Logan's Run.
The thing that lives on, that can be effectively immortal (if we choose to protect it), is the biosphere in which we're embedded and, to a lesser extent, the nest of symbols humans have fashioned for themselves over the last few millennia. It is fascinating to imagine what it would be like to live through human history; however, it is terrifying to imagine what the "great men" of history would have done or become if not cut down by time. The inevitability of death has surely stopped some great things from being done, but I'm equally sure it has stopped even worse things from being done - imagine human history if the Pharaohs of Egypt had had access to immortality! It's too horrible to imagine.
BTW the Logan's Run system was purely about maintaining homeostasis given limited resources, NOT about maintaining (or even enhancing) dynamism in the population by decreasing average life-span. In other words, unrelated.
I think what we disagree on is what it means to "engineer away" death. Are we engineering away death if we cure a disease, but don't extend the maximum lifespan of humans? Is extending the average lifespan to 100 years all right as long as those treatments are designed to not work on people over 100 years old? If a treatment is later found that helps 100 year olds to extend their age to 101, is that the treatment that should be banned, or is there some number N where adding N years to the previous maximum is morally wrong and the whole world has to agree on banning it?
Your point about the Pharaohs is maybe not as strong as you think, since of course the Pharaonic system did outlast any of the individual office holders. I don't think it was old age that led to the fall of that regime, and there are plenty of regimes which manage to be equally horrible within a single lifetime, or that are overthrown within the space of one lifetime.
Thank you for that succinct explanation of the premise of Logan's Run. I wasn't sure if it worked as an analogy, since, as you say, the motivation of the society was different from the one you are advocating for, but I think the most relevant aspect of Logan's Run is the dystopian nature of a society which imposes age limits on its members, against their wishes.
I'm not at all against small increases in lifespan, and certainly for improving quality of life (e.g. defeating disease). I'm specifically against individual immortality because I strongly suspect it would quickly and inexorably lead to stagnation and death for our species.
Short-term hype (real advances that will happen in 1-2 years, but won't matter by 2030, because they are just a generational iteration).
Over-hyped far-future research (things where the possibilities have yet to be brought down to earth by the practical limits of implementing them broadly and cost-effectively). When these things do happen, they tend to be a bit of a let-down, because they don't actually provide the promised revolutionary changes. These things basically have to be over-hyped in order to get the necessary funding to bring them to reality.
Of the examples you have, I am only really excited about CRISPR, and to a lesser extent commercial spaceflight, and new nuclear. These have promise IMO, but I also don't expect them to be decade defining.
Personally I don't think we know what the next breakthrough will be yet. I expect it to take us very much by surprise, and start out as something unthreatening which then grows to a disruptive size / scale.
I hope there will be some unexpected breakthroughs too.
Compared to most of the other things listed, this is more of a nerd-aestheticism thing rather than something which is hugely important technologically.
- Unfortunately, even less local computing, with everything provisioned from the cloud under a SaaS payment model.
- More mRNA applications
- Power/energy networks and markets across borders.
- Theranos, but legitimate. Better, cheaper and more convenient early monitoring/diagnostics for vitamin deficiencies and early stages of disease.
- Carbon neutral combustible fuels.
- Cheaper grid-scale storage.
- Better understanding of the gut-brain connection.
Been there, done that.
- VR/AR (photonic override, more specifically)
- Fundamental physics (unlocked by tech)
Remote education. Available to any kid or adult anywhere.
I couldn’t learn maths in class. Too distracted, too annoyed by stupid questions. But I went up three grade symbols in three school terms with a slide projector and audio tape, where I could focus and rewind. The teacher was there for the bits I didn’t learn from the slides. I’m probably in the minority, but I’m sure there are more of me.
Digital education catches kids like me, and kids who have no access to excellent educators. And the marginal cost is zero, so there's no harm in giving access to the whole world.
And longevity research is not even a real need - we already live too long as it is, from the evolutionary and economic standpoint. I'd much rather someone came up with a way to cheaply and painlessly end one's life once quality of life begins to deteriorate due to chronic disease and such. Some kind of company (and legislative framework) where you pay, say $1K and they put you into a nitrogen chamber and then cremate and flush the ashes down the toilet afterwards. Or perhaps use them as fertilizer. I'd use the service myself at some point in distant future.
Voluntary euthanasia is ultimately challenging because of similar legal issues as with the death penalty - it cannot be undone, and there are forces in society that can lead individuals to use it for other reasons than just being over and done with suffering through old age.
> and there are forces in society
So? You're going to tell me I can't go anytime I want to? That's not the case even now. It's just that now I'd have to procure the nitrogen myself (which isn't difficult), and my relatives would have to deal with the body. I'm merely suggesting a service that resolves this purely logistical complication, and excludes the possibility of not quite dying but living the rest of one's life as a vegetable.
Think of what we have now: people spend years, sometimes decades suffering from chronic diseases, or just plain not having anything or anyone to live for. And it'll get worse as medicine "improves", and lifespans "improve" with it. Is it humane to withhold the option to end it all from them? I don't think it is. I will grant you that there are likely tens of millions of such people on the planet right now. I will also grant that this is not an uncontroversial thing to suggest. But the alternative we have now doesn't seem any more humane or dignified to me.
If this still doesn't sit right with people, we could age and condition-restrict it, or require a long waiting period for when this is not related to acute incurable disease.
> as with the death penalty
Which is also inhumane, IMO. It's much worse to spend the rest of one's days in confinement instead of 30 seconds until barbiturates kick in. That's what the sadists who are against the death penalty are counting on.
The death penalty does not exist to reduce the suffering of the convicted, but to get rid of them. The true issue with the death penalty is that it can't be graduated (except by adding "cruel and unusual punishment") and it can't be undone. Prison sentences can be legally challenged and the innocent can be freed early.
There is a real slippery slope here: what length of prison sentence is considered worse than the death penalty? An additional thing to consider is that many countries without the death penalty don't actually impose true life sentences, but very long ones (upwards of 20 years). Confinement for life is for those judged to be an irredeemable threat to society after their sentence. Compared to that, many death row inmates actually spend decades fighting their sentence. They could end it at any time if they wanted.
> The death penalty does not exist to reduce the suffering of the convicted
There's an easy way out of your moral dilemma that you go into after this sentence, much like what I suggest for those on the outside: let the convicts choose whether they want to suffer for the rest of their days in prison, or be humanely and painlessly killed. I know which way I'd go, under the circumstances. And yes, I do insist that the killing must be humane, dignified, and painless. We have the technology to ensure all three of those things.
Regarding humane, dignified and painless killing: the Lethal Injection was supposed to be exactly this. But we humans are pretty good at botching things...
When reading "127-qubit system" you would expect that you can perform arbitrary quantum computations on these 127 qubits and they would reasonably cohere for at least a few quantum gates.
In reality the noise levels are so strong that you can essentially do nothing with them except get random noise results. Maybe averaging the same computation 10 million times will just give you enough proof that they were actually coherent and did a quantum computation.
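To make that concrete, here's a toy Monte Carlo sketch (numbers invented for illustration, nothing to do with IBM's actual device) of how averaging millions of shots can resolve a tiny bias that no single shot reveals:

```python
# Toy model: a "qubit" whose measurement is almost pure coin-flip noise,
# with a tiny coherent bias. Averaging many shots resolves the bias.
import random

random.seed(0)
p_true = 0.5005          # invented: 0.05% of signal above pure noise
shots = 10_000_000

ones = sum(random.random() < p_true for _ in range(shots))
estimate = ones / shots

# Standard error ~ sqrt(0.25 / shots) ~ 0.00016, so a 0.0005 bias is
# a ~3-sigma effect: enough to show coherence, not to compute anything.
print(f"estimated p = {estimate:.5f} (true {p_true})")
```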
The omission of proper technical details is essentially the same as lying.
Validation of experimental theory through the characterization and control of an entire system is not the same as building the same system and simply seeing that the final state is what you expect. The latter is much easier and says very little about your understanding.
Here's an analogy: Two people can get drunk, shack up for the night, and 9 months later have created one of the most powerful known computers: a brain. Oops. On the flip side, it's unlikely we'll have a full characterization and understanding of the human brain in our lifetimes – but if we ever do, the things we'll be able to do with that understanding will very likely be profound.
The accomplishment is more akin to creating a bathtub with 127 atoms and doing fluid dynamic simulations on that, which is a much harder problem in many ways than doing the 6e25 version of the experiment. But it is very questionable to me whether any claims of quantum supremacy retain validity when leaving the NISQ domain and trying to do useful computations.
Gil Kalai's work in the area continues to be very influential to me, especially what I consider the most interesting observations, namely that classical computers only barely work -- we rely on the use of long settlement times to avoid Buridan's Principle, and without that even conventional computers are too noisy to do actual computation.
 https://gilkalai.wordpress.com/2021/11/04/face-to-face-talks... is a recent one
Gil Kalai and others with similar arguments play an important role in the QC community. They keep the rest of us honest and help point out the gaps. But I do think the ground they have to stand on is shrinking, and fast. Ultimately, they might still be right – that much is certain – but it seems to me that the strides being made in error correction, qubit design, qubit control, hardware architecture, and software are now pushing the field into an exponential scaling regime.
To me, the big question is much less whether we'll get there, and much more "what will they be good for?"
__EDIT:__ whoops, wrong figure; just read section IV or see the first figure here:
 - https://arxiv.org/abs/2110.03137
 - https://ionq.com/posts/october-18-2021-benchmarking-our-next...
> The omission of proper technical details is essentially the same as lying.
The Chinese did show it some time ago: