IBM unveils 127-qubit quantum processor (ibm.com)
319 points by ag8 65 days ago | 313 comments



Quantum computers sitting in their cryogenic chambers are such works of art: stacks of giant brass plates and hundreds of heat pipes (or coolant pipes? liquid helium, I suppose) twisted and coiling throughout the structure, hanging like some steampunk chandelier (why do they hang from above, anyway?). EDIT: changed to a few direct links to pics: [0][1][2]

The esoteric design reminds me of the Connection Machine blog post from the other day, "to communicate to people that this was the first of a new generation of computers, unlike any machine they had seen before." [3]

I'm curious what they do with these prototypes once they are obsoleted in a matter of months, are the parts so expensive they tear it down to reuse them? Or will the machines be able to go on tour and stand in glass cases to intrigue the next generation of engineers? I know it had a tremendous effect on me to stand in front of a hand-wired lisp machine at the MIT museum.

[0] https://img-s-msn-com.akamaized.net/tenant/amp/entityid/AANs...

[1] https://img-s-msn-com.akamaized.net/tenant/amp/entityid/AANs...

[2] https://static.reuters.com/resources/r/?m=02&d=20191023&t=2&...

[3] https://tamikothiel.com/theory/cm_txts/


Cray supercomputers were also aesthetically beautiful machines:

https://cdn.britannica.com/11/23611-050-81E61C8A/Cray-1-supe...

So happy to be able to find a picture of the wirewrap inside: https://s-media-cache-ak0.pinimg.com/originals/e2/d2/47/e2d2...


When I was a kid I helped my grandpa move out of his office when he retired. He was a senior engineering fellow with Monsanto, specializing in bubble trays in refinery processes. He worked out of their big campus in St Louis, and gave me a tour as a thanks for helping him with the boxes and such. There are several things that have stuck in my memory from that tour.

One was seeing their Cray. I forget which specific model it was. It was gray and mauve, and had the fountain with the logo on the unit that pumped the coolant. Monsanto had a dedicated computer room for it with glass walls so you could see it. The overall effect was to make a very strong impression that this was something very special.

Another thing that stuck in my mind was seeing their bio labs. These were long concrete hallways dug halfway into the ground, I assume to make climate control easier. They had row upon row of corn plants under artificial light. These were the labs that developed Roundup Ready seed. I had no idea at the time the significance of what I was seeing or how contentious it would be now.

Last thing I'll mention is when we were walking outside and he pointed out the separate building the CEO et al. worked out of. It was literally a bunker with earthen berms and such around it. My grandpa bragged that it was built to be bombproof in case terrorists attacked the CEO. At the time I was somewhat mystified why anyone would bomb the CEO of a chemical company. I certainly understand why now.

But anyhow, it was a cool experience and seeing that Cray probably helped inspire my interest in learning to program later.

Edit:

Random other thing I'll mention is an email exchange from a mailing list back in the late 90s that focused on APL-style languages. Someone told their story about how, back during Cray's glory days, they worked in a lab doing interactive APL programming on a Cray machine. I can only imagine what that must have felt like at the time: typing arcane, terse expressions into a prompt that would execute them shockingly fast.


Thanks for sharing. I worked on the UIUC campus for some time right by the greenhouses where they experimented with cultivars of corn and other grasses; it was always funny to see giant 8-foot-tall plants lit with sodium lamps through the winter.

As for APL, I haven't really gotten past an orientation in the language, but it's held a total mystique for me since seeing this video from circa 1975 [0] walking through the language with a Selectric terminal acting as a REPL. It totally flipped my understanding of computer history; I had assumed it was all punchcard programming back in the old black-and-white days xD (I was born in 1990, for reference, and am trying to catch up with what happened before me)

[0] (30min) https://www.youtube.com/watch?v=_DTpQ4Kk2wA


Re APL I totally know what you mean. I've never done anything with that category of language that wasn't just golfing / goofing around, but conceptually it's made an impression. Once the big picture concept clicks you look at the way we write most code and see so much bookkeeping that's just managing names and values of iterators and indexes. Lifting that up to bulk transformation of multidimensional objects is powerful, and much closer to the intuitive picture of what's going on I have in my imagination.
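
To make that lifting concrete, here's a minimal sketch of the same idea in NumPy (standing in for APL's array model; the function names are just illustrative): normalize each row of a matrix by its own mean, once with explicit index bookkeeping and once as a single bulk transformation.

    import numpy as np

    # Index-bookkeeping version: loops, counters, and explicit element access.
    def normalize_rows_loops(m):
        out = [[0.0] * len(m[0]) for _ in m]
        for i in range(len(m)):
            row_mean = sum(m[i]) / len(m[i])
            for j in range(len(m[i])):
                out[i][j] = m[i][j] / row_mean
        return out

    # Array-language version: one whole-array transformation, no indices in sight.
    def normalize_rows_bulk(m):
        a = np.asarray(m, dtype=float)
        return a / a.mean(axis=1, keepdims=True)

The APL original would be terser still, but the shift in thinking is the same: operate on whole multidimensional objects instead of shepherding iterators.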


To this day, whenever I hear the term "supercomputer", I can only visualize the Cray. I don't want to know what the latest supercomputer looks like, because I suspect it's just another boring bunch of rack aisles. Maybe with a snazzy color end-cap or blue LEDs on the doors.

Bring back the impractical, space-eating circular design! I don't care about space efficiency. It's supposed to look cool.



This is an area where I wouldn't mind a little splurge in design. These are supposed to be the greatest computing machines made, they should look fabulous and mysterious.

Well, a bunch of racks with green LEDs is more practical, cheap and functional, I guess.


Haha so green LEDs are cool again? I made a call back in 2004 that after blue LEDs became commonplace and white LEDs had their turn as the new hot thing, red would make a comeback. And it did. ;)

I imagine there have been multiple cycles through the spectrum since then…


Isn't that due to green being the highest-energy light available from an LED at that time? IIRC, every time a new material was created to increase the band gap for a higher-energy photon (e.g. blue or violet), there was a Nobel prize given out.

Tuning the band gap with a new material back then was difficult I think.


Green has a lower band gap than blue/white, roughly 2 V forward voltage depending on chemistry, but there's something like that going on. IIRC human eyes are most sensitive to green, around the ~500-550 nm wavelengths that green LEDs emit, so you can get good brightness out of low power.
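
Back-of-envelope on the wavelength/energy side of that (the wavelengths below are just typical values for each colour): photon energy in eV is roughly 1240 divided by the wavelength in nm, which is also roughly the minimum band gap, and hence forward voltage, you need.

    # E(eV) ~= 1239.84 / wavelength(nm); typical LED wavelengths assumed below.
    for colour, nm in [("red", 630), ("green", 525), ("blue", 470)]:
        print(f"{colour:>5}: {1239.84 / nm:.2f} eV")
    # red: ~1.97 eV, green: ~2.36 eV, blue: ~2.64 eV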


Then consider that everybody's smartphone would have made it into the TOP500 in 1993.


I'm a big fan of some of the graphics you see on the big rack farms too. Like the Jaguar art Cray used a decade or so back.


The Computer History Museum has a Cray you can see up close ( https://computerhistory.org/ ). Worth the visit.


At the 1990 TeX Users Group meeting at Texas A&M University, one of the events was a tour of the computer center where we got to be in the room with their Cray. I think I sat on the bench.


You’ve just given me my weekend plans, thank you very much! They have a working IBM 1401 that you can see in action too!


Also a truly astonishingly beautiful Babbage engine. I came from the UK to CA to see it.


You do know that there's one in London at the Science Museum (the one that was¹ in California was built at the same time for Nathan Myhrvold).

⸻⸻⸻

1. It was on loan to the Computer History Museum from Myhrvold and returned to him in 2016. It's unclear whether he did re-loan it or if he's busily calculating the values of polynomials with it.² The Computer History Museum website makes it sound like it's currently on display but I can find no news stories about it going back to the museum.

2. Just kidding about him calculating polynomials—it's (I think) on display in the lobby of Intellectual Ventures.


I was curious enough to actually contact IV and confirm that the other Difference Engine No 2 is, in fact, in their lobby.


CHM homepage says it’s closed to the public ‘until later this year’.


Damn it


I have a picture of a Cray being serviced as an A2 framed print on my living room wall; it predates the missus, which is why it was on the living room wall ;)


Someday when quantum computers are the size of dust particles and we're surrounded by them there'll be some version of a steampunk subculture that values decorating their homes with these ancient beautiful and laughably incapable devices.

Maybe in 30 ~ 50 years or so.


I'm very excited to configure DNS blocking for the quantum dust particles trying to serve me advertisements in my home.


At least analytics won’t be able to tell if you looked at the ad or not


“Why do you look at the speck of quantum dust in your brother’s eye and pay no attention to the GPU in your own eye? How can you say to your brother, ‘Let me take the speck out of your eye,’ when all the time there is a GPU in your own eye?"

https://www.biblegateway.com/passage/?search=Matthew%207:3-5...


But they can tell how probable that was


I can't help but think of turning them into horrible sounding organs.


I would gladly decorate my home with one. They're beautiful in their own way.


They hang from the ceiling because they use evaporative cooling (the high-energy particles escape, and the low-energy particles remain in the bucket), each lower stage a bit cooler than the one above it.


Also because it looks cool, which is only appropriate for a cooling system.


AFAIK, another reason to work downward, historically at least, was to lower an assembly into an open-neck dewar.


The TV show DEVS has a computer that looks a lot like those first three links. I always thought their prop was a set designer's imagination run wild, not actually based on what quantum computers look like.


Some additional information on ³He/⁴He dilution refrigeration that makes up the chandelier: https://en.wikipedia.org/wiki/Dilution_refrigerator


I was lucky enough to tour the IBM Thomas J. Watson Research Center in New York a few months ago and captured several sound recordings of this[0] room housing a quantum computer. Not only do they look cool, they sound very intense! [1] A stark contrast to the minimalist/austere design of the actual enclosure, or maybe it's fitting, depending on your perspective...

[0] https://www.ft.com/__origami/service/image/v2/images/raw/htt...

[1] https://drive.google.com/file/d/1CeZjXUH6Y8ZvfcS0IM0MWoLNAYJ...


Yep, definitely the sound of an AI actively hijacking my brain D: thanks


I look forward to the day we can look back at these "quantum chandeliers" with nostalgia, like we look back on those massive, room-sized mainframes today.


I can't wait to be in the vintage quantum computing club, where we build working replicas of the "quantum chandeliers" with more modern and stable parts, and tinker with them as functional room decoration.

Related, the DEC PDPs certainly look stylish!


The pipes you mentioned are microwave conduits (for various control signals).


0.141" semi-rigid coax, diameter of champions.


> 0.141" semi-rigid coax, diameter of champions.

I wonder if, measured more precisely, we'd get something closer to 1.4142135623730950488...


It would be a bit of a surprise since the unit of measure is inches and I'm not sure what inches would have to do with quantum computing. I feel like the inch is essentially a random factor here.


I’d make this cable as a form of art.


Close: the ratio of the outer and inner conductor diameters needed to get exactly 50 ohms is an irrational number.
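
For anyone who wants to check, a sketch using the standard coax impedance formula Z0 = (eta0 / (2*pi*sqrt(eps_r))) * ln(D/d); the PTFE dielectric constant and the 50 ohm target are the assumptions here.

    import math

    eta0 = 376.730313668        # impedance of free space, ohms
    Z0, eps_r = 50.0, 2.1       # target impedance; roughly PTFE dielectric
    ratio = math.exp(2 * math.pi * math.sqrt(eps_r) * Z0 / eta0)
    print(f"D/d for 50 ohms ~= {ratio:.3f}")   # ~3.35 -- irrational, but not sqrt(2)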


Thanks! Is my assumption of liquid helium running somewhere correct? I figure that's the only way to reach the temperatures required (single digit kelvins, no?)


It is even worse than single digit Kelvin.

Liquid Nitrogen with pumping: 40 K for a few thousand dollars.

Run of the mill Liquid Helium: 4 K for tens to hundreds of thousands of dollars.

But for these devices you need ~15 mK, which is reachable only if you mix two different isotopes of helium and pump on the mixture. Such a device costs up to $1M and more.

And the insides of that device are in vacuum (actually, air freezing into ice on top of the chip can be a problem). The brass is basically the heat conductor between the chip and the cold side of your pumped He mixture (which is *not* just sloshing inside the whole body of the cryostat where the chips are).

Another reason you do not want the He sloshing around is because you will be opening this to make changes to the device and do not want all the extremely expensive He3 (the special isotope you need for the mixture) to be lost.
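
A rough sketch of why it has to be millikelvin rather than plain liquid-helium temperatures, assuming a ~5 GHz superconducting qubit (the frequency is an assumption, not from the article): the figure of merit is how many thermal photons sit at the qubit frequency, and you want that far below one.

    import math

    h, kB = 6.62607015e-34, 1.380649e-23   # Planck and Boltzmann constants (SI)
    f = 5e9                                # assumed qubit frequency, ~5 GHz

    def thermal_photons(T):
        """Bose-Einstein occupation at frequency f and temperature T (kelvin)."""
        return 1.0 / math.expm1(h * f / (kB * T))

    for T in (4.0, 0.1, 0.015):
        print(f"{T * 1000:7.1f} mK: n_thermal ~= {thermal_photons(T):.2e}")
    # ~16 photons at 4 K, ~0.1 at 100 mK, ~1e-7 at 15 mK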


FWIW... small DR's are under 400k USD. The big ones are ~1M USD or more.


What’s a DR? Something refrigerator?


Dilution refrigerator. They are the type of refrigerator used to chill quantum computing devices. The wikipedia article has a pretty good description of how they work. It took me a few reads to understand it!


It's much colder than that, single-digit millikelvin. They use He-3/4 dilution refrigerators [1]. Getting things cold and electromagnetically-quiet enough that the quantum state doesn't collapse is a big challenge in the field.

1 - https://en.wikipedia.org/wiki/Dilution_refrigerator


And if you need colder than 2 mK, nuclear demagnetization:

https://en.wikipedia.org/wiki/Magnetic_refrigeration#Nuclear...


And some people think the brain can be deterministic at 300K.


Why not? Computers are deterministic at higher temperatures.


Even assuming a computer model, think of an analog computer with long integration times, potentially high gain for thermal noise terms, simulating a chaotic system.

Or, look at literature on error rates for people performing simple repetitive tasks.


Unless I am missing something, I believe these pictures are 99% just the cryogenic system.


Whoa, that is so cool! I thought the Escher-esque machine in Devs was mostly Hollywood fluff, but it absolutely looked almost just like that.


The Devs producer met with the Google team in Santa Barbara (which works on very similar superconducting qubits), so it should look very close!

That said the whole hovering in free space thing was perhaps a bit over the top in the show.

https://qz.com/1826093/devs-creator-alex-garland-describes-t...


So the chandelier-looking thing is a dilution fridge and is just used to make the processor cold. The processor is usually pretty small, not that different from your CPU. That part is indeed iterated on, but the fridges aren't changed much. The wiring is sometimes switched out, but the fridges get used forever. I'm using one now that's maybe 30 years old. The dilution units are very hard to make, and there's a very small academic family to which everyone who can make one traces back.

They mostly hang like that since most dil units are designed so the coldest part is usually the lowest, and they’re generally orientation sensitive. You want easy access to the bottom part, so you just put the plates in descending order of temperature and you hang the thing from the ceiling.


What's that picture of what looks like an exploded CPU package on IBM's site? Is that metaphorical or is that really what the processor looks like? It looks small and not chandelier-like.


Are there any pictures of the hand-wired LISP Machine online?

I searched, but couldn't find it.

Also - when LISP was invented (1958) - what was the state of computers at the time? Doing some research - it seems like direct keyboard input to computers was only available for 2 years prior. It seems like languages were decades ahead of hardware.

I guess I'm having trouble fathoming how languages were so far ahead while computers were seemingly VERY primitive.

Are there any articles on the process for how LISP was designed and implemented??


Photos are indeed sparse, here's the highest-res I could find of the machine I saw at the museum (built 1979, much more compact now that we can forego the vacuum tubes), thankfully there are enough pixels to read the post-it note naming the machine "Marvin": https://upload.wikimedia.org/wikipedia/commons/7/7d/MIT_lisp...


> I guess I'm having trouble fathoming how languages were so far ahead while computers were seemingly VERY primitive.

My intuition is that back then getting run time on computers was so scarce that the best programmers and mathematicians spent a great deal of brain time considering exactly what their software should be. If you only get one run a day, if that, you're gonna do your best to make it count. Today we're often in the opposite situation, where it can be entirely rational to burn incredible amounts of computation in the absolute sense to save brain time.


> I guess I'm having trouble fathoming how languages were so far ahead while computers were seemingly VERY primitive.

I have a much bigger emotional conflict when contrasting that with the current state of mainstream programming languages, that are only just beginning to tread onto territories like algebraic data types and pattern matching that ML paved almost 50 years ago. Is there any hope for true dependent typing to become popular before 2040?
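
To put a date on "only just beginning": structural pattern matching over an ML-style sum type only landed in Python with 3.10 (2021). A minimal sketch, with illustrative names:

    from dataclasses import dataclass

    @dataclass
    class Lit:                 # leaf of a tiny expression ADT
        value: int

    @dataclass
    class Add:                 # sum node of the same ADT
        left: object
        right: object

    def evaluate(expr):
        # match/case arrived in Python 3.10; ML had the equivalent in the 1970s.
        match expr:
            case Lit(value=v):
                return v
            case Add(left=l, right=r):
                return evaluate(l) + evaluate(r)

    print(evaluate(Add(Lit(2), Add(Lit(3), Lit(4)))))   # 9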


Don't Prolog and Erlang have pattern matching built in??


Lean4 comes pretty close to a general purpose dependently typed programming language.


I can't give you a great answer because I wasn't there, but I did find a John McCarthy paper describing its usage by way of an automatic typewriter [0] as referenced by the wiki on Lisp [1], first implemented via punchcards on the IBM 704. That would be vacuum tube logic and magnetic core memory. ~19,000 pounds, 12k flops (36bit), 18kB of RAM, 3.75-ish Megabytes of storage per 2,400 feet of mylar tape. [2]

As for languages ahead of the hardware, you might read up about Charles Babbage and Ada Lovelace, the latter a mathematician who translated problems into machine instructions for a machine that wouldn't be built for a hundred years - Babbage's design worked, but he spent all the money the Royal Society was willing to give trying to improve the tolerances on his logical-clockwork. [3] But anyway, back to John McCarthy's paper, last page:

  APPENDIX - HUMOROUS ANECDOTE
The first on-line demonstration of LISP was also the first of a precursor of time-sharing that we called “time-stealing”. The audience comprised the participants in one of M.I.T.’s Industrial Liaison Symposia on whom it was important to make a good impression. A Flexowriter had been connected to the IBM 704 and the operating system modified so that it collected characters from the Flexowriter in a buffer when their presence was signalled by an interrupt. Whenever a carriage return occurred, the line was given to LISP for processing. The demonstration depended on the fact that the memory of the computer had just been increased from 8192 words to 32768 words so that batches could be collected that presumed only a small memory.

The demonstration was also one of the first to use closed circuit TV in order to spare the spectators the museum feet consequent on crowding around a terminal waiting for something to happen. Thus they were on the fourth floor, and I was in the first floor computer room exercising LISP and speaking into a microphone. The problem chosen was to determine whether a first order differential equation of the form M dx + N dy was exact by testing whether ∂M/∂y = ∂N/∂x, which also involved some primitive algebraic simplification. Everything was going well, if slowly, when suddenly the Flexowriter began to type (at ten characters per second) “THE GARBAGE COLLECTOR HAS BEEN CALLED. SOME INTERESTING STATISTICS ARE AS FOLLOWS:” and on and on and on.

The garbage collector was quite new at the time, we were rather proud of it and curious about it, and our normal output was on a line printer, so it printed a full page every time it was called giving how many words were marked and how many were collected and the size of list space, etc. During a previous rehearsal, the garbage collector hadn’t been called, but we had not refreshed the LISP core image, so we ran out of free storage during the demonstration.

[0] http://jmc.stanford.edu/articles/lisp/lisp.pdf

[1] https://en.wikipedia.org/wiki/Lisp_(programming_language)

[2] https://en.wikipedia.org/wiki/IBM_704

[3] Jacquard's Web by James Essinger is the book you want to read for more.


One of the IBM scientists seems to have an old quantum computer as a piece of office decoration, so I guess there's some hope to preserve them: https://www.youtube.com/watch?v=OWJCfOvochA


They hang because they sit at the bottom of dilution refrigerators.


My antivirus client won't let me open that second link, claims some kind of malicious activity, didn't look into the details.


Strange, it is a personal blog with literally no javascript (just checked my network tab), might be worth investigating what your antivirus has against it. It's a very good read, so just in case your antivirus is friendly with archive.org: https://web.archive.org/web/20211113093602/https://tamikothi...


Amazing images, nothing that would look out of place in the Villa Straylight.


Is this the most advanced tech at this very moment?


Tangential question, what are the areas of technology where we can expect to see substantial progress or breakthroughs within 2030, i.e. what are the most exciting areas to follow and look forward to? Here's my list:

- Nuclear fusion (Helion, ZAP, TAE, Tokamak Energy, CFS, Wendelstein).

- Self-driving cars.

- New types of nuclear fission reactors.

- Spaceflight (SpaceX Starship).

- Supersonic airplanes (Boom).

- Solid state batteries.

- Quantum computing.

- CPUs and GPUs on sub-5nm nodes.

- CRISPR-based therapies.

- Longevity research.


I'd say fusion is a sleeper. You still have that stupid "30 years away and always will be" meme but there is real progress being made. Fusion would completely change the world, though not overnight because it would take another decade or so before it would advance enough to be cost competitive.

I'm semi-optimistic about space flight and longevity. I think Starship will fly, but I wouldn't be surprised if some of its most ambitious specs get dialed back a bit. I'll be somewhat (but not totally) surprised if the "chopsticks" idea works.

We will probably see aging-reversal to some limited extent within 10-20 years, but the effect will probably be more to extend "health span" than add that much to life span. (I'll take it.)

I'll add one not on the list: the use of deep learning to discover theories in areas like physics and math that have not occurred to humans and maybe are not capable of being found by ordinary human cognition.

Wildcard, but plausible: detection of a strong extrasolar biosphere candidate using JWST or another next-generation telescope. Detection would be based on albedo absorption spectra, so we wouldn't know for sure. Talk of an interstellar fly-by probe would start pretty quickly.

I wouldn't list sub-5nm as "far out." We will almost definitely get sub-5nm. AFAIK 3nm is in the pipeline. Sub-1nm is "far out" and may or may not happen.


Sadly I'd also qualify most of these as things that consumers are overly excited about but will never reach their expected potential (at least in our lifetimes) due to technological limits. Same as flying cars, 3D TVs, 3D printing, household robots, holograms, AR glasses.


3D TVs will come of age once autostereoscopic displays reach the right level of quality. After being blown away by my first glimpse of a display in around 1998 I fully expected them to be useable years ago. I guess we might still be another "10 years" away.

https://en.wikipedia.org/wiki/Autostereoscopy


I really doubt that there's going to be huge desire for 3D TVs at any point. People can already look at video on a 2D display and interpret 3D visuals from it. And if you want to be fully immersed in something, maybe you want VR instead.


"People can already look at video on a 2D display and interpret 3D visuals from it. " way off, depends heavily on contrast sensitivity which is a function of brightness and displays have long way to go (esp with ambient around) +HDR even breaks current VR chain because stray light.


Google Starline seemed amazing to me. I would love to have a large immersive 3D display for videotelephony, sports, nature documentaries, etc.


I want 3D TV for sports but apparently I'm the only one.


What happens when two people want to watch the TV?


Well, I'm assuming autostereoscopic displays of the future will solve issues of multiple sets of eyeballs on the same display.


AR glasses will hit a wall, but passthrough AR will be converged on rapidly, starting in 2022.


Any particular insight why 2022?

Hololens 2 has shown that it isn't so easy to advance the field.

I don't think an Apple device is forthcoming or likely to leapfrog.


> Any particular insight why 2022?

Facebook, Apple and others are releasing their first AR glasses then.


And there will be approximately zero non-gimmick software for them until at least 2032, if it ever materializes at all.


AR "FaceTime" (beaming in an avatar of another person to spend time with) will be a killer app.


But you aren't "spending time" with that person. You're spending time with that person's poorly rendered avatar. You can't even hear them properly. You can't see them at all. You can't touch them or read a lot of the nonverbal cues. And for the privilege of not being in their presence, you also need to pay several hundred (if not thousands, this being Apple) dollars, and both sides need to use Apple products. I very strongly suspect that all but the most ardent Apple fans will pass on this generous offer.


Passthrough AR isn't glasses AR, and is much more likely to be rapidly made capable. Lynx-R launches in Q1, and Meta's and Apple's headsets will likely use passthrough AR next year.

https://lynx-r.com/


I definitely hope for something novel with video see-through HMDs as they used to call them in 2002 [1] when I last worked on them. Latency wasn't solved last I checked and viewpoint offset is still an issue that throws users off.

[1] https://static.aminer.org/pdf/PDF/000/273/730/ar_table_tenni...


Latency is pretty good on Quest 1 - I haven't tried Quest 2. Looking forward to getting a Lynx to gauge how close we are to something good.


> flying cars

At least you can buy them now.


Don't forget carbon sequestration and geoengineering.

I think longevity research is a path to stagnation, and ultimately counter-productive, and should cease. If science advances one funeral at a time, then longevity is counterproductive for all other progress.


So you have chosen death.

There are reasons to worry about the ethics of longevity research (especially if the benefits of it are not justly shared), but I don't think you can justify withholding life-improving medical treatment from people just because you want to help science by letting people die early.

That sort of thinking is how we get Logan's Run.


It's dishonest to characterize what I'm saying as "choosing death", as if I've got a nihilistic urge to nuke the planet. Death, birth, and renewal are part of the entire biosphere, which has existed for billions of years. The cells that defy death are called "cancer". The messiness with which humans come into being and then pass away again is NOT something to be engineered away - it is something to experience and learn to appreciate.

The thing that lives on, that can be effectively immortal (if we choose to protect it), is the biosphere in which we're embedded and, to a lesser extent, the nest of symbols humans have fashioned for themselves over the last few millennia. It is fascinating to imagine what it would be like to live through human history; however it is terrifying to imagine what the "great men" of history would have done or become if not cut down by time. The inevitability of death has surely stopped some great things from being done, but I'm equally sure it has stopped even worse things from being done - imagine human history if the Pharaohs of Egypt had had access to immortality! It's too horrible to imagine.

BTW the Logan's Run system was purely about maintaining homeostasis given limited resources, NOT about maintaining (or even enhancing) dynamism in the population by decreasing average life-span. In other words, unrelated.


I apologise for the "choosing death" meme, but I think it is only as inappropriate as you equating human beings with "cancer". You're right, though, that humans have to learn to experience and in some sense come to terms with the messiness of death.

I think what we disagree on is what it means to "engineer away" death. Are we engineering away death if we cure a disease, but don't extend the maximum lifespan of humans? Is extending the average lifespan to 100 years all right as long as those treatments are designed to not work on people over 100 years old? If a treatment is later found that helps 100 year olds to extend their age to 101, is that the treatment that should be banned, or is there some number N where adding N years to the previous maximum is morally wrong and the whole world has to agree on banning it?

Your point about the Pharaohs is maybe not as strong as you think, since of course the Pharaonic system did outlast any of the individual office holders. I don't think it was old age that led to the fall of that regime, and there are plenty of regimes which manage to be equally horrible within a single lifetime, or that are overthrown within the space of one lifetime.

Thank you for that succinct explanation of the premise of Logan's Run. I wasn't sure if it worked as an analogy, since, as you say, the motivation of the society was different from the one you are advocating for, but I think the most relevant aspect of Logan's Run is the dystopian nature of a society which imposes age limits on its members, against their wishes.


Nothing wrong with improving quality of the life we do have, which naturally would mean increasing lifespan a little. In fact, I'd argue that's precisely the right way to spend resources - quality, not quantity.

I'm not at all against small increases in lifespan, and certainly for improving quality of life (e.g. defeating disease). I'm specifically against individual immortality because I strongly suspect it would quickly and inexorably lead to stagnation and death for our species.


I would argue that most of the items on this list can be subdivided into two types of hype.

short term hype (real advances that will happen in 1-2 years, but won't matter by 2030, because they are just a generational iteration)

Over-hyped far-future research. (things where the possibilities have yet to be brought down to earth by the practical limits of implementing them broadly / cost effectively) When these things do happen, they tend to be a bit of a let-down, because they don't actually provide the promised revolutionary changes. These things basically have to be over-hyped in order to get the necessary funding to bring them to reality.

Of the examples you have, I am only really excited about CRISPR, and to a lesser extent commercial spaceflight, and new nuclear. These have promise IMO, but I also don't expect them to be decade defining.

Personally I don't think we know what the next breakthrough will be yet. I expect it to take us very much by surprise, and start out as something unthreatening which then grows to a disruptive size / scale.


Interesting, I think all of those items have a good chance of actually happening ("longevity research" happening means some sort of meaningful progress).

I hope there will be some unexpected breakthroughs too.


As someone in their 30s who suffers from baldness and arthritis, two simple conditions yet no promising solutions in sight, I find it cute when people think we can somehow cheat death or aging in the next 300 years.


Synthetic meat of all kinds, ARM based processors, deep learning + AI, agtech/vertical farming/etc.


> ARM based processors

Compared to most of the other things listed, this is more of a nerd-aestheticism thing rather than something which is hugely important technologically.


But haven't ARM-based processors already put computers in the hands of billions of people who otherwise wouldn't have had access? I could argue that is hugely important.


ARM is merely a commercially successful architecture that turned out to be good at being optimized for mobile applications.


The ARM revolution has already happened, in this case I viewed the mention to be more of a wishlist for the types of computers the global well-off buy.


Delivery drones: Wing, Amazon, Zipline, Volansi, etc.

Synthetic fuels.


- Crypto

- Unfortunately, even less local computing, with everything provisioned from the cloud under a SaaS payment model.

- More mRNA applications

- Power/energy networks and markets across borders.

- Theranos, but legitimate. Better, cheaper and more convenient early monitoring/diagnostics for vitamin deficiencies and early stages of disease.

- Carbon neutral combustible fuels.

- Cheaper grid-scale storage.

- Better understanding of the gut-brain connection.


I hate to tell you, but your list looks like it came straight out of the 1970s :)

- Spaceflight (SpaceX Starship).

- Supersonic airplanes (Boom).

Been there, done that.


They are modern takes on the Space Shuttle and Concorde respectively, but with the benefit of hindsight as well as half a century of advances in material science and control systems. But really the defining feature is that the Space Shuttle and Concorde were government-funded prestige projects, while their modern incarnations are economically viable.


- Psychedelics

- VR/AR (photonic override, more specifically)

- Fundamental physics (unlocked by tech)


Can you elaborate on “photonic override”? Googling that phrase pretty much just returns more HN comments and tweets by you :)


A hardware/software proxy that governs all photons you see.


This is desirable?


Desirable or not is subjective, but what's not as subjective is the likelihood of this technology stack arriving and becoming popular, which seems very likely/guaranteed at this point.


It does seem likely to arrive, but whether it becomes popular is more difficult to predict, as it's conditioned on the sway of public opinion, which seems heterogeneous regarding AR/VR.


Certainly for someone with defective default hardware ;)


I certainly see the utility there!


Look into “diminished reality” - the ability to block things out that would be a distraction, for instance.


Non animal food. Will transform land use.

Remote education. Available to any kid or adult anywhere.


I think we're learning the limits of screen-based education now. There's something about being in the same room with a person at a chalkboard that is far more effective - at least anecdotally. (I'd be surprised if there wasn't research backing this up). And this seems deeply unfortunate if true, because it means the cost of learning can't go all the way to the ground.


Two thoughts: Africa can’t afford in-person teachers. The ones in my country are not great on average.

I couldn't learn maths in class. Too distracted, too annoyed with stupid questions. But I increased 3 symbols in 3 school terms with a slide projector and audio tape, where I could focus and rewind. The teacher was there for the bits I didn't learn from the slides. I'm probably in the minority but I'm sure there are more of me.

Digital education catches kids like me and kids who have no access to excellent educators. And marginal cost is zero, so no harm in giving access to the world.


mRNA-based therapies?


Half of these are on their way into "valley of disillusionment" before they become generally useful, though perhaps not as grandiose as originally promised.

And longevity research is not even a real need - we already live too long as it is, from the evolutionary and economic standpoint. I'd much rather someone came up with a way to cheaply and painlessly end one's life once quality of life begins to deteriorate due to chronic disease and such. Some kind of company (and legislative framework) where you pay, say $1K and they put you into a nitrogen chamber and then cremate and flush the ashes down the toilet afterwards. Or perhaps use them as fertilizer. I'd use the service myself at some point in distant future.


The How is not the issue. A combination of various drugs or an opiate overdose should do the trick. It's already legal in Switzerland, Canada, and Belgium.

Voluntary euthanasia is ultimately challenging because of similar legal issues as with the death penalty - it cannot be undone, and there are forces in society that can lead individuals to use it for other reasons than just being over and done with suffering through old age.


That's the point. As long as I'm lucid I should have full bodily autonomy, including the decision to shuffle off this mortal coil. In fact I already have control over this decision.

> and there are forces in society

So? You're going to tell me I can't go anytime I want to? That's not the case even now. It's just that now I'd have to procure the nitrogen myself (which isn't difficult), and my relatives would have to deal with the body. I'm merely suggesting a service that resolves this purely logistical complication, and excludes the possibility of not quite dying but living the rest of one's life as a vegetable.

Think of what we have now: people spend years, sometimes decades suffering from chronic diseases, or just plain not having anything or anyone to live for. And it'll get worse as medicine "improves", and lifespans "improve" with it. Is it humane to withhold the option to end it all from them? I don't think it is. I will grant you that there are likely tens of millions of such people on the planet right now. I will also grant that this is not an uncontroversial thing to suggest. But the alternative we have now doesn't seem any more humane or dignified to me.

If this still doesn't sit right with people, we could age and condition-restrict it, or require a long waiting period for when this is not related to acute incurable disease.

> as with the death penalty

Which is also inhumane, IMO. It's much worse to spend the rest of one's days in confinement instead of 30 seconds until barbiturates kick in. That's what the sadists who are against the death penalty are counting on.


Again, I am aware that there is no core technological or logistical issue. The issue is purely societal. Yes, I can get behind enabling people to specify policies on what to do when untreatable or mental diseases kick in.

The death penalty does not exist to reduce the suffering of the convicted, but to get rid of them. The true issue with the death penalty is that it can't be graduated (except by adding "cruel and unusual punishment") and it can't be undone. Prison sentences can be legally challenged and the innocent can be freed early.

There is a real slippery slope here: what length of prison sentence is considered to be worse than the death penalty? An additional thing to consider is that many countries without the death sentence actually don't impose true life sentences, but very long ones (upwards of 20 years). Confinement for life is for those irredeemably judged to be a threat to society after their sentence. Compared to that, many death row inmates actually spend decades fighting their sentence. They could end it at any time if they wanted.


Regarding commercialization of suicide: I think you're missing my point entirely somehow. The "societal" issue where old people are unwanted already exists, and it will exist irrespective of any innovation of this kind. Moreover, old or terminally ill people already have full control in terms of whether they choose to live or kick the bucket. There's nothing whatsoever anyone can do about that. Tens of thousands of people in the United States take their lives every year. It's just that if they care about their loved ones (if any) the logistics of dying are horrific. I wouldn't want to subject anyone to that, but I'm afraid if I were terminally ill, that'd be a pretty shitty reason to continue living, and make everyone I love suffer with me.

> The death penalty does not exist to reduce the suffering of the convicted

There's an easy way out of your moral dilemma that you go into after this sentence, much like what I suggest for those on the outside: let the convicts choose whether they want to suffer for the rest of their days in prison, or be humanely and painlessly killed. I know which way I'd go, under the circumstances. And yes, I do insist that the killing must be humane, dignified, and painless. We have the technology to ensure all three of those things.


I can empathize with the first point.

Regarding humane, dignified and painless killing: the Lethal Injection was supposed to be exactly this. But we humans are pretty good at botching things...


It is incredibly dishonest of them to post this without any details about the noise parameters of the system.

When reading "127-qubit system" you would expect that you can perform arbitrary quantum computations on these 127 qubits and they would reasonably cohere for at least a few quantum gates.

In reality the noise levels are so strong that you can essentially do nothing with them except get random noise results. Maybe averaging the same computation 10 million times will just give you enough proof that they were actually coherent and did a quantum computation.

The omission of proper technical details is essentially the same as lying.


Basically little more than having a bathtub and claiming you've built a computer that does 600e23 node fluid dynamic calculations. But a lot more expensive.


I haven't read anything on this one yet, but your analogy fits Google's “quantum supremacy” paper really well; I like it.


Ugh, the point is a fine one but it appears it has to be made:

Validation of experimental theory through the characterization and control of an entire system is not the same as building the same system and simply seeing the final state is what you expect. The latter is much easier and says very little about your understanding.

Here's an analogy: Two people can get drunk, shack up for the night, and 9 months later have created one of the most powerful known computers: A brain. Oops. On the flip, it's unlikely we'll have a full characterization and understanding of the human brain in our lifetimes – but if we ever do, the things we'll be able to do with that understanding will very likely be profound.


My reply was glib but I think in principle correct. The idea of a strictly controlled system in the NISQ domain to validate quantum supremacy in theory is an interesting approach, but it feels deceptive to me because this 127-qubit computer cannot in fact factor 127-bit numbers with Shor's algorithm or anything like that.

The accomplishment is more akin to creating a bathtub with 127 atoms and doing fluid dynamic simulations on that, which is a much harder problem in many ways than doing the 6e25 version of the experiment. But it is very questionable to me whether any claims of quantum supremacy retain validity when leaving the NISQ domain and trying to do useful computations.

Gil Kalai's work in the area [1] continues to be very influential to me, especially what I consider the most interesting observations, namely that classical computers only barely work -- we rely on the use of long settlement times to avoid Buridan's Principle [2], and without that even conventional computers are too noisy to do actual computation.

[1] https://gilkalai.wordpress.com/2021/11/04/face-to-face-talks... is a recent one

[2] https://lamport.azurewebsites.net/pubs/buridan.pdf


I mean, maybe in the theoretical sense, but do conventional computers barely work in practice? Seems a bit of a pedantic argument.

Gil Kalai and others with similar arguments play an important role in the QC community. They keep the rest of us honest and help point out the gaps. But I do think the ground they have to stand on is shrinking, and fast. Ultimately, they might still be right – that much is certain – but it seems to me that the strides being made in error correction, qubit design, qubit control, hardware architecture, and software are now pushing the field into an exponential scaling regime.

To me, the big question is much less whether we'll get there, and much more "what will they be good for?"


It seems like IBM has blown their credibility so many times. As soon as I saw IBM mentioned in the lead of the title, I knew what was to follow is almost entirely actual-content-free marketing spin.


With the caveat that the paper is from a competitor that I like, this benchmark paper [1] makes me inclined to disregard this result. See figure 1.

__EDIT:__ whoops wrong figure, just read section iv or see the first figure here [2]

[1] - https://arxiv.org/abs/2110.03137

[2] - https://ionq.com/posts/october-18-2021-benchmarking-our-next...


It's over here: [1]. James Wootton makes some comments about it here [2]. Looks like 1% error rate which I think is too high to do much with, but it is still exciting progress for those of us working in the field!

[1] https://quantum-computing.ibm.com/services?services=systems&...

[2] https://twitter.com/decodoku/status/1460616092959883265
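
To get a feel for why ~1% per gate is limiting, a toy calculation (assuming independent gate errors and no error correction, which is of course a simplification):

    p = 0.01                        # assumed error probability per gate
    for gates in (10, 100, 1000):
        p_clean = (1 - p) ** gates  # chance the whole circuit sees zero gate errors
        print(f"{gates:5d} gates -> {p_clean:.2e} probability of an error-free run")
    # ~9.0e-01 at 10 gates, ~3.7e-01 at 100, ~4.3e-05 at 1000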


It's particularly jarring given that IBM came up with the concept of quantum volume [0]...

[0] https://en.wikipedia.org/wiki/Quantum_volume


> The omission of proper technical details is essentially the same as lying.

Welcome, child, to the beautiful/"special" world of marketing!


Be gone fetus, for there has always been honest marketing. It's just harder to find, sadly.


If it’s below the noise floor, how do we separate it from ‘accidental (or incompetent?) honest marketing’?


Oh, I do not deny there are honest marketing people. But there are plenty of people in that game happy to omit some inconvenient factoids. And as always, those are the ones giving their craft as a whole a bad name.


Welcome to IBM. Is Quantum the new Watson?


I guess they should show that they have achieved quantum advantage:

The Chinese did show it some time ago:

https://www.globaltimes.cn/page/202110/1237312.shtml


Also another problem: you now have 2^127 possible output values leaving the quantum processor. If you're using a hybrid quantum algorithm that requires classical processing as well (which most algos used today do), you'd need more than a yottabyte of RAM. We can get around this problem by storing all 2^127 pieces of output data in other data types that compress the total size, but if you genuinely are trying to use all 2^127 outputs, you'd still need to do some pretty intensive searching to even find meaningful outputs. I guess this is where Grover's search could come in really handy, right?


You don't get the entire wave-function as output; the wave-function is not observable. Different measurements might reveal information about certain components of the state, at least probabilistically, but those same measurements will always destroy some information. See the No-cloning Theorem.


Right, but you would still get the basis states for all 127 qubits right? And that would be 2^127 output states. Yes, you could do some sort of search maybe to find highest probability outputs only, but if you needed every output value for a follow up algorithmic step (like in VQE for ground state prep wherein you keep using previous results to adjust the wavefunctions until ground is reached), then wouldn't it be a bit tough to use?


You have 127 qubits that you measure and you end up with a classical string of length 127. Sure, that classical string, the measurement result, could have ended up being any of 2^127 possible different values into which the wavefunction collapses. But that is no different from saying that there are 2^8192 possible states that 1 kB of classical RAM can be in. It is not related to the (conjectured) computational advantage that quantum computers have.
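
A minimal NumPy sketch of that point (a toy statevector simulation, not how real hardware is programmed): each shot yields one classical bit string sampled via the Born rule; the 2^n amplitudes themselves are never read out.

    import numpy as np

    n = 3
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)        # (|000> + |111>) / sqrt(2)

    probs = np.abs(psi) ** 2                 # Born rule probabilities
    rng = np.random.default_rng(0)
    shots = rng.choice(2 ** n, size=5, p=probs)

    # One classical n-bit string per shot, e.g. ['111', '000', '111', ...]
    print([format(int(s), f"0{n}b") for s in shots])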


Right okay makes sense...guess I am just too used to NISQ and having to run many thousands of shots for high enough fidelity..if all you wanted was one output, then yeah one classical string is easy enough, thanks


I hope US (especially) but also our EU scientist friends eat everybody else's lunch. Better tech. Better science. Better math. Better algos. And, yes, let's get those details right too. Noise management is a key discriminator between POC and practical.

