Scott Aaronson, when reached for comment, said (scottaaronson.blog)
53 points by feross on Nov 16, 2021 | 16 comments



Here are a few thoughts on Aaronson's comments:

1) I have absolutely zero understanding of quantum computers (even after watching this video: https://www.youtube.com/watch?v=OWJCfOvochA), though I like to read up on quantum physics. However, it seems to me that I've lived through many "breakthroughs" in this technology. For example, here's an excerpt from a NYT article from 4/28/1998: https://www.nytimes.com/1998/04/28/science/quantum-computing...:

  The discovery has touched off a wave of excitement among physicists and computer scientists and is leading dozens of research centers worldwide to embark on similar experiments heralding the advent of an era of so-called quantum computers, specialized machines that may one day prove thousands or even millions of times faster than today's most powerful supercomputers.
It seems quantum computers that can kick ass are always 10-15 years away.

2) Here's a writeup from Pano Kanelos, President of the University of Austin, where he explains the motivation: https://bariweiss.substack.com/p/we-cant-wait-for-universiti.... Others can debate the anti-wokeness, but this part is pretty much fact:

  Over the last three decades, the cost of a degree from a four-year private college has nearly doubled; the cost of a degree from a public university has nearly tripled.
Colleges in the US are insanely expensive! If the new UoA starts fixing that, I'm all for it, but I'll believe it when I see their tuition rates and first batch of graduates. This is the first question listed here (along with the high cost of healthcare in the US): https://patrickcollison.com/questions

3) The blog post he mentions is interesting. Of course, the same argument could be made for any field whose demographics don't match society at large. The argument is irrefutable because any point against it gets labeled as offensive or biased, which makes it funny coming from a mathematician.


[flagged]


Wanna explain?


Well, my comment was flagged, so nobody would see my explanation.

I would recommend reading Piper's post and the follow-up to it, and then seeing if a "mass call to resign" was really the point.


unsurprisingly, supercomputers still beat quantum computers, and will for 20 years


Could you expand your thoughts a bit? Scott writes...

“5 minutes refers to the time needed to calculate a single amplitude (or perhaps, several correlated amplitudes) using tensor network contraction. It doesn’t refer to the time needed to generate millions of independent noisy samples, which is what Google’s Sycamore chip does in 3 minutes. For the latter task, more like a week still seems to be needed on the supercomputer."

...but you seem to have a different interpretation?


A week on a supercomputer is a typical length for a large job. For my PhD work I had supercomputer jobs running for an entire year. Supercomputer time is something you can simply buy, and no scientist needs answers that quickly. As long as the problem is classically solvable within a month for less than a few million dollars, you're way ahead of the best QCs.

The QC folks promoting fast solutions for problems that don't need them are selling snake oil.


This is a science project, not a commercial offering. The goal is to prove that it's possible to build a quantum computer that can do something that's intractable on a classical computer.

Unrelatedly, I find it hard to believe that, if a technology were invented that could do quantum simulations in a few minutes that take a week on a classical supercomputer, people wouldn't find economically valuable applications for it.


Not the GP, but unless you need verifiably independent noisy samples, quantum computers are too noisy to work at practical problem sizes, and afaik there's no clear path to fixing that. I wouldn't say they're 20 years away, but it's an R&D problem, not an engineering problem, so it's probably not going to be very soon.


I mean, an iPhone CPU outperforms quantum computers at almost any task you can imagine. The only exception so far is tasks designed specifically so that only quantum computers can perform them efficiently.

In 20 years this may no longer be the case, though... I think back to 2001 and how little of the modern tech scene I was able to predict then.


As someone not involved in HPC at all, but who hears things about it occasionally: is there really anything that defines a supercomputer these days? Or is a supercomputer basically just a whole bunch of servers with low-latency networking running MPI?


Supercomputers can be differentiated from server farms in three ways:

1. Performance measured in FLOPS. Think of a supercomputer as a gigantic graphics card: the bandwidth between the nodes is incredible, and the network topology is different.

2. Problem type: lots of interaction between partial solutions.

3. Problem-solving capability instead of problem-solving capacity.

Supercomputer design is whole-system design for the capability to solve one huge problem in a short time. With just a 'bunch of servers,' adding more servers gives you more capacity to solve a larger number of limited-size problems.

A supercomputer usually has a maximum design problem size, and every aspect of the system is optimized and balanced together to reach that maximum. Even if the supercomputer is sold in pieces and can be extended, extensions only scale toward this design maximum, where the capability peaks. Installing two maxed-out supercomputers side by side does not double the capability for most problems.

In this way a supercomputer is like a single computer: you can add components until you reach its limits, and if you need to solve bigger problems, you build a bigger supercomputer from scratch.
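
To make the capability-vs-capacity distinction concrete, here's a toy Python sketch (my own made-up model with invented constants, not a description of any real machine): the speedup of one tightly coupled job saturates and then falls as communication overhead grows, while the throughput of many independent jobs keeps scaling linearly.

  # Toy strong-scaling model; all constants are made up for illustration.
  def coupled_speedup(nodes, serial_fraction=0.02, comm_cost=0.001):
      """Amdahl-style speedup with a communication penalty that grows with node count."""
      parallel_time = (1 - serial_fraction) / nodes
      comm_time = comm_cost * nodes  # all-to-all style overhead
      return 1.0 / (serial_fraction + parallel_time + comm_time)

  def independent_job_throughput(nodes, jobs_per_node=1):
      """Independent jobs scale linearly: more capacity, not more capability."""
      return nodes * jobs_per_node

  for n in (8, 64, 512, 4096):
      print(n, round(coupled_speedup(n), 1), independent_job_throughput(n))
  # The coupled speedup rises, peaks, and then degrades as nodes are added
  # (the "capability" ceiling), while throughput just keeps growing.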


I guess I can imagine how the topology can limit expansion after a certain point. What sort of speedup are we talking about though? Say I have some HPC code that takes a week to run on a purpose built supercomputer. If I threw a bunch of servers together, with the appropriate HW to do RDMA or whatever's needed, how much slower are we talking? 10x slowdown? 100x? I realize that's probably wildly under-specified, I'm just trying to get a sense of the scale of the difference.


I'll give a concrete example. Around 2000 I was running molecular dynamics simulations. The goal is to get the longest trajectory you can in a reasonable time. You can add more processors and speed up the simulation, but at some point adding more processors doesn't speed things up unless you can also speed up the network, because there is a computational barrier you can't cross until the network delivers data to the other nodes (btw, this is the same pattern as allreduce in Horovod).
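
For anyone who hasn't seen this pattern, here's a bare-bones mpi4py sketch of the kind of loop involved (toy sizes and a made-up "energy", not the actual code I ran): every rank does its local work, then blocks on an allreduce before the next timestep can start, so the interconnect sets the pace.

  from mpi4py import MPI  # run with e.g.: mpirun -n 8 python md_toy.py
  import numpy as np

  comm = MPI.COMM_WORLD

  # Each rank owns a slice of the atoms (toy sizes).
  local_positions = np.random.rand(10_000, 3)

  for step in range(1_000):
      # Local work: compute forces and move this rank's atoms (faked here).
      local_positions += 1e-4 * np.random.rand(10_000, 3)

      # A global quantity every rank needs before the next step:
      local_energy = np.array([local_positions.sum()])
      total_energy = np.zeros_like(local_energy)
      comm.Allreduce(local_energy, total_energy, op=MPI.SUM)
      # Every rank stalls here until the reduction finishes, so a slow
      # network caps the speedup no matter how many nodes you add.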

I had access to the fastest supercomputer at the time (a T3E with 64 nodes) as well as a small linux cluster.

The speedup I saw on the supercomputer was 60x on 64 nodes; on my Linux cluster it was 4x on 8 nodes. All the problems were due to the slow network (10 Mbit Ethernet with TCP) and congestion (allreduce hammers the network). However, I didn't have to wait a week for my job to start running, and I could add more nodes and run different jobs in parallel.
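
Back-of-the-envelope, using those numbers, that's roughly 94% parallel efficiency on the T3E versus 50% on the Ethernet cluster:

  for name, speedup, nodes in [("T3E", 60, 64), ("Linux cluster", 4, 8)]:
      print(f"{name}: {speedup / nodes:.0%} parallel efficiency")
  # T3E: 94% parallel efficiency
  # Linux cluster: 50% parallel efficiency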

I concluded that, unless I had an MD job that couldn't fit on the Linux cluster, it was always better to use the Linux cluster instead of the supercomputer.

It really depends a lot on the simulation and how much you can adjust your system to run faster on smaller machines.


I think the scale of compute speed really muddies the waters around what we traditionally consider a supercomputer. A DGX pod is probably a supercomputer, imo.

But with some problems we're seeing that we can solve them with petabyte scale lookup tables. Is that a super computer?

I'd really like to hear if there even is a coherent definition anymore or if that line is just blurred forever.


A full DGX pod, like a full TPU pod, is an "ML supercomputer". The big caveat is that they don't typically support all CPU functionality and tend to support only smaller datatypes (float instead of double). So neither can act as a full replacement for current HPC codes, although that is a very exciting area of research.


It depends who you talk to; I would say "a supercomputer is a machine designed for high-performance computing that has capabilities several standard deviations above the mean".

There really are only about 10 supercomputers in the world at any time (the most that multiple nations can afford to run) and the capabilities keep being pushed up.

So no, I don't think it's just a bunch of servers with low latency network. Typically much more integration has to be done to achieve peak performance.



