Myths and legends in high-performance computing (arxiv.org)
97 points by iamflimflam1 on Jan 11, 2023 | 23 comments



> Will ML models replace or just augment traditional simulations?

It's bizarre to even pose this. Where will the data on which the ML will be trained come from? First-principles simulations might be "inefficient", but that is the price to pay for generating insights ex nihilo.

The interesting question is indeed "augmentation": can we use ML models to improve the efficiency of first-principles simulations via some sort of feedback process?


> The interesting question is indeed "augmentation": can we use ML models to improve the efficiency of first-principles simulations via some sort of feedback process?

Using Machine Learning to Augment Coarse-Grid Computational Fluid Dynamics Simulations, https://arxiv.org/abs/2010.00072


very cool.


It's been quite a while, but I have some very fuzzy memories of a lab I worked with in undergrad working on some sort of optimistic method for solving PDEs, something along the lines of speculatively projecting into a lower-dimensional space where you can tell if the projection has become inaccurate.

This was well before neural nets were hot, when the term 'machine learning' was still fresh on the horizon. I believe the problem largely revolved around discovering the best set of basis functions to project onto.

I don't remember if there were provably 'best' projections, but if not, there might be something interesting there.
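
If it helps jog anyone's memory, the flavor of thing I have in mind looks roughly like the toy below (my own illustration, not the lab's actual method; the 1-D heat equation, basis size, and step counts are all made up): build a small basis from full-order snapshots, evolve only the reduced coefficients, and watch how much of the true update escapes the basis to tell when the projection has gone stale.

    # Toy sketch (mine, not the lab's method): build a reduced basis from snapshots
    # of a 1-D heat equation, evolve only the reduced coefficients, and monitor the
    # part of the full-space update that falls outside the basis to detect drift.
    import numpy as np

    n, dt, nu = 100, 2e-5, 1.0
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    # Second-difference Laplacian (zero Dirichlet boundaries, roughly).
    A = nu * (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2

    def step(u):
        return u + dt * (A @ u)  # explicit Euler on u_t = nu * u_xx

    # Full-order run to collect snapshots, then an SVD to get a small basis.
    u = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)
    snaps = []
    for _ in range(200):
        u = step(u)
        snaps.append(u.copy())
    V, _, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
    V = V[:, :5]               # keep 5 basis vectors instead of 100 grid unknowns

    # Evolve only the 5 reduced coefficients (Galerkin projection of A).
    a, Ar = V.T @ u, V.T @ A @ V
    for k in range(1000):
        a = a + dt * (Ar @ a)
        if k % 200 == 0:
            u_full = V @ a
            # How much of the true update escapes span(V)? Large => projection is stale.
            resid = np.linalg.norm((np.eye(n) - V @ V.T) @ (A @ u_full)) * dt
            print(f"step {k}: out-of-basis residual {resid:.2e}")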


Using neural networks to solve differential equations has been a hot topic under the Physics-Informed Neural Networks banner at the last few NVIDIA GTCs. Arguably my work falls under a similar umbrella as well. So your lab was definitely ahead of the curve.
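
For anyone who hasn't seen a PINN, the core trick is small enough to sketch (this is a toy of the general idea, not my actual work and not NVIDIA's Modulus code; the problem and network are invented for illustration): the differential-equation residual goes straight into the training loss, so no simulation data is needed.

    # Minimal toy of the PINN idea: train u(x) so that u'' + pi^2*sin(pi*x) = 0
    # with u(0) = u(1) = 0, by putting the residual directly into the loss.
    import jax
    import jax.numpy as jnp

    def net(params, x):
        (w1, b1), (w2, b2) = params
        return jnp.dot(w2, jnp.tanh(w1 * x + b1)) + b2      # tiny 1-hidden-layer MLP

    def residual(params, x):
        u_xx = jax.grad(jax.grad(lambda s: net(params, s)))(x)   # autodiff gives u''
        return u_xx + jnp.pi**2 * jnp.sin(jnp.pi * x)

    def loss(params, xs):
        pde = jnp.mean(jax.vmap(lambda x: residual(params, x) ** 2)(xs))
        bc = net(params, 0.0) ** 2 + net(params, 1.0) ** 2       # boundary penalty
        return pde + bc

    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    params = [(jax.random.normal(k1, (32,)), jnp.zeros(32)),
              (0.1 * jax.random.normal(k2, (32,)), jnp.array(0.0))]
    xs = jnp.linspace(0.0, 1.0, 64)

    grad_fn = jax.jit(jax.grad(loss))
    for step in range(2000):                                     # plain gradient descent
        params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g,
                                        params, grad_fn(params, xs))
    # Exact solution is sin(pi*x); with enough steps net(params, 0.5) should approach 1.0.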


There are ML models that solve hyperbolic and parabolic PDEs pretty well, right? I'm aware of some interesting full-physics simulations that are formulated to be solved with this class of PDE.


I don't know how you would classify this as replace vs. augment. However, neural-network force fields, which replace traditional force fields in simulations, are and have been an active area of work. There are plenty of examples, but I'd say the most notable are Deep-MD from Weinan E's lab at Princeton (they even have a Dockerfile on NGC), the work of Behler and its derivatives, and, probably closer to direct replacement, TensorMol, which I've seen some recent movement on.
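
The core idea is compact enough to sketch (Behler-Parrinello flavour; this is my own toy illustration, not the actual Deep-MD or TensorMol code, and the descriptors and network sizes are invented): the total energy is a sum of per-atom networks over local-environment descriptors, and the forces fall out by differentiating that energy with respect to the atomic positions.

    # Toy sketch of a neural-network force field. Total energy = sum of per-atom
    # networks over local-environment descriptors; forces by differentiating it.
    import jax
    import jax.numpy as jnp

    def descriptors(positions, i, eta=4.0, cutoff=3.0):
        # A few toy radial symmetry functions for atom i (real codes use richer sets).
        diff = positions - positions[i]
        r = jnp.sqrt(jnp.sum(diff**2, axis=1) + 1e-12)       # eps keeps gradients finite
        mask = (jnp.arange(positions.shape[0]) != i).astype(r.dtype)  # drop the self-pair
        fc = jnp.where(r < cutoff, 0.5 * (jnp.cos(jnp.pi * r / cutoff) + 1.0), 0.0)
        return jnp.stack([jnp.sum(jnp.exp(-eta * (r - r0) ** 2) * fc * mask)
                          for r0 in (1.0, 1.5, 2.0, 2.5)])

    def atomic_energy(params, d):
        (w1, b1), (w2, b2) = params
        return jnp.dot(w2, jnp.tanh(w1 @ d + b1)) + b2       # small per-atom MLP

    def total_energy(params, positions):
        return sum(atomic_energy(params, descriptors(positions, i))
                   for i in range(positions.shape[0]))

    k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
    params = [(0.1 * jax.random.normal(k1, (16, 4)), jnp.zeros(16)),
              (0.1 * jax.random.normal(k2, (16,)), jnp.array(0.0))]
    positions = 3.0 * jax.random.uniform(k3, (8, 3))          # 8 atoms in a toy box

    forces = -jax.grad(total_energy, argnums=1)(params, positions)   # F = -dE/dR
    print(total_energy(params, positions), forces.shape)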


I guess this speaks to the way that people like binary thinking.

so "ML is cool!" -> "AI will take over all the jobs"

"Quantum computing is cool" -> "Quantum computers will replace all other computers"

In most of these cases, the new tools will take over some amount of whatever came before, but it's rarely as simple as complete replacement.

Nice article, btw.


Ironic that it's now the humans who have binary thinking, while the computers have some complicated superposition of states.


Computers have been working with superposed states since fuzzy logic was invented.


Humans have this bizarre need for everything to be binary. It's always "What is the problem?" "This thing is/does/acts X, so it is/will be Y!"

It is so damn tiring.


Current capitalist society is not structured around reducing work for everyone. Productivity can increase with new tools and technology, but we don't really seem set up to actually make that better for anyone.


Productivity is another word for "wealth creation." You're welcome to live in a 1-room hut without electricity or running water, but many people prefer otherwise. Further, many people prefer and enjoy the finer things in life, and they are welcome to work for those things. You may also choose a meager or humble life, or anywhere in between. I know I choose NOT to work 80 hours a week, even if it costs me some career development.


>You're welcome to live in a 1-room hut without electricity or running water

Not really, thanks to modern zoning laws, "blight" laws, minimum standard of habitability laws, etc.


The location of the 1-room hut is up to you.


That can be traced back to Keynes. "We must ensure there are enough jobs for everyone!" (regardless of value added). Interest rates are not set by a market process, but to maximize employment.


This paper is an interesting frame for some of the architecture announcements from Intel's recent Sapphire Rapids launch. Intel has moved strongly into accelerators, HBM, and improved interconnects. This has the effect of shrinking a whole system down to a single CPU, while also allowing many CPUs to scale together.



This basically means:

Future supercomputers will look like Frontier.

https://en.wikipedia.org/wiki/Frontier_(supercomputer)


> Since it’s late 2022, we started our analysis by asking the all-knowing intelligence ChatGPT “Create myths or legends in high performance computing”. In a HAL 9000 manner, it refused to make up something for us: “I’m sorry [Dave], but as an AI language model, I am not programmed to generate or share myths or legends. My primary function is to assist users with information and general knowledge, and I do not have the ability to create or share fictional content.”. So, even the smartest of internet parrots (Bender et al. 2021) that was itself created with massive high-performance computation running on a large accelerator system still has a long way to go.

A bad start for this paper: you didn't want ChatGPT to 'make up something' in the first place, and if it objects to making up disinformation and fake news, then don't ask it to do that. Just ask it for examples of (real) myths in high performance computing, like so:

> Please list 12 myths in high performance computing.

    I'm sorry but I am not aware of a definitive list of myths in high performance computing. However, I can give you some examples of common misconceptions or myths that people have about high performance computing:

     1. High performance computing is only for scientific research or government projects.
     2. High performance computing systems are too expensive for most businesses.
     3. High performance computing is only for experts or PhDs.
     4. High performance computing systems can only be used for one specific type of task or application.
     5. High performance computing systems require a dedicated room or facility to operate.
     6. High performance computing is limited to traditional CPU-based architectures.
     7. High performance computing requires expensive proprietary software.
     8. High performance computing requires special cooling systems.
     9. High performance computing requires complex and time-consuming system administration.
     10. High performance computing requires a lot of electricity, leading to high energy costs.
     11. High performance computing is only for solving large problems and not suited for small or medium-sized problems.
     12. High performance computing can only be done by assembling a cluster of many computers.

    Please note that most of these are not accurate, and the technology has evolved quite a bit since it is developed. So the actual scenario might be different than the myths mentioned above.

See? Was that so hard? And then you're off to the races, like so:

> Please list some common misbeliefs about HPC as of 2020, particularly with regard to the latest fads among researchers.

    Here are a few examples of common misconceptions or myths about high performance computing (HPC) as of 2020, particularly with regard to the latest trends among researchers:

     Deep Learning is the only solution to Accelerate HPC: While Deep Learning techniques have been successfully applied to accelerate certain HPC workloads, they are not the only solution. There are many other techniques, such as traditional supercomputing approaches, that can be used to accelerate HPC workloads.
     Cloud Computing can replace on-premise HPC: Cloud computing provides many benefits such as scalability and flexibility, but it also has its limitations when it comes to running HPC workloads. On-premise HPC systems can provide higher performance and lower latency, which are often critical for HPC workloads.
     All HPC workloads can be accelerated with Graphics Processing Units (GPUs): While GPUs have been very successful in accelerating certain HPC workloads, such as scientific simulations and machine learning, they may not be the best choice for all HPC workloads. Other accelerators, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), may be better suited for certain types of HPC workloads.
     AI is the new paradigm in HPC: AI is increasingly being used to solve HPC problems, but it is not the only paradigm in HPC. Many HPC applications, such as weather forecasting and drug discovery, still rely heavily on traditional supercomputing techniques.
     HPC is only for big companies and research organizations: HPC is becoming more widely adopted across a range of industries and organizations, and is not limited to large companies and research organizations. Small and medium-sized businesses and organizations can also benefit from HPC.
     HPC requires super expensive systems: HPC technology is advancing quickly, and new systems are becoming more affordable and accessible to a wider range of organizations. The cost of HPC is coming down over the years and many HPC vendors offer cloud-based solutions with pay-as-you-go models that can reduce the initial investment required.
     HPC is too complex for general usage: As the technology matures, the user-friendly interface and easy to use platform are becoming more prevalent, making it more accessible to general usage.


From the article:

Myth 1: Quantum Computing Will Take Over HPC!

Myth 2: Everything Will Be Deep Learning!

Myth 3: Extreme Specialization as Seen in Smartphones Will Push Supercomputers Beyond Moore’s Law!

Myth 4: Everything Will Run on Some Accelerator!

Myth 5: Reconfigurable Hardware Will Give You 100X Speedup!

Myth 6: We Will Soon Run at Zettascale!

Myth 7: Next-Generation Systems Need More Memory per Core!

Myth 8: Everything Will Be Disaggregated!

Myth 9: Applications Continue to Improve, Even on Stagnating Hardware!

Myth 10: Fortran Is Dead, Long Live the DSL!

Myth 11: HPC Will Pivot to Low or Mixed Precision!

Myth 12: All HPC Will Be Subsumed by the Clouds!


Myth 10 really cracks me up. I'm 99.999% sure Fortran will still be the choice for a lot of things 200 years from now. lol.


Well, Fortran is a domain-specific language. It does floating-point arithmetic for engineers in the physical sciences. It does that _really_ well. It does other things less convincingly.



