
"Just as we learned to embrace languages without static type checking, and with the ability to shoot ourselves in the foot, we will need to embrace a style of programming without any synchronization whatsoever."

This is dangerous misinformation that is also being propagated by some managers of the "exascale" programs who seem to have lost sight of the underlying science. Some synchronization is algorithmically necessary for pretty much any useful application. The key is to find methods in which synchronization is distributed, with short critical paths (usually logarithmic in the problem size, with good constants).
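To make "short critical path" concrete, here is a toy sketch (mine, not the article's; it assumes the number of ranks is a power of two) of a recursive-doubling all-reduce. Every rank synchronizes, but the dependency chain is only log2(P) exchange rounds deep rather than P:

    // Toy recursive-doubling all-reduce: log2(P) rounds of pairwise
    // exchanges instead of funnelling everything through one rank.
    // Assumes the number of ranks P is a power of two.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double partial = static_cast<double>(rank);  // each rank's local value

        // Each round, exchange partial sums with the partner whose rank
        // differs in one bit; after log2(size) rounds every rank has the sum.
        for (int mask = 1; mask < size; mask <<= 1) {
            int partner = rank ^ mask;
            double recvd;
            MPI_Sendrecv(&partial, 1, MPI_DOUBLE, partner, 0,
                         &recvd,   1, MPI_DOUBLE, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            partial += recvd;
        }

        std::printf("rank %d: global sum = %g\n", rank, partial);
        MPI_Finalize();
        return 0;
    }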




Although this concerns only the first part of the sentence, I can say I've shot myself in the foot quite a few times with statically typed languages.

-----


I think it was Rich Hickey who said something like: "What is true of every bug ever found in a program? It passed all the compiler's type checks!"

-----


Fair, but at the same time I've also had a compiler quickly draw my attention to a lot of potential errors thanks to static type checking, particularly when doing hairy refactors.

The relative strengths and weaknesses of dynamic and static languages are greatly exaggerated. Doing type checks at compile time won't make your code magically bug-free. But neither will delaying type checks until run-time free you from the shackles of the datatype hegemony. The trade-off between keystrokes and CPU cycles isn't really even all that much of a thing anymore, what with jitters closing the gap on one side of the fence, and type inference and generics closing it on the other.
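To illustrate that last point with a throwaway example (my own, nothing from the article): generics plus type inference buy back most of the brevity people associate with dynamic languages while keeping the compile-time checks.

    // Generic sum works for any container of any addable element type, and
    // `auto` keeps the call sites terse -- yet a mismatched accumulator type
    // is rejected at compile time instead of blowing up at run time.
    #include <iostream>
    #include <string>
    #include <vector>

    template <typename Container, typename T>
    T sum(const Container &xs, T init) {
        for (const auto &x : xs) init += x;
        return init;
    }

    int main() {
        auto ints  = std::vector<int>{1, 2, 3};
        auto words = std::vector<std::string>{"foo", "bar"};

        std::cout << sum(ints, 0) << "\n";               // 6
        std::cout << sum(words, std::string{}) << "\n";  // "foobar"
        // std::cout << sum(words, 0.0) << "\n";         // compile-time error
        return 0;
    }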

-----


Better than just saying, "We'll continue to use MPI at exascale!"

-----


Well, we will.

Just not only MPI (at least not without significant extensions).

-----


We'll probably end up using just MPI at exascale. I was at Supercomputing 2011, and I swear half the speakers might as well have gone up on stage, stuck their fingers in their ears, and yelled "NAH NAH NAH, CAN'T HEAR YOU, MPI IS ALL WE NEED, NAH NAH NAH".

-----


Heh. Well, the current threading models are much worse from the perspective of libraries and hierarchical memory. I work with some of the MPICH developers and members of the MPI Forum, and nobody is satisfied with the status quo, but we need a viable threading system.

Some people think we'll end up with MPI+CUDA/OpenCL; others (myself included) would generally prefer a system with in-node communicators and collectives, along with a concept of compatible memory allocation (rough sketch below).

The system-level tools are basically available in pthreads and libnuma (unfortunately Linux-only), but we're working on formulating better APIs, because the annotation-based approach of OpenMP isn't very good for long-lived threads or heterogeneous task distributions, and systems like TBB are too intrusive and still don't really address NUMA.
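Here is roughly the shape of the in-node communicator plus compatible allocation I have in mind, using the names from the MPI-3 shared-memory proposal (treat the details as provisional, not a finished API):

    // Split COMM_WORLD into per-node communicators, then allocate a window
    // in node-shared memory so ranks on the same node can use both
    // collectives on node_comm and plain loads/stores on each other's slabs.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        MPI_Comm node_comm;  // in-node communicator
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);

        int node_rank, node_size;
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        // Each on-node rank contributes a slab of the shared allocation.
        MPI_Aint bytes = 1024 * sizeof(double);
        double *my_slab;
        MPI_Win win;
        MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                                node_comm, &my_slab, &win);

        // Any rank on the node can get a direct pointer to rank 0's slab.
        MPI_Aint qbytes; int qdisp; double *slab0;
        MPI_Win_shared_query(win, 0, &qbytes, &qdisp, &slab0);

        std::printf("node rank %d/%d sees rank 0's slab at %p\n",
                    node_rank, node_size, static_cast<void *>(slab0));

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }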

-----



