The default Realtek wifi card in the T440/T440s (and maybe some other models) does not yet have a stable Linux driver; the wifi keeps dropping. (The most recent driver from GitHub is a bit better, dropping only a couple of times per hour; the default driver in Ubuntu 14.10 drops within 5-10 minutes.)
If you're replacing WiFi cards in ThinkPads, check which cards are supported by the unit first; there's a whitelist that varies by model. I learned this the hard way trying to get someone with one of the cheaper models onto 802.11a with a replacement card: the new card was used in other models, but the laptops he had shipped with only one type of card and a correspondingly short whitelist.
You can find hacked firmwares to remove the whitelist functionality, but that depends on your willingness to run hacked firmware.
Jesus, that should be made illegal on laptops. MSI did that with the wireless card (I won't buy a product from them again). There is no reason I shouldn't be able to swap out what's in the mini-PCI slot.
Debatable, but understandable. They whitelist cards that they tested and certified the laptops with, which may actually be significant or even required for some devices. The problem device for me was a ThinkPad Edge something; I believe the lists for the better systems are significantly larger.
If they don't support the 3rd-party card you put in there, who cares? I'm not going to expect them to support the wireless functionality if I substitute out the wireless card. What do they need to whitelist cards for?
I have an X230 that had one of those Realtek cards. It would drop a lot and be rather funky, since I run Linux 99% of the time. Then it finally died in February, so I spent $13 on a non-Realtek replacement. It has worked absolutely great, even with a minimalist Arch setup.
I don't see any reason (though I'd love to be enlightened) why the Rust implementation couldn't someday inline C functions. I'd actually be a little surprised if the underlying LLVM implementation can't use LTO to do it out of the box.
> Another huge reason to use plain text that he didn't mention is version control.
But version control tools are designed for code, i.e. for showing which lines have been edited. With English text one would rather see which sentences have been edited. Are there tools for this? (Other than MS Word's track-changes feature.)
Well, one could write one sentence per line, but that makes for a pretty ugly txt document when viewed raw.
> Well, one could write one sentence per line, but that makes for a pretty ugly txt document when viewed raw.
Many of the tech writers I work with advocate exactly this.
In my stuff, I just hard-wrap the text. Diffs do tend to have more spurious whitespace changes than I'd like because of this, but that's still miles better than a completely opaque binary format like Word.
Not to advocate for Word or anything, but technically it's a zip of XML and other stuff (images, etc.) that gets pulled in through ... OLE(??). VC + Markdown/LaTeX is excellent for collaboration or branching drafts.
For the use case of prose, this is a great alternative to the time investment needed to take up a heavyweight editor (e.g. Emacs or Vim) that can be made to operate on a clause-by-clause, sentence-by-sentence basis, and I recommend it to anyone not interested in taking the plunge into "customization culture" or using the other features those programs provide. My writing, when I don't need to use Word for work (thanks to co-workers who use it for everything), tends to be done in something unobtrusive like nano or sandy and looks much like the source from your second link, minus the HTML.
"Easy to edit," to take a phrase from your first link, is key.
Not sure what the right way to do it is, but in principle it shouldn't be a problem. A script could make a copy of the files with one sentence per line, so you could edit the original and then use the transformed version for version control.
For the purpose of version control, it doesn't even have to be exact. It doesn't matter if the detector inserts an incorrect line break after a certain combination of characters, as long as it does so consistently so that it produces a readable diff.
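A minimal sketch of that script idea in Python (a hypothetical, naive splitter; real prose needs smarter boundary rules, but as noted above, consistency is what matters for readable diffs):

```python
import re

def one_sentence_per_line(text):
    # Collapse existing hard wraps into single spaces, then break
    # after ., !, or ? followed by whitespace. This mis-splits
    # abbreviations like "e.g.", but a consistent splitter is
    # enough to make line-oriented diffs show sentence-level edits.
    text = " ".join(text.split())
    return re.sub(r"(?<=[.!?])\s+", "\n", text)
```

Run the original file through this before committing, and any ordinary diff tool will show which sentences changed.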
Chapel has been used for incompressible moving-grid fluid dynamics, so it's certainly feasible. For that problem the result was ~33% of the lines of code of the MPI version. There is a performance hit, but the issues are largely understood; if (say) a meteorological centre were to put its weight behind it, a lot could get done.
It's also pretty easy to see how UPC or Co-array Fortran (which is part of the standard now, so isn't going anywhere any time soon) would work. They'd fall closer to MPI in complexity and performance.
You couldn't plausibly do big 3D simulations in Spark today; that's way outside what it was designed for. Analysing the results, though, especially of a suite of runs, might be interesting.
If you store the data in arrays, you can use matrix multiplication libraries such as Intel's MKL or OpenBLAS, which are exceptionally well optimized for use on multiple cores. I cannot emphasize enough how much time and effort has been put into these libraries to multiply matrices as fast as can possibly be done.
If you use processes such as in the Erlang VM, they're doing calculations, sure, but they're also sending messages back and forth, and they're acting as supervisors, and they're being shuffled around by the VM. There's a lot going on. And that extra stuff that's going on takes away from the time you could be multiplying stuff. And even then, there's been no optimization done for this sort of calculation. There are a lot of tricks you can do. Heck, the better matrix multiplication libraries have individual optimizations for CPUs.
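To illustrate the gap: from Python, numpy's `@` operator dispatches straight to whichever BLAS numpy was built against (MKL, OpenBLAS, etc., depending on the build), so one line gets all of that optimization. A sketch:

```python
import numpy as np

# The @ operator hands matrix multiplication to the BLAS numpy was
# compiled against, which does cache blocking, vectorization, and
# multithreading (including per-CPU kernels) under the hood.
n = 256
a = np.random.rand(n, n)
b = np.random.rand(n, n)
c = a @ b  # a single call into an optimized dgemm

# The equivalent naive triple loop computes the same result,
# typically hundreds of times slower in pure Python:
# c2 = np.zeros((n, n))
# for i in range(n):
#     for j in range(n):
#         for k in range(n):
#             c2[i, j] += a[i, k] * b[k, j]
```

The point is that none of the message-passing or supervision machinery sits between the call and the hand-tuned kernel.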
In High Performance Computing, there is (1) 3-dimensional simulation (weather, fluid dynamics, structural mechanics, all kinds of physics simulations, like magnetic storms in space or nuclear reactors, etc.), and then there is (2) everything else: data mining, machine learning, genomics, etc.
Some of the sparse matrix computations in structural mechanics and in some machine learning algorithms have some overlap. But mostly, group 2 has little reason to be interested in what group 1 is doing.
Now, group 2 obviously has more modern tools than the 3D-simulation community, because machine learning came into common use much later than numerical fluid mechanics.
But do 3D-simulation people also have much reason to be interested in what the machine learning people are doing?
The "machine learning / big data" people are probably not doing anything that makes a weather prediction model to run faster? Or are they?
They are doing interesting things for lower capex and lower development costs. On opex: good for operations, bad for power consumption (relatively).
In terms of absolute performance, HPC is absolutely faster. In terms of bang for buck, Big Data is hands down faster. Also, in terms of accessibility, Big Data is hugely easier: I can build you a 100-core big data system for $300k.
There are big data people running on supercomputers too. I know there are people writing custom async job managers to handle big-data-type problems, because the top supercomputers have low memory latency.
Also, I think the dichotomy you're looking for is IO-bound vs. CPU-bound problems. Although certainly there is a plethora of different kinds of IO-bound problems (async vs. sync, or disk-bound vs. memory-bound vs. cache-bound).
That's silly: HPC was pinching pennies before big data was a thing. And the computer industry is a business: you get what you pay for. If you can live with gigabit Ethernet performance, you can drop around $2k (IB card, cables, switches) off your price. But it's not as if the hardware is any different, faster, or more accessible.
I think it's economics: GPUs are sold by the million, supercomputer interconnects by the thousands. Commodity kit is mass-produced, spreading design, VVT, and manufacturing tooling costs.
The hardware is different in terms of the layout. Aggregations of small cores on boards (gpus) vs. very high speed large cores with lots of local memory. Highly localised connections vs. an interconnect fabric.
And it is more accessible because it's affordable, and you can get at it in the cloud; this means that skills building is easier for more people and it also means that a wider user base is possible.
Anyway, a typical HPC cluster is bog-standard x86 hardware; the only remotely exotic thing is the InfiniBand network. Common wisdom says that since InfiniBand is a niche technology it's hugely expensive, but strangely(?) it seems to have (MUCH!) better bang per buck than Ethernet. A 36-port FDR IB (56 Gb/s) switch has a list price of around $10k, whereas a quick search suggests a 48-port 10GbE switch has a list price of around $15k. So the per-port price is roughly in the same ballpark, but IB gives you >5 times the bandwidth and 2 orders of magnitude lower MPI latency. Another advantage is that IB supports multipathing, so you can build high-bisection-bandwidth networks (all the way to fully non-blocking) without needing $$$ uber-switches on the spine.
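Spelling out that comparison with the list prices quoted above (assumed figures from the comment, not current quotes):

```python
# Per-port and per-bandwidth cost, using the quoted list prices.
ib_switch, ib_ports, ib_gbps = 10_000, 36, 56     # 36-port FDR InfiniBand
eth_switch, eth_ports, eth_gbps = 15_000, 48, 10  # 48-port 10GbE

ib_per_port = ib_switch / ib_ports      # ~ $278 per port
eth_per_port = eth_switch / eth_ports   # ~ $313 per port

# Cost per Gb/s of port bandwidth: IB comes out roughly 6x cheaper.
ib_per_gbps = ib_per_port / ib_gbps     # ~ $5 per Gb/s
eth_per_gbps = eth_per_port / eth_gbps  # ~ $31 per Gb/s
```

Same ballpark per port, but a much better price per unit of bandwidth, which is the "bang per buck" claim above.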
That's interesting; things may have changed with IB since I last looked.
The GPU thing seems to have fallen out of my original comment. I meant to write "I can build you a 100,000-core system for $300k" but somehow the decimal point jumped three places to the left! To do that I would definitely have to use GPUs...
I am seriously lusting after such a device, I feel that there is much to be done.