Hacker News | sampo's comments

The default Realtek wifi card in the T440/T440s (maybe also some other models) does not yet have a stable Linux driver; the wifi keeps dropping. (The most recent driver from GitHub is a bit better - it only drops a couple of times per hour - while the default driver in Ubuntu 14.10 drops within 5-10 minutes.)

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1239578

An Intel card is only about $20 off eBay, but you need to open the laptop in order to replace it.



If you're replacing WiFi cards in ThinkPads, check which cards are supported by the unit - there's a whitelist that varies by model. I learned the hard way trying to get someone with one of the cheaper models onto 802.11a with a replacement card - the new card was used in other models, but the laptop he had only shipped with one type of card and had a correspondingly short whitelist.

You can find hacked firmwares to remove the whitelist functionality, but that depends on your willingness to run hacked firmware.



> whitelist

Jesus, that should be made illegal on laptops. MSI did that with the wireless card (I won't buy a product from them again). There is no reason I shouldn't be able to swap out what's in the mini-PCI slot.



ThinkPads traditionally wound up in specifications for sophisticated systems.

http://web.archive.org/web/20110720220124/http://www-03.ibm....



Debatable but understandable. They whitelist cards that they tested and certified the laptops with, which may actually be significant / required for some devices. The problem device for me was a ThinkPad Edge something; I believe the lists for the better systems are significantly larger.

Still annoying though.



If they don't support the 3rd-party card you put in there, who cares? I'm not going to expect them to support the wireless functionality if I substitute out the wireless card. What do they need to whitelist cards for?



Compliance - particularly the Electromagnetic Compatibility Compliance section and its additional info: http://www.lenovo.com/lenovo/us/en/compliance.html



Yes, you need to buy a card with an FRU code that is whitelisted by Lenovo.



I have an X230 that had one of those Realtek cards. It would drop a lot and be rather funky, since I run Linux 99% of the time. Then it finally died in February, so I spent $13[1] on a non-Realtek replacement. It has worked absolutely great, even with a minimalist Arch setup.

[1] http://www.amazon.com/gp/product/B008V7AAJU/ref=oh_aui_detai...



> Rust makes it easy to communicate with C APIs without overhead

Zero overhead vs. the case where you'd also have a function call in C.

But in some cases, especially with numerics, if you have a C library that is designed to be inlined, you get a slowdown even calling C from C if you prevent inlining.

A random number generator, the SIMD-oriented Fast Mersenne Twister [1], was such a case when I tried it. Just marking the call with `__attribute__((noinline))` makes it 2x slower, even when calling from C.

[1] http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/



I don't see any reason (though I'd love to be enlightened) why the Rust implementation couldn't someday inline C functions. I'd actually be a little surprised if the underlying LLVM implementation can't use LTO to do it out of the box.



As mentioned downthread, this is exactly what Servo intends to do to interop with SpiderMonkey. I'm very excited to see that project proceed.



Small functions that cannot be inlined should work across arrays that are as big as you can make them, so that the per-call overhead becomes trivial.



Can't link-time optimization help with that? I assume it's language-agnostic.



> Why wouldn't someone choose the t-mobile unlimited plan vs this one?

Someone whose suburban home has no T-Mobile 4G coverage?



A Chinese friend once told me she can't see her own nose. And the thought of us whites constantly having our nose in our field of view was funny to her.



> Another huge reason to use plain text that he didn't mention is version control.

But version control tools are designed for code, i.e. for showing which lines have been edited. With English text one would rather see which sentences have been edited. Are there tools for this? (Except MS Word's track-changes feature.)

Well, one could write one sentence per line, but that makes a pretty ugly txt document when viewed raw.



> Well, one could write one sentence per line, but that makes a pretty ugly txt document when viewed raw.

Many of the tech writers I work with advocate exactly this.

In my stuff, I just hard line wrap the text. Diffs do tend to have more spurious whitespace changes because of this than I'd like, but that's still miles better than a completely opaque binary format like Word.



Not to advocate for Word or anything, but technically it's a zip of XML and other stuff (images, etc.) that gets pulled in through ... OLE(??). VC + Markdown/LaTeX is excellent for collaboration or branching drafts.



Since I read "Semantic Linefeeds" (http://rhodesmill.org/brandon/2012/one-sentence-per-line/) I've been experimenting with breaking on punctuation. Yes, it makes the raw text look a bit odd (check the source on http://boston.conman.org/2015/04/16.1), but I've found it much easier to edit (especially when my girlfriend emails me corrections like spelling errors, typos, incorrect grammar, etc.).



For prose, this is a great alternative to the time investment of taking up a heavyweight editor (e.g. Emacs or Vim) that can be made to operate clause-by-clause or sentence-by-sentence, and I recommend it to anyone not interested in taking the plunge into "customization culture" or using the other features those programs provide. My writing, when I don't need to use Word for work (thanks to co-workers who use it for everything), tends to be done in something unobtrusive like nano or sandy[0], and looks much like the source from your second link, minus the HTML.

"Easy to edit," to take a phrase from your first link, is key.

[0]: http://tools.suckless.org/sandy



Not sure what the right way to do it is, but in principle it shouldn't be a problem. A script could make a copy of the files with one sentence per line, so you could edit the original and then use the transformed version for version control.
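A minimal sketch of such a script in Python (the function name and the exact split rule are my own choices, nothing standard):

```python
import re

def sentences_per_line(text):
    # Naive splitter: break after '.', '!' or '?' followed by whitespace.
    # Abbreviations like "Dr." will be split wrongly, but for version
    # control the split only has to be consistent, not perfect.
    return "\n".join(re.split(r"(?<=[.!?])\s+", text.strip()))
```

Run it over a copy of the file before committing, and diffs then show changed sentences rather than changed hard-wrapped lines.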



The inquisitive Lt. Function_Seven asked, "How would the script know where one sentence ends and another begins?" as he began typing his query into the Yahoo! Search toolbar.

:) I think you just made the case for bringing back the two spaces after a period rule!



FWIW, basic machine learning approaches to "sentence boundary detection" (as the task is called) get 199 out of 200 of these right (without using the "two space" clue), and have for a while. (e.g., http://sonny.cslu.ohsu.edu/~gormanky/blog/simpler-sentence-b...)



For the purpose of version control, it doesn't even have to be exact. It doesn't matter if the detector inserts an incorrect line break after a certain combination of characters, as long as it does so consistently so that it produces a readable diff.



    Ha.  You might be right.



git diff --word-diff=color

Not exactly sentence-level, but perhaps good enough for some...



http://rhodesmill.org/brandon/2012/one-sentence-per-line/



And the time it takes to make a round trip.



So, say you wanted to write a weather model, or an engineering fluid mechanics model. Which options (besides MPI) would you look at?



Chapel has been used for incompressible moving-grid fluid dynamics, so it's certainly feasible. For that problem the result was ~33% of the lines of code of the MPI version. There is a performance hit, but the issues are largely understood; if (say) a meteorological centre were to put its weight behind it, a lot of things could get done.

It's also pretty easy to see how UPC or Co-array Fortran (which is part of the standard now, so isn't going anywhere any time soon) would work. They'd fall closer to MPI in complexity and performance.

You couldn't plausibly do big 3d simulations in Spark today; that's way outside of what it was designed for. Now analysing the results, esp of a suite of runs, that might be interesting.



You cannot make an efficient fluid mechanics simulation on a 4000x4000x4000 grid if you set up a separate process for each individual grid cell. It is more efficient to just store your numbers in 3D arrays.
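To make the contrast concrete, here is a sketch of the array-based style (using NumPy, with a made-up explicit diffusion update, not any particular production code): the whole grid advances with a handful of vectorized slice operations instead of per-cell message passing.

```python
import numpy as np

def diffusion_step(u, alpha=0.1):
    # One explicit diffusion update over the interior of a 3D grid.
    # Each term is a whole-array slice, so the work is done by a few
    # large vectorized operations rather than millions of tiny processes.
    out = u.copy()
    out[1:-1, 1:-1, 1:-1] += alpha * (
        u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
        u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
        u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
        6.0 * u[1:-1, 1:-1, 1:-1]
    )
    return out
```

Each slice expression touches contiguous memory, which is exactly what caches and SIMD units are built for.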



Why not?



If you store the data in arrays, you can use matrix multiplication libraries such as Intel's MKL or OpenBLAS, which are written to be exceptionally optimized for use on multiple cores. I cannot emphasize enough how much time and effort has been put into these libraries to multiply matrices as fast as can possibly be done.

If you use processes such as in the Erlang VM, they're doing calculations, sure, but they're also sending messages back and forth, and they're acting as supervisors, and they're being shuffled around by the VM. There's a lot going on. And that extra stuff that's going on takes away from the time you could be multiplying stuff. And even then, there's been no optimization done for this sort of calculation. There are a lot of tricks you can do. Heck, the better matrix multiplication libraries have individual optimizations for CPUs.
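To illustrate the gap (a sketch, with a hypothetical `naive_matmul` for comparison): in NumPy, `a @ b` dispatches to whatever BLAS the library was built against, while the textbook triple loop below computes the same result orders of magnitude more slowly on large matrices.

```python
import numpy as np

def naive_matmul(a, b):
    # Textbook triple loop: correct, but with none of the cache blocking,
    # SIMD, or per-CPU tuning that MKL/OpenBLAS put into the same operation.
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i, j] += a[i, p] * b[p, j]
    return c
```

Timing `naive_matmul(a, b)` against `a @ b` on, say, 1000x1000 matrices makes the point immediately.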



In High Performance Computing, there are (1) 3-dimensional simulations (weather, fluid dynamics, structural mechanics, all kinds of physics simulations, like magnetic storms in space or nuclear reactors, etc.) and then there is (2) everything else, like data mining, machine learning, genomics, etc.

Some of the sparse matrix computations in structural mechanics and in some machine learning algorithms have some overlap. But mostly, group 2 has little reason to be interested in what group 1 is doing.

Now, group 2 obviously has more modern tools than the 3d-simulation community, because machine learning came into common use much later than numerical fluid mechanics.

But do 3d-simulation people also have much reason to be interested in what the machine learning people are doing?

The "machine learning / big data" people are probably not doing anything that makes a weather prediction model run faster? Or are they?



They are doing interesting things at lower capex and lower development cost. On opex: good for operations, bad for power consumption (relatively).

In terms of absolute performance, HPC is absolutely faster. In terms of bang for buck, Big Data is hands down faster. Also, in terms of accessibility, Big Data is hugely easier - I can build you a 100-core big data system for $300k.



But your Big Data system is only good for Big Data. If I want to run a weather prediction model, it is not going to help.

My point is, the big data and the physics simulation people probably do not have a lot of common interests - besides using large amounts of computing power.



There are big data people running on supercomputers too. I know there are people writing custom async job managers to handle big-data-type problems, because the top supercomputers have low memory latency.

Also, I think the dichotomy you're looking for is IO-bound vs CPU-bound problems. Although certainly there are a plethora of different kinds of IO-bound problems (async vs sync, or disk-bound vs memory-bound vs cache-bound).



That's silly: HPC has been pinching pennies since before big data was a thing, and the computer industry is a business: you get what you pay for. If you can live with GbE performance, you can drop around $2k (IB card, cables, switches) off your price. But it's not as if the hardware is any different, faster, or more accessible.



I think it's economics: GPUs are sold by the million, supercomputer interconnects by the thousands. Commodity kit is mass-produced, spreading design, VVT, and manufacturing tooling costs.

The hardware is different in terms of layout: aggregations of small cores on boards (GPUs) vs. very high speed large cores with lots of local memory; highly localised connections vs. an interconnect fabric.

And it is more accessible because it's affordable, and you can get at it in the cloud; this means that skills building is easier for more people and it also means that a wider user base is possible.



Huh, where did GPUs enter the discussion?

Anyway, for a typical HPC cluster, it's bog standard x86 hardware, the only remotely exotic thing is the Infiniband network. Common wisdom says that since Infiniband is a niche technology, it's hugely expensive, but strangely(?) it seems to have (MUCH!) better bang per buck than ethernet. A 36-port FDR IB (56 Gb/s) switch has a list price of around $10k, whereas a quick search seems to suggest a 48-port 10GbE switch has a list price of around $15k. So the per-port price is roughly in the same ballpark, but IB gives you >5 times better bandwidth and 2 orders of magnitude lower MPI latency. Another advantage is that IB supports multipathing, so you can build high bisection bandwidth networks (all the way to fully non-blocking) without needing $$$ uber-switches on the spine.



That's interesting - things may have changed with IB since I last looked.

The GPU thing seems to have fallen out of my original comment; I meant to write "I can build you a 100,000-core system for $300k" but somehow the decimal point jumped left three times! To do that I would definitely have to use GPUs...

I am seriously lusting after such a device, I feel that there is much to be done.



Fortran has not been all caps since 1991, when the Fortran 90 standard came out. Your knowledge is 24 years old.



Yeah, I know. It's just that I like programming in an old-school way for fun. I briefly considered not using structured programming at all, but that's kinda too much.

