
> useful and interesting dataset

I think the usefulness is limited by the fact that they haven't explained how they gathered this data. At first glance, the data looks to have some specific quirks (44% Linux users, overwhelmingly in the Finance industry) – you'd have to wonder if that's due to how the survey was conducted.


There are a couple of resources listed at the end. We gathered the information from Reddit surveys, Stack Overflow, our own surveys (with more than 5,000 respondents), job ads, and various language-popularity indexes like TIOBE.

Not exactly a journal but NASA reproduced the Cannae and EmDrive style devices and documented it:

http://ntrs.nasa.gov/search.jsp?R=20140006052

Turning 17 watts of microwave power into 40-91 micronewtons of thrust isn't exactly stunning, though.


from the wikipedia ion thruster article:

> Ion thrusters have an input power spanning 1–7 kilowatts, exhaust velocity 20–50 kilometers per second, thrust 20–250 millinewtons and efficiency 60–80%.[1][2]

Three orders of magnitude less thrust in exchange for not needing to accelerate any fuel all the way into orbit? I'll tell you, space companies will send trucks full of money to anyone who can make a reliable one.
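The raw thrust figures are indeed about three orders of magnitude apart, but it's worth comparing thrust per watt too. A quick sketch of the arithmetic, using the best-case numbers quoted above (the exact figures vary across the reported tests):

```go
package main

import "fmt"

func main() {
	// Thrust-to-power ratios in micronewtons per watt.
	// EmDrive best case from the NASA report above: 17 W → 91 µN.
	// Ion thruster best case from the Wikipedia quote: 7 kW → 250 mN.
	emDrive := 91.0 / 17.0            // µN/W
	ionThruster := 250_000.0 / 7000.0 // µN/W

	fmt.Printf("EmDrive:      %.1f µN/W\n", emDrive)     // ~5.4
	fmt.Printf("Ion thruster: %.1f µN/W\n", ionThruster) // ~35.7
	fmt.Printf("Ratio:        %.0fx\n", ionThruster/emDrive)
}
```

So on a per-watt basis the gap is under one order of magnitude, which is why the propellant-free claim (if it held up) would matter so much.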


Another data point: my kitchen microwave has been getting 32 millimeters to the hot pocket.

There are a couple of approaches.

One involves using a YAG laser to burn the floaters:

http://www.eyefloaters.com/index.php?option=com_content&task...

Sadly, it's done rarely enough that it could be snake oil or in-progress research.

A full vitrectomy is possible (replace the vitreous humour fluid in the eye):

http://www.allaboutvision.com/conditions/vitreoretinal-proce...

But this is dangerous enough that it's almost never performed simply for floaters.


I'll pass on both of them. Thanks for the info!

Yeah, that's a terrible title. The article's conclusion is less dramatic (to the point of being almost a different topic):

> unless the Go Language moves quickly towards the Gem, npm, or pip model or starts to have a user experience similar to Vagrant’s, I feel like a large number of users might be demotivated by the high entry-level barrier.

-----


Everybody saw the crash coming. It was clearly a bubble: the Shanghai Stock Exchange went up 250% in 12 months while every indicator (company profits, manufacturing prices, etc.) was showing bad news.

https://au.finance.yahoo.com/q/bc?s=000001.SS&t=5y&l=on&z=l&...

Stock exchanges don't double in price year over year on bad news. That's just stupid.

I'm surprised the SSE has only fallen 30%. It still looks like it needs to lose another 30% (it's still up nearly 100% since early last year).

The only reason the entire world isn't shorting Chinese stocks is that it's tricky to fight deliberate market manipulation by the Chinese government. Although I don't doubt this crash has made a few people very rich.

-----


Seriously. The average P/E ratio for a stock on the mainland indexes was 84. For a frame of reference, Facebook's is around 78. There's no way that could last.

-----


Wow. If that was the average, that's insane. Anybody investing in that market was gambling on the hype keeping it going.

-----


Textbook bubble.

When widows and orphans are borrowing money to gamble because they've been told it's a sure thing, it's time to get out.

-----


Here is my source, btw: http://www.bloomberg.com/news/articles/2015-06-16/china-bubb...

-----


Came to say basically exactly this. The only people who didn't see it coming are the Chinese "aunties" who put everything in without any real knowledge or trustworthy advice. There are a lot of people who lost a lot of money in China this way. That's the danger of armchair expertise, I suppose.

-----


You can express them as a signal folded over time.

-----


The absence of async IO in the standard library in Rust 1.0 is probably an artefact of Rust's original intention to use M:N green threads and blocking IO based on libuv:

https://www.reddit.com/r/rust/comments/1v2ptr/is_nonblocking...

The timing of the move away from green threads didn't really offer enough time to implement a stable async IO option before 1.0.

-----


PyPy is on the graph (dotted mustard line between Javascript and Ruby).

-----


It seems to have been added after I made the comment.

-----


No. https://web.archive.org/web/20150423220114/https://bjpelc.wo...

-----


You've changed topic from code points to grapheme clusters. Rust's character/string support is strictly for code points (the documentation is fairly clear about the distinction).

Few string libraries actually deal with grapheme clusters as the native underlying representation (Swift being a notable exception).

-----


The broader point I'm making is that Unicode is hard, and attempts to simplify it by choosing a different encoding (e.g. switching to UTF-32 to save yourself from all problems) are a bit misguided.

-----


Any situation where you have M userspace jobs running on N system threads, i.e. the number of tasks is different from the number of system threads.

Normally this occurs because you're running a large number of "green" threads on your own scheduler, which schedules them onto a thread pool underneath. This is good if all your threads are small/tiny, since userspace thread creation is cheaper than creating an OS thread. But if your jobs are long-lived, your userspace job scheduler is really just adding scheduling overhead on top of the overhead the OS already has for thread scheduling, and you would have been better off with OS threads. And if your M:N threading requires separate stack space for each job, there can be a sizeable memory overhead (this is one reason Rust abandoned M:N threading).

-----


Can you come up with some examples of when this would begin to be noticeable to an end-user?

-----


If you're crossing the FFI boundary a lot, any overhead adds up quickly. For example, drawing a bunch of small objects using Skia, performing lots of OpenGL draw calls, allocating LLVM IR nodes, or calling a C memory allocator…

-----


One of the nice things about M:N is that it decouples concurrency from parallelism. That is, your application can spawn as many inexpensive green threads as the task requires without worrying about optimizing for core count or capping thread counts to avoid overhead. As of Go 1.5, the underlying system thread count defaults to the system CPU core count.

-----


It's noticeable to the end-user only through its negative performance implications in certain situations: things run slower than they otherwise would on the same hardware. It's a low-level construct and isn't directly noticeable to the user either way. The negative performance implications show up largely under heavy load; the post you replied to gave some more specific situations.

-----
