Hacker News | hershel's comments

I think there should be an Uber competitor that really helps cities build their own clones effectively.

If that's politically possible, it would be interesting: cities would have the regulatory power to win against Uber, this very critical business that might just be the future of transport would be in public hands and hopefully guided toward the good of the community, and drivers would be compensated fairly.

But getting the politics right is very difficult.

-----


"The continuing trend towards fewer people being employed in manufacturing, and greater automation of service jobs, will continue: our current societal model, whereby we work to earn money with which to buy the goods and services we need may not be sustainable in the face of a continuing squeeze on employment. But since when has consistency or coherency or even humanity been a prerequisite of any human civilization in history? We'll muddle on, even when an objective observer might look at us and shake her head in despair."

He wrote very little about that subject, although there's a decent likelihood that it will be the issue (together with AI, VR, and possibly medical innovation) that makes 2034 very different from our time.

-----


I'd guess because Stross tends to be a very pragmatic / utilitarian sci-fi writer.

We suspect it's going to be a thing, but nobody really knows what the death of human employment will mean, or if it will really happen. If it does, it's probably much more of a black swan than "the internet of things", but as he noted with cars, sometimes things you discount end up revolutionizing the world, and stuff you thought was a killer because of its short-range performance ends up bland in the long run. It's kind of like stocks: everybody wants to buy FB or Google after they change the world, but it's a whole other story to figure that out beforehand.

You can guess that the death of employment for pay will be a huge event, but to say much more you have to start committing to one of many possible paths beyond your event horizon. Maybe a revolution as robots steal our jobs, maybe boring and mostly like today, maybe endless freedom to create, or maybe kind of pointless (because we're all VR slaves or some other such thing). The slope's there, but we can't see over the hill.

Aside: I dig that the latter half reads like a love letter to one of my favourite, and one of the more durable, languages out there. Almost 30 years and Perl's still quietly chugging along.

-----


Here they describe it in detail:

http://ertos.nicta.com.au/research/l4.verified/proof.pml

Basically they plugged a lot of the possible holes, but it's not done yet.

-----


Even if they lose, won't the laws change to give them these powers? I don't see something as potentially risky as drones running around unregulated.

-----


The laws already changed to give them these powers - the 2012 FAA Modernization Act authorizes the FAA to make rules about UAS.

However, the FAA's rulemaking abilities are limited by the Administrative Procedure Act (APA) and they haven't made any yet.

So, even if the FAA's appeal fails, there will be a very short window (probably a year or less) before drones are explicitly regulated on less shaky footing.

-----


> The laws already changed to give them these powers - the 2012 FAA Modernization Act authorizes the FAA to make rules about UAS.

So claims the FAA. I can't make my way through the text of the law, nor can I find any legal summary or analysis other than the FAA's own claim that it gives them the authority to regulate UAS.

-----


Let me save you the trouble. As a lawyer familiar with reading legislative text, I can tell you that it clearly requires them to do rulemaking about UAS, and grants them the authority to do so.

-----


Everybody is arguing about whether they have the authority or not; the more important question is why they want to.

I find the FAA's attempt at banning anything commercial relating to drones a bit curious.

Step 1. Ban/regulate all commercial Drones.

Step 2. Require a license for commercial use.

Step 3. Profit?

-----


The FAA's goal in life is to ensure safety in the air. Drones are obviously within their field of interest. When commercial drones take off, there will obviously be a lot more drones in the air than the recreational drones and toy aircraft now, since there is a lot of money to be made, while comparatively few people fly things for a hobby.

-----


If that were the intent, they would be putting rules in place to prevent people from getting hurt, not outright banning them, which is what they're doing now.

This looks more like a move to kill an industry before it takes off, if you'll pardon the pun.

Not to mention there's nothing to regulate at the moment: nobody has been hurt by drones, and there are very few companies actually looking into them.

So the next logical question is: who would lose money if commercial drone delivery services become the norm?

-----


> I don't see something as potentially risky as drones running around unregulated.

Really? How about people hitting baseballs with bats unregulated? That's a heck of a lot more dangerous than most hobbyist drones, unless perhaps you're deliberately trying to do harm (in which case regulations don't matter).

-----


It's risk assessment time. How many people are hitting baseballs with unregulated bats today versus the number of drones that will be flying around cities delivering things all the time (and bumping into people on the sidewalks)? It takes little imagination to see that once drones are accepted as an efficient method of transportation, companies will want to use them as much as possible, thus possibly creating a situation where we might want them to be regulated. Perhaps it's hard to see this happening from the few examples we have of hobbyist usage today, but that is probably going to change (if they're accepted, etc.).

-----


This downvote pattern on HN is really interesting. Instead of discussing ideas, people seem to downvote based on whether they agree with them or not. An argument's validity is usually dismissed if it goes against someone's views. And, of course, serious replies rarely follow. Talk about an SV bias bubble :)

-----


We're probably the last population targeted by this phone. It probably targets seniors or heavy Prime users.

-----


Heavy Prime users? But why? They are already locked into Amazon's e-commerce system. I am sure they are not trying to compete with Apple or Samsung but want to increase the adoption of Amazon.com, which means they want non-Prime users to use this phone.

-----


Like the author says, it's a pretty forced explanation, but it's just to get more money out of them; the Excel table shows how much:

http://stratechery.com/2014/amazons-whale-strategy/

-----


Since there are actor libraries like Akka targeting the JVM and claiming to offer similar benefits, why should someone prefer Erlang?

-----


Because Akka can't magically patch over the JVM's shared memory model: http://doc.akka.io/docs/akka/snapshot/general/jmm.html#jmm-s...

And because the JVM does global stop-the-world garbage collection, which makes soft real-time implausible because of the unpredictability of GC affecting your actors. Erlang has per-process heaps.

Basically the Erlang VM was created for this use case while the JVM was not, and it's not something you can just add with a library.

edit: Also the lightweightness of Erlang processes compared to Java threads[1] and hot code upgrades.

[1]: http://i.imgur.com/hKMJ3HD.png
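
To make the shared-memory point concrete, here is a minimal sketch (Scala with the Akka 2.x of the time; the Job/Worker names are made up) of the kind of race that Erlang's copy-on-send semantics rule out by construction:

    import akka.actor.{Actor, ActorSystem, Props}
    import scala.collection.mutable

    // A mutable message: perfectly legal on the JVM, and exactly what the
    // linked JMM page warns about.
    class Job(val items: mutable.Buffer[String])

    class Worker extends Actor {
      def receive = {
        case job: Job =>
          // Reads the very same heap object the sender still holds.
          println(s"processing ${job.items.size} items")
      }
    }

    object SharedMemoryPitfall extends App {
      val system = ActorSystem("demo")
      val worker = system.actorOf(Props[Worker], "worker")

      val job = new Job(mutable.Buffer("a", "b"))
      worker ! job        // only the reference crosses the boundary, no copy
      job.items += "c"    // data race: mutation after send, may or may not be seen

      Thread.sleep(200)
      system.shutdown()   // Akka 2.3-era shutdown
    }

On BEAM the mutation after the send could not be observed by the receiver, because the message is copied into the receiving process's own heap.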

-----


> And because the JVM does global stop-the-world garbage collection, which makes soft real-time implausible because of the unpredictability of GC affecting your actors. Erlang has per-process heaps.

Not quite. Every environment needs some shared memory semantics. In Erlang that's done with ETS tables, which don't undergo GC at all. Java's GCs are so good now that, if shared data structures are used sparingly, they would still give you better performance than BEAM. Plus, you have commercial pauseless GCs, as well as hard real-time JVMs.
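
As a rough JVM-side analogue of that pattern (a single shared, concurrent table that actors touch sparingly while keeping the rest of their state to themselves), here is a sketch using a plain ConcurrentHashMap, not anything ETS-like:

    import java.util.concurrent.ConcurrentHashMap
    import java.util.concurrent.atomic.AtomicLong

    // One shared table for the whole VM; everything else stays actor-private.
    object SharedTable {
      private val counters = new ConcurrentHashMap[String, AtomicLong]()

      def bump(key: String): Long = {
        val existing = counters.get(key)
        val counter =
          if (existing != null) existing
          else {
            val fresh = new AtomicLong(0)
            val raced = counters.putIfAbsent(key, fresh)
            if (raced != null) raced else fresh // another thread won the race
          }
        counter.incrementAndGet()
      }
    }

Calling SharedTable.bump("requests") from any number of actors is safe; the trade-off is that this data now lives on the shared heap and is the garbage collector's problem.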

> Basically the Erlang VM was created for this use case while the JVM was not, and it's not something you can just add with a library. edit: Also the lightweightness of Erlang processes compared to Java threads[1] and hot code upgrades.

The JVM is so versatile, though, that all this can actually be added as a library, including true lightweight threads and hot code swapping (see Quasar[1]).

Nevertheless, BEAM was indeed designed with process isolation in mind, and on the JVM, actors might interfere with one another's execution more than on BEAM, but even on BEAM you get the occasional full VM crash. If total process isolation is not your main concern, you might find that Java offers more than Erlang, all things considered.

[1]: https://github.com/puniverse/quasar

-----


Java/Scala do allow you to do bad things. So add the following to Hershel's question:

"Assume developers are non-malicious and will only pass immutable objects across actor/future boundaries."

Also, I'm not that familiar with Erlang's memory model, so I might be wrong on this. But as far as I'm aware the memory for a message in Erlang is shared between threads - it's only local variables that use private memory. This means Erlang will also need some sort of concurrent garbage collector - does Erlang's version not stop the world, or at least the messaging subsystem?

-----


> But as far as I'm aware the memory for a message in Erlang is shared between threads

Yes and no. Some large binaries (a specific Erlang data type that can, say, represent a packet or a block of data from disk) will be shared and reference-counted when passed between processes, instead of copied. They are immutable, just like most datatypes in Erlang. These binaries have a specific GC algorithm, so it just might sometimes take longer for them to be reclaimed. But it seems all that could presumably be done via atomic updates to counters and references.

In general, most messages are copied on send, so implementation-wise GC is very simple. On another level, because data in Erlang is immutable, the fact that messages get copied is also an implementation detail! One could conceive of another VM implementation that only passes references to immutable data on message send (well, except when it sends to another machine, of course). But that would make GC a bit more tricky, just like in the case of those binaries.

-----


The large binary GC is actually pretty simple too: shared binaries are refcounted, and the references live in the process heap. When the references are GCed from the process, the shared binary can be freed. The reason it sometimes takes a long time to free is that some types of processes get references to a large number of binaries but don't trigger a process garbage collection, leaving lots of binaries allocated in the shared space. Garbage collection for a process is only automatically triggered when the process heap would grow, so there are some common cases which result in bad behavior:
- processes that don't generate much garbage on their heap, but do touch a lot of binaries (often this is request routing);
- processes that grow their heap to some large size doing one kind of work, but then switch to another kind that doesn't use much heap space, leaving a long time between GCs;
- processes that touch a lot of binaries but then don't do any processing for a long time (maybe a periodic cleanup task).

Another common issue is holding references to a small part of a large shared binary.

-----


That's exactly right. Erlang has per-actor heaps, and its garbage collector only stops the actors that are being collected. This is highly concurrent and a great property to have when you're trying to keep your response times low.

-----


> But as far as I'm aware the memory for a message in Erlang is shared between threads - it's only local variables that use private memory

No, the messages are truly copied: http://jlouisramblings.blogspot.dk/2013/10/embrace-copying.h...

edit: the exception being large binaries apparently

-----


Even if developers are non-malicious, they aren't infallible.

-----


Hopefully with the upcoming Spores[1] feature in Scala, Akka will be able to enforce message immutability in some form. I was at Scala Days last week and the developer behind spores gave a great talk on the sort of immutability guarantees this feature will allow. Worth watching once it's posted online.

[1] http://docs.scala-lang.org/sips/pending/spores.html
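
To sketch the kind of accidental capture Spores are meant to rule out (this uses a plain scala.concurrent.Future, not the Spores API; the example is contrived):

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    object AccidentalCapture extends App {
      var total = 0 // mutable state local to this object

      // Nothing stops this closure from silently capturing `total`; it now
      // races with the enclosing thread. A spore would force captures to be
      // declared explicitly, which is where immutability constraints can hook in.
      Future { total += 42 }

      total += 1
      Thread.sleep(100)
      println(total) // 1, 42 or 43, depending on scheduling and visibility
    }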

-----


JVMs can have a separate heap for each thread. See the Avian JVM[1] for an example of this.

[1] http://oss.readytalk.com/avian/

-----


>> JVM does global stop-the-world garbage collection

Does it? Back when I was in HFT, we were definitely running a JVM with background thread GC.

-----


Only Azul's JVM has managed to create a pauseless garbage collector. They use some pretty cool tricks.

It is really a fantastic piece of technology:

http://www.azulsystems.com/zing/pgc

It's worth marveling at the complexity and how they got it working.

Otherwise, besides those tricks, how would you do it when you have multiple threads accessing objects on a shared heap?

Erlang's VM is another wonderful piece of engineering. Each little process lives in its own memory heap, so pauseless garbage collection becomes trivial. It has many other really cool and unique features (hot code reloading, inter-node distribution, the ability to load C code, etc.).

-----


A few simple ways: put shared data in PermGen, and roll over to a new process when memory gets low (Erlang-style, but at the OS level).

-----


Well, I wouldn't say rolling over to a new process is exactly simple, but it is a good trick. Forking has its interesting dark corner cases that have to be handled: inherited file descriptors, what happens to threads, signals, and so on.

-----


Were you paying huge money for Azul? http://www.azulsystems.com/zing/pgc

-----


The Metronome GC from IBM is predictable (though not hard real-time).

-----


Is ConcurrentMarkAndSweep stop-the-world?

How hard are the limits of "soft" realtime?

-----


Erlang allows the actors to be spread across physical nodes. It's like a cluster OS, not just a language that uses an actor model.

Do any of the library actor models offer something like that?

-----


Akka certainly does. At the company I work for all our heavy lifting is done by an Akka cluster running on EC2.

We found that the difficulty didn't lie in getting Akka to work in a clustered fashion—that was simple—but rather in architecting our backend's work distribution mechanics so as not to overload any given node. I blogged about our experience with Akka: http://blog.goconspire.com/post/64130417462/akka-at-conspire...

-----


In principle, transitioning from multithreaded to distributed with Akka is just a matter of configuration. I've never put this to the test, but Akka does make this claim.
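
Roughly what that looks like with the Akka of that era (2.2/2.3-style remoting, with the akka-remote module on the classpath; the host and port are made up):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object RemoteBackend extends App {
      // The actor code itself doesn't change; the configuration is what says
      // "listen on the network" instead of "stay in-process".
      val config = ConfigFactory.parseString("""
        akka.actor.provider = "akka.remote.RemoteActorRefProvider"
        akka.remote.netty.tcp.hostname = "10.0.0.2"
        akka.remote.netty.tcp.port = 2552
      """)

      val system = ActorSystem("Backend", config)
      // Actors created from `system` are now addressable as
      // akka.tcp://Backend@10.0.0.2:2552/user/<name>
    }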

-----


Microsoft Orleans, if you are ready to use the cloud.

-----


Erlang is not just classes with a thread and a queue attached to them; anyone can do that. It is also fault tolerance. How many of the actor libraries support creating large numbers of processes with isolated heaps? How many have completely concurrent, pauseless garbage collectors?

The closer abstraction is probably OS processes + IPC; then you get closer to the spirit of it. The Chrome browser and other software take that approach: it isolates faults. But you have to do a lot more work around it, and OS processes are not exactly lightweight, while Erlang processes are only a few KB of memory each.
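
For contrast, the "thread plus a queue" part really is easy to hand-roll on the JVM. A sketch like the one below (illustrative only) gives you a mailbox and a worker thread, but no isolated heap, no preemption, and no supervision when the handler throws:

    import java.util.concurrent.LinkedBlockingQueue

    // The naive "actor": a queue drained by a dedicated thread. It shares the
    // process heap with everything else, and the loop simply dies if `handler`
    // throws; nobody restarts it.
    class PoorMansActor[M](handler: M => Unit) {
      private val mailbox = new LinkedBlockingQueue[M]()

      private val worker = new Thread(new Runnable {
        def run(): Unit = while (true) handler(mailbox.take())
      })
      worker.setDaemon(true)
      worker.start()

      def !(msg: M): Unit = mailbox.put(msg)
    }

    object PoorMansActorDemo extends App {
      val logger = new PoorMansActor[String](line => println(s"got: $line"))
      logger ! "hello"
      Thread.sleep(100) // let the daemon worker drain the mailbox before exit
    }

Cheap spawning, supervision trees, and per-process heaps are exactly the parts this toy version leaves out.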

-----


> The closer abstraction is probably OS processes + IPC.

The closer abstraction is probably more like a distributed, fault-tolerant OS + processes using network-transparent IPC.

-----


The scheduler is preemptive. The JVM doesn't have a preemptive scheduler, so there are many situations where this is a huge plus.

Erlang's processes are just threads pretending to be processes, and they spawn much faster than Akka's actors.

I believe there are a few articles on Akka actors' limitations versus Erlang's. I haven't delved deep into this, but there are caveats around receiving messages and how to handle them that Erlang doesn't have.

Coding in a language that isn't built with actors and concurrency in mind is a huge pain in the butt. Think of JavaScript and Node.js and callback hell, which of course has pushed JavaScript to adopt things such as futures and so on.

Of course you can say Scala is built with concurrency in mind, same with Clojure. But the underlying gears, the JVM, were not, compared to Erlang's.

There are trade-offs between Erlang's VM and Java's VM. If your requirements are a perfect match for either Erlang or Java, you might as well pick the better fit, because coding against what a tool was intended for is just for people who enjoy pain and frustration.
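
A small sketch of the scheduling point (Akka 2.x, with a deliberately tiny one-thread dispatcher configured inline; all names are made up): an actor that never finishes its current message starves its neighbour, because the JVM will not preempt it the way BEAM's reduction-counting scheduler would:

    import akka.actor.{Actor, ActorSystem, Props}
    import com.typesafe.config.ConfigFactory

    class Spinner extends Actor {
      def receive = {
        case "go" => while (true) {} // never yields the dispatcher thread
      }
    }

    class Echo extends Actor {
      def receive = { case msg => println(s"echo: $msg") }
    }

    object StarvationDemo extends App {
      // One shared thread for both actors, to make the effect obvious.
      val config = ConfigFactory.parseString("""
        tiny-dispatcher {
          type = Dispatcher
          executor = "thread-pool-executor"
          thread-pool-executor {
            core-pool-size-min = 1
            core-pool-size-max = 1
          }
          throughput = 1
        }
      """)

      val system = ActorSystem("demo", config)
      val spinner = system.actorOf(Props[Spinner].withDispatcher("tiny-dispatcher"))
      val echo    = system.actorOf(Props[Echo].withDispatcher("tiny-dispatcher"))

      spinner ! "go"
      echo ! "hello" // in all likelihood never printed: the busy loop owns the only thread
    }

On BEAM, an equivalent tight loop burns through its reduction budget and gets suspended, so the other process still runs.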

-----


The value proposition of Erlang is a great deal more than "we have actors". Actors and message passing are "symptoms" of being built from the core out for fault-tolerant, networked multiprocessing. If you need that, you should prefer Erlang.

-----


What about http://commoncrawl.org/? Why not use it?

-----


It's very unlikely that commoncrawl.org will have access to full-text papers, since that access is mostly based on expensive library/university subscriptions.

Before Scholar Ninja reaches the maturity of version 1.0, though, we will be seeding the network with as many sources as we legally and technically can, with a strong focus on properly licensed open access content.

-----


Great project. I do wonder, though: why are 100 people involved? Is it very complicated, or is it just the fact that they volunteer, so they can't give it their full attention?

-----


The short answer is, both. There's a smaller core team that works more or less full time on this, and a lot of other part time volunteers. Personally, I volunteered temporarily on one of their research trips only (I'm unsure if I'm counted in the "over 100 volunteers" figure, though).

Edit: I forgot to respond to the second part. It is also very complicated - it has already required a lot of research that hasn't been done before, and a lot of angles need to be considered, not just in various fields of physics and chemistry but also in biology, economics, and law, to mention a few.

-----


Teachers would probably have bigger problems.

According to Clay Christensen, the guy behind "The Innovator's Dilemma", more than 50% of colleges will go bankrupt soon (by 2020-2024).

-----


I found that pretty startling, so I googled and found this article:

http://www.bloomberg.com/news/2014-04-14/small-u-s-colleges-...

-----


YZF, why can't we start from an optimized FPGA - i.e. small memory blocks spread all around, with massive bandwidth and low latency - and find a way to give the CPU decent enough access to all that memory?

And yes, I know that the CPU will be the bottleneck, but it will be the bottleneck anyway.

-----


I think it boils down to various constraints. If you want high bandwidth low latency you need to be physically close on the chip. Presumably an existing chip is already optimized given those constraints and adding another component in means you need to trade something else off.

The other thing that I've seen, which may or may not apply to the Intel case, is that complexity in chip design can be managed more easily by having blocks that connect to standard interfaces. I.e., if you look inside the Xeon it probably looks like a bunch of different chips that were thrown onto the same die with some standard interconnects. Most of the optimization effort goes inside those blocks, e.g. inside a single core, and it's a lot more difficult to add an FPGA closer to the core vs. just throwing it somewhere else on the chip. That is, the number of engineers at Intel who are intimately familiar with the innards of the x86 core design and are capable of making these sorts of changes is probably much, much lower than the number who are capable of throwing some external "block" onto the die and tying it into a standard bus.

-----
