I think there should be an Uber competitor that really helps cities build their own clone effectively.
If that's possible politically, it would be interesting: the cities would have the regulatory power to win against Uber, this very critical business that might just be the future of transport would be in public hands and hopefully guided toward the good of the community, and drivers would be compensated fairly.
"The continuing trend towards fewer people being employed in manufacturing, and greater automation of service jobs, will continue: our current societal model, whereby we work to earn money with which to buy the goods and services we need may not be sustainable in the face of a continuing squeeze on employment. But since when has consistency or coherency or even humanity been a prerequisite of any human civilization in history? We'll muddle on, even when an objective observer might look at us and shake her head in despair."
He wrote very little about that subject, although there's a decent likelihood that will be the issue (together with AI, VR, and possibly medical innovation) that makes 2034 very different from our time.
I'd guess because Stross tends to be a very pragmatic / utilitarian sci-fi writer.
We suspect it's going to be a thing, but nobody really knows what the death of human employment will mean, or if it will really happen. If it does, it's probably much more of a black swan than "the internet of things", but as he noted with cars, sometimes things you discount end up revolutionizing the world, and stuff you thought was a killer because of short-range performance ends up bland in the long run. It's kind of like stocks: everybody wants to buy FB or Google after they change the world, but it's a whole other story to figure that out beforehand.
You can guess that the death of employment for pay will be a huge event, but to say much more you have to start committing to one of many possible paths beyond your event horizon. Maybe a revolution as robots steal our jobs, maybe boring and mostly like today, maybe endless freedom to create, or maybe kind of pointless (because we're all VR slaves or some such thing). The slope's there, but we can't see over the hill.
Aside: I dig that the latter half reads like a love letter to one of my favourite, and one of the more durable, languages out there. Almost 30 years and Perl's still quietly chugging along.
The FAA's goal in life is to ensure safety in the air, and drones are obviously within its field of interest. When commercial drones take off, there will obviously be a lot more drones in the air than the recreational drones and toy aircraft of today, since there is a lot of money to be made, while comparatively few people fly things as a hobby.
> I don't see something as potentially risky as drones running around unregulated.
Really? How about people hitting baseballs with bats, unregulated? That's a heck of a lot more dangerous than most hobbyist drones, unless perhaps you're deliberately trying to do harm (in which case regulations don't matter anyway).
It's risk-assessment time. How many people are hitting baseballs with unregulated bats today versus the number of drones that will be flying around cities delivering things all the time (and bumping into people on the sidewalks)? It takes little imagination to see that once drones are accepted as an efficient method of transportation, companies will want to use them as much as possible, possibly creating a situation where we might want them regulated. Perhaps it's hard to see this happening from the few examples of hobbyist usage we have today, but that is probably going to change (if they're accepted, etc.).
This downvote pattern on HN is really interesting. Instead of discussing ideas, people seem to downvote based on whether they agree or not. An argument's validity is usually dismissed if it goes against someone's view. And, of course, serious replies rarely follow. Talk about the SV bias bubble :)
Heavy Prime users? But why? They are already locked into the e-commerce system. I am sure they are not trying to compete with Apple or Samsung but want to increase adoption of amazon.com. Which means they want non-Prime users to use this phone.
> And because the JVM does global stop-the-world garbage collection, which makes soft real-time implausible because of the unpredictability of GC affecting your actors. Erlang has per-process heaps.
Not quite. Every environment needs some shared-memory semantics. In Erlang that's done with ETS tables, which don't undergo GC at all. Java's GCs are so good now that, if shared data structures are used sparingly, they would still give you better performance than BEAM. Plus, you have commercial pauseless GCs, as well as hard real-time JVMs.
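On the JVM, that "shared structures, used sparingly" pattern is just a concurrent collection sitting next to otherwise share-nothing workers. A minimal sketch, loosely in the spirit of an ETS table (the `SharedTable` name and API are mine, not a real library):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One shared table next to share-nothing workers: each worker keeps
// its own state and only reaches into this map for data that
// genuinely has to be shared across them.
final class SharedTable {
    private final Map<String, Long> table = new ConcurrentHashMap<>();

    void put(String key, long value) { table.put(key, value); }

    Long get(String key) { return table.get(key); }

    // Atomic read-modify-write, so workers never need a global lock.
    long increment(String key) {
        return table.merge(key, 1L, Long::sum);
    }
}
```

The design point is that contention (and GC pressure) stays confined to this one structure, rather than being spread across every message exchanged between workers.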
> Basically the Erlang VM was created for this use case while the JVM was not, and its not something you can just add with a library. edit: Also the lightweightness of Erlang processes compared to Java threads and hot code upgrades.
The JVM is so versatile, though, that all this can actually be added as a library, including true lightweight threads and hot code swapping (see Quasar).
Nevertheless, BEAM was indeed designed with process isolation in mind, and on the JVM, actors might interfere with one another's execution more so than on BEAM, but even on BEAM you get the occasional full VM crashes. If total process isolation is not your main concern, you might find that Java offers more than Erlang, all things considered.
Java/Scala do allow you to do bad things. So add the following to Hershel's question:
"Assume developers are non-malicious and will only pass immutable objects across actor/future boundaries."
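What "only pass immutable objects" means in practice is easy to sketch in Java: all fields final, no setters, and nothing mutable reachable from the object. (`Transfer` is a made-up example type, not an Akka API.)

```java
// An immutable message: all fields final, no setters, and only
// primitives or immutable types inside. Once constructed, it can be
// handed across actor/future boundaries without any locking, because
// no thread can observe it changing.
final class Transfer {
    private final String from;
    private final String to;
    private final long amountCents;

    Transfer(String from, String to, long amountCents) {
        this.from = from;
        this.to = to;
        this.amountCents = amountCents;
    }

    String from() { return from; }
    String to() { return to; }
    long amountCents() { return amountCents; }
}
```

The catch, as the comment notes, is that nothing in Java/Scala forces this; a message holding a mutable array or collection silently reintroduces shared mutable state.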
Also, I'm not that familiar with Erlang's memory model, so I might be wrong on this. But as far as I'm aware the memory for a message in Erlang is shared between threads - it's only local variables that use private memory. This means Erlang will also need some sort of concurrent garbage collector - does Erlang's version not stop the world, or at least the messaging subsystem?
> But as far as I'm aware the memory for a message in Erlang is shared between threads
Yes and no. Some large binaries (a specific Erlang data type that can, say, represent a packet or a block of data from disk) will be shared and reference-counted when passed between processes instead of copied. They are immutable, just like most data types in Erlang. These binaries have their own GC algorithm, so it can just take longer sometimes for them to be reclaimed. But it seems all that could presumably be done via atomic updates to counters and references.
In general, most messages are copied on send, which keeps the GC implementation very simple. On another level, because data in Erlang is immutable, the fact that messages get copied is also an implementation detail! One could conceive of another VM implementation that only passes references to immutable data on message send (well, minus when it sends to another machine, of course). But that would make GC a bit trickier, just like in the case of those binaries.
The large-binary GC is actually pretty simple too: shared binaries are refcounted, and the references live in the process heap. When the references are GCed from the process, the shared binary can be freed. The reason it sometimes takes a long time to free is that some kinds of processes get references to a large number of binaries but don't trigger a process garbage collection, leaving lots of binaries allocated in the shared space. Garbage collection for a process is only automatically triggered when the process heap would grow, so there are some common cases that result in bad behavior:
- processes that don't generate much garbage on their heap but do touch a lot of binaries (often this is request routing);
- processes that grow their heap to some large size doing one kind of work, then switch to another kind that doesn't use much heap space, leaving a long time between GCs;
- processes that touch a lot of binaries but then don't do any processing for a long time (maybe a periodic cleanup task).
Another common issue is holding a reference to a small part of a large shared binary.
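The refcounting mechanism itself can be sketched in a few lines of Java. This is a toy model of the idea only, not BEAM's actual implementation; in the real VM the `release` side is driven by process GC, which is exactly why reclamation can lag:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of a refcounted shared binary: each process that receives
// the binary retains it, and the payload becomes reclaimable once the
// last reference is released.
final class SharedBinary {
    private final byte[] payload;
    private final AtomicInteger refs = new AtomicInteger(1);

    SharedBinary(byte[] payload) { this.payload = payload; }

    // Called when the binary is passed to another process.
    SharedBinary retain() { refs.incrementAndGet(); return this; }

    // Returns true when this release dropped the last reference,
    // i.e. the payload may now be freed.
    boolean release() { return refs.decrementAndGet() == 0; }

    int refCount() { return refs.get(); }

    int size() { return payload.length; }
}
```

Note how a process that never runs GC simply never calls `release`, so the binary stays allocated no matter how little the process actually uses it — the pathology the parent comment describes.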
That's exactly right. Erlang has per-actor heaps, and its garbage collector only stops the actors that are being garbage collected. This is highly concurrent and a great property to have when you're trying to keep your response times low.
Hopefully with the upcoming Spores feature in Scala, Akka will be able to enforce message immutability in some form. I was at Scala Days last week, and the developer behind Spores gave a great talk on the sort of immutability guarantees the feature will allow. Worth watching once it's posted online.
Even just marveling at the complexity and how they got it working.
Otherwise, besides those tricks, how would you do it when you have multiple threads accessing objects on a shared heap?
Erlang's VM is another wonderful piece of engineering. Each little process lives in its own memory heap, so pauseless garbage collection becomes trivial. It has many other really cool and unique features (hot code reloading, inter-node distribution, the ability to load C code, etc.).
Well, I wouldn't say "rolling over" to the new process is exactly simple, but it is a good trick. Forking has its interesting dark corners that have to be handled: inherited file descriptors, what happens to threads, signals, and so on.
Akka certainly does. At the company I work for all our heavy lifting is done by an Akka cluster running on EC2.
We found that the difficulty didn't lie in getting Akka to work in a clustered fashion (that was simple), but rather in architecting our backend's work-distribution mechanics so as not to overload any given node. I blogged about our experience with Akka: http://blog.goconspire.com/post/64130417462/akka-at-conspire...
Erlang is not just classes with a thread and a queue attached; anyone can do that. It is also fault tolerance. How many of the actor libraries support creating large numbers of processes with isolated heaps? How many have completely concurrent, pause-less garbage collectors?
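To make the "anyone can do it" part concrete, here's roughly what that trivial baseline looks like in Java — a hypothetical `NaiveActor`, not any library's API. It gets you a mailbox and sequential message handling, and nothing else on the list above:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// A naive "actor": one thread draining one mailbox. Unlike an Erlang
// process, it shares the heap with every other actor, so a leak or a
// runaway handler here affects the whole JVM.
final class NaiveActor<M> {
    private final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();

    NaiveActor(Consumer<M> handler) {
        Thread worker = new Thread(() -> {
            try {
                while (true) handler.accept(mailbox.take());
            } catch (InterruptedException e) {
                // Interrupted: treat as shutdown.
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    void send(M message) { mailbox.add(message); }
}
```

What's missing is the point of the comment: no isolated heap, no per-actor GC, no supervision or restarts, and one OS thread per actor rather than a few KB per process.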
The closest abstraction is probably OS processes + IPC; then you get closer to the spirit of it. The Chrome browser and other software take that approach, and it isolates faults. But you have to do a lot more work around it, and OS processes are not exactly lightweight; Erlang processes are only a few KB of memory each.
The scheduler is preemptive. The JVM doesn't have a preemptive scheduler, so there are many situations where this is a huge plus.
Erlang's processes are just threads pretending to be processes, and they spawn much faster than Akka's actors.
I believe there are a few articles on Akka's actor limitations versus Erlang's. I haven't delved deep into this, but there are caveats with receiving messages and how to handle them, caveats Erlang doesn't have.
Of course you can say Scala is built with concurrency in mind, same with Clojure. But the underlying gears of the JVM were not, compared to Erlang's.
There are trade-offs between Erlang's VM and Java's VM. And if your requirement is a perfect match for either Erlang or Java, you might as well pick the better fit, because coding against what the tool was intended for is just for people who enjoy pain and frustration.
The value proposition of Erlang is a great deal more than "we have actors". Actors and message passing are "symptoms" of being built from the core out for fault-tolerant networked multiprocessing. If you need that, you should prefer Erlang.
It's very unlikely that commoncrawl.org will have access to full-text papers, which mostly sit behind expensive library/university subscriptions.
Before Scholar Ninja reaches the maturity of a version 1.0, though, we will be seeding the network with as many sources as we legally and technically can, with a strong focus on properly licensed open-access content.
The short answer is, both. There's a smaller core team that works more or less full time on this, and a lot of other part time volunteers. Personally, I volunteered temporarily on one of their research trips only (I'm unsure if I'm counted in the "over 100 volunteers" figure, though).
Edit: I forgot to respond to the second part. It is also very complicated: it has already required a lot of research that hasn't been done before, and a lot of angles need to be considered, not just in various fields of physics and chemistry but also biology, economics, and law, to mention a few.
YZF, why can't we start from an optimized FPGA, i.e. small memory blocks spread all around with massive bandwidth and low latency, and find a way to give the CPU decent enough access to all that memory?
And yes, I know that the CPU will be the bottleneck, but it will be the bottleneck anyway.
I think it boils down to various constraints. If you want high bandwidth and low latency, you need to be physically close on the chip. Presumably an existing chip is already optimized given those constraints, and adding another component means you need to trade something else off.
The other thing I've seen, which may or may not apply to the Intel case, is that complexity in chip design can be managed more easily by having blocks that connect to standard interfaces. If you look inside the Xeon, it probably looks like a bunch of different chips thrown onto the same die with some standard interconnects. Most of the optimization effort goes inside those blocks, e.g. inside a single core, and it's a lot more difficult to add an FPGA close to the core than to just throw it somewhere else on the chip. That is, the number of engineers at Intel who are intimately familiar with the innards of the x86 core design and are capable of making those sorts of changes is probably much, much lower than the number who are capable of throwing an external "block" onto the die and tying it into a standard bus.