The idea that faster parallel hardware will lead to a parallel software future is nice, but I don't really buy it. For example, the move towards cloud computing is the big thing these days, but it's completely orthogonal to that. The cost savings from cloud services come from making highly parallel hardware look like an awful lot of one- or two-core servers for dozens of clients at the same time.
In other words, the big, groundbreaking, world-shaking trend in computing isn't about running clever parallel applications on clever parallel hardware at all; instead, it's about leveraging that hardware to make good old single-threaded applications run as cheaply as possible.
Everyone's niche in the industry taints their perspective; we're clearly on opposite ends of this spectrum. Over the past 10 years, I have seen a shift from Unix/Linux x86 servers, to Beowulf clusters, to GPU-based clusters, and most recently a project with Intel Phi boards.
A common thread throughout this progression has been the lack of proper software and tools to parallelize the workload. The team is constantly on the lookout for new tools and is thrilled about upcoming technologies like Parallella.
In your field, where cheap CPU cycles are paramount, you may not see it. But for our team running the sims, your "good old single-threaded" performance has been meaningless for years.
I was going to comment on that too. The typeface made the content a pleasure to read. Matz just seems like a happy, genuine guy from all the articles and videos I've seen him in.
I see the sparse Fourier transform and other tweaks to the FFT creating decentralized P2P systems where video and audio sharing requires little bandwidth. Centralization is doomed; it's just too expensive to maintain now, with the entire world getting new devices and connecting by the millions every day. As for languages, Lisp will still be alive! :)
Re: the article font, it looks fine in the Firefox nightly build running on Debian wheezy.
Well, in the spirit of the topic, here are my counter-predictions.
I'm betting that between now and quantum computers, memristors will play a significant role, and (as I understand them) they'll push us much further toward parallel computing than multi-core and device networking would. A friend of mine believes they will behave as a network of small computing units, so he's betting on an actor model. We'll see!
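To make the actor idea concrete, here's a toy mailbox-style actor in plain Ruby (my own illustration, nothing to do with memristor hardware, and the names are made up): each actor owns a queue and a single worker thread, so its state never needs locking.

    require 'thread'

    class Actor
      def initialize(&handler)
        @mailbox = Queue.new                                   # thread-safe FIFO mailbox
        @handler = handler
        Thread.new { loop { @handler.call(@mailbox.pop) } }    # one worker per actor
      end

      def tell(msg)
        @mailbox << msg                                        # fire-and-forget message send
      end
    end

    logger  = Actor.new { |msg| puts "result: #{msg}" }
    squarer = Actor.new { |n| logger.tell(n * n) }

    (1..5).each { |i| squarer.tell(i) }
    sleep 0.1   # crude wait so the background threads can drain their mailboxes

Whether the hardware underneath is multi-core, a cluster, or something more exotic, the program only ever reasons about messages, which is why my friend likes the model.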
The book "Trillions" [1] talks a lot about computing future, and focuses very heavily on the idea of "device fungibility" with "data liquidity" - basically the idea that the computing device is insignificant and replaceable, as the computing work&data can move freely between them. When you consider how prevalent general-computing devices are-- microwaves, toasters, cars, phones, dish-washers, toys, etc-- this is pretty compelling. I highly recommend that book.
Now, I personally think localized connectivity and sync between devices, strong P2P Web infrastructure, and more powerful client participation in the network will diminish the importance of vertically scaled central services and give much more interesting experiences to boot (as things in your proximate range will figure much more prominently in your computing network). "Cloud computing" as we have it now is really just renting instead of buying. Yes, you can easily spin up a new server instance, but it's much more interesting to imagine distributing a script which causes interconnected browser peers to align under your software. Easy server spin-up? Try no server! This means users can drive the composition of the network's application software, which should create a much richer system.
Considering privacy issues, I think it's an important change. Not only is it inefficient to always participate in centralized services and public networks, it's also unsafe. P2P and localized network topologies improve that situation. Similar points can be made about network resiliency and single points of failure -- how efficient is it to require full uptime from central points vs. minimal uptime from a mesh? I imagine it depends on the complexity of decentralized systems, but I'm optimistic about it.
Along with network infrastructure and computing device changes, I think the new VR/AR technology is going to flip computing on its head. Not only do we gain much more "information surface area" - meaning we can represent a lot more knowledge about the system - but we gain a ton of UX metaphors that 2d can't do. One thing I get excited about is the "full spectrum view" of a system, where you're able to watch every message passed and every mutation made, because in the background you can see them moving between endpoints, and, hey, that process shouldn't be reading that, or, ha, that's where that file is saving to.
So TL;DR: I say the future of computing is VR/AR, peer-connective, user-driven, and massively parallel.
Right now, datacenters are just starting to fully abstract the OS from the hardware -- physical servers run "hypervisor OSes", which can host several virtual servers and live-transfer them from physical host to physical host. And the hypervisor OS does little else.
Meanwhile, virtual servers handle all of the actual software tasks. Through advanced routing, these virtual servers remain connected regardless of which physical server hosts them, and even while being moved from one physical host to another. Thanks to virtual HDDs on SANs, these virtual servers can always reach their virtual HDDs, regardless of physical device failures.
Virtualization tech has already entered browsers. And browsers are slowly becoming the entire client, as we've seen with Chromebooks. It's just a matter of time before these movements all collide, allowing virtual servers to roam freely across the spare memory and CPU capacity of all of your browser-OS-based computing devices.
This would be sort of like running Hyper-V on all of your users' Win8 desktops to host your email, directory, database, and information servers. Boom, no server!
Those who work in this area do not believe virtualization is the answer to our large distributed-system cluster woes. It works great for cloud computing utilities that take on customers, but it is unnecessary overhead for the high-performance stuff (Hadoop, MapReduce, Spark, etc.).
Interesting. Here's my prediction from my niche in the industry, where single-core CPU scaling has ground to a halt and we still lack the necessary tools to parallelize our jobs.
I see us at the beginning of this paradigm shift to multi-core. Both the tools and the theory are still in their infancy. But there are many promising advances being explored, such as GPUs, Intel Phi, new FPGAs, projects like Parallella, and yes, memristor-based neuromorphic computing.
The software side also requires new tools to drive these new technologies. I think traditional threading will be viewed as a stopgap hack and will be replaced by some form of functional, flow-based, and/or reactive programming models.
10 years from now, I see writing thread-safe apps in the same league as writing 6502 ASM code today.
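To give a rough idea of what I mean by flow-based, here's a sketch in plain Ruby (stage names and thread counts are made up for illustration): the program is written as stages connected by queues, and the threads underneath become an implementation detail.

    require 'thread'

    numbers = Queue.new
    squares = Queue.new

    # Stage 1: produce work items.
    producer = Thread.new do
      (1..10).each { |n| numbers << n }
      numbers << :done                  # sentinel to close the stream
    end

    # Stage 2: transform; could be N identical workers reading the same queue.
    squarer = Thread.new do
      while (n = numbers.pop) != :done
        squares << n * n
      end
      squares << :done
    end

    # Stage 3: consume results.
    printer = Thread.new do
      while (v = squares.pop) != :done
        puts v
      end
    end

    [producer, squarer, printer].each(&:join)

The point is that the squarer stage could become four threads, a process pool, or an offloaded kernel without the rest of the program changing, which is what I'd want from a post-threading model.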
JRuby is a very mature and widely used implementation by now. Granted, it's a bit isolated from the world of MRI libraries with compiled extensions, but the compatibility story is getting better every day. Besides, you are within arm's reach of the Java library world.
That's the point though, as sad as it is, it's not "there" if it's not drop-in, given the stranglehold MRI has on the Ruby world.
You can run most applications with JRuby, but to my knowledge it always involves fiddling and tweaking.
For a Rails application, for example, that means using the JDBC adapter, or Puma/Trinidad instead of the regular servers. I am not saying this is a proper justification, but it does keep people from testing it further.
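For instance, a Gemfile often ends up with platform-specific branches along these lines (a rough sketch; the gem choices are just common ones I've seen, not a recommendation):

    platforms :jruby do
      gem 'activerecord-jdbc-adapter'   # ActiveRecord over JDBC instead of native drivers
      gem 'jdbc-postgres'               # bundles the PostgreSQL JDBC driver
      gem 'puma'                        # threaded server; works well on the JVM
    end

    platforms :ruby do
      gem 'pg'                          # C extension, MRI only
      gem 'unicorn'                     # relies on fork(2), which JRuby doesn't support
    end

None of this is hard, but it's exactly the kind of fiddling that stops a straight drop-in swap.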
Also, I am still looking for actual, undeniable proof that using JRuby instead of MRI really does bring better scaling/performance/memory management/tooling. So far, the examples I have seen are convincing but limited to very particular use cases. That Heroku now officially supports JRuby is a big plus, I would say.
This interview was not about Ruby. I don't know why Matz always starts his computing/programming journey from 1993. It's as if nothing existed before that magical year.
<rant>
This is hard to read, at least in Firefox. Why the hell can't the author keep the font consistent throughout the article (interview)?
It's OK to print a picture or two. But 5+ images, why?
</rant>
Blog author here. I am sorry you found it difficult to read. Would you mind posting more information about your setup (OS, Firefox version)? I've just checked, and the fonts throughout should be Lucida Sans, Helvetica Neue, or Helvetica, depending on what's available on your system.
In terms of the images - they were all from the original interview article (linked in my post).
P.S. I am not sure what gave the impression that the interview was about Ruby, as "Ruby" is not mentioned in the title.
Read my comment again. I said that even though the article is not about Ruby, Matz unfailingly starts from 1993. The headings in your article before every question make it hard to read. The font size of the questions is 3x smaller than the answer font size. Top that off with the images in between the answers.
My OS is CentOS 6.1 (32-bit) and the browser is Mozilla Firefox 10.0.1.
I am sorry if this sounds aggressive, but very rarely do we get the chance to interview such amazing people. When you do (or even just translate), please don't mess it up.
A few days ago, a Donald Knuth interview appeared. It was a good read. Maybe I got carried away.
The validity of your criticism aside (some of which I don't agree with, but I will get to that later), I think you could deliver it with a lot more respect. Treating people this way is unnecessary, whether or not you like how Matz was interviewed or how it was translated. You are right, you do sound aggressive, unduly so. If you're aware of that and you're actually sorry, why haven't you taken steps to change it?
Secondly, he probably uses that as a starting point because that's when he produced the work he is known for. He isn't saying computing didn't exist before this date, it's just a frame of reference. "Back when I first invented Ruby 20 years ago...". What is wrong with this? Would you rather "Back when Guido invented Python 20+ years ago..." or "20 years ago"? It's an irrelevant detail.
If you're on CentOS 6.1 and your browser is Firefox 10, why are 5 images on a page such a problem? I didn't think it made things any harder to read at all and certainly didn't affect performance.
The increased font size on the answers puts emphasis on them. I don't think it's a good idea, but the intention is good, and it did not make the document any more painful to read.
To finish, my eyesight isn't great and I read the article without my glasses.
Please, man, you can deliver criticism without being a jerk. It's purely subjective, and delivering it with such disrespect makes you look arrogant, as though your opinion is law.
There was nothing in the title or article to indicate it would be.
>> "I don't know why Matz always starts his computing/programming journey from 1993 ?"
He mentioned it once when referencing how long Ruby had been around. He wasn't telling his personal journey, he was talking about the future of computing.
> I don't know why Matz always starts his computing/programming journey from 1993. It's as if nothing existed before that magical year.
What a strange criticism. On the one hand, he really doesn't talk about specific points in the past very much in this interview. One of the few times that he does, he mentions programming languages designed 50 years ago.
On the other hand, Ruby is the main thing Matz is known for, and the main thing he has focused on. It makes sense that his observations about computing would be tied to his experience with Ruby.