
The Future of Computing - An Interview with Yukihiro "Matz" Matsumoto - fredwu
http://fredwu.me/post/54175219257/the-future-of-computing-the-future-of-computer-programmers
======
simonh
The idea that faster parallel hardware will lead to a parallel software future
is nice, but I don't really buy it. For example, the move towards cloud
computing is the big thing these days, but it's completely orthogonal to that.
The cost savings from cloud services are driven by making highly parallel
hardware look like an awful lot of one- or two-core servers for dozens of
clients at the same time.

In other words, the big ground-breaking, world-shaking trend in computing
isn't about running clever parallel applications on clever parallel hardware
at all; instead it's about leveraging that hardware to make good old
single-threaded applications run as cheaply as possible.

~~~
slacka
Everyone's niche in the industry taints their perspective; we're clearly on
opposite ends of this spectrum. Over the past 10 years, I have seen a shift
from Unix/Linux x86 servers, to Beowulf clusters, to GPU-based clusters, and
most recently a project with Intel Phi boards.

A common thread throughout this progression has been the lack of proper
software and tools to parallelize the workload. The team is constantly on the
lookout for new tools and is thrilled about upcoming technologies like
Parallella.

In your field, where cheap CPU cycles are paramount, you may not see it. But
for our team running the sims, your "good old single-threaded" performance has
been meaningless for years.

------
amalag
Don't listen to the naysayers. It was a nice interview and article. I found
the fonts perfectly readable and I liked the interspersing of images.

~~~
cantbecool
I was going to comment on that too. The typeface made the content a pleasure
to read. Matz just seems like a happy, genuine guy from all the articles and
videos I've seen him in.

------
dobbsbob
I see the sparse Fourier transform and other tweaks to the FFT creating
decentralized P2P systems where video and audio sharing requires little
bandwidth. Centralization is doomed; it's just too expensive to maintain now,
with the entire world getting new devices and connecting by the millions
every day. As for languages, Lisp will still be alive! :)

Re: the article font, it looks fine in the Firefox nightly build running on
Debian Wheezy.

------
pfraze
Well, in the spirit of the topic, here are my counter-predictions.

I'm betting that between now and quantum computers, memristors will play a
significant role, and (as I understand them) they'll push us much further
toward parallel computing than multi-core and device networking would. A
friend of mine believes they will behave as a network of small computing
units, so he's betting on an actor model. We'll see!
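
For the unfamiliar: the actor model avoids shared state entirely; each actor
owns a private mailbox and processes one message at a time. A toy sketch in
Ruby (purely illustrative, not how memristor hardware would actually be
programmed):

    # Toy actor: a thread draining a thread-safe mailbox, one message at a time.
    require 'thread' # for Queue on older Rubies; built in on newer ones

    class Actor
      def initialize(&handler)
        @mailbox = Queue.new
        Thread.new { loop { handler.call(@mailbox.pop) } }
      end

      # Sending a message is the only way to interact with an actor.
      def tell(msg)
        @mailbox << msg
      end
    end

    greeter = Actor.new { |msg| puts "got #{msg.inspect}" }
    greeter.tell(:hello)
    sleep 0.1 # give the actor a moment to drain its mailbox before exit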

The book "Trillions" [1] talks a lot about computing future, and focuses very
heavily on the idea of "device fungibility" with "data liquidity" \- basically
the idea that the computing device is insignificant and replaceable, as the
computing work&data can move freely between them. When you consider how
prevalent general-computing devices are-- microwaves, toasters, cars, phones,
dish-washers, toys, etc-- this is pretty compelling. I highly recommend that
book.

Now, I personally think localized connectivity and sync between devices,
strong P2P Web infrastructure, and more powerful client participation in the
network will reduce the importance of vertically-scaled central services, and
give much more interesting experiences to boot (as things in your proximate
range will factor much more heavily into your computing network). "Cloud
computing" as we have it now is really just renting instead of buying. Yes,
you can easily spin up a new server instance, but it's much more interesting
to imagine distributing a script which causes interconnected browser peers to
align under your software. Easy server spin-up? Try no server! This means
users can drive the composition of the network's application software, which
should create a much richer system.

Considering privacy issues, I think it's an important change. Not only is it
inefficient to always participate in centralized services and public networks,
it's unsafe. P2P and localized network topologies improve that situation.
Similar points can be made about network resiliency and single points of
failure: how efficient is it to require full uptime from central points vs.
minimal uptime from a mesh? I imagine it depends on the complexity of the
decentralized systems, but I'm optimistic about it.

Along with network infrastructure and computing device changes, I think the
new VR/AR technology is going to flip computing on its head. Not only do we
gain much more "information surface area" (meaning we can represent a lot
more knowledge about the system), but we gain a ton of UX metaphors that 2D
can't do. One thing I get excited about is the "full spectrum view" of a
system, where you're able to watch every message passed and every mutation
made, because in the background you can see them moving between endpoints and
notice: hey, that process shouldn't be reading that; or, ha, that's where that
file is saving to.

So TL;DR: I say the future of computing is VR/AR, peer-connective, user-
driven, and massively parallel.

[1] http://www.amazon.com/Trillions-Thriving-Emerging-Information-Ecology/dp/1118176073

~~~
stephengillie
Right now, datacenters are just starting to fully abstract the OS from the
hardware: physical servers run "hypervisor OSes", which can host several
virtual servers and live-migrate them from one physical host to another. And
the hypervisor OS does little else.

Meanwhile, virtual servers handle all of the actual software tasks. Through
advanced routing, these virtual servers remain connected regardless of which
physical server hosts them, even while being moved from one physical host to
another. And thanks to virtual HDDs on SANs, these virtual servers can always
reach their storage, regardless of physical device failures.

Virtualization tech has already entered browsers. And browsers are slowly
becoming the entire client, as we've seen with Chromebooks. It's just a matter
of time before these movements all collide, allowing virtual servers to
freely roam the spare memory and CPU space of all of your browser-OS-based
computing devices.

This would be sort of like running Hyper-V on all of your users' Win8 desktops
to host your email, directory, database, and information servers. Boom, no
server!

~~~
seanmcdirmid
Those who work in this area do not believe virtualization is the answer to
our large distributed-system cluster woes. It works great for cloud computing
utilities that take on customers, but it is unnecessary overhead for the
high-performance stuff (Hadoop, MapReduce, Spark, etc.).

------
UNIXgod
Thank you for the post and translation. I really enjoyed reading it.

------
knwang
Thank you for the translation Fred.

------
andyl
Ruby brought me back into programming. Thank you Matz.

I hope that the future of Ruby will include better support for concurrency.

~~~
VeejayRampay
I don't really see simple concurrency happening in Ruby, unless either:

1) The design of MRI changes fundamentally in the next few years, or

2) More room is made for alternative implementations like Rubinius or JRuby.
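
To make point 1 concrete: MRI's global VM lock (GVL) allows only one thread
to execute Ruby code at a time, so threads only help with IO-bound work. A
minimal sketch (timings illustrative; under MRI the threaded run is no faster
than the serial one, while JRuby can use all four cores):

    # CPU-bound work split across threads. Under MRI's GVL the threads are
    # serialized; under JRuby the same threads can run on separate cores.
    require 'benchmark'

    def busy_work
      500_000.times { |i| Math.sqrt(i) }
    end

    puts Benchmark.measure { 4.times { busy_work } }   # serial
    puts Benchmark.measure {                           # threaded
      4.times.map { Thread.new { busy_work } }.each(&:join)
    }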

~~~
xentronium
JRuby is a very mature and widely used implementation by now. Granted, it's a
bit isolated from the world of MRI libraries with compiled extensions, but
the compatibility story is getting better every day. Besides, you are within
arm's reach of the Java library world.
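
To illustrate that reach, here's a minimal sketch (JRuby only; assumes a JVM
is available) driving one of Java's thread pools straight from Ruby:

    # JRuby only: Java classes are callable directly from Ruby.
    require 'java'

    # A fixed pool of four real OS threads from java.util.concurrent.
    pool = java.util.concurrent.Executors.new_fixed_thread_pool(4)

    10.times do |i|
      pool.execute { puts "task #{i}" } # block becomes a java.lang.Runnable
    end

    pool.shutdown # let queued tasks finish, then let the JVM exit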

~~~
VeejayRampay
That's the point though: as sad as it is, it's not "there" if it's not
drop-in, given the stranglehold MRI has on the Ruby world.

You can run most applications with JRuby, but to my knowledge it always
involves fiddling and tweaking.

For a Rails application, that means using the JDBC adapter, or Puma/Trinidad
instead of the regular servers, for example. I am not saying this is a proper
justification, but it does prevent people from testing it further.
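
To give an idea of the fiddling involved, a hypothetical Gemfile for an app
targeting both MRI and JRuby might use Bundler's platform blocks:

    # Hypothetical Gemfile: swap native gems for JRuby-friendly equivalents.
    source 'https://rubygems.org'

    gem 'rails'

    platforms :ruby do # MRI
      gem 'pg' # native C extension
    end

    platforms :jruby do
      gem 'activerecord-jdbcpostgresql-adapter' # JDBC-backed driver
      gem 'puma'                                # threaded server suits JRuby
    end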

Also, I am still looking for actual and undeniable proof that using JRuby
instead of MRI actually brings better scaling/performance/memory
management/tooling. So far, the examples I have seen are conclusive, but
still limited to very particular use cases. That Heroku now officially
supports JRuby is a big plus, I would say.

------
never_again
This interview was not about Ruby. I don't know why Matz always starts his
computing/programming journey from 1993. It's as if nothing existed before
that magical year.

<rant> This is hard to read, at least in Firefox. Why the hell can't the
author keep the font consistent throughout the article (interview)?

It's OK to print a picture or two. But 5+ images, why? </rant>

~~~
fredwu
Blog author here. I am sorry you found it difficult to read. Would you mind
posting more information about your setup (OS, Firefox version)? I've just
checked, and throughout the article the fonts should be Lucida Sans, Helvetica
Neue, or Helvetica, depending on what's available on your system.

In terms of the images: they were all from the original interview article
(linked in my post).

P.S. I am not sure what gave the impression that the interview was about Ruby,
as "Ruby" is not mentioned in the title.

~~~
never_again
Read my comment again. I said that even though the article is _not_ about
Ruby, Matz unfailingly starts from 1993. The headings in your article before
every question make it hard to read. The font size of the questions is 3x
smaller than that of the answers. Top that off with the images in between the
answers.

My OS is CentOS 6.1 (32-bit) and the browser is Mozilla Firefox 10.0.1.

I am sorry if this sounds aggressive. But very rarely do we get the chance to
interview such amazing people. When you do (or even translate), please don't
mess it up.

A few days ago, a Donald Knuth interview appeared. It was a good read. Maybe
I got carried away.

Sorry anyway.

~~~
jwdunne
The validity of your criticism aside (some of which I don't agree with, but
will get to later), I think you could deliver it with a lot more respect.
Treating people this way is unnecessary, whether or not you like how Matz was
interviewed or how the interview was translated. You are right, you do sound
aggressive, unduly so. If you're aware of that and you're actually sorry, why
haven't you taken steps to change it?

Secondly, he probably uses that as a starting point because that's when he
produced the work he is known for. He isn't saying computing didn't exist
before that date; it's just a frame of reference: "Back when I first invented
Ruby 20 years ago...". What is wrong with this? Would you rather have "Back
when Guido invented Python 20+ years ago..." or just "20 years ago"? It's an
irrelevant detail.

If you're on CentOS 6.1 and your browser is Firefox 10, why are 5 images on a
page such a problem? I didn't think they made anything harder to read, and
they certainly didn't affect performance.

The larger font size on the answers puts emphasis on them. I don't think this
is a good idea, but the intention is good, and it did not make the document
any more painful to read.

To finish, my eyesight isn't great and I read the article without my glasses.

Please, man, you can deliver criticism without being a jerk. It's purely
subjective, and delivering it with such disrespect makes you look arrogant,
as though your opinion is law.

