Yes the chart has errors, yes some of the vectors make no sense. Yes, your version of the chart would differ greatly (hell, his "enhanced" one differs significantly from the basic one.)
Cut the guy some slack. He put the work in and the result is a pretty cool look at how he makes sense of the way various languages interrelate and borrow from each other. There are others showing the way other people think [1] and I hope some of you will try your hand at it.
The enhanced version is completely different from the regular version. Not only are the reasons different, but a lot of the languages don't even inherit from the same sources (the regular version had javascript coming from java. wtf?). Just goes to show how arbitrary/silly it all is.
Still pretty awesome. Why did haskell come about? "Lazy evaluation is cool, yo."
'Still pretty awesome. Why did haskell come about? "Lazy evaluation is cool, yo."'
Hah. I added that edge way back when I was back in college. The "extended" graph was actually crowdsourced on the C2 Wiki, circa 2004, and it's pretty cool to see it here now.
Erlang actually came from needs in proprietary languages like PLEX and other stuff used within Ericsson. The ideas were to have better concurrency (not parallelism) and better fault tolerance (through distribution and whatnot), properties that were thought to be too hard to do with other languages at the time.
That it has Lisp influences and was first implemented in Prolog speaks to the creators of the language, not to what it was trying to fix. And I seriously doubt that fixing Lisp syntax was an objective of the language, especially given that Robert Virding, one of the original three creators of Erlang, now has his own Lisp syntax on top of the BEAM VM: LFE (Lisp Flavored Erlang).
Very likely Perl didn't have integrated support for Japanese when this was generated. I'm not an expert, and this version was semi-crowd-sourced, so no doubt it's full of inaccuracies.
I came here to inquire about the same thing. The shared page was last modified on 2011-07-28, but current Perl seems to handle Japanese fine, either via the Encode module (core) or alternatively Unicode::Japanese on CPAN.
Regardless, I'd be interested in knowing of any problems people have had with handling Japanese.
Years back, Perl must not have had working Unicode support, because I recall programmers working with Japanese text having to use JPerl, a funkily patched version, instead.
Colin, would a reasonable implicit assumption be that if the problem being solved isn't a problem for me, then I should stick with the language in question?
I mean, e.g., if I don't care that Python's not OO enough, I don't need to know Ruby, right? Right?!
p.s: in case it's not apparent, I will be using these graphs to justify some laziness on my part :)
There might be other reasons that are more important to you. e.g. if you've been using Modula-3, its syntax might be a reason to switch to Python, but in my mind, things like the iterator protocol and NumPy are better reasons. (Also, you've been asleep under a rock for fifteen years.)
In my case, I've been surprised at how much less syntactic overhead Ruby imposes than Python (despite the explicit "end"!), and how much it matters. Compare:
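# Ruby version: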
def mouse_clicked
  x = mouse_x / @cellsize
  y = mouse_y / @cellsize
  if @cells[x][y] == :empty
    set x, y, :wire
  elsif mouse_button == LEFT
    set x, y, :empty
  else
    set x, y, :electron_head
  end
end
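# Python version (with a hypothetical Processing module):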
def mouse_clicked(self):
    x = processing.mouse_x() / self.cellsize
    y = processing.mouse_y() / self.cellsize
    if self.cells[x][y] is Empty:
        self.set(x, y, Wire)
    elif processing.mouse_button() == processing.LEFT:
        self.set(x, y, Empty)
    else:
        self.set(x, y, ElectronHead)
(Here ElectronHead and the like are assumed to be top-level empty classes in the module; the Ruby equivalent is a Lisp-style atom that doesn't need to be declared.)
To me, the Ruby version has a much higher signal-to-noise ratio.
Another thing: JRuby seems to be a lot more mainstream among Rubyists than Jython is among Pythonistas, which I think means it's a bit more solid.
Hi there... experienced ruby guy here who is doing python work at the moment. The nice thing I find about python, even in its verbosity relative to ruby, is that I always, for the most part, know what it's doing. With ruby, the impulse to overload operators and create DSLs means that I potentially have to learn a new language with every new project or library I choose to use. It's not immediately clear what the code is really doing. With python, it usually is. Just my (quick) two cents.
You probably will, if you're anything like me. However, at the end of the day, I find the "require" statement to be truly the most irritating feature in ruby. You simply don't get much control over what is slurped into your code. Python's import statement is so refreshing in comparison. I feel more in control in python, and I say this as someone who really likes ruby.
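To make the contrast concrete, here's a minimal sketch (the file and names are made up, not from any real project). Whatever a required file defines at the top level simply lands in the shared namespace, whereas Python's import hands you a module object whose contents you reference by name:

# grid_helpers.rb -- a hypothetical library file
def set_cell(x, y, state)   # a top-level method...
  # ...
end
WIRE = :wire                # ...and a top-level constant

# main.rb
require_relative 'grid_helpers'
set_cell 0, 0, WIRE         # both names are now visible here, unqualified
# The rough Python equivalent keeps them behind the module name:
#   import grid_helpers
#   grid_helpers.set_cell(0, 0, grid_helpers.WIRE)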
I don't understand the mouse_clicked code above. In the Ruby example, mouse_x() appears to be a method on the current object; in the Python one it's a method on some other object (a module named "processing"?). Since the code examples are supposed to be the same, I'd say whichever language I'm wrong about is the less clear one.
Yes. I don't have a Python version of Processing, so I thought about how the Processing API would best be exposed in Python — and it would probably be by calling functions exported by a module named "processing", rather than by inheriting your class from processing.App. So you're right about both languages.
I don't understand why you think the Ruby code has better SNR. The python code is two lines shorter. In python, you have to type the (), but that's not really "noise". I do hate the : at the end of python control statements. I always forget them, and it seems like the parser could do its job just as well without them.
> I don't understand why you think the Ruby code has better SNR.
Because they say the same thing, but the Python code is much longer, if you count tokens instead of lines. It has about 28-30 more tokens than the Ruby code does, about three per line. Those tokens don't convey any information. They're just redundancy, i.e. noise. And there are a lot of them; these redundant tokens are something like a third of the code.
Now, redundancy can be useful (see e.g. http://www.paulgraham.com/redund.html) but it is costly, so you should keep it to the minimum that achieves what you want the redundancy to achieve.
Haskell removes not only the () that Ruby does, but also the commas. It also uses a simple precedence trick to remove almost all () in general (not just for function calls).
I wanted to translate the code to Haskell directly, but the idioms don't map that well (self.cellSize would become something else entirely, and the comparison would not be fair).
I'm curious if you would still think of token count as a useful metric if the Ruby version turns out to be twice the size of the equivalent code in J or K due to "redundant" "noise" tokens. Programs should be written for people to read, and only incidentally for machines to execute.
I do think it's a useful metric, and I am often tempted by the low token count of Forth and APL dialects like J and K, but much of the APL reduction in code size comes not from token-count reduction, but from token-length reduction, which is much less praiseworthy in my book.
life←{ ⍝ John Conway's "Game of Life".
↑1 ⍵∨.^3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵ ⍝ Expression for next generation.
}
Unfortunately, my APL is pretty rusty, and Dyalog has added some major extensions to the language, so I'm not quite sure what that means, in particular the juxtaposition of 1 and ⍵ toward the beginning. I think it parses as
The Life rule (which is pretty optimal for being expressed in APL, given its array orientation) is slightly simpler than the Wireworld rule:
def run_wireworld_rule
  old_cells = @cells
  @cells = fresh_cells
  each_coord do |x, y|
    # This could be optimized somewhat by only recalculating
    # the neighbors of dirty cells.
    case old_cells[x][y]
    when :electron_head
      set x, y, :electron_tail
    when :electron_tail
      set x, y, :wire
    when :empty
      # do nothing; fresh_cells are all :empty
    when :wire
      case electron_head_count old_cells, x, y
      when 1, 2
        set x, y, :electron_head
      else
        # Don’t call `set` in this case so as not to mark
        # the cell dirty for redrawing.
        @cells[x][y] = :wire
      end
    end
  end
end
# This could perhaps be optimized somewhat with a sum table.
def electron_head_count(cells, base_x, base_y)
  count = 0
  ([0, base_x-1].max..[base_x+1, @nx-1].min).each do |x|
    ([0, base_y-1].max..[base_y+1, @ny-1].min).each do |y|
      count += 1 if cells[x][y] == :electron_head
    end
  end
  return count
end
def each_coord
  (0..@nx-1).each do |x|
    (0..@ny-1).each do |y|
      yield x, y
    end
  end
end
The APL people claim that their programs are more readable because they're shorter, but I'm not sure how much I believe their claim. It's certainly true that other kinds of mathematical notation benefit enormously from brevity and consistency that permits mechanical manipulation, and I don't see why algorithms should be different, but empirically I have a lot less trouble with Ruby or Python (or even C) than with plain-English descriptions of mathematical equations.
If the program used them in several places, that would be an improvement, but it doesn't. It's maybe a little bit atypical, but nearly all of the things it calls from Processing, it calls in exactly one site: the above, plus color_mode, no_stroke, smooth, background, and rect.
It does call fill from four call sites, but that was because there's no reasonable way to return a list of four arguments from a method in Ruby, as far as I can tell.
That's great, thanks! And that works for method calls too. So now I can replace set_color_from @cells[x][y] with fill *color_from(@cells[x][y]), reducing the number of side-effecting methods by one, and now I'm calling every Processing function in at most one place.
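For anyone who hasn't seen the trick: a Ruby method can return an array, and a splat at the call site spreads it into separate arguments. A minimal sketch (the color values here are invented, not the ones from the actual program):

def color_from(state)
  case state
  when :wire          then [255, 128, 0, 255]     # hypothetical RGBA values
  when :electron_head then [255, 255, 255, 255]
  else                     [32, 32, 32, 255]
  end
end

fill *color_from(@cells[x][y])  # the 4-element array becomes 4 arguments to fill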
It's not just that it's arbitrary; it could actually be cool if it were realistic. But "Not pure -> ML"? Seriously?
It should be
Turing machines -> Too awkward for modeling programming -> Lisp
Lambda calculus -> No typing -> ML -> Not pure -> Haskell
ML -> no objects/modules -> Ocaml
When the people who "designed" PHP did so, why did they think no existing language would do the trick? What was the problem with all the existing languages?
Everything -> not bundled -> PHP
Perl -> too weird -> PHP
It's hard to explain to a novice that in Perl 4/5, $x[42] dereferences @x, not $x (which are completely separate, except when they're not, due to *x). And PHP has a much more typical object system. I think we agree that most all of PHP's distinctive and non-mainstream design decisions have been bad ones, though.
> It's hard to explain to a novice that in Perl 4/5, $x[42] dereferences @x, not $x (which are completely separate, except when they're not because of (star)x).
(I don't know how to protect the star from starting an emphasise block. Escaping it doesn't seem to work.)
Sorry to nit-pick, but these may be particularly hard to explain because they're not true.
$x[42] isn't a de-reference at all, but an indexing. (Granted, it's more than a little confusing that it's indexing @x. This is 'fixed' in Perl 6.) $x->[42] is a de-reference, but of $x, not of @x (which isn't a reference anyway (I think unlike in Ruby)). @x and $x are always completely separate; typeglobs allow you to say something like (star)x = \@y, whereupon @x and @y are the same (but still $x and @x are completely different). I suppose you could claim that $x and @x are no longer completely different if you say something like $x = \@x, whereupon $x->[42] and $x[42] mean the same thing.
(On using *, the trick is not to put two in the same paragraph with words between.)
You're right, "dereference" was a poor choice of words because it means something more specific in Perl. And I think a programmer would need at least a year of experience before they could possibly understand your main paragraph, while PHP was dumbed-down enough to be approachable by amateurs.
Yes, it can be confusing to a novice how Perl sigils work. Larry Wall conceded this was an issue, and it's one of the reasons he went for invariant sigils in Perl 6.
However, there is logic in the madness. Perhaps Larry Wall's linguistic logic, but nonetheless it does work! The best explanation I've seen is the comment by "Matt S Trout" on the devolving-sigils blog post (http://blog.fogus.me/2009/02/26/devolving-sigils/).
Below is the key sentence from mst's comment:
> Actually, perl sigils don’t denote variable type – they denote conjugation – $ is ‘the’, @ is ‘these’, % is ‘map of’ or so – variable type is denoted via [] or {}.
Rasmus Lerdorf (designer of PHP) did an interview where he explained PHP's origins: http://www.twit.tv/floss12
It's been a long time since I listened to the interview, but I think he created PHP to create simple websites. Wikipedia even says the original acronym was "Personal Home Page".
The problem PHP tried to fix (and largely succeeded) had very little to do with the language, and a lot to do with the deployment of products written in that language, as I understand it.
Your reference seems to miss the point. The question is: what was wrong with existing languages? Why was JavaScript developed instead of using an existing language?
Javascript was not created to "fix Java's 'scary syntax'"!! Java applets were self-contained features embedded in a webpage whereas javascript– argh! Why are we having this discussion!? :p
It's not a question of what the language was derived from, it's a question of what prompted it to be developed.
It's not where it drew its features from, or what its ancestor was, or what it's most like. It's a question of what caused the developers to create a new language, not where they got the new language's features from.
This is definitely worth a bookmark. It must've taken some considerable time to create but is of great value. I find it to be a quick reference to see how the languages differ from each other. Very nice.
I think Ada should be linked from Pascal, with (like Modula) an arrow labelled "too wimpy for systems programming", or perhaps "programming in the large".
In this case, it actually seemed to solve it. A lot of the older lisps died off as a result of CL. Scheme and Dylan were really the only survivors for a while. IIRC.
I won't add everything everyone suggests, that's just silly, but I've added that. The whole thing is intended as a bit of fun and not to be taken seriously, but I thought that was worth adding, not least because it improved the layout 8-)
I'd say Java aimed to improve on C++ by abstracting the machine, and only took some cool features from Obj-C, which, by the way, came from Smalltalk.
Bullshit. Not to be mean or anything, but bullshit. Most of the arrows are labelled with the author's post-hoc, childish generalizations of language differences. JavaScript came from Java because of the scary syntax? No, it came from Scheme, and Java's syntax was considered particularly comforting, not scary. There's no relation between Lisp and MacLisp. Lisp came from "Turing Machines"? Ruby came from Perl and Lisp (but not Scheme) and not from Smalltalk? In fact, it seems like Smalltalk had no influence at all. It's a cool idea, but it forces one to trivialize language differences to a scary extent.
Perhaps that's part of his point, but if it's "just for fun", which in the broadest interpretation could be taken as "without any basis whatsoever", then it doesn't belong in front of a crowd of people who expect logical explanations for things.
Let's be honest—some of the descendants are a bit off (Java->Javascript, anyone?), and some languages have multiple inspirations not listed.
It's an interesting idea, and perhaps the starting point of actual research, but isn't yet trustworthy or sensible enough to be taken as gospel.
> some of the descendants are a bit off (Java->Javascript, anyone?)
The page explicitly notes that this is not an inheritance / 'paternity' graph, but rather a description of the (perceived) problems with one language that another language fixes (or tries to).
I don't think there's even any implication that any one language was created in response to any other, just that it shares certain aspects, and removes certain warts.
Yeah, actually I have noticed and was well aware of it when making my comment. The topic of this particular comment thread, however, was specifically about the first graph since the mentioned link's validity was being "defended" by saying the graph was not an "inheritance / 'paternity' graph".
In fact, I could have also mentioned myself that the link was taken off the second graph to further my case that it was just plain wrong. The only reason I did not was because I thought it was something so obvious it would not be necessary to state it.
[1] Personally, I really like the one O'Reilly did a few years back: http://oreilly.com/news/graphics/prog_lang_poster.pdf (I suspect I tend to think more linearly than the OP.)