

The Perils of JavaSchools  - ahmicro
http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html

======
alexgartrell
An interesting semi-related follow-up from the former Fog Creek VP of Eng (now
at Khan Academy):
http://bjk5.com/post/3340326040/in-any-language-you-want-khan-academy-interviews

Not understanding the underlying fundamentals when programming in high-level
languages can be really dangerous, but generally it isn't. I kind of view the
ongoing development of compilers, VMs, JITs, etc. as somewhat parallel to that
of processors. At one point, optimizing with machine code was crucial to
obtaining performant code, but with crazy-ridiculous superscalar processors
and all of the things that come with them, all you can really do is write C,
compile with -O2, and then guess and check on optimizations. I wouldn't be
surprised if this were the case in most high-level languages before long.

Don't get me wrong: I first read this in high school, and it was the essay
that made me love C and made me want to try to get an internship at Fog
Creek. I just think it's starting to show its age.

~~~
raganwald
I agree with you, but wonder if the following is the case:

Consider the layers of software abstraction counting upwards from 0 (the
machine code) to _n_ (whatever we’re working with now).

As we add more layers on top, layer zero becomes more and more irrelevant.
However, that doesn’t mean that n-1, n-2, n-3 and so on are irrelevant. If our
languages and tools are leaky abstractions, what we need is some kind of
uniform depth, and the problem with JavaSchools is that they teach a very thin
approach to programming.

It’s true that with new languages and tools the specific layers Joel is
talking about become irrelevant, but we could write this post today about
JavaScript programmers who don’t understand the fundamentals of the
JavaScript object model, or Ruby programmers who don’t grok how Modules or
Lambdas really work because they’ve been spoon-fed Rails.

A Ruby programmer may not need layers 0, 1, or 2, but I bet he does need
layer R (Rails), as well as R-1, R-2, R-3, and R-4, whatever they may be.

~~~
keithnoizu
I think the basic argument is that the ability to handle recursion and
pointers is a good litmus test for the ability to handle higher levels of
abstraction. I can kind of buy into this: if you utterly cannot fathom
pointers, how far can you really make it as a software developer? You can
chug along and make a decent salary for a while, and you can be an idea guy
and help lead a good startup, but at some point your potential is limited.
Enough so that if I were engaged in the hiring process, I would be very
squeamish about actually extending an offer.

------
Zimahl
"... churning out quite a few CS graduates who are simply not smart enough to
work as programmers on anything more sophisticated than Yet Another Java
Accounting Application, although they did manage to squeak through the newly-
dumbed-down coursework."

From a guy whose company isn't creating accounting applications but Yet
Another Bug-Tracking Application, Yet Another Source Control Application, and
Yet Another Remote Desktop Application. I generally like Spolsky's articles
about software but this one is a little too pot-kettle for my taste.

The plain fact is that the time of low-level languages where pointers are king
is over. You can get great performance without them, more specifically without
messing with memory at all. Of course, pointers still have their place and if
you want to go that route feel free, but you can do a lot without them.

As a disclaimer, I went to a school where C was our weed-out course and Java
was the new hotness. On graduation I was in a university focus group that
specifically asked us whether they should change from C to Java for the intro
course (I voted against the change). I like pointers and recursion but do not
see understanding those as defining a great developer. I've seen plenty of
people who were far more adept at pointers yet couldn't design a functional
system if they tried.

~~~
mdwrigh2
> I like pointers and recursion but do not see understanding those as
> defining a great developer. I've seen plenty of people who were far more
> adept at pointers yet couldn't design a functional system if they tried.

I think you're misunderstanding the argument a bit. The argument is that
people who don't understand pointers and recursion are less likely to be very
good software developers. This is not the same as saying that people who do
understand recursion and pointers will be good software developers. It's just
one of many metrics used to weed people out during the interview process.

~~~
Zimahl
I'm not misunderstanding the argument at all. I work with a bunch of good
and great programmers, and we don't touch pointers - at all. I would bet that
without heading to the internet and re-learning pointers they wouldn't be
able to implement even a simple linked list with them.
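
For context, here is roughly what that exercise looks like; a minimal sketch
in Java, where object references play the role that raw pointers play in C:

    class IntList {
        // Each node holds a value and a reference to the next node; the
        // reference does the job a C programmer would use a pointer for.
        static class Node {
            int value;
            Node next;
            Node(int value, Node next) { this.value = value; this.next = next; }
        }

        private Node head;

        // Prepend in O(1) by repointing head at a new node.
        void push(int value) { head = new Node(value, head); }

        // Walk the chain of references until we fall off the end.
        boolean contains(int value) {
            for (Node n = head; n != null; n = n.next)
                if (n.value == value) return true;
            return false;
        }
    }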

I would think that unless you work in hardware, it might be quite hard to find
low-level coding jobs these days. That's because it's unnecessary in most
situations. We don't need to push bits around anymore and managing memory by
yourself is like starting a fire with flint and tinder. Yes, it can be fun,
useful, and interesting, but it is rarely necessary anymore.

------
tikhonj
This is a relatively old article; I think it misses one more rather
important point: functional programming is becoming increasingly popular
right now, and Java is a language that seems to actually _discourage_ using a
functional programming style.

If you only ever use Java, not only will you not know how to program
functionally (that can be learned on your own anyhow) but, more importantly,
you won't know when it's best to.
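
To make that concrete, here is a rough sketch of what a simple map over a
list costs in (pre-lambda) Java; the Fn interface is hand-rolled, since the
language offers no built-in function type:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // A hand-rolled function interface standing in for a function type.
    interface Fn<A, B> { B apply(A a); }

    class MapDemo {
        static <A, B> List<B> map(List<A> xs, Fn<A, B> f) {
            List<B> out = new ArrayList<B>();
            for (A x : xs) out.add(f.apply(x));
            return out;
        }

        public static void main(String[] args) {
            // All of this ceremony stands in for a one-line lambda.
            List<Integer> squares = map(Arrays.asList(1, 2, 3),
                new Fn<Integer, Integer>() {
                    public Integer apply(Integer x) { return x * x; }
                });
            System.out.println(squares); // prints [1, 4, 9]
        }
    }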

~~~
hga
This is one of my biggest beefs with MIT's new undergraduate curriculum: in
the panic over the post-dot-com crash in enrollment (from 400 students per
class for decades to a bit fewer than 200), among other things, orders came
down from on high to purge Scheme from it. So now it's Python and Java, and
as you might know, Mr. Python hates functional programming.

(My biggest beef is personal, in that I'm by nature a scientist, and what
was most interesting in the department has been officially purged, although
a cut-down version of 6.001 (SICP) is still being taught in January, and you
can even earn credit if you're one of the first N to sign up (purely a
resource constraint there).)

(Also BTW, enrollment has climbed back up, 305 sophomores are declared EECS
majors this fall term.)

The problem is, the multi-core future is very much here today, and as far as
I know functional programming remains the best practical way to address
SMP/ccNUMA systems. (Well, you can always go to actors, but then you end up
copying a lot of data if you're not functional, and in general they haven't
caught on like their proponents had hoped, except that that style is of
course used for less tightly coupled multi-processor systems like today's
big supercomputers.)

------
akavi
It's somewhat amusing to read this essay while sitting in Yale's CS 323 class.

I do wonder, though, how much of this essay is Spolsky's investment bias
(i.e., "It must be worthwhile, because if not, I suffered for nothing").

------
oacgnol
To preface my comment, my background is not in CS. I majored in biomedical
engineering at UT Austin, where I chose a track that required a fair amount of
programming.

In my curriculum, I was required to first take 2 courses using assembly
language, then another course in C, and then a data structures course using
Java. Is the growing standard nowadays to teach with Java from the get-go?

~~~
arghnoname
It depends on the school, but it generally seems true that most don't start at
the lower levels of abstraction. What I've seen is that it is fairly common to
start with Java, but to also have a course in C as a requirement (sometimes
including some light exposure to assembly).

For computer engineering students who need to work closer to the metal,
starting with assembly or C is a lot more common.

------
lucasjake
The problem with this post has always been that it starts, from the title
onward, by implicitly putting down a large segment of students.

Had Joel dug deeper into how someone could solve this problem for
themselves, rather than brushing off those who learned in this environment,
this post would have been great. Instead, it comes off as elitist and
dogmatic.

~~~
mindstab
Isn't the implication that to solve it for yourself you just need to go out
and learn some other languages on your own time? That's how I took it, and
that's what I've always done. I've always been aware of the incompleteness
of my academic education.

------
silverbax88
I've seen this article before, and on one hand, I get it. On the other, I
think it's a dangerous trap to automatically think that people (in any
profession) should know how to build something in detail from the ground up
just to make it work.

Let's take a really great, experienced physician. Does he need to know how
to make an X-ray machine in order to perform neurosurgery? At what point
does something once required become commonplace enough that technology and
expertise can evolve on top of it?

~~~
Mvandenbergh
I don't think that's such a good example. Doctors don't build medical
equipment, nor do they do anything in that class. Coders write software and
the operating system, compiler, VM, etc. that they use as tools are also
software. If we want to compare with the doctor, we can say that a programmer
doesn't need to understand how junctions of doped semi-conductors behave, nor
how that can be used to build gates, nor how those gates can be used to build
a processor. The lowest level ever required is to understand the behaviour of
the processor (how that behaviour comes to be is a black box). This is because
that's the lowest abstraction layer that absolutely doesn't leak.

~~~
silverbax88
I really did not want to respond to this, but the issue here is that the
fact that programmers write software does not justify re-learning solutions
to problems that have already been solved.

------
jrockway
I think the real peril is that sub-optimal technology has become standard
because "it's easy to hire people who know it". What's worse is that this is
a positive feedback loop: since Java is the most popular language "in
industry", students demand that they be taught it. This makes it more
popular in industry, and the cycle continues.

I'm glad Java introduced automatic memory management to the average workaday
programmer. I'm not glad that one of the better languages hasn't replaced it
yet.

~~~
mey
Having started on QBasic as a kid, cut my teeth in high school on
Pascal/C/C++, continued with C/C++ in college and the first part of my career,
then transitioned to Java, then expanded into a blur of languages
(Ruby/Python/Perl/C#/Scala/Groovy/Boo) I keep coming back to Java when I want
to accomplish most of my tasks.

    
    
- I prefer Java's environment compatibility over C#'s.
- I prefer Java's execution speed over Python's/Ruby's.
- I abhor Perl's syntax and would rather use Ruby/Python/Groovy to solve
  Perl problems.
- Java's greatest strength is its JVM: cheap, reliable, versatile.
- C/C++ force me to deal with a class of memory-management bugs that I would
  rather not, even as a seasoned programmer who understands
  pointers/references/the diamond problem/template metaprogramming/data
  structures.
- Non-type-safe languages force me to move things the compiler is doing for
  me into a test infrastructure. I'm not removing the compile phase, just
  renaming it to unit tests.
- Lambda functions/coroutines and no side effects are interesting and make
  threaded programming easier, but I need more exposure to the subject; it's
  more a hobby than a tool in my belt at this point.

So I come back to Java.

Let me ask you, what do you perceive as a "better language", and for what
field?

~~~
jrockway
Terrible formatting.

A better language would facilitate more reuse. A better language would have a
real type system. A better language wouldn't explicitly discourage
unintentional extension points.

In Java, there is no way to reuse code other than by subclassing something.
This is the second-worst possible way to reuse code, because you often don't
want to reuse code in the form "make me a thing that's oh kind of like that
other thing". The best way to reuse code is through composition; either
something like interfaces with some implementation in them ("roles"), or by
automatic delegation. If I "has a" foo, then I should be able to delegate some
subset of foo's methods to myself. This allows me to use some of foo's
implementation without requiring foo's API to become my own API. (If foo does
some role fooable, then if I delegate the right methods, I also become
fooable. Then anything that takes a fooable can now accept me.)
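
A rough sketch of that pattern, with illustrative names (Fooable, Foo, and
Bar aren't from any real API):

    interface Fooable {
        String foo();
    }

    class Foo implements Fooable {
        public String foo() { return "foo's implementation"; }
    }

    // Bar "has a" Foo and forwards one method by hand, so it satisfies
    // Fooable without inheriting Foo's whole API. Java makes you write
    // every forwarding method yourself; nothing in the language
    // automates the delegation.
    class Bar implements Fooable {
        private final Foo delegate = new Foo();

        public String foo() { return delegate.foo(); }
    }

Anything that accepts a Fooable now accepts a Bar, even though Bar never
inherits from Foo.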

The type system should make this standard. You should never be able to type a
variable with a concrete type; only abstract interfaces. This way,
extensibility is mandatory rather than a "best practice".

As for type systems, it should be something deeper than "there are some
random primitive types, and everything else is a subclass of Object". That's
worthless, because null is a subtype of every type yet violates Liskov
substitution. This means that I can never be sure my code will actually work
at runtime. Haskell fixes this with the Maybe type. By making nullability
explicit, the compiler can check that you handle it where it needs to be
handled.
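
Transplanted into Java, the Maybe idea looks roughly like this; the class
below is a hypothetical sketch, not anything from the JDK:

    // Absence is now part of the type, so callers can see it and the
    // compiler can insist that they deal with it.
    abstract class Maybe<T> {
        static <T> Maybe<T> just(T value) { return new Just<T>(value); }
        static <T> Maybe<T> nothing() { return new Nothing<T>(); }

        abstract boolean isPresent();
        abstract T get(); // only meaningful when isPresent() is true

        static final class Just<T> extends Maybe<T> {
            private final T value;
            Just(T value) { this.value = value; }
            boolean isPresent() { return true; }
            T get() { return value; }
        }

        static final class Nothing<T> extends Maybe<T> {
            boolean isPresent() { return false; }
            T get() { throw new IllegalStateException("nothing to get"); }
        }
    }

A lookup that can fail now says so in its signature, e.g. Maybe<User>
findUser(String name) (User being an illustrative type), instead of
returning a User that might silently be null.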

Finally, a better language wouldn't be so uptight about privacy. If you
don't want to support some API, don't add it to an interface that you
implement. Document that you don't want someone to call it. Most of Java's
syntax focuses on access levels: private, default, protected, public, final.
Who cares? If I want to reimplement a private method, it's probably because
I know what I'm doing. It's the language's job to make the programmer's
intent clear, not to stop me from violating that programmer's wishes. Larry
Wall said it best: "Perl doesn't have an infatuation with enforced privacy.
It would prefer that you stayed out of its living room because you weren't
invited, not because it has a shotgun."

Anyway, Java makes writing flexible and reusable software very difficult. This
is why you see so much brittle and outdated software written in Java. Compare
this to Emacs Lisp, which doesn't enforce any privacy ever. Much of Emacs is
over 30 years old, and it's still "state of the art". Food for thought.

~~~
mquander
_The type system should make this standard. You should never be able to type a
variable with a concrete type; only abstract interfaces. This way,
extensibility is mandatory rather than a "best practice"._

I worry. Java has checked exceptions. The stated purpose of this feature is to
make callers consider explicitly the different exceptions that might arise
from what they're calling, and deal with them appropriately.

The end result? There is an endless reserve of lazy people who don't do
that. They catch exceptions and throw them away. The feature doesn't work,
and there are a bunch of no-op (or worse) try blocks everywhere that clutter
everything up and handle errors in inappropriate ways. At least, that's my
experience working with other people's Java code.
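
The anti-pattern looks something like this (a hypothetical sketch):

    import java.io.FileInputStream;
    import java.io.IOException;

    class Swallower {
        static byte[] readFirstBytes(String path) {
            byte[] buf = new byte[16];
            try {
                FileInputStream in = new FileInputStream(path);
                in.read(buf);
                in.close();
            } catch (IOException e) {
                // "Handled": the compiler is satisfied, the error is
                // gone, and the caller gets a half-filled buffer with no
                // warning that anything went wrong.
            }
            return buf;
        }
    }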

So, suppose you had to deal in interfaces everywhere. The goal of this feature
would be that programmers think about how to separate and generalize the
potentially-reusable functionality in the code they're writing. Would that
actually be the result? I sort of suspect that the actual result would be a
million interfaces with thirty unrelated methods each, because they were
copied and pasted from whatever class someone made. IFooable would be just
as unreusable as ConcreteFooable ever was.

Basically, I don't think restrictions like that are effective at stopping
people from structuring their code awfully. Anyone who understands the
concept behind the restriction would probably have written nice, reusable
code anyway, and everyone else will just hack around it with trial and error
until their program compiles. (It's the same reason I agree with you that
the compiler shouldn't enforce encapsulation.)

~~~
jrockway
You have a good point. What this should really be is a mandatory convention.

Ultimately, I have no interest in making it easy for bad programmers to
program. This is Java's biggest problem: it claims to help prevent bad code,
which people want, but it doesn't actually do that. Its "bad code prevention
features" actually have the opposite effect; they make writing good code an
ordeal that requires great patience, skill, and tools.

The major problem with Java is that because code is so difficult to write,
people are afraid to delete it. Deleting code is the best possible thing a
programmer can do, and Java discourages it by causing programmers to value
bad code. That's what's terrible about it.

(One big problem I see in the programming field in general is that everyone
tries to put their prototypes into production. Then they wonder why their code
is hard to maintain and it has a lot of bugs. The answer: you didn't know
enough about the problem you were trying to solve when you started, so now you
have a mess. Throw it away and start with complete understanding. Then you'll
be able to build your application correctly. Call it refactoring if you want
to, but the only thing that the prototype was for was to teach you about your
problem and how to solve it. Now do it for real.)

------
BonoboBoner
"And thus, the ability to understand pointers and recursion is directly
correlated with the ability to be a great programmer."

Wouldn't the statement, a few decades ago, have been:

Recursion and pointers are not enough; to be a great programmer you
absolutely need to understand X and Y.

------
RJF
Pointers - ok. But can't you (or are you better off not to) do recursion in
Java? Why is that exactly?

~~~
william42
Recursion isn't worthwhile in Java because of 1) the lack of tail-call
optimization and 2) the clunkiest possible syntax and most restrictive
possible semantics for lambdas.

~~~
esrauch
This is a very strong statement. AFAIK most languages don't do tail-call
optimization in general (even most implementations of Common Lisp, and
Clojure only if you use a special form).

Recursion is worthwhile in certain cases in any language where the problem
naturally calls for it; things like DFS and BFS are a lot uglier without
recursion. Lambdas and first-class functions really aren't necessary in a
lot of situations where recursion is natural, and in those scenarios it is
clearly worthwhile in Java.
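
For instance, a depth-first search over a binary tree reads naturally as
plain recursion in Java; a rough sketch, with an illustrative node type:

    class TreeNode {
        int value;
        TreeNode left, right;
        TreeNode(int value, TreeNode left, TreeNode right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    class Dfs {
        static boolean contains(TreeNode root, int target) {
            if (root == null) return false;        // base case: empty subtree
            if (root.value == target) return true;
            return contains(root.left, target)     // search the left subtree,
                || contains(root.right, target);   // then the right
        }
    }

The recursion depth here is bounded by the height of the tree, which is one
reason the lack of tail-call optimization rarely matters in such cases.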

