
Why I hate Java - adgasf
http://warp.povusers.org/grrr/java.html
======
tieTYT
> Why does this matter? It matters for the exact same reason why memory leaks
> are bad in general: They consume more memory than necessary.

Thing is, this doesn't usually matter. I have never gotten an out of memory
error from a leak in Java. Now compare that to all the development time I've
saved by not having to deal with pointer arithmetic. I consider it a huge win.
It's all about the type of apps you're making.

~~~
majewsky
When I play Minecraft on my notebook, I first shut down all nonessential
system services (I have a handy line in the shell history for that). This
allows me to get around 45 minutes out of Minecraft (instead of 30 minutes)
before it gets struck by the OOM killer.

~~~
tieTYT
I've used tons of programs that have problems and crash for various reasons.
Is this an argument against the language? I don't think there'd be any left to
use with this line of thinking.

Besides, there are too many variables in your anecdote. Is it a laptop from
1995? Is the OOM from a bug that could/should be fixed?

~~~
majewsky
The notebook is from 2012 and has 4GB RAM. Minecraft stands out because most
other programs of similar complexity (e.g. Portal 2) work fine.

I've heard a story that Minecraft's RAM consumption got a lot worse after
Notch stepped down. The new developers refactored the code for OOP best
practices (such as passing a 3D coordinate as an object rather than "int x,
int y, int z"), which tremendously increased the number of allocations and
thus GC pressure and memory usage. So IMO it's fair to blame this on the
language. When following good practices leads to such consequences, that's
terrible design.
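
To make the mechanism concrete, here is a hypothetical sketch of the two
styles (made-up names, not Minecraft's actual code). On a hot path, the
object version allocates one short-lived object per call:

```java
// Hypothetical sketch of the refactor described above (not Minecraft's code).
public class GcPressure {
    static final class BlockPos {
        final int x, y, z;
        BlockPos(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
    }

    // Primitive version: no allocation per call.
    static int blockId(int x, int y, int z) { return x ^ y ^ z; }

    // "Best practices" version: one short-lived heap object per call.
    static int blockId(BlockPos p) { return p.x ^ p.y ^ p.z; }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += blockId(i, i + 1, i + 2);               // allocation-free
            sum += blockId(new BlockPos(i, i + 1, i + 2)); // ~50M temporaries
        }
        System.out.println(sum); // try running with -verbose:gc
    }
}
```

(A JIT can sometimes remove such allocations via escape analysis, but that
isn't guaranteed, and at a game's scale the allocation rate adds up.)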

------
kronos29296
I don't know about the other things, but even Bjarne Stroustrup calls multiple
inheritance the wrong way to go, and I don't think I'm better than him. So I
disagree on that point. As for the other stuff: well, all languages are
imperfect. That's why we have so many of them.

------
lispm
> One problem with garbage collection is that it needs to see the entire
> memory used by the program even if the program is not using big parts of it
> for anything.

The first generational GC was proposed for Lisp in 1983; see Lieberman &
Hewitt. The heap is divided into generations based on the lifetime of objects.
Typically only the youngest generation, which is kept small, is scanned. This
is based on the observation that a lot of objects are only short-lived. Thus a
GC typically only needs to look at a fraction of the memory.

Also, GCs might want to use regions of similar objects. That way only those
regions need to be scanned, which may free up space for an object that is
currently being allocated.
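
One way to observe the generational behaviour on a stock HotSpot JVM: -Xmn
sizes the young generation and -verbose:gc logs collections. A toy sketch
(exact log output varies by collector):

```java
// Toy illustration of "most objects die young".
// Run with: java -Xmn64m -verbose:gc YoungGen
// The GC log should show frequent, cheap young-generation collections
// while the long-lived data stays untouched.
public class YoungGen {
    public static void main(String[] args) {
        byte[][] longLived = new byte[4096][];   // survives into the old generation
        for (int i = 0; i < longLived.length; i++) longLived[i] = new byte[1024];

        for (int i = 0; i < 10_000_000; i++) {
            byte[] temp = new byte[256];         // dead before the next iteration
            temp[0] = (byte) i;
        }
        System.out.println(longLived.length);
    }
}
```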

------
westoncb
Probably worth jumping up a level in the URL to:
[http://warp.povusers.org/grrr/](http://warp.povusers.org/grrr/) —and noting
that this comes from the guy's "What grinds my gears" page, which also links
to a "Why I hate C" article.

------
carsongross
Java is a decent language for its time, with a bad default tool set and a
development culture that leans very heavily towards J2EE-style verbosity
(JavaScript is trending that way now too, with Angular, React, etc.).

The JVM, however, is a national treasure.

------
yincrash
"Why I hate garbage collected languages"

~~~
adgasf
It would be interesting to see a language that implements RAII but provides a
GC as a fall-back.
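
Java itself has since grown half of this, from the other direction: since
Java 9, java.lang.ref.Cleaner lets a class pair an explicit close() with a
GC-triggered release for the cases where close() never ran. Not RAII, but the
same fall-back idea. A minimal sketch (the "native handle" is a stand-in):

```java
import java.lang.ref.Cleaner;

// close() releases the resource deterministically; the Cleaner releases it
// after the object becomes unreachable if the caller forgot.
public class GuardedResource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup action must not reference the GuardedResource itself,
    // or the object could never become unreachable.
    private static final class Release implements Runnable {
        @Override public void run() {
            System.out.println("releasing native handle");
        }
    }

    private final Cleaner.Cleanable cleanable = CLEANER.register(this, new Release());

    @Override public void close() { cleanable.clean(); } // deterministic path

    public static void main(String[] args) {
        try (GuardedResource r = new GuardedResource()) {
            System.out.println("using resource");
        } // close() runs here; had it been skipped, the Cleaner would run later
    }
}
```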

~~~
zokier
Rust went through that stage at one point. It had language-level support for
GC, first with @-pointers (e.g. @i32 would have been a pointer to a GC-managed
int) and later, after sigils were removed, with a short-lived Gc<T> type. IIRC
GC support was finally removed completely during the crunch to 1.0.

I'd point out that AFAIK rustc never got a proper GC implementation (which
probably weighed heavily in the decision to remove it before 1.0), so @T/Gc<T>
were mostly plain old refcounted pointers with some smoke and mirrors.

See [http://words.steveklabnik.com/pointers-in-rust-a-guide](http://words.steveklabnik.com/pointers-in-rust-a-guide), for example (section "Managed pointers").

------
maxxxxx
I'm not sure about the rest of the points made, but I agree about the lack of
deterministic destructors to free up external resources. C# has the same
problem. It would be really good if the destructor ran when the reference
count of an object goes to 0. This would make a lot of code that deals with OS
resources much cleaner.

~~~
zzalpha
C# has a using block that gives resource deallocation explicit syntax, which I
_far_ prefer to implicit destructor behaviour.

That said, we need IDEs that sound the klaxons if a Closeable is not placed in
a using statement.
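
The Java counterpart, for what it's worth, is try-with-resources (Java 7+),
and some IDEs can already warn when an AutoCloseable is never closed. A
minimal sketch (file names made up):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// try-with-resources closes every declared resource at the end of the block,
// in reverse order of declaration, even if an exception is thrown.
public class ReadTwo {
    public static void main(String[] args) throws IOException {
        try (BufferedReader a = new BufferedReader(new FileReader("a.txt"));
             BufferedReader b = new BufferedReader(new FileReader("b.txt"))) {
            System.out.println(a.readLine());
            System.out.println(b.readLine());
        } // no explicit close() calls needed
    }
}
```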

~~~
maxxxxx
I find the using syntax pretty ugly, especially once you have several objects.
Some keyword like "autodispose" may help. Managed C++ has this: objects that
are allocated with the stack syntax get disposed at the end of the scope.

~~~
zzalpha
Again, that creates implicit behaviours that can be extremely surprising.

You can deal with multiple Closeables by enhancing syntax.

But tying disposal to deallocation creates action at a distance that I'm
really not a fan of, personally.

Resource allocation and deallocation can take time, can block, and can fail.
Those semantics make them a terrible fit for constructors and destructors.

~~~
btschaegg
As a counterpoint to this:

\- Making it explicit has two distinct problems: a) the code for it must be
written each time (I regularly encounter code that ignores Disposables), and
b) it is closed to change (adding dispose semantics to a class essentially
creates bugs throughout your codebase). Those two I deem far worse than a
destructor that blocks.

\- On the C++ side: If you encapsulate the resource properly, you're not
really binding the two together (at least on one level of abstraction), since
you're basically just managing a handle. Note that it is also really easy to
split both up (by introducing some NULL-handle) and only dispose the resource
in the destructor if it hasn't been disposed yet (of course, that is more bug-
prone, but so is IDisposable).

\- With move semantics (or std::swap before C++11) this also allows you to
dispatch "cleanups" to other contexts, if necessary, which gives you exactly
the same possibilities _where necessary_ (and of course, this is explicit in
those cases).

\- Failed resource allocations in constructors should not be a problem (either
use exceptions or NULL-handles).

\- Failed resource deallocations are a problem in any implementation I know of
and likely not "conquered" easily in the near future.

So, to sum it up, I think C++ gives you the better tooling because you can
design the appropriate API yourself. C#'s implementation is very set in its
ways and has a couple of problems that would be nice to avoid (e.g. the "you
can't free resources in a finalizer" thing).

I'd prefer an API that does the right thing by default (albeit slow) and lets
me optimize where needed instead of one that has little benefit (for most
scenarios) but makes the code more error-prone. That means essentially: If you
implement IDisposable in C++ and dispose in the destructor where necessary,
you get the best of both approaches.

The remaining arguments that can then be made against C++ are about bad code,
abstraction problems, and/or cumbersome libraries.

------
Analemma_
Is there a date on this? This feels like a (1997) article at the latest.

~~~
foota
It predates generics at least.

~~~
Retric
Generics don't solve the typedef problem. It's the old "I'm probably never
going to need anything over 2 billion, but I'm not 100% sure" problem.

~~~
foota
I was talking about the following:

"Now, in Java, if you use a data container provided by the language, you are
forced to upcast all the time."

Which is solved by generics.
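
For anyone who never used pre-generics Java, a minimal before/after (Java 5
added the generic form):

```java
import java.util.ArrayList;
import java.util.List;

// Pre-generics Java forced a cast on every retrieval; generics (Java 5)
// moved the check to compile time.
public class Casts {
    public static void main(String[] args) {
        List raw = new ArrayList();              // pre-Java-5 container
        raw.add("hello");
        String s1 = (String) raw.get(0);         // cast, checked only at runtime

        List<String> typed = new ArrayList<>();  // generic container
        typed.add("hello");
        String s2 = typed.get(0);                // no cast needed

        System.out.println(s1 + " " + s2);
    }
}
```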

~~~
Retric
Last I checked, if you store an int in a generic collection it gets converted
to an object, which takes more memory to store.

So the syntax is similar, but the actual behavior is not: ints can't be null,
but Integers can, for example. This costs 8-ish bytes of memory per value.
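
Concretely:

```java
import java.util.ArrayList;
import java.util.List;

// Generics don't work for primitives, so values get boxed: each element of a
// List<Integer> is a separate heap object, while an int[] stores its values
// inline.
public class Boxing {
    public static void main(String[] args) {
        // List<int> nums = new ArrayList<>();   // does not compile
        List<Integer> boxed = new ArrayList<>();
        boxed.add(42);                           // autoboxed to an Integer object
        Integer canBeNull = null;                // legal for the boxed type

        int[] primitives = { 42 };               // no per-element objects
        // int cannotBeNull = null;              // does not compile

        System.out.println(boxed.get(0) + primitives[0]);
        System.out.println(canBeNull == null);
    }
}
```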

~~~
foota
That is true. It looks like there is some work being done to potentially relax
this constraint,
[http://openjdk.java.net/jeps/218](http://openjdk.java.net/jeps/218)

------
burntrelish1273
Risking discussing religiosity:

Java as a language has some rough edges: multiple inheritance and verbosity.
Personally, I'd use Kotlin if there were a requirement to run on the JVM.

In terms of the JVM:

The Erlang VM is under-appreciated: each "process" (not an OS process, and
lighter than an OS thread) has its own heap, so there are no global GC pauses,
and the share-nothing, let-it-crash design doesn't take out the Erlang VM.

For anyone stuck on the JVM, look at Azul's free Zing for shorter and fewer GC
pauses and a generally faster JVM.

------
sanbor
What about the point that GC solves the issue of your app crashing, or having
security issues everywhere, because malformed input ends up crashing it or
executing code? On the Debian security mailing list there is a patch for that
kind of situation almost every week, so it looks like it's pretty common.

~~~
AstralStorm
GC does not solve any kind of crash. (In fact, it might cause some extra NPE
or out-of-memory crashes.) Safe pointers do; that is a feature of Java
separate from GC.

~~~
nayuki
A precise garbage collector requires safe pointers. A precise GC allows the
copy/move/compact strategy, whereas a conservative GC only allows
mark-and-sweep: a conservative collector can't be sure a value is really a
pointer, so it can never move objects and rewrite references to them.

------
TheGuyWhoCodes
Should add 2006 to the title...

