
Optifine dev on performance problems in Minecraft 1.8 - 10098
http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-mods/1272953-optifine-hd-a4-fps-boost-hd-textures-aa-af-and?comment=43757
======
comex
> \- All internal methods which used parameters (x, y, z) are now converted to
> one parameter (BlockPos) which is immutable. So if you need to check another
> position around the current one you have to allocate a new BlockPos or
> invent some object cache which will probably be slower. This alone is a huge
> memory waste.

In other words, Java desperately needs value types so that this kind of simple
abstraction needn't cause any overhead.

~~~
jzwinck
My takeaway was more like "Developers need to understand the strengths and
weaknesses of their chosen platform, and be cautious not to degrade
performance during refactoring."

Certainly it would be much easier to just go back to the (x, y, z) convention
than to change the Java language.

~~~
soup10
I agree with this. Sounds like the newer devs are not as
experienced/knowledgeable at optimizing Java as Notch was.

The article also highlights the challenges of using a GC language for high-
performance games. The GC works against you most of the time, and you're
better off statically allocating as much as possible.

~~~
eropple
I dunno, in this particular case, naivete and experience actually look kind of
the same. =) It's in that middle ground where you _know_ idiomatic Java where
you might make this mistake. The naive programmer who isn't comfortable with
OO and the experienced programmer who knows when to decompose OO might make
very similar code here.

~~~
Vendan
Yeah, it's kinda hilarious. Notch has been bashed before because he didn't
write the code to the "Java Community Standards". It gets rewritten more
towards the standards? Boom, performance sucks. So much for those "standards"!

~~~
eropple
Well, no. Those standards exist for a reason. For the overwhelming majority
of JVM projects, they are a wise set of choices.

Having seen his work, and his tendency to reinvent the wheel (his original
iteration of the Minecraft website used flat files for storage, which is
really weird when you're writing Java and H2 is right there), I don't think
he's a terribly good Java programmer. Gets stuff done, but doesn't work Smart,
to use a Spolskyism. He happened to be right in this case. And that's cool,
but throwing away best practices so contemptuously because they aren't
applicable in corner cases is at best foolish.

------
MichaelGG
This kind of thing is where I really hate the big GC systems, the JVM and the
CLR. Many times, a ton of efficiency could be gained with some simple tricks.
For instance, "OO style" code really loves newing up objects, even if they're
simple containers. In .NET, this means a heap allocation and garbage, even if
the little object is immediately picked apart and never used again.

For example, the String.Split function takes an _array_ of chars to split on.
So every time Split is called, normally, a new heap allocation happens for no
good reason. Functions should be able to provide some simple "pure" annotation
and let callers stackalloc. In fact, the CLR already supports this - it's just
super cumbersome to get at. The JVM supposedly can do this in the JIT, but
this post seems to indicate it's not effective.
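
Java's String.split has the same trap: for anything but a trivial
single-character delimiter, it compiles a fresh Pattern (a heap allocation)
on every call. A minimal sketch of hoisting that work out of the hot path:

```java
import java.util.regex.Pattern;

class SplitCache {
    // Compiled once; calling line.split(",\\s*") directly would recompile
    // this regex (allocating a new Pattern) on every call.
    private static final Pattern COMMA = Pattern.compile(",\\s*");

    static String[] fields(String line) {
        return COMMA.split(line);
    }
}
```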

Secondly, it seems there is often a unit of work where you could allocate from
an arena, and then throw it all away together. I know this is more involved,
as something could leak out by accident in code. But perhaps the arena could
be a hint to the GC "if nothing points into this entire arena dump it in one
chunk". Maybe GCs are too fast to benefit from this.

Really, it's a bit of what Rust will accomplish with the borrow checker. I
know for a fact (measured) that adding a bit of such management to .NET would
be a major boon in certain high-allocation scenarios.

Edit: The evangelism around these languages doesn't help. There's a big push
to leave it to the JIT, that the runtime knows best. But in truth, they seem
to still have fairly suboptimal codegen. Even inlining is poorly handled. For
some idiotic reason, they still JIT, and have to make a time/speed tradeoff.
Even if it's a program that you're going to execute repeatedly, the installer
has to go out of its way to precompile. And even then, the precompiler
doesn't do a lot more, and MS warns people it might be worse, because the
runtime knows best. I guess no credible competition leads to not putting tons
of resources into these things.

~~~
voltagex_
Whoa. I'm assuming I'm not doing enough String.Splits to worry about
performance, but what's my alternative? Looping through the string myself?

~~~
MichaelGG
That's just one example. There are plenty of APIs that force garbage to be
created for no good reason. For String.Split, what I've done when it was
critical is to statically allocate an array for each type of split.

For other APIs, I'd create a state object for each request or piece of work
that contains assorted buffers and other temp objects, then pass it around as
needed. Ugly, but at high processing rates, every allocation counts.
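
A hedged sketch of that state-object pattern (all names here are made up):
allocate the buffers once per worker, then reset them between work items
instead of reallocating.

```java
// Hypothetical per-worker state: buffers are allocated once and reused,
// so steady-state processing does no heap allocation for scratch space.
class WorkState {
    final byte[] ioBuffer = new byte[8192];
    final StringBuilder scratch = new StringBuilder(256);

    void reset() {
        scratch.setLength(0); // drop contents, keep capacity
    }
}

class Worker {
    private final WorkState state = new WorkState(); // one per worker/thread

    String label(int x, int y, int z) {
        state.reset();
        state.scratch.append(x).append(',').append(y).append(',').append(z);
        return state.scratch.toString(); // the only allocation left per item
    }
}
```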

------
gchpaco
From OP: "There are huge amounts of objects which are allocated and discarded
milliseconds later."

Any mature generational GC should handle this with literally zero overhead.
Assuming it is not doing a nursery gc every millisecond, all those objects
should die in the nursery almost immediately. It has been well understood for
at least twenty years how to do that in O(live objects), and the JVM has for
all its many faults a very good garbage collector. So I am quite skeptical.
This is also at odds with empirical evidence which is that going from 1.7 to
1.8 with the same world improves framerate.

Now the JVM GC has about a million tuning parameters, and most hardcore
Minecrafters have a witches' brew of tuning that they run with. It's far from
impossible that those tuning parameters are totally inappropriate with 1.8.
But the GC should handle this fine.

~~~
collinvandyck76
So, the nursery is a certain size. If you are filling it up continually,
you're going to be spending a lot of time managing it. Not only that, but if
your rate of allocation is high enough, you might inadvertently promote
objects that have not yet gone out of scope to the tenured section, which in
a less demanding allocation scenario could have been collected from the
nursery.

~~~
x0x0
nursery gc should be very fast, particularly if most objects die; this issue
sounds like it needs more investigation

also, it sounds like the devs should be doing some testing on typical user
machines, instead of higher powered dev boxes

~~~
Skinney
Nursery gc on my machine (Macbook Air 2013) is 10ms.

~~~
gchpaco
Nursery gc is proportional to the size of the live object set (here alleged to
be 0) plus a constant overhead. It's not a simple thing to benchmark.

~~~
Skinney
Live memory was 25 MB in my case - sorry, kinda important number to mention :P

------
Skinney
This post seems to be based on certain erroneous assumptions.

First, he seems to believe that "size of allocated memory" == "longer
collection time". This is not true, especially when, as he says, most of the
allocated memory is short-lived. A GC only scans live memory and considers
whatever hasn't been scanned garbage. If most of your memory isn't live, as
seems to be the case here, collection time should be relatively consistent,
regardless of the amount allocated or the memory available to the JVM.
Increasing the memory available should actually increase performance, because
the JVM can run collections less often.

It seems to me that what the devs should do (instead of waiting for a proper
struct implementation, like .NET has, on the JVM, which would avoid these
problems) is to make BlockPos mutable and store unused objects in a
cache/buffer. This might be (barely) slower than just allocating the memory,
but used correctly it will trigger fewer collections as you allocate way less.
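
A minimal sketch of what that mutable-BlockPos approach could look like (the
class shape and method names are assumptions; only the BlockPos name comes
from the post):

```java
// Assumed sketch: a mutable position that is reused across lookups instead
// of allocating a fresh immutable BlockPos for every neighbor check.
class MutableBlockPos {
    int x, y, z;

    MutableBlockPos set(int x, int y, int z) {
        this.x = x; this.y = y; this.z = z;
        return this;
    }

    // Re-point this instance at a neighboring block: no allocation.
    MutableBlockPos move(int dx, int dy, int dz) {
        return set(x + dx, y + dy, z + dz);
    }
}
```

A caller would keep one instance per thread (or per tick) and walk all six
neighbors with repeated move() calls, allocating nothing.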

~~~
personZ
_instead of waiting for a proper struct implementation, like .NET has, on the
JVM, which would avoid these problems_

[http://blogs.msdn.com/b/ericlippert/archive/2010/09/30/the-t...](http://blogs.msdn.com/b/ericlippert/archive/2010/09/30/the-
truth-about-value-types.aspx)

And it's worth noting that escape analysis, which the JVM currently has and
.NET does not, is the best of both worlds -- the developer doesn't need to
decide or make such macro optimizations, but the runtime can choose, based
upon the lifetime of the object, whether it should be a stack or heap
allocation.

I once thought the whole value type thing was a superior choice of the .NET
team, but now it seems that the everything is an object, just make the VM
smarter, was the better choice.

~~~
Skinney
The problem is that escape analysis can't always do a good enough job; it
doesn't catch everything. Lead JVM developers have stated this themselves,
which is why Oracle is exploring how to implement value types on the JVM.
Naturally, value types don't magically solve things either (I'm well aware of
the shortcomings described in that article), but for this particular use
case, they seem like the perfect solution.

~~~
personZ
_Lead JVM developers have stated this themselves_

Where? Who?

Yes, of course they explore options and alternatives, and no solution is a
panacea, but this debate is well over a decade and a half old: This is not new
ground. And while we're in a world where Microsoft is saying "give it up with
the whole value / reference type thing" (the value/reference type is an
implementation detail), Java went ahead and optimized the existing platform to
a pretty good degree, in many (but not all) cases, leaving it to a pretty good
GC to pick up the pieces.

Just to be clear, I'm no Java booster. I've spent a good chunk of my career
heavily involved with .NET. But for all of the talk about the superiority of
the .NET platform and its illustrious value types, it's Java that quite
soundly takes the performance crown when they go head to head.

~~~
Locke1689
_And while we're in a world where Microsoft is saying "give it up with the
whole value / reference type thing" (the value/reference type is an
implementation detail)_

No. First, they are semantically different, but second -- it's an
implementation detail that provides better performance when used correctly.
So, exactly what the parent was describing.

This is not to say that the CLR couldn't also optimize better, but personally
I'm just voting for Project N.

~~~
personZ
It's an implementation detail that _may_ provide better performance, but often
provides no measurable difference at all (yes there is a profound semantic
difference, and such is the foundation for the mechanism in the first place,
but that hasn't been the context of this discussion). The parent was
contrasting it against a JVM that does both better garbage collection, and
actual escape analysis, so the comparison borders on absurd, and it is a
statement made with zero context.

~~~
Locke1689
I can tell you that in Roslyn it is a very important component of our
optimization strategy. System.Collections.Immutable.ImmutableArray is an
obvious example.

As an extremely high performance managed application that competes directly
with native code, it seems to be quite a good proxy for similar applications,
including Minecraft.

------
cromwellian
Don't worry, Microsoft will fix it when it's rewritten in C#. :) I kid, I kid
<I hope>.

Actually, I wish Minecraft had been open-source like Notch was originally
talking about. The community could fix most of these issues, in lieu of
introducing new features.

~~~
MichaelGG
According to the post, the problem is passing objects as parameters instead of
individual values as separate arguments. Each time they do this, that
allocates an object. C# has value types, unlike Java. So the BlockPos
structure wouldn't need an allocation, just some stack space.

~~~
bhouston
Why the heck did they decide to pass around objects instead of parameters?
This sounds weird, sort of like an internal framework that has gone too far.

~~~
eropple
It sounds like something coming from a really weird misunderstanding of Java
and its perf characteristics. For most Java projects, immutable, semantically-
useful objects make a lot of sense (indeed, Scala does this all over the
place). For a game, not so much; you're going to end up object-thrashing all
day and unlike on the web you have very hard latency requirements that make
this extremely undesirable.

~~~
e12e
I've not really looked much at how java handles memory, but I thought the
basic pattern for all Objects was pass-by-reference: the method gets a pointer
to the object. Is it a means of guaranteeing immutable access that such a
pointer becomes a pointer to a (deep?) copy of the object allocated on the
heap?

~~~
eropple
They're pass-by-ref, but if you have to create an object _every time you want
to refer to a block, anywhere_, you're gonna have a bad time.

With mutable objects, like libgdx does for its vector classes and similar, you
can do object pooling to reuse them and avoid the GC thrash.
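
libgdx's pooling (`com.badlogic.gdx.utils.Pool`) boils down to something like
this plain-Java sketch (simplified; the real class also tracks a maximum size
and a `Poolable` reset hook):

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Simplified pooling sketch: obtain() reuses a previously freed instance
// when one is available, so hot loops stop generating garbage.
class Pool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    Pool(Supplier<T> factory) { this.factory = factory; }

    T obtain() {
        T t = free.poll();
        return t != null ? t : factory.get(); // allocate only when empty
    }

    void free(T obj) {
        free.push(obj); // caller promises not to touch obj afterwards
    }
}
```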

~~~
e12e
I thought the general idea with a voxel/block world was that "world state"
could/should be a 3d array/"matrix" of blocks (possibly with empty blocks
collapsed to null references)? So if you're using objects, that's what you'd
put in the array. I can see how an array of 8 bit ints (basically an enum of
block type) would be a lot more compact and/or that some kind of packed
representation would be (on the face of it) vastly more compact - but if
that's what they're doing, wouldn't it make much more sense to stuff the
abstraction into static methods (functions) rather than marshal (full)
objects left and right?

~~~
eropple
Yeah, now how do you define a location in that voxel space?

In Java, you can pass XYZ and parameters or you can create an object (in this
case, BlockPos). Every object is heap-allocated, and the heavy use of
transitory objects (like, say, every frame) creates a lot of garbage that'll
need to be cleaned up.

~~~
e12e
Right. I was just thinking that the most straightforward, naive, approach in
java was to have an array of Objects (so you'd pass around those references),
and if you were going to pack everything into a more "primitive" structure
(like an array of int8s) it wouldn't make much sense to marshal those into
objects, but rather go for a more traditional data structure approach and
stuff the complexity into some static methods... or failing that at least use
some local, long-lived "proxy"-objects with raw attributes:

    
    
        MyVoxel here = new MyVoxel(x, y, z, normal /* ... */);
        MyVoxel there = new MyVoxel(); // "null" voxel
    
        for (/* some x, y, z */) {
            here.xyz(x, y, z);
            for (/* some offset dx, dy, dz */) {
                there.xyz(x + dx, y + dy, z + dz);
                here.distance(there); // or whatever
            }
        }
    

Which I suppose is essentially caching objects.

The idea of having lots of "new" in (inner) loops... just sounds really weird.
Even outside of sub-ms-game-land. I'm guessing even assigning (as opposed to
just using) primitive values to an object probably can have some nasty effects
wrt. needless copying -- but it should probably be a lot more predictable than
a GC hit -- possible to optimize, and might not trash the CPU cache as badly
as one might fear.

I suppose it's quite easy to end up in a mess when trying to do a
straightforward rewrite from a static/non-java-oo style to a more java-oo-
style -- without some careful thought as to what's actually going on...

[edit: looking at teamonkey's reply further down... I think I've seen some
code in this style, creating lots of short-lived objects. Is it considered
idiomatic java, or an anti-pattern? (or both? ;)]

~~~
eropple
When I write Java games, I use libgdx and its (trivial) object pooling, so
yeah, I find this really really dumb. I can't give you a satisfactory answer,
because I don't know. :-)

In non-game contexts, I honestly only start caring about GC behavior when
things go bad. If I'm writing Scala (as I generally do on the JVM), I'm
creating a lot of objects as intermediate steps here and there. But for the
most part, my computer is fast enough and the JVM's escape analysis smart
enough that it isn't a huge deal.

------
PavlovsCat
Slightly off-topic but not really: to achieve a stable 60 fps in Javascript,
you pretty much have to avoid creating temporary objects as much as you can.
For example, I found this talk both very scary and interesting, and would even
say anyone who codes Javascript (or maybe even any language that has a GC)
should watch it or a similar one:

[http://www.youtube.com/watch?v=Op52liUjvSk](http://www.youtube.com/watch?v=Op52liUjvSk)
("The Joys of Static Memory Javascript", by Colt McAnlis)

This is also handy:
[http://stackoverflow.com/a/18411275](http://stackoverflow.com/a/18411275)

That sure was news to me, and I see it never addressed outside of games. Which
is understandably in a way, but I really think there should be awareness, so
that it can be an actual choice to let the GC do it, and not just the only way
we know how.

------
needusername
So much bad advice, in general run Flight Recorder or Censum.

> With a default memory limit of 1GB (1000 MB) and working memory of about 200
> MB Java has to make a full garbage collection every 4 seconds otherwise it
> would run out of memory.

Only if all of the 200 MB make it to old gen.

> Why not use incremental garbage collection?

Nobody should be using -XX:+CMSIncrementalMode; it exists for platforms with
only one hardware thread.

> the real memory usage is almost double the memory visible in Java

Huh? Yes, Java uses more memory than the heap, and a copying collector means
half the memory of the heap is unused, but I have trouble understanding this.

I was in a JavaOne presentation in 2013 when the presenter mentioned that
Minecraft runs System.gc() in a thread every 500 ms, and decided I'd never
touch this.

------
10098
This is why I'm not so quick to dismiss manual memory management. The way it's
done in C++ has always made more sense to me in terms of preventing leaks
while introducing minimal overhead. It's true that Java can outperform
standard allocators due to issues like fragmentation; however, for a lot of
use cases, especially in games, it's better to write your own (simpler and
more efficient) allocators anyway (i.e. a stack allocator or object pool).

~~~
gear54rus
Frankly, I never understood why you would want to write performance-critical
code and leave memory management to someone else. It's like it's _bound_ to
be slower.

Yet Java becomes more and more popular everywhere, sadly. Used for anything
and far from best for anything.

~~~
Skinney
Java becomes more and more popular because the cases where you would benefit
greatly from manual memory management are getting fewer. It's also easier to
avoid mistakes in Java than in C/C++, in my personal opinion. Rust could
change this, though.

------
nraynaud
I'm extremely skeptical of this explanation, because new objects don't put
pressure on the GC if they don't survive because of the scavenger on the first
generation.

edit: they only put pressure if you edit an older object and create a
backward pointer from an older generation to a younger one.

------
SquareWheel
It's an interesting post. In my experience 1.8 loads extremely quickly (chunk
loading), and my framerate is much higher than 1.7. I haven't noticed memory
usage being any different but I haven't watched closely.

Possibly it's better for mid/high-end systems, but harder on low-end?

------
readerrrr
So they are writing code similar to this:

    
    
      Object[] array = /* ... */;
      for (int i = 0; i < large; i++) {
          Object n = new Object();  // heap allocation
          if (n.equals(array[i])) { // do something with n
              // do stuff
          }
          // n is not needed anymore
      }

~~~
Afforess
Not quite. The post describes the usage of a "BlockPos" vector that describes
the x,y,z (and presumably rotation/yaw/pitch) for a world position. I think
previously they were using primitive integers and floats, and have migrated to
using an immutable object instead. Because of the amount of coordinate lookups
each engine tick, this generates a vast number of objects.

~~~
Groxx
It also makes this claim, which seems suspect[1]:

> _So if you need to check another position around the current one you have to
> allocate a new BlockPos or invent some object cache which will probably be
> slower. This alone is a huge memory waste._

[1] or expose .equals(x,y,z) (assuming BlockPos is just an object wrapper
around [x,y,z]). Granted, it kinda defeats the purpose of a completely-
encapsulated object, but ya gotta do what ya gotta do when it comes to
performance.
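
Assuming BlockPos is just an immutable wrapper around three ints (an
assumption; the real class isn't shown in the post), the overload described
above would look something like this:

```java
// Assumed shape of BlockPos, for illustration only. The extra overload lets
// hot paths compare against raw coordinates without allocating a BlockPos.
final class BlockPos {
    final int x, y, z;

    BlockPos(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }

    boolean equals(int x, int y, int z) { // allocation-free comparison
        return this.x == x && this.y == y && this.z == z;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof BlockPos)) return false;
        BlockPos p = (BlockPos) o;
        return equals(p.x, p.y, p.z);
    }

    @Override public int hashCode() {
        return (x * 31 + y) * 31 + z;
    }
}
```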

~~~
thedufer
> So if you need to check another position around the current one

Key word "around". If a coordinate object is immutable, then doing operations
on a neighboring one requires another allocation for the new coordinate
object.

Unless you're suggesting having a coordinate version of every function that
operates on a BlockPos, in which case there's no point at all in BlockPos
existing.

~~~
Groxx
That's basically what I'm suggesting. Use it only on hot paths you've
profiled, maintain purity elsewhere. Java pretty much _requires_ you to make
concessions if you're trying to get maximum performance. Similarly: it has a
garbage collector, but there are plenty of places where an object pool will
out-perform it. Deliberately breaking / re-implementing part of the language
for performance reasons.

------
gizmo686
I have minimal experience with compilers, but couldn't a lot of these objects
be dealt with using some sort of 'compile-time memory management'?
Essentially, have the compiler notice a point in the code after which a given
object provably has no references to it, and insert an instruction in the
bytecode to immediately deallocate that object. If the compiler can prove
that an object will be deallocated this way, it can also mark it so that the
GC knows to ignore it.

Is this type of system already implemented in Java (or similar languages)?
If not, what are the drawbacks of this approach?

~~~
wittrock
This is called reference counting, and it's nigh-impossible to do quickly in a
nondeterministic system.

[https://en.wikipedia.org/wiki/Reference_counting](https://en.wikipedia.org/wiki/Reference_counting)

See this for why it's slow compared to other schemes:
[http://www.cecs.uci.edu/~papers/ipdps06/pdfs/1568974892-IPDP...](http://www.cecs.uci.edu/~papers/ipdps06/pdfs/1568974892-IPDPS-
paper-1.pdf)

~~~
adrusi
Reference counting is a runtime operation, an alternative to mark-and-sweep as
a GC algorithm that allows for deterministic deallocation at the cost of
memory overhead. While the net time overhead will probably always be greater
than mark-and-sweep's (although there are optimizations which can make them
quite similar), mark-and-sweep has the disadvantage of causing infrequent,
long pauses rather than a predictable uniform slowness. Reference counting is
used in Python (which also has mark-and-sweep to detect reference cycles),
C++ in the form of shared_ptr<T>, Rust as Rc<T>, and Objective-C/Swift (and
certainly many more).

Compile time memory management refers to things like escape and ownership
analysis. Escape analysis finds locals that never escape the scope they're
allocated in, directly or indirectly, and allocates them on the stack rather
than the heap. It's used in openJDK and probably other major JVMs, and
required by the Go standard. Ownership analysis verifies that there only ever
exists one live reference to an object in memory, so that a deallocation can
be statically inserted whenever it leaves scope, so it doesn't become garbage.
I have only seen it used in languages where there are explicit ownership
annotations, such as Rust and C++.

To be sure, these are not the only forms of compile time memory management,
but they're probably the most versatile.
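
A hedged illustration of the escape-analysis case: HotSpot can only elide an
allocation when the object provably never leaves the compiled scope.

```java
class Vec3 {
    final int x, y, z;
    Vec3(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
}

class Escape {
    // v never escapes this method, so HotSpot may scalar-replace it: after
    // JIT compilation there is no heap allocation here at all.
    static int lengthSq(int x, int y, int z) {
        Vec3 v = new Vec3(x, y, z);
        return v.x * v.x + v.y * v.y + v.z * v.z;
    }

    // The returned object escapes to the caller, so it must be heap-allocated
    // (unless the caller is inlined and the analysis succeeds there instead).
    static Vec3 neighbor(Vec3 v) {
        return new Vec3(v.x + 1, v.y, v.z);
    }
}
```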

------
collinvandyck76
The fact that BlockPos is immutable is unfortunate; otherwise one could just
pop an instance in a ThreadLocal and mutate it whenever needed. Better yet,
just make it an interface.

I wonder if it's common to run Minecraft in a profiler or something like that
regularly over at Mojang. I used to do that a lot with this one app I used to
work on and would routinely be surprised at what was actually going on under
the hood.

------
skybrian
A summary of a Minecraft developer's responses on Reddit:

[http://www.minecraftforum.net/forums/mapping-and-
modding/min...](http://www.minecraftforum.net/forums/mapping-and-
modding/minecraft-mods/1272953-optifine-hd-a4-fps-boost-hd-textures-aa-af-
and?comment=43777)

~~~
DanBC
> Well then people are going to be even more p____d when we finally cut
> support for people running hardware that doesn't support anything better
> than GL 1.x. At some point you have to make the hard decision to stop
> supporting hardware that is anemic by the standards of 3-4 years ago, let
> alone today, and quite frankly I don't feel that Minecraft should have to
> bear a burden of technical debt and a lack of forward progress in the code
> base just because a handful of people are unable or unwilling to upgrade
> their machines.

That's a sucky attitude towards early adopters. Especially since most
Minecraft players are young and don't get to control which machines they use.

Especially when you combine it with Intel bugs in graphics drivers that cause
Minecraft to crash on opening - this can happen on reasonably powerful
laptops.

------
chaostheory
> This is the best part - over 90% of the memory allocation is not needed at
> all. Most of the memory is probably allocated to make the life of the
> developers easier.

Makes sense to me.

"Silicon is cheap while carbon is expensive."

i.e. Machine time is cheaper than developer time (to a point).

~~~
ANTSANTS
It could make sense if you are developing a server-side application where you
own all the machines that it will run on and can take a cost-benefit analysis
of dev time vs. server cost.

It makes absolutely no sense when you are selling a program to users that will
use it on a wide variety of hardware, from high-end to decade old. That's a
great example of a selfish externality: asking _millions_ of players all
around the world to pay for expensive new machines (even top-of-the-line ones are
going to have problems managing 200 megabytes of garbage per second without
dropping frames) just to make your job easier.

~~~
chaostheory
> That's a great example of a selfish externality

It's only selfish if they had the resources from the beginning, which they
didn't. Players also constantly want a slew of new features. Maybe the
developers just couldn't juggle both performance improvements and new features
simultaneously. Maybe not enough people cared about the lag vs new features?
It's not like Minecraft is a competitive FPS where lag matters a lot more.

> asking millions of players all around the world pay for expensive new
> machines

First, Minecraft still doesn't have heavy hardware requirements.

Second, the cost of computing has been trending downward for decades.

[http://www.freeby50.com/2009/04/cost-of-computers-over-
time....](http://www.freeby50.com/2009/04/cost-of-computers-over-time.html)

> even top of the line ones are going to have problems managing 200 megabytes
> of garbage per second without dropping frames) just to make your job easier.

Programming Java isn't easy especially when you have an existing code base.
It's even messier when you give precedence to performance.

If this is so easy, why not just make a better clone to fix the problem?
Minecraft isn't the only sandbox game anymore. If people are that unhappy
there are plenty of alternatives today.

~~~
ANTSANTS
>It's only selfish if they had the resources from the beginning which they
didn't.

As the post mentions, early versions of Minecraft allocated significantly less
garbage. You know, the versions programmed by one guy, before they had
millions and millions of dollars and a whole team working on it.

Yes, Minecraft has changed quite a bit since 1.3, when Notch abdicated the
development throne, but really not _that much_ considering it's been over 2
years and that Minecraft is a generation's Mario. Most of the "slew of new
features" came from mods (still no official modding API, either!)

>Maybe not enough people cared about the lag vs new features? It's not like
Minecraft is a competitive FPS where lag matters a lot more.

You don't understand. The "lag" you described is a constant delay between
input and response. An example would be vsync, which in most PC
implementations trades an extra frame or two of latency for smooth, tearless
video. The problem I was describing is jitter, which is variable delay. One
frame may take 10ms, the next may take 100ms+, due to the garbage collector.
This is extremely noticeable and annoying when playing Minecraft, because the
game locks up temporarily every few seconds and makes even simple actions like
walking in a straight line extremely choppy and disorienting. Aside from being
unpleasant and breaking "immersion," it's even gotten me killed when a GC
cycle kicks in just before a jump and the game misses the jump input. It
absolutely affects the experience.

>First the Minecraft still doesn't have heavy hardware requirements.

It kinda does, if you want smooth performance. You can play it on a toaster,
sure, if you ratchet the draw distance down and are ok with sub-30 FPS and
lots of GC-induced jitter.

My point was simply that if you make your game perform worse than it could,
you're pushing the cost onto the players, and while this is a normal part of
the march of computing, at some point it stops being acceptable and you're
hurting the experience for the average player. IMHO, considering how
ridiculously profitable Minecraft is and how little the fundamental gameplay
has changed over the years, it stopped being acceptable a long time ago.

>Second the cost of computing has been trending downward for decades.

That's nice, but little kids from all backgrounds love Minecraft. You can't
just ask an elementary school kid in the projects playing Minecraft at his
community center on a donated computer past its prime to upgrade.

>Programming Java isn't easy especially when you have an existing code base.
It's even messier when you give precedence to performance.

Again, the code performed reasonably well before. The new devs "cleaned it up"
without measuring the performance impact of their style changes, and now
everyone is paying for it.

>If this is so easy, why not just make a better clone to fix the problem?

Ah, the old "you can't complain because I don't see you doing any better"
schtick. Take it easy, I don't think it's so controversial to be perplexed
that one of the most successful video game franchises of all time is managed
so poorly; for example, you'd think hiring the Optifine guy would be a no
brainer.

~~~
chaostheory
How do you know how difficult it is to fix the technical debt of the code
base? Have you seen the source?

> It absolutely affects the experience.

1\. See above.

2\. MS will probably fix it eventually once they fully transition

> Again, the code performed reasonably well before. The new devs "cleaned it
> up" without measuring the performance impact of their style changes, and now
> everyone is paying for it.

See my first line.

> Ah, the old "you can't complain because I don't see you doing any better"
> schtick.

It's a lot easier to criticize than it is to build something.

------
wtetzner
This seems like a situation that Scala's value classes were designed for. You
could have something like a BlockPos that's represented by a long. Accessing
x, y, or z would use bit arithmetic to extract the values from the long.
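
The bit arithmetic is the same with or without Scala's value-class wrapper; a
plain-Java sketch, assuming a 26/12/26-bit split (an illustrative choice,
roughly matching Minecraft's world bounds):

```java
// Sketch of a long-packed position: x in the top 26 bits, y in the middle
// 12, z in the low 26. The field widths are an assumption for illustration.
class PackedPos {
    static long pack(int x, int y, int z) {
        return ((long) (x & 0x3FFFFFF) << 38)
             | ((long) (y & 0xFFF) << 26)
             |  (long) (z & 0x3FFFFFF);
    }

    // Shift left, then arithmetic-shift right, to sign-extend each field.
    static int x(long p) { return (int) (p >> 38); }
    static int y(long p) { return (int) (p << 26 >> 52); }
    static int z(long p) { return (int) (p << 38 >> 38); }
}
```

In Scala, wrapping the long in a value class would restore a typed API
without reintroducing the per-position allocation.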

------
ilaksh
In other words, every kid who knows how to code thinks they can make Minecraft
faster and better than the actual Minecraft developers. Pretty old story.

1.8 performs a lot better than the old version.

~~~
ANTSANTS
The person who wrote this post is the developer of OptiFine, a Minecraft mod
that significantly improves and stabilizes performance by optimizing the game
at many levels. In order to make it, they absolutely needed to understand the
game engine at a level comparable to the Mojang developers.

