Show HN: Minecraft clone in 2500 lines of C - even supports multiplayer online (github.com)
287 points by fogleman on Dec 10, 2013 | 149 comments



Cool. From the same author as the Minecraft clone in Python: https://github.com/fogleman/Minecraft

The python version uses only 894 lines of code: https://github.com/fogleman/Minecraft/blob/master/main.py


Woah, I just began Python programming for sysadmin scripts and didn't even know one could do that with Python! Impressive. Are there any other open-source games made with pyglet?


I don't know about pyglet, but there are a lot of PyGame games to explore.


Not to be a party pooper, but there doesn't seem to be much in the way of error checking, null-pointer checking, etc. For example, library calls like calloc() and fopen() apparently never fail. I guess if your goal is "few lines of code" it's understandable to leave that stuff out.

If your brain has the tendency to automatically go into "code review" mode, I recommend against browsing the source :)


>Apparently library calls like calloc() and fopen() never fail.

No, rather: even if they fail, nobody cares. The game will crash, and that's it.


What should you do if calloc fails? Any solution I can think of requires an architecture that does not belong in a game.


Abort and terminate? Jokes aside, error checking in C is important, since if you don't do it you get "On Error Resume Next" behavior.


What does malloc() return if it fails?

    volatile char *x = 0;
And what happens if you write to that address?

    *x = 69;
The answers to these questions, whilst not defined by the C language, are portable enough.


What about the more common situation where the thing to be allocated is a (potentially large) array? These aren't always accessed sequentially. And the offset might be (indirectly) controlled by untrusted data from the wild.


The difference is in the error message you get. Adding "if (x == NULL) abort()" gives you a backtrace right at the point of failure; with a plain segfault, you have to follow the data flow in your favorite debugger to understand the cause.
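
For illustration only, a minimal sketch of that pattern as a checked-allocation wrapper (the name xcalloc is made up; this is not code from the project):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical checked-allocation wrapper: abort at the allocation
     * site, so the backtrace points at the failing calloc() instead of
     * some later dereference of a NULL pointer. */
    static void *xcalloc(size_t count, size_t size) {
        void *p = calloc(count, size);
        if (p == NULL) {
            fprintf(stderr, "calloc(%zu, %zu) failed\n", count, size);
            abort();
        }
        return p;
    }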


Well, you probably shouldn't just throw up your hands and knowingly invoke UB for starters.


Calloc returns NULL on error - the program dies on access. I am not trying to win an argument here. I am genuinely interested in finding a good strategy to deal with memory allocation errors.


15-odd years ago that was still a lingering point of debate in programming, before computers had lots of memory and disk to spare, so I eventually wrote my own memory manager for C++, since do-it-myself is my default approach to everything. At the time I was using a Metrowerks IDE, and surprisingly, my memory manager outperformed theirs by a fair bit (and allowed for some nifty debugging of other common problems, like memory leaks).

I set it up to support multiple heaps in the application, so that the application could have a heap for, say, network support, and a separate heap for something else. The memory manager used a modified best-fit algorithm and generally didn't have problems with fragmentation. If the application ran out of memory in one heap, it wasn't completely crippled -- a malloc error in the buffers heap wouldn't prevent a dialog from being displayed from the UI heap.

It also had a "reserve" heap that could be made available to the application in particularly dire situations, and since the blocks in each heap grew towards the heap's table, it had a function that could analyze and compress the table to squeeze a few more bytes out of the heap.

There was very little overhead and it worked nicely.


I am genuinely interested in finding a good strategy to deal with memory allocation errors

Try allocating less memory and switch to a slower but less memory intensive approach. Try freeing up some memory from other places where it isn't needed right now and deal with the performance hit of reallocating that memory later.

Of course both of these approaches require a fair amount of architectural changes, but neither is unreasonable.
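
As a rough sketch of the first idea (hypothetical, nothing like this exists in the project), an allocation that falls back to smaller buffers so the caller can switch to a slower, chunked algorithm instead of dying:

    #include <stdlib.h>

    /* Hypothetical: try the ideal buffer size first, then progressively
     * smaller ones, instead of treating the first malloc() failure as
     * fatal. The caller still has to handle total failure. */
    static void *alloc_with_fallback(size_t want, size_t min, size_t *got) {
        for (size_t n = want; n >= min && n > 0; n /= 2) {
            void *p = malloc(n);
            if (p != NULL) {
                *got = n;
                return p;
            }
        }
        *got = 0;
        return NULL;
    }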


No; the reasonable answer is to terminate and either log the failure or restart the process if it is critical.

Use process isolation to handle recovery from OOM situations if automated recovery is required.

A process that has run out of memory is likely in one of two situations: either it has an unfixed memory leak, or it is working with an input that is too large for the memory resources of the system it is running on, in which case it is likely thrashing. In both situations, the best way out is to terminate: in the former, regular restarts can still keep the system as a whole functional; in the latter, hanging on will just make sure the system keeps swapping.

In situations where restarting is not an option, it's better not to have dynamic memory allocation at all. But fault tolerance is generally a better strategy; see e.g. Erlang: systems should be designed so that processes can be restarted.


The architecture cost of this is _huge_; it's only worth doing on reliability-critical systems, where you might consider banning dynamic memory allocation altogether.

I think it's perfectly reasonable for quick tech demo projects to never check anything. fopen() is probably worth a little bit of checking, but not malloc().


You could either try to free memory elsewhere, or die with a nice error message.


Nonsense.

The application running out of memory isn't the responsibility of the application.

If this is a job for anyone, it's a job for the operating system, however malloc() fails infrequently enough that it hasn't yet been worth it.

Or does your minecraft implementation in C pre-allocate memory for its nice error message at startup, resorting to direct I/O operations against video ports if that allocation fails?


I thought it was common ground for C game devs to check how much memory the game needs in total, allocate it at start, and handle its management manually (for performance reasons).

Wouldn't this also allow for creating a simple out-of-memory error on start?
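
A minimal sketch of that approach (hypothetical, not how this project manages memory): grab the whole budget up front, report failure once at startup, and hand out pieces with a trivial bump allocator (alignment handling omitted):

    #include <stdio.h>
    #include <stdlib.h>

    static char  *arena;
    static size_t arena_size, arena_used;

    /* Allocate the game's entire memory budget at startup; this is the
     * only place an out-of-memory error can occur. */
    static int arena_init(size_t size) {
        arena = malloc(size);
        if (arena == NULL) {
            fprintf(stderr, "need %zu bytes of memory to run\n", size);
            return -1;
        }
        arena_size = size;
        arena_used = 0;
        return 0;
    }

    /* Hand out pieces of the pre-allocated block. Exceeding the budget
     * is a bug in the budget, not a runtime OOM condition. */
    static void *arena_alloc(size_t size) {
        if (size > arena_size - arena_used)
            return NULL;
        void *p = arena + arena_used;
        arena_used += size;
        return p;
    }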


Nonsense.

Well, it depends. Writing a Minecraft clone? Yeah, you can probably just give up. That doesn't mean you shouldn't be checking for failure. Check, if failed, pop up a message, log something, and die.

However, not all applications are created equal. If I'm writing a (for example) safety critical piece of code then I may not have the luxury of just exiting (I may not be using dynamic memory allocation at all either, but that's neither here nor there.) It may be more beneficial to my users (or absolutely required) to attempt to recover.

Not all software can just exit on a whim, but obviously this is a small portion of the software that exists.


pop up a message, log something

People forget that even printf calls malloc. Logging probably isn't an option unless you are doing something special.
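
If it matters, one common workaround (a POSIX-only sketch, not from this project) is a pre-built message and write(2), which doesn't allocate:

    #include <unistd.h>

    /* Hypothetical OOM logger: the message is a static buffer and
     * write(2) does not allocate, unlike printf()/fprintf() which may. */
    static void oom_log(void) {
        static const char msg[] = "fatal: out of memory\n";
        if (write(STDERR_FILENO, msg, sizeof(msg) - 1) < 0) {
            /* nothing sensible left to do */
        }
    }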


Yeah, that's a good point. It certainly gets hairy real quick.


Another dependency here, but I use APR pools. You can keep your allocations to a defined scope, which makes freeing easy, and you also get some guarantees inside the pool that it will have some bytes to give you. Less error checking and expensive malloc'ing required inside critical sections then.
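
Roughly, the pool pattern looks like this (a sketch of the APR API from memory, not code from this project):

    #include <apr_general.h>
    #include <apr_pools.h>

    int main(void) {
        apr_pool_t *pool;

        apr_initialize();
        apr_pool_create(&pool, NULL);        /* NULL parent: top-level pool */

        /* Allocations come out of the pool; there is no matching free(). */
        char *buf = apr_palloc(pool, 4096);
        (void)buf;

        apr_pool_destroy(pool);              /* releases everything at once */
        apr_terminate();
        return 0;
    }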


Undefined behavior doesn't mean "your program will crash".


I think in this case, unlike say a network server, failures are not the end of the world. Those particular checks are quite useful for debugging, though: working out where that pointer got shafted isn't all that easy once you've stepped past it, so failing early is a good plan.

Either that, or do what I do: put asserts everywhere and, when it's stable, remove them. I do this with macros, though.


  CMake Error at CMakeLists.txt:20 (find_package):
    By not providing "FindGLEW.cmake" in CMAKE_MODULE_PATH this project has
    asked CMake to find a package configuration file provided by "GLEW", but
    CMake did not find one.

  Could not find a package configuration file provided by "GLEW" with any of
  the following names:

    GLEWConfig.cmake
    glew-config.cmake

  Add the installation prefix of "GLEW" to CMAKE_PREFIX_PATH or set
  "GLEW_DIR" to a directory containing one of the above files.  If "GLEW"
  provides a separate development package or SDK, be sure it has been
  installed.

-- Configuring incomplete, errors occurred!

~

:(

(Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux)


Welcome to the pile of busted that is CMake. If you do it for a living, you'll even get to learn their wonderful pseudo-language, so you can parse through 450 lines of Find*.cmake just to figure out that all it does is check ENV_VAR0 and then fall back to a hardcoded string, even though the documentation/header claims it checks ENV_VAR1.


This.

CMake is another barrier to entry.

What happened to a nice simple makefile? I've been using just standard makefiles for literally 20 years.

The only thing I do is put in a checkdeps target which tests for the presence of dependencies, like autoconf but without the eye poking. I've built everything from small network servers to Motif behemoths doing only that.


You get no (compatible) make on MSVC. More importantly, with nontrivial dependencies on third-party libraries, self-coded makefiles just break down. Cross-platform handling of dependencies is fragile and turns into drudgery. You're liable to do worse than CMake, after wasting a lot of time and energy.

CMake is more an alternative to GNU autoconf and pals, with the difference that it has first class support for things like MSVC and XCode.


I don't build for windows so that's moot for me. Well I do but not C.

I've quite happily built makefiles that work on Linux, FreeBSD, Solaris and HP/UX that aren't fragile or laborious. It just takes appropriate knowledge of the platforms. I've built some stuff on OSX (small SDL project) and it wasn't too obtuse apart from some paths.

CMake, much as all of these things, works up to a point at which point it tries desperately to kill you at every available opportunity.


Is nmake not compatible with make? I thought that it was. I haven't programmed C/C++ on Windows at all though, so I have no experience with this.


Microsoft "nmake" is unrelated to the AT&T-originated Unix "nmake", which you may be thinking of.

(people rarely even attempt nontrivial Makefiles that are portable between BSD and GNU makes though...)


Not really. That, and it is incredibly feature-poor.


Whenever I see a project with CMake it generally puts me off as I know it's liable to require some intervention from me to get it just building.


It drove me crazy when I realized that CMake "achieves" being cross-platform by having you write tons of IF/ELSE conditions for every single platform you decide to support. My (naive?) understanding of cross platform was that it would abstract away most of those platform specific details.


That's what I thought when we tried to move one of our projects to CMake. By the second platform, we'd given up.

Not only that, some of it doesn't actually work properly and it refuses to compile itself on HPUX.


That's pretty much the same for GNU autotools, by the way.

Beyond the common platforms (Linux, FreeBSD, and sometimes Cygwin), autotools break regularly, because they are not tested as well there as on Linux. Macros provided by libraries are even more prone to these issues.

I remember I had huge issues on IRIX to make even simple projects compile, so much that "irixfreeware" had an unsupported/patched automake that fixed some of these issues if you could have the luxury of re-generating the configure. Add to that that autotools broke backward compatibility of the configure.in scripts very often, and this is why debian has all versions of automake starting from 1.4 coexisting in the repository, and my patched automake was next to useless.

CMake is generally less painful if you learn it a bit. For the same level of underlying knowledge, CMake is way easier to "fix" when the build breaks, because it's a single tool, with an API which is quite consistent and doesn't involve a lot of magic. Fixing a configure failure with automake is hell by comparison. Fixing a Make failure with automake is also harder.

That being said, I generally dislike CMake. The syntax is just horrible (did they actually try to copy automake here?). The build system is still convoluted, and the output is basically unmodifiable. It doesn't "help" you much in porting to other operating systems either, really, like you say.

For some people, CMake just provides a convenient way to build the software by exporting a VS project file. I would never recommend to export Xcode projects by contrast, since you can just as well use the normal Makefile approach on OSX.

Writing correct tests for feature discovery in automake is very hard. I cannot count the times some tests fail to properly detect an include/library/path because of a different compiler version and/or because the script writer tried to mess with CFLAGS/LDFLAGS. The same in CMake is saner on that front, at least.

But did I already say I don't like CMake?


> Fixing a configure failure with automake is hell by comparison.

autotools. never again.

I had the fortunate pleasure of fixing a package on CentOS. The program needed to copy/write files in /tmp and back which meant configuring and compiling SELinux policies. This was such a pain in the ass to get working in autotools.

Cmake is a breeze by comparison.


CMake abstracts the generation of platform-specific files like Xcode projects, makefiles, and Visual Studio projects. This does not mean that you have to write tons of ifdefs.


galaktor isn't talking about #ifdefs, galaktor is talking about CMake-language special casing.


I was going to contribute a VS2012 and VS2008 project, but honestly I saw CMake and just kind of groaned.


FindGLEW.cmake is a standard CMake module as of v2.8.10. Previous versions do not have this file and thus cannot build the project. You can install a newer version of CMake, or get the file from the CMake git repo and copy it to /usr/share/cmake-2.8/Modules/


On Ubuntu 13, I had to

  sudo apt-get install cmake libglew-dev xorg-dev
The readme mentions this, too:

  sudo apt-get build-dep glfw


Right, did all that. ;)

The last step yields:

  Reading package lists... Done
  Building dependency tree       
  Reading state information... Done
  E: You must put some 'source' URIs in your sources.list
Debian (#!) over here.

EDIT:

Even after updating my /etc/apt/sources.list, and having the build-dep step run without error, I still get that original message. :(


Try compiling cmake yourself from the latest source at: http://www.cmake.org/cmake/resources/software.html

I was having the same issue on Ubuntu until I uninstalled cmake and compiled from source.


It builds and runs fine on current Debian testing.


Compiles out of the box on Slackware 14.1 64 bit. I had to run cmake in "wizard mode" (cmake -i), I kept all the defaults except changing to lib64.


This looks awesome. The code is very clean and well laid out. I am disappointed that there are literally no comments in the source code though. Hopefully he will add comments, because a lot of what he is doing is not super obvious, at least not to me.


I think everything up until main.c:909 is setup. The while loop at 909 is an event loop of sorts. Between that and line 1024 is the network-receiving code, IIRC. There is some GL code after that to do the rendering, it looks like.


A similar Minecraft "clone" in C++. The author writes blog articles about the development, very interesting read!

http://sea-of-memes.com/LetsCode4/LetsCode4.html


Minetest is a C++ based Minecraft-style simulation with a clean API for modding -- with a nice little ecosystem of mods.

(I am not affiliated with them. I just downloaded it a few months ago and have been tinkering with it.)

http://minetest.net/


Learn to read code instead of comments. It's super straightforward C.


What the code is doing is clear. The why (and especially the `why not') could benefit from some commentary.


I wish more programmers understood this concept. Yes, we can read the code for the what, which is why we don't need you to tell us you are adding 2 to X. Why is a different matter altogether, and that is where a comment can shine light on the scenario.


Thanks for elaborating on my thought. The `why not' is often as important (or even more important!) than the `why'.

By `why not' I mean, briefly explain what other alternative approaches there were, and why you chose the trade-offs you went for, instead of some other set.


Exactly that

Coders: don't give me any BS like x = x + 1 /* Increment x */

Rather, tell me, if it's not obvious, WHY are you doing that


There should rarely be any comments on why x is being incremented unless there is something particularly clever about why. Commentary should be reserved for program blocks where something non-obvious is happening or where something obvious is happening for non-obvious reasons, IMHO.


To increment x


Why do you need to increment?


Because it needs to be bigger.


Then make it times 2. Or add 5.


To me it ruins the surprise; uncommented code is like an uncommented novel: once you've grokked it, the whole thing starts to make sense.

Comments are like Coles Notes.

It's good as professional courtesy, but for personal projects... it seems like a waste of time.


>it ruins the surprise

Huh? In the first place, I do NOT want any surprise at all.


Until you go back to it in a year's time and can't remember any of the reasons for your design decisions.


Besides, I genuinely find myself hilarious so the attempts at pithy commentary tend to make the whole "christ what was I thinking" part of looking at my old work less painful.


A novel is nothing but comments. The code was already executed in the author's brain.


I read C or assembler code like I read a newspaper, but comments are needed, even in my own code, to understand the higher-level design.


I shouldn't have to do a linear scan of the code to find what I'm searching for. Like a book has headings, code should have comments to indicate what's what.


Are you a C programmer?


Maybe he or she should come to your house and explain the code to you one on one.

Install the code and play around with it to see what it does. This person was very generous to make this code public and you are disappointed that it didn't meet your standards.


Some days ago there were comments about Notch being a bad programmer. And when I see posts like this I think that's unfair.

2500 lines of C doesn't mean a thing. I could do it in 1 line if I removed all line breaks.

Did the programmer create the concept, interaction design, graphic design? No, Notch did.

So maybe Notch does write bad code, but I think being a programmer is more than just writing code. In the end Notch got the job done and people are enjoying the result.

But yeah! Great to see another Minecraft clone. It's always nice when people share their knowledge!


That's not the point. Things like this can inspire wannabe game developers.

Reading through a large codebase can be very discouraging for someone who wants to start programming. Things like this show that with a little bit of math and a few libraries you can bootstrap a little universe.


> So maybe Notch does write bad code but I think being a programmer is more than just writing code. In the end Notch got the job done and people are enjoying the result.

But by that logic Farmville is a great game, too, and it also makes the script authors of some TV series objectively great authors.

Is success noble per definition, and sanctifies the means, or does the nobility of means sanctify the success they achieve? For me it's the latter, never the former.

Mind you, I said nothing about Notch being a great or bad programmer/designer, I simply disagree with the logic.


>2500 lines of C doesn't mean a thing. I could do it in 1 line when I removed all line breaks.

Actually it means a lot -- since it has proper line breaks.


Agree. Notch turned his code into hundreds of millions of dollars, so not much attention should be paid to people calling him a bad programmer.


He didn't create the concept but took the ideas from Dwarf Fortress, and he says so :) http://notch.tumblr.com/post/227922045/the-origins-of-minecr...

But still, Minecraft makes a 2012 MacBook Pro overheat like no other game; I get 80°C while alternative Minecraft clients/servers barely make it to 70°C.


If you had bothered to actually read the linked article, he claims to have been... "inspired" by Infiniminer more than any other game.

" But then I found Infiniminer. My god, I realized that that was the game I wanted to do"


I personally found this clone to be compelling for a few reasons:

1) FOSS means I can play with the code, which I can't with Minecraft.

2) I'm trying to re-learn C, and having some relatively well done modern examples of reasonable scope that I can understand is really useful.

3) It runs really fast on my laptop.

It's not going to replace Minecraft at all, but I don't personally feel that's the point.


I saw this livestream and I bet everyone thought he was a great programmer, even though he was writing code similar to what he had already written: http://www.youtube.com/watch?v=yM0MYoEU2-k

Really, who are we comparing him against?


I don't think I can get a good idea as to the quality of his code from this video.


Hmm, but if you found out later that Quake's codebase was ugly, would you think less of John Carmack? I love beautiful code, but it's an ideal to be achieved... It can't be as important as the product itself.


I think an easy visual upgrade to get rid of the jaggies would be to (optionally) turn on MSAA (multi-sample anti-aliasing). It doesn't lose the lo-fi look IMHO but still looks way better. MSAA is real easy to enable: create your window with a multisample buffer:

    /* I'm not entirely sure if this is the right call for GLFW;
     * I normally use SDL. You can usually go up to 16 with the
     * samples. */
    glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);

Enable multisampling before drawing anything:

    glEnable(GL_MULTISAMPLE);

Thanks for the source, I'll have a read through!



that's the first thing he does on github?


greedy notch


It would appear that placing block 15 (the clouds) can cause some problems. If you disconnect while in a player created cloud it causes you to fall through the ground permanently.


not to be a parade-rainer (because this is really cool!), but it's not really a 'clone'. it's really just a cube rendering engine with a handful of textures and a barebones python socket server.

in my very humble opinion, a minecraft clone would have (at least stubbed out):

- ui. if you've never made a game, you'd be amazed at the amount of code ui takes up. even ugly ui. good lord.

- mobs. while you can data-drive a lot of your mobs, something like minecraft still has vast swaths of business logic to drive interactions with everything from creepers to mating wolves to the ender dragon.

- crafting. again, you can get a long way with data-driving your crafting system, but you've still got to build in all the mechanics that control your crafted items behavior among all the other various items in the game.

and none of that counts any of the polish that publishing a game requires. as a guy who's shipped games, it's really (even more so than traditional enterprise software) a case of the last 20% of the work taking 80% of the time allotted to the entire project.

in my experience, that's where "indie devs" fall down. they solve the hard, interesting problems (look! i've got an environment rendering! boom! the combat code works!), but then lose interest and chase other shiny things when it comes to the rote, painful tasks (wait... i've got to data-bind the hud? and handle socket disconnects without crashing? and the camera behavior? the animation has to blend? what about handling a network timeout on the dynamically loaded textures? you want to deliver "news and updates" to the client? from what server? and you want a cms with that? oh, and write the checkbox code for options screen? and qa insists we can apply and skip the first 30 levels so they can test 31 without playing the entire game? but, but, but... look! the environment loads on my machine! what kind of video card do you have?)

anyway.

sorry about the rant.

this really is super-cool. just more render-plumbing than really a clone.


I found a similar project that also fits your definition of "clone". The guy coded it in C++ and created a cross-platform framework with render paths for OpenGL and DirectX. He even compiled it to JavaScript and WebGL via Emscripten. It has some initial multiplayer support, and he also wrote his own GUI framework. Interesting read:

http://sea-of-memes.com/


The minetest project could use a contributor with shader experience. ;)

http://minetest.net/


Shameless plug: my gameplay video of the latest version of Minetest. https://www.youtube.com/watch?v=ss9kAQCAzVc


What is minetest trying to do better than Minecraft?


Better performance, especially on cheaper machines through use of C++

Easy extensibility via Lua scripting, no need for Java disassembly

Free Software


Cool. I really like the idea. I wonder how a Haskell version would compare in performance.

I've wanted a leaner and simpler multiplayer Minecraft for some time.


During an early stage of development, I ran minetest on OpenBSD on a netbook with Atom N450. It ran smoothly with a low-ish view distance.

Last I checked, Minecraft was nonfree and depended on some binaries plus did an OS check, so running it on OpenBSD would've been a nonstarter.

I'm sure performance on N450 would've been miserable too; if it could run at all with just 1G of RAM.


The out-of-box experience is pretty minimalistic. You want to install some plugins to get some wildlife.


This is badass.

Recently I've been discovering how little C it takes to create a decent game with OpenGL. Well-written, straightforward C is a phenomenally powerful language.


Agreed. Minecraft definitely isn't as smooth as this.


Same here, had the same idea and started a small toy engine which I work on from time to time: https://github.com/warfare/prototype


One starts to think: what if Minecraft had been made in C from the start? Would it perform better?


Difficult to say. Writing in C doesn't magically make code run faster, and it's entirely possible to get abysmal performance compared to a better implementation in a high-level language.

That being said, it probably wouldn't be too difficult to make something better in terms of performance than minecraft, because its perceived perf sucks now as far as I can tell from running it on my machine: sometimes rendering lags, I regularly see mobs "suspended" in the air while the terrain renders (and yes, this happens within "old" chunks too, so I can't say that terrain is being generated or something like that), and I'm not even on weak hardware, it's a gaming laptop that runs modern AAA titles smoothly. Not sure what the source of these problems is, maybe my drivers, maybe mojang screwed up. I didn't notice these things happening on earlier versions which I ran on a fairly weak linux machine with a crappy intel graphics chip.


>Difficult to say. Writing in C doesn't magically make code run faster, and it's entirely possible to get abysmal performance compared to a better implementation in a high-level language.

That's the theory, but in practice, the kind of programmer who is competent enough to pull it off in C is also the kind of programmer who will get better performance out of it than one writing it in some not-very-game-suitable higher-level language (say Python or Java).


It is still difficult to say. IMHO C at scale often requires trading performance for architectural simplicity. E.g. relying on function pointers (or switch) for indirection, or passing things by value / deep copy to simplify memory management, are techniques which would often degrade performance rather than boost it, relative to Java. In real big code, you have to build some higher-level abstractions or your project is going to be an unmaintainable spaghetti mess, and it is arguable whether C lets you build them in the most performant manner.


I've never really seen any Java project for high-performance stuff outperform a C/C++-based one.

Visualization tools, video processing tools, multimedia tools, games, etc. -- all the Java examples I've seen come off worse than the C/C++ equivalents.

It's true that in large-scale C you often trade performance for architectural simplicity.

But at a higher level you trade performance for everything; even the most basic operation has some added complexity layer. And you don't have as much say over where you trade performance for architectural simplicity as you do in C/C++, since some parts you just can't do without.


And I've seen a few C++ projects which, after translation to Java, got some big performance improvements.

Java's Collection.sort outperforms C's qsort significantly (and is very close to C++ std::sort for general cases). Java's GCed pointers outperform C++ shared_ptr significantly. Java's dynamic dispatch works in worst case with the same performance as C++'s one, but often works much faster. Java's cost of array bound checking is lower than in C++. Java's immutable strings are safe to pass by pointer everywhere, while C++ ones need often copying. Building anything close in performance e.g. to immutable Scala Vector or parallel collections is downright impossible in C/C++, because of lack of GC and shared_ptr is a performance no-go. There are plenty of places where high level abstractions are better implemented in high level language, than in low-level one.


Same...until the GC kicks in. Then the app locks up for 60+ seconds at a time every few hours, at which point the developers start moving anything memory intensive out of Java to avoid the pauses, and/or reduce their severity (c.f. Cassandra). At that point, is it really Java? You might as well use a scripting language to integrate with all that C...


Yes, Cassandra is Java, even if a few tiny parts were moved off-heap. And no, the off-heap parts aren't programmed in C nor C++, it is still Java all the way down.

Scripting language + C is a valid solution for many problems, IMHO often much better than a jack-of-all-trades-master-of-none solution like C++.


> Then the app locks up for 60+ seconds at a time every few hours

Given that the GC can generally process 1GB of live objects per second, I'd love to see an example where you have been working with 60GB+ heaps.


And this is not taking into account that modern GCs do most or all of the work either without stopping the mutator (CMS, C4) or incrementally (G1), in smaller chunks. Times of STW collectors are long gone.


I bought Minecraft very early, and the more "features" they added, the worse the performance got. Adding shaders and better lighting, as they are doing now, will make my gaming computer slow to a crawl completely. Though it's well known that OpenGL performance in Java is lacking, and it does not seem to have any easy fix.


Generally it's really difficult to write pointer chasing code in pure C.

Why C programs run fast is because a single line of C does so little; therefore the only way to write C effectively is to make your code not do much. Since you don't have GC, you have to think about how to structure your code in such a way as to minimize allocations. When you minimize allocations, your program (and data) tends to reside and stay in cache.

All of this leads to programs that do very little and thus are very fast. If you made a C version of Minecraft that does all the things the Java version does (wastes time scanning objects to see if they are in use, looking up vtables, etc.), then the C version will run just as slowly.

Alternatively if you write C style Java it tends to run about as fast as C.


Very good answer, thanks for that!


How would you write C-style Java?


OP probably means: don't create/use a lot of classes; use a lot of static methods, globals, memory maps, arrays, enums, and primitives; and try to reuse objects. Other than that totally uninformed answer, I dunno.


When you write a line of java think how would I write this in C.

If you'd write it the same in C then keep going, if not rethink your solution.

e.g. don't use the String class, use an array of char; don't use objects; etc.


Of course it would, but that's not the point. By and large it does not matter, and has not mattered, that Minecraft performs poorly. Notch enjoyed making it and users have enjoyed playing it from the beginning, and that, combined with the pace at which it was created, led it to be exceptionally successful.

Would better performance have made Notch/Mojang any more money? Would it have made Minecraft any more fun? Maybe… but only a little.

That small benefit from a concentration on performance could reasonably be believed to have huge costs in productivity, schedule, and might even have put the entire project at risk of failure. The argument for pace vs. performance certainly doesn't fly everywhere and there are some places where I'm very glad a huge amount of time is spent on performance and correctness.

In this case, however, I think Minecraft is much better because it is worse.

That said, this project is super cool :)


For what it's worth when I tried MineTest [1] and Minecraft on my oldish laptop (ThinkPad X61s) MineTest performed significantly better. Determining how much of it is due to the choice of programming language would require a more in-depth performance analysis.

[1] A FOSS Minecraft clone written in C++, http://minetest.net/.


Clones like this have the big advantage of knowing what the end result should look like in advance. That can aid design decisions regardless of language.


Probably, if done well.

I also think it would be a terrible idea, because the modding scene wouldn't exist, and the modding scene is a significant part of minecraft.


Why would the modding scene not exist if it was written in C?


The level of cooperation between Mojang and the mod makers in the very oldest days is very handwavey and before my time.

If both sides cooperate there is no real difference. If the devs don't cooperate with the modders, then it's a little harder with C than with Java.

One obvious issue relates to shipping a thousand jar files, where a modder simply overwrites one; you don't need gcc's linker like you would with C.

Another interesting issue is I think the same jar file ships for all OS. If mojang shipped compiled binaries for each OS, you'd end up in situations where its hard to reach critical mass for mods. So maybe industrialcraft would only exist for mac, railcraft for PC, and buildcraft for linux, which would be too bad as they interoperate reasonably well as univeral jar files on all OS.

There is an area of architectural interest here for an HN startup or whatever. It's true that GCC was a big deal to install back when I set up SLS Linux off 3.5-inch floppy drives in '93, and the "C" set of SLS floppy disks was a considerable stack of disks and time, perhaps half a box of disks (five or so?). In 2014, a game installing its own captive copy of GCC (or whatever) would cost only a couple more seconds of download time.

In 1980, "startup time" on an Atari 2600 conditioned users to expect 2 to 3 second boot and startup times for a game, but in 2014 we've gradually trained users to expect perhaps 10 to 15 minutes total to boot the OS, navigate past console ads, load a game, sit through advertisements/commercials for dev houses, and watch cutscenes. So don't tell me that in 2014 users wouldn't tolerate running a C linker for 30 seconds as the first step of running a game. Link while all the advertisements run, or something like that, and no one would even notice. It would make modding somewhat easier and patches somewhat smaller.

I'm not talking about inventing the idea of shipping libraries; I'm talking about compiling them as part of starting the game (or whatever) up. It's an idea worthy of consideration that scales a lot better in 2014 than at any time in the past.


Yes, dramatically.


Has anyone tested this? How's the performance?


Just tried it and the performance is fast; it loads in seconds and looks really good. My friends are already installing it and we are trying to get the multiplayer working. http://i.imgur.com/Mzjxhpp.png


What desktop environment is that? The icons look like Gnome's but I've never seen a theme like this.


Looks like Gnome with the Phosphene theme.

https://github.com/hdni/Phosphene


This is easily the best theme I've ever seen. Where/how did you find this?


Thanks, I've now found my new theme!


Oh boy, looks gorgeous.

FOSS needs more of this kind.


What window manager is that?


Silky smooth. Easy to build too, so long as you have cmake.


Works pretty well on a croutonized Acer C710 Chromebook under a Debian/sid chroot.


Builds easily on OS X using provided instructions. Loads quickly. Renders at least 30fps, probably more.


Time to plug Minetest. The core is written in C++ and a Lua API is exposed for extensions. https://github.com/minetest/minetest


Just because it has blocks and you can break them does not make it a 'Minecraft clone', just as we don't consider a rich text editor to be a replacement for Microsoft Word.

Let's see a redstone implementation in survival mode :)


This is amazing. Good job!


Kinda getting sick of Minecraft. All those clones and remakes.

Plus I don't understand why Notch is so highly praised. On the one hand he claims to be an indie dev champion, but then he turns around and makes lucrative exclusive deals with Microsoft. If he really cared he wouldn't choose his wallet over his principles. Sorry, but that is how I feel.


What exclusive deals? MC is still available for any OS that will run Java. That doesn't seem very exclusive. Mojang recently said they're going to release on PS4 too.


Weren't those deals done after Notch stopped working on Minecraft?


Don't hate, congratulate


My make also fails due to GLEW. Can anyone provide some premade binaries?



client.c: 145

    struct sockaddr_in address;
    ...
    memset(&address, 0, sizeof(address));

Can someone explain to me why you need to set the memory of sockaddr_in to 0? (Even if address hasn't been touched.)


It's a local variable, and local variables do not get initialized in C.


But the initialization of the variable is right after the memset:

    address.sin_family = AF_INET;
    address.sin_addr.s_addr = ((struct in_addr *)(host->h_addr_list[0]))->s_addr;
    address.sin_port = htons(port);
Why is it useful to set it to 0 before setting those values?


It's good practice in general. There might be other fields that were not explicitly set, or the struct may expand in future versions, or you have union fields/bitfields (e.g. you explicitly set the low order bits of some union field to some value, thinking the high order bits are all zeroes, but because you neglected to zero them out first, they're filled with garbage bits. when you read the union field as a whole you'll get incorrect values).
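
For illustration (a hypothetical sketch, not the project's code), an initializer achieves the same thing, since members you don't list are zeroed, including sin_zero:

    #include <stdint.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Sketch: "= {0}" zeroes every member that isn't explicitly set,
     * so nothing is left holding garbage bits. */
    static void init_address(struct sockaddr_in *out, uint16_t port) {
        struct sockaddr_in address = {0};
        address.sin_family = AF_INET;
        address.sin_port = htons(port);
        /* address.sin_addr would be filled in from gethostbyname(),
         * as in client.c */
        *out = address;
    }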


Thanks for the explanation!


Compiles and works amazingly well on Porteus.


Looks like it's under the MIT License.


Someone should hook up LuaJIT. :)



