The python version uses only 894 lines of code: https://github.com/fogleman/Minecraft/blob/master/main.py
If your brain has the tendency to automatically go into "code review" mode, I recommend against browsing the source :)
No, rather: even if they fail, nobody cares. The game will crash, and that's it.
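/* crash on purpose: write through a null pointer */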
volatile char *x = 0;
*x = 69;
I set it up to support multiple heaps in the application, so that the application could have a heap for, say, network support, and a separate heap for something else. The memory manager used a modified best-fit algorithm and generally didn't have problems with fragmentation. If the application ran out of memory in one heap, it wasn't completely crippled -- a malloc error in the buffers heap wouldn't prevent a dialog from being displayed from the UI heap.
It also had a "reserve" heap that could be made available to the application in particularly dire situations, and since the blocks in each heap grew towards the heap's table, it had a function that could analyze and compress the table to squeeze a few more bytes out of the heap.
There was very little overhead and it worked nicely.
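A toy sketch of that shape, for the curious; this is not the commenter's manager, just best-fit allocation within fixed per-purpose regions (coalescing, alignment handling and the reserve heap omitted for brevity):

#include <stddef.h>
#include <stdio.h>

typedef struct block {
    size_t size;              /* payload bytes following this header */
    int free;
    struct block *next;
} block_t;

typedef struct { block_t *head; } heap_t;

static void heap_init(heap_t *h, void *mem, size_t size) {
    h->head = (block_t *)mem;
    h->head->size = size - sizeof(block_t);
    h->head->free = 1;
    h->head->next = NULL;
}

static void *heap_alloc(heap_t *h, size_t size) {
    block_t *best = NULL;
    for (block_t *b = h->head; b; b = b->next)
        if (b->free && b->size >= size && (!best || b->size < best->size))
            best = b;         /* best fit: smallest free block that fits */
    if (!best) return NULL;   /* this heap is dry; the others still work */
    if (best->size >= size + sizeof(block_t) + 16) {  /* split the rest off */
        block_t *rest = (block_t *)((char *)(best + 1) + size);
        rest->size = best->size - size - sizeof(block_t);
        rest->free = 1;
        rest->next = best->next;
        best->size = size;
        best->next = rest;
    }
    best->free = 0;
    return best + 1;
}

int main(void) {
    static char net_mem[4096], ui_mem[4096];
    heap_t net, ui;
    heap_init(&net, net_mem, sizeof(net_mem));
    heap_init(&ui, ui_mem, sizeof(ui_mem));
    heap_alloc(&net, 4000);                    /* exhaust the network heap */
    if (!heap_alloc(&net, 64) && heap_alloc(&ui, 64))
        puts("network heap is full, but the UI heap still works");
    return 0;
}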
Try allocating less memory and switching to a slower but less memory-intensive approach. Try freeing up some memory from other places where it isn't needed right now, and deal with the performance hit of reallocating that memory later.
Of course both of these approaches require a fair amount of architectural changes, but neither is unreasonable.
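A sketch of the second strategy; cache_purge here is a hypothetical hook for "memory that isn't needed right now":

#include <stdlib.h>

/* hypothetical: releases memory held elsewhere (caches, pools); rebuilding
 * it later costs performance, not correctness */
static void cache_purge(void) { /* e.g. drop rebuildable caches here */ }

/* try the full request; on failure purge and retry with smaller sizes,
 * letting the caller fall back to a less memory-hungry algorithm */
static void *alloc_with_fallback(size_t want, size_t min, size_t *got) {
    for (size_t n = want; n >= min && n > 0; n /= 2) {
        void *p = malloc(n);
        if (p) { *got = n; return p; }
        cache_purge();
    }
    *got = 0;
    return NULL;
}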
Use process isolation to handle recovery from OOM situations if automated recovery is required.
A process that has run out of memory is likely in one of two situations: either it has an unfixed memory leak, or it is working with an input that is too large for the memory resources of the system it is running on, in which case it is likely thrashing. In both situations, the best way out is to terminate; in the former, regular restarts can still keep the system as a whole functional, and in the latter, hanging on will just make sure the system keeps swapping.
In situations where it is not possible to restart, it's better not to have dynamic memory allocation at all. But fault tolerance is generally a better strategy; see e.g. Erlang. Systems should be designed so that processes can be restarted.
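A minimal supervisor along those lines (POSIX; do_work stands in for whatever might run out of memory):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void do_work(void) { /* stand-in for the memory-hungry task */ }

int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) return 1;
        if (pid == 0) {               /* child: isolated address space */
            do_work();
            _exit(0);
        }
        int status;                   /* parent: supervise and restart */
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            break;                    /* clean exit: done */
        fprintf(stderr, "worker died (OOM-killed or crashed); restarting\n");
        sleep(1);                     /* avoid a tight restart loop */
    }
    return 0;
}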
I think it's perfectly reasonable for quick tech demo projects to never check anything. fopen() is probably worth a little bit of checking, but not malloc().
The application running out of memory isn't the responsibility of the application.
If this is a job for anyone, it's a job for the operating system; in practice, though, malloc() fails infrequently enough that it hasn't yet been worth it.
Or does your minecraft implementation in C pre-allocate memory for its nice error message at startup, resorting to direct I/O operations against video ports if that allocation fails?
Wouldn't this also allow for creating a simple out-of-memory error on start?
Well, it depends. Writing a Minecraft clone? Yeah, you can probably just give up. That doesn't mean you shouldn't be checking for failure. Check, if failed, pop up a message, log something, and die.
However, not all applications are created equal. If I'm writing a (for example) safety critical piece of code then I may not have the luxury of just exiting (I may not be using dynamic memory allocation at all either, but that's neither here nor there.) It may be more beneficial to my users (or absolutely required) to attempt to recover.
Not all software can just exit on a whim, but obviously this is a small portion of the software that exists.
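For the common case, the check-and-die policy is one small wrapper; a sketch of the usual idiom (not this project's code):

#include <stdio.h>
#include <stdlib.h>

static void *xmalloc(size_t n) {
    void *p = malloc(n);
    if (!p) {
        fprintf(stderr, "fatal: out of memory (%zu bytes)\n", n);
        exit(EXIT_FAILURE);           /* log something, and die */
    }
    return p;
}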
People forget that even printf calls malloc. Logging probably isn't an option unless you are doing something special.
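The "something special" can be as small as a pre-formed message pushed through the raw write(2) syscall, which doesn't allocate:

#include <unistd.h>

static void oom_log(void) {
    static const char msg[] = "fatal: out of memory\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);   /* no printf, no malloc */
}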
Either that, or they operate the way I do: putting asserts everywhere and removing them when it's stable. I do this with macros, though.
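Presumably something in this spirit: an assert that is active during development and compiled out with -DNDEBUG for the stable build (my sketch, not the commenter's macros):

#include <assert.h>
#include <stdlib.h>

/* active while developing; expands to nothing when built with -DNDEBUG */
#define CHECK_ALLOC(p) assert((p) != NULL)

int main(void) {
    char *buf = malloc(64);
    CHECK_ALLOC(buf);
    free(buf);
    return 0;
}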
Could not find a package configuration file provided by "GLEW" with any of
the following names:

  GLEWConfig.cmake
  glew-config.cmake

Add the installation prefix of "GLEW" to CMAKE_PREFIX_PATH or set
"GLEW_DIR" to a directory containing one of the above files. If "GLEW"
provides a separate development package or SDK, be sure it has been
installed.
(Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux)
CMake is another barrier to entry.
what happened to a nice simple makefile? Been using just standard makefiles for literally 20 years.
The only thing I do is add a checkdeps target that tests for the presence of dependencies, like autoconf but without the eye poking. I have built everything from small network servers to Motif behemoths doing only that.
CMake is more an alternative to GNU autoconf and pals, with the difference that it has first class support for things like MSVC and XCode.
I've quite happily built makefiles that work on Linux, FreeBSD, Solaris and HP/UX that aren't fragile or laborious. It just takes appropriate knowledge of the platforms. I've built some stuff on OSX (small SDL project) and it wasn't too obtuse apart from some paths.
CMake, much as all of these things, works up to a point at which point it tries desperately to kill you at every available opportunity.
(people rarely even attempt nontrivial Makefiles that are portable between BSD and GNU makes though...)
Not only that, some of it doesn't actually work properly and it refuses to compile itself on HPUX.
Beyond the common platforms (linux, freebsd and sometimes cygwin), autotools break regularly, because they are not tested as well elsewhere as they are on linux. Macros provided by libraries are even more prone to these issues.
I remember I had huge issues on IRIX making even simple projects compile, so much so that "irixfreeware" had an unsupported/patched automake that fixed some of these issues, if you had the luxury of being able to re-generate the configure. Add to that that autotools broke backward compatibility of the configure.in scripts very often; this is why debian has every version of automake from 1.4 onward coexisting in the repository, and why my patched automake was next to useless.
CMake is generally less painful if you learn it a bit. For the same level of underlying knowledge, CMake is way easier to "fix" when the build breaks, because it's a single tool, with an API which is quite consistent and doesn't involve a lot of magic. Fixing a configure failure with automake is hell by comparison. Fixing a Make failure with automake is also harder.
That being said, I generally dislike CMake. The syntax is just horrible (did they actually try to copy automake here?). The build system is still convoluted, and the output is basically unmodifiable. It doesn't "help" you much in porting to other operating systems either, really, like you say.
For some people, CMake just provides a convenient way to build the software by exporting a VS project file. By contrast, I would never recommend exporting Xcode projects, since you can just as well use the normal Makefile approach on OSX.
Writing correct tests for feature discovery in automake is very hard. I cannot count the times some tests fail to properly detect an include/library/path because of a different compiler version and/or because the script writer tried to mess with CFLAGS/LDFLAGS. CMake is saner on that front, at least.
But did I already say I don't like CMake?
autotools. never again.
I had the fortunate pleasure of fixing a package on CentOS. The program needed to copy/write files to /tmp and back, which meant configuring and compiling SELinux policies. This was such a pain in the ass to get working in autotools.
Cmake is a breeze by comparison.
sudo apt-get install cmake libglew-dev xorg-dev
sudo apt-get build-dep glfw
The last step yields:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: You must put some 'source' URIs in your sources.list
Even after updating my /etc/apt/sources.list, and having the build-dep step run without error, I still get that original message. :(
I was having the same issue on Ubuntu until I uninstalled cmake and compiled from source.
(I am not affiliated with them. I just downloaded it a few months ago and have been tinkering with it.)
By `why not' I mean, briefly explain what other alternative approaches there were, and why you chose the trade-offs you went for, instead of some other set.
Coders: don't give me any BS like x = x + 1 /* Increment x */
Rather, tell me, if it's not obvious, WHY are you doing that
Comments are like Coles Notes.
It's good as professional courtesy but for personal projects... seems like a waste of time.
Huh? In the first place, I do NOT want any surprise at all.
Install the code and play around with it to see what it does. This person was very generous to make this code public and you are disappointed that it didn't meet your standards.
2500 lines of C doesn't mean a thing. I could do it in 1 line if I removed all line breaks.
Did the programmer create the concept, interaction design, graphic design? No, Notch did.
So maybe Notch does write bad code but I think being a programmer is more than just writing code. In the end Notch got the job done and people are enjoying the result.
But yeah! great to see another Minecraft clone. It's always nice when people share their knowledge!
Reading through a large source code base can be very discouraging for someone who wants to start programming. Things like this show that with a little bit of math and a few libraries you can bootstrap a little universe.
But by that logic Farmville is a great game, too, and it also makes the script authors of some TV series objectively great authors.
Is success noble by definition, sanctifying the means, or does the nobility of the means sanctify the success they achieve? For me it's the latter, never the former.
Mind you, I said nothing about Notch being a great or bad programmer/designer, I simply disagree with the logic.
Actually it means a lot -- since it has proper line breaks.
But still, Minecraft makes a 2012 MacBook Pro overheat like no other game: I get 80°C, while alternative Minecraft clients/servers barely make it to 70°.
" But then I found Infiniminer. My god, I realized that that was the game I wanted to do"
1) FOSS means I can play with the code, which I can't with Minecraft
2) I'm trying to re-learn C, and having some relatively well done modern examples that are of reasonable scope that I can understand is really useful
3) It runs really fast on my laptop
It's not going to replace Minecraft at all, but I don't personally feel that's the point.
Really, who are we comparing him against?
glfwWindowHint(GLFW_SAMPLES, 4);
/* I'm not entirely sure if this is the right call for GLFW
 * I normally use SDL, you can usually go up to 16 with the
 * samples */
Enable multisampling before drawing anything:
glEnable(GL_MULTISAMPLE);
Thanks for the source, I'll have a read through!
in my very humble opinion, a minecraft clone would have (at least stubbed out):
- ui. if you've never made a game, you'd be amazed at the amount of code ui takes up. even ugly ui. good lord.
- mobs. while you can data-drive a lot of your mobs, something like minecraft still has vast swaths of business logic to drive interactions with everything from creepers to mating wolves to the ender dragon.
- crafting. again, you can get a long way with data-driving your crafting system, but you've still got to build in all the mechanics that control your crafted items' behavior among all the other various items in the game.
and none of that counts any of the polish that publishing a game requires. as a guy who's shipped games, it's really (even more so than traditional enterprise software) a case of the last 20% of the work taking 80% of the time allotted to the entire project.
in my experience, that's where "indie devs" fall down. they solve the hard, interesting problems (look! i've got an environment rendering! boom! the combat code works!), but then lose interest and chase other shiny things when it comes to the rote, painful tasks (wait... i've got to data-bind the hud? and handle socket disconnects without crashing? and the camera behavior? the animation has to blend? what about handling a network timeout on the dynamically loaded textures? you want to deliver "news and updates" to the client? from what server? and you want a cms with that? oh, and write the checkbox code for options screen? and qa insists we can apply and skip the first 30 levels so they can test 31 without playing the entire game? but, but, but... look! the environment loads on my machine! what kind of video card do you have?)
sorry about the rant.
this really is super-cool. it's just more render-plumbing than a real clone, though.
Easy extensibility via Lua scripting, no need for Java disassembly
I've wanted a leaner and simpler multiplayer Minecraft for some time.
Last I checked, Minecraft was nonfree and depended on some binaries plus did an OS check, so running it on OpenBSD would've been a nonstarter.
I'm sure performance on N450 would've been miserable too; if it could run at all with just 1G of RAM.
Recently I've been discovering how little C it takes to create a decent game with OpenGL. Well-written, straightforward C is a phenomenally powerful language.
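For a sense of scale, this is roughly all the C that getting an OpenGL window on screen with GLFW takes (error handling trimmed):

#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit()) return 1;
    GLFWwindow *window = glfwCreateWindow(640, 480, "demo", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);   /* draw calls would go here */
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}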
That being said, it probably wouldn't be too difficult to make something better in terms of performance than minecraft, because its perceived perf sucks now as far as I can tell from running it on my machine: sometimes rendering lags, I regularly see mobs "suspended" in the air while the terrain renders (and yes, this happens within "old" chunks too, so I can't say that terrain is being generated or something like that), and I'm not even on weak hardware, it's a gaming laptop that runs modern AAA titles smoothly. Not sure what the source of these problems is, maybe my drivers, maybe mojang screwed up. I didn't notice these things happening on earlier versions which I ran on a fairly weak linux machine with a crappy intel graphics chip.
That's the theory, but in practice, the kind of programmer that is competent enough to pull it off in C, is also the kind of programmer that will give it better performance than the one writing it in some not very game-suitable higher level language (say Python or Java).
Visualization tools, video processing tools, multimedia tools, games, etc -- all the Java examples I've seen come off worse than the C/C++ equivalents.
It's true that in large scale C you often trade performance for architectural simplicity.
But in a higher-level language you trade performance for everything; even the most basic operation has some added complexity layer. And you don't even have as much say over where you trade performance for architectural simplicity as you do in C/C++, since some parts you just can't do without.
Java's Collections.sort outperforms C's qsort significantly (and is very close to C++ std::sort for general cases). Java's GCed pointers outperform C++ shared_ptr significantly. Java's dynamic dispatch works in the worst case with the same performance as C++'s, but often works much faster. Java's cost of array bounds checking is lower than in C++. Java's immutable strings are safe to pass by pointer everywhere, while C++ ones often need copying. Building anything close in performance to, e.g., an immutable Scala Vector or parallel collections is downright impossible in C/C++, because of the lack of GC, and shared_ptr is a performance no-go. There are plenty of places where high-level abstractions are better implemented in a high-level language than in a low-level one.
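For what it's worth, the qsort gap is mostly an inlining story: qsort's comparator is an opaque function pointer, so every comparison is an indirect call, whereas std::sort's template comparator (or a JIT-specialized call site) can be inlined. The C side looks like this:

#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static void sort_ints(int *v, size_t n) {
    /* one indirect call per comparison; the compiler can't inline cmp_int */
    qsort(v, n, sizeof v[0], cmp_int);
}

int main(void) {
    int v[] = { 3, 1, 2 };
    sort_ints(v, sizeof v / sizeof v[0]);
    return 0;
}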
Scripting language + C is a valid solution for many problems, IMHO often much better than a jack-of-all-trades-master-of-none solution like C++.
Given that the GC can generally process 1GB of live objects per second, I'd love to see an example where you have been working with 60GB+ heaps.
Why C programs run fast is because a single line of C does so little, therefore the only way to write C effectively is to make your code not do much. Since you don't have GC you have to think about how to structure your code in such a way as to minimize allocations. When you minimize allocations your program (and data) tends to reside and stay in cache.
All of this leads to programs that do very little and thus are very fast. If you made a C version of minecraft that does all the things the Java version does (waste time scanning objects to see if they are in use, look up vtables, etc.) then the C version will run just as slow.
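Concretely, the "not doing much" style tends to look like this sketch (illustrative, not Craft's actual layout): one preallocated flat array of plain structs, scanned linearly so it stays in cache:

#include <stdint.h>

#define MAX_BLOCKS (1 << 16)

struct block { uint8_t type; uint8_t light; };   /* 2 bytes, no header, no vtable */
static struct block blocks[MAX_BLOCKS];          /* one allocation for the lifetime */

static void relight(void) {
    for (int i = 0; i < MAX_BLOCKS; i++)         /* linear scan stays in cache */
        blocks[i].light = blocks[i].type ? 0 : 15;
}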
Alternatively if you write C style Java it tends to run about as fast as C.
If you'd write it the same in C then keep going, if not rethink your solution.
e.g. don't use the String class, use an array of char; don't use objects; etc.
Would better performance have made Notch/Mojang any more money? Would it have made Minecraft any more fun? Maybe… but only a little.
That small benefit from a concentration on performance could reasonably be believed to have huge costs in productivity, schedule, and might even have put the entire project at risk of failure. The argument for pace vs. performance certainly doesn't fly everywhere and there are some places where I'm very glad a huge amount of time is spent on performance and correctness.
In this case, however, I think Minecraft is much better because it is worse.
That said, this project is super cool :)
A FOSS Minecraft clone written in C++: http://minetest.net/.
I also think it would be a terrible idea, because the modding scene wouldn't exist, and the modding scene is a significant part of minecraft.
If both sides cooperate there is no real difference. If the devs don't cooperate with the modders then it's a little harder with C than Java.
One obvious issue: Minecraft ships a thousand jar files, and a modder simply overwrites one. You don't need gcc's linker like you would with C.
Another interesting issue is that, I think, the same jar file ships for all OSes. If mojang shipped compiled binaries for each OS, you'd end up in situations where it's hard to reach critical mass for mods. So maybe industrialcraft would only exist for mac, railcraft for PC, and buildcraft for linux, which would be too bad, as they interoperate reasonably well as universal jar files on all OSes.
There is an area of architectural interest here for an HN startup or whatever. It's true that GCC was a big deal to install back when I set up SLS linux off 3.5 inch floppy drives in '93, and the "C" set of SLS floppies was a considerable stack of disks and time, perhaps half a box of disks (five or so?). In 2014 a game installing its own captive copy of GCC (or whatever) would cost only a couple more seconds of download time. In 1980 "startup time" on an atari 2600 conditioned users to expect 2 to 3 second boot and startup times for a game, but in 2014 we've gradually trained users to expect perhaps 10 to 15 minutes total for OS boot, navigating past console ads, loading a game, advertisements/commercials for dev houses, cutscenes, so don't tell me that in 2014 users wouldn't tolerate running a C linker for 30 seconds as the first step of running a game. Link while all the advertisements run, or something like that, and no one would even notice. It would make modding somewhat easier and patches somewhat smaller. I'm not talking about inventing the idea of shipping libraries, I'm talking about compiling them as part of starting the game (or whatever) up. It's an idea worthy of consideration that scales a lot better in 2014 than at any time in the past.
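There is also a middle ground that skips the build-time linker entirely: mods as shared objects loaded at runtime, the rough C analog of dropping a jar into a folder. A sketch (the mod path and mod_entry symbol are made up; link with -ldl on glibc):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *mod = dlopen("./mods/example_mod.so", RTLD_NOW);  /* hypothetical mod file */
    if (!mod) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    /* hypothetical entry point every mod would export */
    void (*mod_entry)(void) = (void (*)(void))dlsym(mod, "mod_entry");
    if (mod_entry)
        mod_entry();
    dlclose(mod);
    return 0;
}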
FOSS needs more of this kind.
Let's see a redstone implementation in survival mode :)
Plus I don't understand why Notch is so highly praised. On the one hand he claims to be an indie dev champion, but then turns around and makes lucrative exclusive deals with Microsoft. If he really cared he wouldn't choose his wallet over his principles. Sorry, but that is how I feel.
struct sockaddr_in address;
memset(&address, 0, sizeof(address));
address.sin_family = AF_INET;
address.sin_addr.s_addr = ((struct in_addr *)(host->h_addr_list[0]))->s_addr;
address.sin_port = htons(port);
Can someone explain to me why you need to zero the memory of sockaddr_in? (Even if address hasn't been touched.)