Hacker News
Microsoft's forthcoming Minecraft Education Edition is written in C++ (zdnet.com)
319 points by ingve on Jan 26, 2016 | 331 comments



Micro-optimized by John Carmack himself, who volunteered for the task (he was like a free consultant for a while) because he believed it was the best thing he could do to help VR. He first had to personally persuade the hierarchies at both FB and MS. So bold.

From http://venturebeat.com/2015/09/24/how-john-carmack-pestered-... :

“I was willing to do just about anything,” he said. “On the phone I said that if this doesn’t happen, I’m going to cry. This will just be so terrible. This will be the best thing that we can do for the platform. But there are some problems that compilers can’t solve.”

It turns out that the solution was to get the top executives from Facebook and Microsoft together.

“Mark [Zuckerberg] and Satya [Nadella] were able to sit down and make sure that the deal happened,” said Carmack.


This is amazing. John Carmack has become like the Lone Ranger of code optimization. He shows up out of nowhere, makes your game run 20 times faster, then rides off into the sunset. "Who was that masked man?" "Oh, he's John Carmack."


It's the 'Mysterious Optimizer' perk. In Fallout 4 there is a perk you can unlock called Mysterious Stranger: when you're stuck in a gunfight, this stranger has some probability of showing up and assisting you. John Carmack showing up to help me optimize my code ... awesome!


> In Fallout 4

He's actually available in Fallout 1, 2, 3, 4, and New Vegas. :)


It's based on old Westerns like Shane and Clint Eastwood's films (Pale Rider, the spaghetti westerns, etc.). He's always given a title akin to "Mysterious Stranger" or "Man With No Name".


Which in turn is based on the protagonist from Yojimbo and Sanjuro, the "nameless rurouni."


I feel like this needs to be an xkcd


Followed by a five-comic arc leading up to a duel between Carmack and Stallman.


Wouldn't they be on the same team, begrudgingly? id games have always gotten GPL releases.


And I remember reading somewhere that Stallman was cool with games being closed source.


Wasn't that why Michael Abrash was hired by id way back when?


He's Michael Abrash's padawan, gone full Jedi.


If Carmack only made Minecraft 20x faster, that would be extremely disappointing. That's just the low-hanging fruit.


Judging Minecraft by its blocky world is very unfair and shortsighted.

The game world is not just de facto voxels. It's transparent, masked, and oblique voxels that interact with each other and with other entities: farm blocks with water blocks, lava blocks with nearby burnable blocks, plants growing, hostile mob spawns and lines of sight, mob pathing as mobs follow the player, fellow mobs, or random points of interest while wandering, etc.


What makes you think I was judging Minecraft solely based on its low-detail graphics? Yes, its rendering code has historically been comically inefficient to the point that third-party hacks have made huge differences in frame rate and usable draw distance, but that's hardly the only thing Minecraft implements poorly.

The combination of using Java and being coded naively means that there's a huge amount of pointer chasing and unnecessary dynamic memory allocation that can be easily cut out without even needing to dip into the data structures and algorithms literature to try for asymptotic improvements. There's no SIMD and no concern for cache locality. Last time I checked, it didn't even look like terrain generation was done in a separate thread from the interactive game logic, let alone in multiple threads. The game only simulates dynamics within ~128 blocks of a player, but if you've got two players miles apart they're still simulated on the same thread.
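To make the pointer-chasing point concrete, here's a minimal hypothetical sketch in C++ (the names and layout are mine, not Mojang's or Microsoft's): storing a chunk as one flat array lets you sweep it linearly, which is exactly the cache-friendly, vectorizable access pattern that an object-per-block, reference-heavy design works against.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A 16x16x16 chunk stored as one contiguous array of block IDs.
// Index arithmetic replaces per-block object references, so a linear
// sweep touches memory sequentially (cache-friendly) instead of
// chasing pointers to heap-allocated block objects.
struct Chunk {
    static constexpr int N = 16;
    std::vector<uint8_t> blocks = std::vector<uint8_t>(N * N * N, 0);

    static std::size_t index(int x, int y, int z) {
        return static_cast<std::size_t>((y * N + z) * N + x);
    }
    uint8_t get(int x, int y, int z) const { return blocks[index(x, y, z)]; }
    void set(int x, int y, int z, uint8_t id) { blocks[index(x, y, z)] = id; }
};

// Count non-air blocks in a single linear pass over contiguous memory;
// a tight loop like this is also trivially auto-vectorizable (SIMD).
int countSolid(const Chunk& c) {
    int n = 0;
    for (uint8_t b : c.blocks) n += (b != 0);
    return n;
}
```

No dynamic allocation per block, no indirection per access; the whole chunk is 4 KB of IDs that fits comfortably in L1 cache.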

I do fully understand that the dynamic modifiability of Minecraft's game world means that it is always going to be hungry for memory bandwidth and will be latency-sensitive. But that doesn't mean it has to be inefficient. The Pocket Edition (not Java) already showed that huge improvements are possible, because they were necessary to get anything resembling Minecraft onto mobile platforms. There's a lot more that can be done when you're back on a platform that has sufficient RAM capacity and a ton more CPU power relative to RAM speed. There's a lot of latency that can be eliminated and a lot more that can be hidden from the user without breaking the game. The development process meant that hardly any of the early fundamental implementation decisions were made with the much later features in mind.


I think Notch's success proves that it's better to make something than to whine about how it's implemented.


Imagine a gentle world in which the person you're responding to is merely commenting on the relative ease with which Minecraft could be optimized, and pointing out some of the reasons for this, without casting aspersions on anybody or complaining about anything. Would such a world necessarily look different from ours? Would it be inconsistent with the comments in this thread?


It's not even commenting that "Minecraft -could- be optimized", it's that "Minecraft -has- been optimized (for the mobile platforms)".


Only if your barometer of success is a monetary one. I happen to like playing Minecraft when I'm not making something and killing it and iterating (yeah bruv), and it makes my future computer from beyond the moon chug at times. (I don't point the blame at Java nearly as much as at the naivete of its development, however, and it has gotten somewhat better over time.)


I used to develop my apps on 5-10 year old computers for precisely this reason. Their limited cache, RAM, etc. would expose both hotspots and trouble spots fast because the app visibly couldn't perform well (or crashed lol). Ignoring micro-optimizations, the algorithmic and architectural changes benefiting the old PC often made the new one scream with performance.


I've worked in computer graphics for a long time and, to be honest, I've never seen any of the performance problems you're describing. Rendering is smooth even with complex scenes.


Minecraft already runs fine. Making it 20x faster is great, but beyond that, you're running into PCMasterRace type optimizations that only 6 people in the world can see.


Minecraft only runs fine on the fastest hardware. I've got an i7-4790k that runs at 4.4GHz for single-threaded tasks. Just flying through a Minecraft world in a straight line in creative mode needs 80% of one core to keep pace with world generation. No laptop can run the game smoothly as-is. That 20x performance improvement is needed just to get into the territory of reasonable performance on the kind of low-end hardware kids typically have available.


> Just flying through a minecraft world in a straight line in creative mode needs 80% of one core to keep pace with the world generation.

But that's the absolute worst case, nobody actually plays the game like that.


People don't spend most of their time in-game that way, but they definitely do it, especially when looking for an interesting place to set up.

Multiplayer gets much worse, though: god forbid you have several people exploring new terrain on a server.

And god forbid you play with any of the mods that add faster railways[1] or jetpacks[2] or boots that make you walk faster[3], etc. It's a bummer that the optimizations are coming to the less moddable version; MC is a great platform for people to build new gameplay on.

[1] http://ftbwiki.org/Railcraft

[2] http://ftbwiki.org/Simply_Jetpacks

[3] http://ftbwiki.org/Boots_of_the_Traveller


Or, you can make it run on old hardware, and new underpowered hardware like netbooks, Raspberry Pi, and phones.


Note that the article is about a C++ rewrite of Minecraft rather than the Java version.


I think he may be the best refutation of the claim that 10x engineers don't exist. Imagine you want to create a VR prototype and get to hire 100 average programmers with some experience in the industry, and you're up against Carmack with a month until your demo...


Carmack is just insanely productive. He had a working prototype of the GearVR Netflix app three days after he started working with Netflix to develop it, and pretty much finished the app a week later.

Have a read about how he developed the app:

http://techblog.netflix.com/2015/09/john-carmack-on-developi...

It's almost beyond belief what he managed to accomplish in one and a half weeks. Hacking the Netflix video decoding system, fine-tuning the UX control heuristics, working around DRM limitations, optimizing the power draw and thermals, and much more, all on top of implementing the very polished app itself.


He explained in a talk how, after Netflix sent an engineer to work with him, the poor lad had to quit from exhaustion and go back home. Carmack's colleague said something like "hey, you broke the Netflix guy".



Yeah, the YouTube version is better quality, though.

https://www.youtube.com/watch?v=Ti_3SqavXjk&t=55m50s

This link will start playing as Carmack tells the "broken Netflix engineer" story, while recounting how he finished the app in that incredibly short timeframe.


Thanks for sharing. I wound it back to the start and started to watch it.

John laid out an interesting premise about the gap VR is trying to fill: essentially, filling the gaps in your real life with VR. Hmm... would it be better to strive for improvement in your real life?


Virtual reality is perhaps an unfortunate moniker, because it encourages a dichotomisation of reality into "virtual" and "real", and conceiving of these as opposing, or at least orthogonal forces. Thought of in this way, virtual reality seems to promise a compelling-but-ultimately-empty facsimile of reality, the ultimate fulfilment of the escapist dream.

There's a repeating motif in reactions to disruptive technology, where the technology is first viewed from an oppositional mindset, which makes sense, because any disruptive technology will steal time away from other Old Activities that existed before the technology, and people who aren't early adopters only look at the decrease in the time spent on Old Activities.

We saw this motif of oppositional reaction in how people viewed the internet: people are spending so much time in cyberspace that they won't know how to effectively navigate the real world; people are having fantasy cyber-lives instead of spending time in the real world; he's meeting someone he met online, he must not know how to interact with people, etc. But social networks descended on society in an incredibly short period of time, and worked their way into the furthest corners of our lives. The oppositional mindset gave way to an integrative one, where the notion of a "cyberlife", as distinct from a "life", is simply misplaced: the internet is simply a part of Reality, sans prefix and with a capital R, instead of being boxed up in the conceptual category of "the Cyber".

The motif recurred when smartphones entered the fray. The oppositional critiques were voluminous and eloquent: we're spending so much time texting we're forgetting how to speak to each other; every crack in every interaction is plastered over with the ritualized and mutually fraudulent "notification check", signposting the way to the unravelling of the social fabric; you can find the Real World up there, when you hold your head high, with dignity, and not down there, with your head bowed, staring transfixed at a shining rectangle, face ghost-like, bathed in the soft pearlescent glow of vapidity. But at some point, the integrative mindset arrived. It's hard to maintain the oppositional mindset when you're sitting in a restaurant that you just found on Yelp, chatting to your friend on WhatsApp, only to have them sit down in front of you. The handoff between "smartphone life" and "real life" is seamless. Smartphones have inextricably woven themselves so deeply into our lives that if you ask someone how their smartphone life compares to their real life, they'll just give you a strange look. Smartphones are just a part of life.

I think VR/AR could go in this direction, as just another arrow in our technological quiver. If we start looking at things like social VR, which has the potential to reshape the way we interact remotely, or how architects are today routinely using VR to demo to clients, it's not impossible to believe that the integrative mindset could eventually overcome the oppositional mindset in terms of how we think about VR.


> The video surface was a little more problematic. To provide smooth playback, the video frames are queued a half second ahead, tagged with a "release time" that the Android window compositor will use to pick the best frame each update. The SurfaceTexture interface that I could access as a normal user program only had an "Update" method that always returned the very latest frame submitted. This meant that the video came out a half second ahead of the audio, and stuttered a lot.

> To fix this, I had to make a small change in the Netflix video decoding system so it would call out to my VR code right after it submitted each frame, letting me know that it had submitted something with a particular release time. I could then immediately update the surface texture and copy it out to my own frame queue, storing the release time with it. This is an unfortunate waste of memory, since I am duplicating over a dozen video frames that are also being buffered on the surface, but it gives me the timing control I need.
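For the curious, the scheme the quote describes can be sketched like this (a toy illustration with made-up types, not the actual Oculus or Netflix code): each decoded frame carries a release time, and on every display update you pick the newest frame that is due, discarding the frames it supersedes.

```cpp
#include <cstdint>
#include <deque>

// Toy model of the frame scheduling described above. The names and
// struct layout are illustrative only.
struct Frame {
    int64_t releaseTimeNs;  // when the frame is meant to be shown
    int id;                 // stand-in for the actual pixel data
};

class FrameQueue {
    std::deque<Frame> queue_;  // frames in submission (release-time) order
public:
    void submit(const Frame& f) { queue_.push_back(f); }

    // Return the id of the latest frame due at `nowNs`, popping it and
    // everything older; return -1 if no frame is due yet (the caller
    // would keep showing the previous frame).
    int acquire(int64_t nowNs) {
        int best = -1;
        while (!queue_.empty() && queue_.front().releaseTimeNs <= nowNs) {
            best = queue_.front().id;
            queue_.pop_front();
        }
        return best;
    }
};
```

Queuing frames half a second ahead like this lets the compositor pick the right frame per refresh and keeps video in step with audio, at the cost of duplicating the buffered frames in memory, which is the trade-off Carmack mentions.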

Oh, Mr. Carmack...

http://developer.android.com/reference/android/graphics/Surf...

http://developer.android.com/reference/android/graphics/Surf...

And I only started coding for Android last month. I guess I'm an 11x programmer. :) ducks


Those APIs aren't available from the NDK, without an extra JNI performance hit.

Also I bet they don't behave 100% the same way, in regards to timing, across devices.

In graphics programming at the level Carmack develops, every ms counts.


He explicitly said he was using SurfaceTexture. That is a Java API, there is no SurfaceTexture in the NDK.


Somehow I don't see him using Java, but if he said so, then ok.


I don't think people disbelieve that 10x engineers exist. Just that 98% of those who claim to be one are wrong.


10x engineers don't exist in a process heavy workplace (eg: scrum agile). They're more like 2x engineers.


Yeah, there's no way you could have a John Carmack in an enterprise. They'd go absolutely bat poop crazy.


Right. Like almost any other subject, there will be a handful of people at the top and below them many more who are likely very good at what they do, but will probably never achieve what the top x% do.

I consider myself one who's very good, but I look at people like Carmack and realize I will never be one of them... that won't stop me from trying, though!


I disbelieve.

That said, I don't contend that some people are not at some point 10 times more productive than others. I contend that anyone can be just as productive, and often are, were or will be, although the manner in which they should be productive is absolutely not clear-cut. Society as a whole shapes how we act. Diversity in its composition allows what we rely upon.

For instance, I would not be as productive at work if I had to take care of a few children on my own. Similarly, I am extremely unproductive while gardening, and don't intend to spend the time to improve.


You're making a far stronger claim than denying that some people are not 10x more productive than others. You're claiming that anyone can be just as productive as anyone else.

That seems to either be such a strong claim that it's trivial to refute, (i.e. a newborn baby, or a quadriplegic with severe mental retardation, etc.), or too nebulous and ill-defined to be of any use (i.e. said quadriplegic is productive in cultivating character in carers?)


You're practically coming out and saying that all humans are equal and the only productivity differences can be attributed to other factors. That's a ridiculous position to take.


And extremely easy to disprove at this point with brain scans vs capabilities. We know enough about the brain now to understand that some people are born with advantages (or disadvantages), whether that's in memory or social skills.


To be fair, he gets to do things he is really into, compared to the aforementioned 100 average programmers, who probably work on less stellar projects, often bogged down with red tape and middle management aplenty.


He "gets" to do it because he became one of the best at it. I'm certain anyone that becomes the best at something can make money doing it!


He admits that he works long hours to get that productivity though[0]. He's probably 10x better than many who would work the same hours as him, but I'd be willing to bet if he limited himself to 40 hours a week he wouldn't still be 10x better. (Not trying to make any judgements either way, just an observation & prediction.)

0 - https://twitter.com/ID_AA_Carmack/status/684475417726595072


Not all 10x programmers are created equal.

Some of them leave a disaster in their wake. They're fast because they're sloppy, and don't take the time to refactor their new code into something maintainable. The team pays the price for that later.

I'm not suggesting John Carmack is one of those bad 10x developers, but they do exist. Just something to keep in mind.


Such 10x programmers are perfect for some situations, for example building a prototype.

It may be wise to leave it as a prototype and build V2 from scratch after that point, taking many of the best parts from the earlier experiment. I've been on the former end enough times that I'd rather the next team of engineers didn't use it as the base. I often learn many lessons from building V1 that would be a strong base for rebuilding it properly.

Erlang creator Mike Williams: “If you don’t make experiments before starting a project, then your whole project will be an experiment”.

Unfortunately many startup founders don't see the dev process as experiment-driven, but rather as a monolithic beast that continually needs evolution and refinement. The opportunity costs of rebuilding a prototype-quality app using lessons learned from it vs. building on top of it are real - rebuilding is not always as risky as many people make it out to be. Context matters.


People like Carmack are only relevant for extremely technical problems such as VR.

I don't think Carmack would have the productivity of 100 average guys if he was working on a typical CRUD business application.


People with ability that does not go beyond CRUD are not relevant at all. A generic CRUD interface can be auto-generated by a machine.


If you have exactly one individual to hold up as proof of your point, for all practical intents & purposes, you've just proven the opposite.


Carmack is such a boss. If your interest is piqued every time you hear about him, make sure to read Masters of Doom:

http://amzn.com/0812972155


My single favorite non-fiction work. (Probably better than any fiction I've read, too) Even if you're not a gamer, this book has such a great story and has so much to offer when it comes to software development in general.


Same here. I read this every year -- it motivates me like no other work of fiction or reality. I bought a few copies for friends, and it seems to have similar effects on them. I can't recommend it enough.


You may also like "Racing the beam", which is not so long on human interest but has a lot of great technical, social and business perspective on the Atari era.


I listened to the audiobook for around 15 minutes and just couldn't see what's so good about it. I think it was the reader's tone and voice (Wil Wheaton) that made it sound... is "rad" the word? I might try reading it in eBook form instead.


I found it fun to listen to, but the author had this weird habit of foreshadowing everything as if there was always some huge earth-shattering reveal around the corner.

"And this wouldn't be their last obstacle moving in to the lake house … [Next chapter]: Also, John stubbed his toe on a door."

Between that and Wil's enthusiastic reading — buying in to each and every one of those sure-to-be-disappointing wind-ups — it was kind of exasperating. But I'd say it's worth it, regardless. I'd agree it's probably way better as an eBook. There was lots of interesting stuff, and some glorious nostalgia :)


I didn't like Wil Wheaton's reading of Ready Player One either. The main character came off as, I don't know, whiney I guess. I have nothing against Wil Wheaton himself. The reading just didn't do it for me.

Masters of Doom is a great book though. I read it a couple years back. I recommend picking up a copy if the audiobook isn't working out.


> The main character came off as, I don't know, whiney I guess.

Well, the main character is whiny, and I found the book to be terrible, so I don't think your opinion is too off base.


I feel the same way. Which is odd considering how many people suggested it on /r/audiobooks. But I found the nostalgia pandering to be a bit much.


hear hear. I cannot honestly understand why that book is so popular.


Shit, I just blew coffee all over my laptop at reading that Wil Wheaton voiced it. I wouldn't be able to hear anything but a starry-eyed Mary Sue Crusher rendition of the book. Nothing against the dude but the association is strong.

Read it. It's worthwhile.


Is that because that's just the primary association you still have for him, or because that's just what came to mind? I'm wondering because Wil Wheaton has had quite a bit of positive exposure over the last decade, enough so that even while I don't really follow the stuff he does all that closely (or at all), I still hear about it enough that my primary association for him is no longer the ST:TNG character.


I read the ebook. I skipped a lot of it and just read the interesting parts. I would recommend skimming it.


It was probably the voice that was annoying - I read the paper copy and really enjoyed it. If you like books like Hackers, Crypto, Soul of a New Machine, What the Dormouse said etc. you'll probably like it.


Ugh! Thanks for the warning. I can't stand Wil Wheaton.


Was so disappointed to find out he did the audiobook version. Unlistenable.


Give it another chance. It's worthwhile.


I just finished this book 5 minutes ago. Can't recommend it enough.


You might also enjoy the Blizzard book:

Stay Awhile and Listen: How Two Blizzards Unleashed Diablo and Forged a Video-Game Empire [1]

[1] http://www.amazon.com/Stay-Awhile-Listen-Blizzards-Video-Gam...


Great book. I wish there was more literature in this vein. Some other fun reads from the Blizzard team at http://www.codeofhonor.com/blog/


Here are a couple of additional things to read if you liked those:

* Jordan Mechner - of Prince of Persia / Karateka [1]

* Ready Player One - fiction, but in the same vein [2]

[1] Jordan's website w general links: http://www.jordanmechner.com/

The Making of Karateka - 1982- 1985 : http://www.amazon.com/Making-Karateka-Journals-1982-1985/dp/...

The Making of Prince of Persia - Journals 1985-1993: http://www.amazon.com/Making-Prince-Persia-Journals-1985/dp/...

[2] Ready Player One - http://www.amazon.com/Ready-Player-One-Ernest-Cline/dp/03078...


If you want to watch the Oculus connect talk where Carmack discusses this it's online here: https://www.youtube.com/watch?v=Ti_3SqavXjk

Carmack talks are fun to watch - I think they're worth checking out.


When will people realize you just need to put Carmack on the stage at 4pm and let him go until he tires himself out? You just shouldn't put an end time on a Carmack talk.

Shit, maybe there just needs to be a CarmackCon, where there's only a start time for his keynote, which sort of seamlessly blends into a hackathon.

Or maybe we just need to start a petition for no end time on his talk at Connect 2016.


From the linked article: "...when Facebook acquired Oculus in July 2014, Notch “blew up about it,” as Carmack puts it. Notch referred to the social media company as “creepy” and publicly stated that it wasn’t the partner he was envisioning when he backed the original Oculus Rift when it was just a Kickstarter project."

The more I read about Notch, the more I'm convinced that Microsoft's acquisition of Mojang and getting him out of the scene was the best thing for Minecraft. He strikes me as an incredibly fortunate but oddly emotional individual who was in way over his head once his creation took off.


To be fair to Notch, a huge chunk of Oculus fans were devastated by the unexpected Facebook acquisition, not just him. /r/Oculus was in flames the day it was announced. Notch put it this way: "I did not chip in ten grand to seed a first investment round to build value for a Facebook acquisition." Gamers at large were really disappointed because Oculus was a games-focused device and then suddenly it was owned by a social networking company whose only reputation for games is being a place for friend-pestering "free"-to-plays (the exact opposite of what most self-ascribed "gamers" want for the future of gaming). It makes sense that they would be worried. I think Oculus has done a good job allaying those fears, but it also makes sense that Oculus fans would be cautiously optimistic.


How is it fair for Notch to be upset at Oculus being bought by Facebook when he sold his company to Microsoft shortly afterward? Not to mention he sold simply so he could check out and retire whereas Oculus sold because they actually needed to -- Notch's tiny little $10k investment wasn't even remotely enough to jumpstart the VR revolution.

As far as I can tell, FB has treated the Oculus project with incredible respect, and not a single fear has come true.

I understand there was legitimate reason to worry over the Facebook acquisition, but I still strongly believe that Notch (and many others) took it too far.


> How is it fair for Notch to be upset at Oculus being bought by Facebook when he sold his company to Microsoft shortly afterward?

Not only did he address this ("I’m aware this goes against a lot of what I’ve said in public. I have no good response to that."), his public posts about it were fairly measured after his first response (which was mostly a few negative tweets):

> I have the greatest respect for the talented engineers and developers at Oculus. It’s been a long time since I met a more dedicated and talented group of people. I understand this is purely a business deal, and I’d like to congratulate both Facebook and the Oculus owners. But this is where we part ways.

It's silly to expect consistency in people's reactions to everything -- I doubt you are "fair" with every reaction you have to things that affect you -- but, regardless, "I still strongly believe that Notch (and many others) took it too far" is an (ironically) excessive reaction to what actually happened.


> "I still strongly believe that Notch (and many others) took it too far" is an (ironically) excessive reaction to what actually happened.

I don't agree at all. Your quote concludes with an attempt to completely sever ties with Oculus. I do not believe this was in any way a measured or reasonable reaction at the time. The nature of the underlying action is not changed by the fact that it happened to be phrased in a relatively respectful way in that particular post.


> Your quote concludes with an attempt to completely sever ties with Oculus.

While technically true, that again makes it sound dramatic for no apparent reason. Choosing to not work with a company because you're uninterested in being associated with what they do is not something beyond the pale, even if many others have no problem with that thing.

To take a topical example, did Jonathan Blow "completely sever ties" with Microsoft by putting The Witness on PS4 and not the Xbox One, because Sony actually cared about indie developers at the time he was choosing consoles to develop on? I guess you could say that if you wanted some click bait (and I'm sure more than a few gaming blogs did), but don't pretend that it's somehow getting to the "nature of the underlying action".


I understand your point of view, but the Minecraft deal with Oculus wasn't even particularly public or well-known at the time (they had only been in discussions for two weeks before the acquisition, and it wasn't official yet). Notch "announced" it very dramatically by announcing that it was cancelled.

To me, this feels like an action that was intended to stir controversy, but I will admit I could be wrong. You may see it differently.

As for Jonathan Blow, I'll refrain from commenting because I don't know anything about that situation.


I probably would have pre-ordered the new Oculus if it wasn't now owned by Facebook. Now I will wait to see if any Facebook login is required, and how Facebook treats the Oculus ecosystem.


Facebook has a pretty good record of non-interference with acquisitions, especially in the user-facing sense that you're describing. That will be even more true for Oculus, because unlike Instagram and, to a lesser extent, WhatsApp, Oculus was not a competitor to Facebook.

Facebook buying Oculus was a two-billion dollar bet on virtual reality being the next big platform, and to have that bet pay off, Facebook has to play the long game, and let Oculus and the virtual reality market grow.

Facebook isn't stupid or myopic enough to require a Facebook login to use Oculus products. It makes absolutely no sense when you consider the reasons that Facebook bought Oculus.


I don't tend to make bets on large companies with dominant market positions acting in their own best long term interests.


Why not? That's a very strong statement. I could think of a countless number of bets that trivially match your criteria (e.g. even odds Facebook will not stop selling ads this year) that you'd be crazy not to take.


There are lots of reasons. But I looked at your comment history and decided not to jump on the "interpreting discussion threads as logic statements" train with you. Apologies.


Fair enough. Do you think informal discussions benefit from a focus on thematic engagement, rather than logical minutiae?


You're comparing something known (the Rift as offered up for pre-order by FB-owned Oculus) with something unknown (the Rift as it might have been offered up for pre-order by an independent Oculus).

Many details might have been different in this alternate history: price, timeline, quality, bundled games, future games as funded by Oculus, and more.

Unfortunately I don't know if any of us really have the information necessary to make a truly correct comparison.


Of course we don't have that information. If Oculus had not been bought by Facebook, it could have been bought by anyone, including shudder Disney. Who knows.

My excitement for and willingness to spend money on the Rift was greatly decreased by the Facebook acquisition. It is of course still quite possible that the Facebook acquisition was a good move that will benefit everyone.


The device will stand alone without Oculus software (besides drivers), so you definitely will be able to use it without a Facebook login.


> As far as I can tell, FB has treated the Oculus project with incredible respect, and not a single fear has come true.

They haven't even released a consumer product yet. It's awfully early to say that it's being handled either well or poorly.


Yes, they have -- the Samsung Gear VR. You wouldn't even know that it was associated with Facebook in any way. The Oculus Store is accessed via your Oculus account. There's literally no mention of Facebook anywhere (I can't say for sure since I haven't used it, but I haven't heard of the Oculus Social app on Gear VR being tied to Facebook either).

If Gear VR is any indication, then the fear that Facebook is going to mess with things obnoxiously is unfounded. I don't think we'd have as polished a product as we'll get with the retail Oculus version had the Facebook deal not happened.


That's a good point, and I agree with you to an extent. But there have already been plenty of opportunities for Facebook to muck things up since the acquisition. The fact that they haven't done so in any way does count for something in my mind.

Also, I think the increasing pressure from HTC and Sony is showing pretty convincingly that Oculus would have been in a lot of trouble if they hadn't received solid financial backing from a larger partner like FB. The Vive has already pulled ahead of the Rift in terms of offering a much more versatile tracking solution in the form of the Lighthouse stations. PlayStation VR is the sleeping giant in the room that nobody's really talking about.

Oculus is not going to be able to rest on their reputation for long and still win the VR game. VR is slowly becoming high stakes; it's not a harmless little Kickstarter campaign anymore.

If it's true that they're not making any money on the $600 CV1, then it seems to me that they desperately needed FB or a similar partner.

Ultimately, we're faced with a dilemma where the deeper we go into the Oculus + Facebook timeline, the further we move away from being able to predict what might have happened if Oculus had never been acquired. If FB does something to upset people (like requiring FB login), there might be a lot of outrage, but nobody will be able to say for certain that Oculus could have even remained competitive without FB. It won't be long before we reach a point where nobody will be able to competently know what we should be comparing FB-owned Oculus to in this hypothetical alternate timeline where Oculus was never bought.

The good news is that if Facebook does seriously muck things up there are great competitors who will gladly take your money instead. I just don't think there's any sense in condemning the Rift based on some hypothetical future wrongdoing from a company that has so far treated the platform quite well, especially taking into account the fact that we don't really know how well Oculus would have fared without the acquisition.


For one, Microsoft actually makes and sells games and consoles. It's always been a big part of their DNA.


Because of people like you. He wanted out. He didn’t want to be internet famous anymore, being questioned by smug know-it-alls. And good on him. Would have done exactly the same in his place, heck, even sold to much, much, much worse companies than Microsoft.


I'm not trying to attack anybody. I sympathize very much with the stresses of being placed unexpectedly into the public light, and I did not and will not criticize his decision to sell.

Regardless, I wasn't trying to ask why Notch sold. I meant to ask why it was not fair for Oculus to sell if it was fair for him to sell. Asking this question is fair game since Notch chose to make his condemnation of the acquisition exceptionally public.


> why it was not fair for Oculus to sell if it was fair for him to sell.

It probably felt unfair to him because Oculus was kickstarted by gamers (like Notch), and Minecraft was self-funded.

Since the backers (gamers) felt like they were stakeholders in Oculus, they were probably disappointed that Oculus was bought by Facebook rather than Bethesda or Valve.


> How is it fair for Notch to be upset at Oculus being bought by Facebook when he sold his company to Microsoft shortly afterward?

From personal experience, I think we often are most vehemently angry about behaviors we feel like we are dangerously close to falling prey to. The closeted gays who are white-knuckling a righteous straight life are often the most vocally opposed to people who give in to homo temptation.

I suspect Notch was wrestling with those issues for years, and trying to harden himself against temptation. Selling to Microsoft wasn't necessarily a compromise to his values. It's possible that deal was fundamentally different to the Oculus deal in some way that matters to Notch. But it makes sense to me that he'd have strong opinions about how to go about doing something he was also struggling to figure out how to do.


People are entitled to their opinions. I think the issue is that, due to Notch's success, people attribute more weight to what he says. In reality, I think his comments were just those of another Oculus fan who'd hoped things would head in a different direction.


I don't see a distinction between raising kickstarter money and later selling to Facebook vs early access money and later selling to Microsoft.

Notch promised Minecraft would eventually be released open source when he was raising early access dollars, and he seems to have completely reneged on it with the sale to Microsoft.

Oculus got acquired by Facebook and still afterwards released the complete hardware schematics for DK1, the kickstarter project, for anyone to use on Github.

The Facebook acquisition still probably wasn't a good thing; they seem to want to turn what are essentially monitors + tracking into game consoles with exclusives, etc. You'll have a headset from one company and have to buy the equivalent-spec headset from another just to get access to all the games. It's going to be like a future where Skyrim 2 is only released for BenQ monitors. You may already have a monitor with a panel made to the same spec in the same fab, but now you have to have an extra monitor you hook up only occasionally to play some of your games.

They've said they might port their store to other PC headsets, but have made no real commitments.


Do you really think not having Facebook around would have meant the Oculus wouldn't have gotten an exclusive store similar to an Apple, Valve, Google or Amazon store?

Essentially you are future-worrying about something that historically has not happened. The only thing that comes close is the whole Microsoft/XBOX indie parity clause, which they eventually just took away.

My guess is that Oculus will not be the only game in town (Vive, and I'm guessing many other challengers) and that other HMD will be competitive enough where Oculus cannot do something completely unreasonable such as forcing developers to only develop for their store. Strategy wise it would be terrible too as that splinters a market that may or may not actually be mature enough to splinter.

PC gamers are the biggest target market for HMDs (in terms of already having the hardware and the desire to buy one) and generally don't stand for this kind of thing. Most people don't object to Steam as a store because Valve hasn't done something as bad as locking out a game.


It's the exclusive games I was talking about, not the concept of an app store.

>Oculus cannot do something completely unreasonable such as forcing developers to only develop for their store.

I'm not future-worrying; they already have 20+ titles whose store exclusivity they have paid for.


I'm not saying I love exclusives, but what options does Oculus have for making a profit if it isn't from hardware sales?


They should do it from hardware sales. Just like most other PC peripheral manufacturers. They had to solve a chicken-and-egg problem to get things started, so it is somewhat excusable. But precedents like this tend to stick around.


Seeing as the Vive and Playstation headsets are looking competitive with Oculus and likely more to follow, it does feel that trying to profit from hardware alone is going to be a difficult strategy for sustainability. Given Steam has so many devoted followers as well, I think Vive's partnership with Valve is going to be tough to compete with if you want to rely on hardware sales.

I'm not saying exclusives are the solution but I'm curious how Oculus plans to profit given the competition is already heating up.


They should have kicked ass on product. Instead they got caught with their pants down at the Vive announcement.

And PSVR is going to be competitive with the Oculus recommended-spec machines, which are 3-4x as expensive, due to smart choices in its panel (less nominal resolution but more subpixels, saving 25% of performance for comparable quality; 60fps reprojected to 120Hz, saving another 33%). Both choices combined give it nearly a doubled performance boost relative to the hardware it is running on, allowing a similar quality of experience for potentially close to a third of the total cost.
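Taking the stated savings at face value, the arithmetic behind that "nearly doubled" claim can be sketched like this (illustrative figures from the comment itself, not measured or official specs):

```java
// Back-of-envelope math: combine a ~25% per-frame shading saving (panel
// choice) with a ~33% saving from rendering at 60fps and reprojecting to
// 120Hz. The percentages are the ones claimed above, not benchmarks.
public class PsvrMath {
    static double effectiveHeadroom() {
        double pixelCost = 1.0 - 0.25;     // shade ~25% fewer (sub)pixels per frame
        double frameCost = 1.0 - 1.0 / 3;  // render 60 of 120 frames, reproject the rest
        return 1.0 / (pixelCost * frameCost); // combined relative performance budget
    }

    public static void main(String[] args) {
        System.out.println(effectiveHeadroom()); // ~2.0, i.e. "nearly doubled"
    }
}
```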

And Oculus won't even have motion controllers until potentially the end of next year.


> They should have kicked ass on product. Instead they got caught with their pants down at the Vive announcement.

Amusing how everyone seems to think that Vive will be the best product on earth while Oculus will be "just Oculus", "not kick ass", "not great" etc. with nobody having either one so far. The power of "We love Valve" vs "We hate Facebook" seems to be really interesting from a PR/Marketing perspective.


I have a Vive and I've tried Oculus CV1. They are very, very, similar. 90hz, 1200x1080 per eye, low persistence OLED with global update, large fresnel lenses.

Until you get to input and mobility: Xbox gamepad vs motion controls. No contest. Yoga-mat scale standing without full rotation vs. room scale, walking around with 360 rotation: again, no contest.

Oculus wins on weight so far, though I haven't tried the Vive Pre yet.


Yeah, I don't think he was really prepared for or interested in all this fame. He wanted to be just another experimental indie dev, but every time he made a game for a game jam or something, the gaming media would explode with "NOTCH MAKES NEW GAME" stories and his piddly little three-day speed-typing game or whatever would get 100 times as much attention as the entire rest of the jam. I imagine that would wear, after a while.

From recent news, it sounds like he's now given up trying to be normal and is trying out being an over-the-top hedonistic rich guy, and doesn't enjoy that either but doesn't know how to stop.


To be fair, the majority of HN comments on the Oculus acquisition also took a similar upset tone, more than I've ever seen (except for maybe the announcement of Angular 2), so Notch's opinion only seems odd out of context.


In fairness to Notch and Minecraft, he'd stopped having anything to do with Minecraft about a year after the public beta was released. In fact, he once said that the first time he'd played Minecraft since handing it over to Jens was when the acquisition was announced, and even then it was the console version, not the PC version.

You're very much correct however, he himself will admit that he definitely prefers just making small games with small fanbases, and that he's not really suited to being a public figure.


Man, would I ever love to see some before and after code.


Every worry I had about this product is now less of a worry: the man who wrote QuakeC is now on the job.


I chuckled at this: "In schools and colleges that use Office 365, students will be able to log on to Minecraft using their Office credentials." Because everyone thinks "how can I log into this game? Oh yeah, with my office credentials." :-)

That said, one of the more interesting debates I participated in at Sun was over Bill Joy's insistence that interpreted Java would be "faster than C++." From what I recall of his argument, understanding the semantics of the program, plus just-in-time compilation, would allow the JVM to run only the code that was needed, in a smaller resident set with fewer context switches. At the time I argued against that, saying that a compiled version of Java could be a useful systems language but the interpreted version would not.

And even with some really, really amazing HotSpot technology in the JIT compiler, I don't think Java was ever faster outside of a few synthetic test cases that did no useful work.

So it really doesn't surprise me that a C++ version of Minecraft would outperform a Java version, but it would be much more interesting if they included a JVM for the mods, so that the core was fast and the mods were portable.


> I chuckled at this: "In schools and colleges that use Office 365, students will be able to log on to Minecraft using their Office credentials." Because everyone thinks "how can I log into this game? Oh yeah, with my office credentials." :-)

I know this was meant to be a joke, but this was more an issue in wording by Schofield (the writer) than anything else. A Microsoft account is pretty much global across all their products. So it's also their email account, their OneDrive account, their Azure account... I'm assuming he picked Office 365 simply because that's the product he thinks is most likely to be already used in the school.

As for the optimization: the JIT's main optimization, from what I've researched, is inlining. This is really beneficial for Java, because you tend to end up with a nest of getter method calls just to get a single field somewhere, and wouldn't it be nice if the JIT could just inline that field access for you? I suspect that, especially if performance is at stake, it's less typical to end up with patterns like that in a C++ codebase.
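As a sketch of the kind of pattern being described (the classes here are hypothetical, not Minecraft's actual ones): each getter below is a virtual call in the source, but once the call sites warm up, HotSpot will typically inline the whole chain down to a few plain field loads.

```java
// Illustrative getter chain of the sort HotSpot's JIT inlines away.
public class GetterChain {
    static class World {
        private final Chunk chunk = new Chunk();
        Chunk getChunk() { return chunk; }
    }
    static class Chunk {
        private final Block block = new Block();
        Block getBlock() { return block; }
    }
    static class Block {
        private final int id = 42;
        int getId() { return id; }
    }

    static int lookup(World w) {
        // Three method calls in the source; after inlining this becomes
        // roughly one dereference per level, with no call overhead.
        return w.getChunk().getBlock().getId();
    }

    public static void main(String[] args) {
        System.out.println(lookup(new World())); // prints 42
    }
}
```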


Yes, this is a good strategic move by Microsoft. All of my kids have Google apps accounts that their schools set up for them. They are required to use Google docs, etc., so they can work on stuff at school and at home. I assume that a good portion of those kids will continue to use Google apps as a default for a long time. Similar to a strategy Microsoft used to keep Office a standard: the dirt-cheap student version.

If Microsoft can get schools to use Office 365 instead of Google apps, then $2.5 billion for Minecraft was cheap.


Sure but first MS needs a Google Classroom competitor because that's probably the other major Google product that schools are using.


> I chuckled at this: "In schools and colleges that use Office 365, students will be able to log on to Minecraft using their Office credentials." Because everyone thinks "how can I log into this game? Oh yeah, with my office credentials." :-)

If you're a teacher, it's a huge boon to be able to re-use existing accounts. You really don't want to make dozens of classes of kids set up and remember new IDs and passwords for a single application.


To be honest, a lot of people have slammed the Minecraft codebase, claiming it's badly written and kind of a slow mess. I can't recall the citations, but based on those discussions, I wouldn't cast aspersions on Java alone in this case.


It was a one-man hobby project turned into one of the biggest video games. I doubt many codebases would work well after growing so far past their original aspirations.


It was never well-written, though. It was good enough to work, but it was never good code.


But it got the job done. That means way more than whether he used a factory or not.


I remember when I played it, there were many mods that optimized the game, usually offering over a 10x speedup with no noticeable difference in graphics or detail.

I definitely think that most of the speed increase is coming from the fact that Minecraft is being rewritten, and not which language it's being rewritten in.


I remember reading a blog post where somebody described how Minecraft's performance dropped severely after a refactor replaced function calls taking separate X, Y and Z parameters with calls taking a single Position object, which of course in many instances had to be allocated on the fly. I can't seem to find it anymore, though.


Yes, something that should have at a minimum caused no harm to performance (and possibly helped, such as by allowing for SIMD) and been a win for code readability instead introduced an extra layer of pointer chasing and heap allocation, because Java doesn't have structs. See https://news.ycombinator.com/item?id=8485180
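A minimal sketch of the refactoring being described (names are hypothetical, not the actual Minecraft code): the primitive version passes x/y/z in registers, while the object version needs a heap allocation per call plus pointer chasing to read the fields back, unless escape analysis happens to elide it.

```java
public class PositionDemo {
    // The "after" shape: an immutable coordinate object. Without value
    // types, each instance is a separate heap allocation.
    static final class Position {
        final int x, y, z;
        Position(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
    }

    // Before: three primitives, no allocation at all.
    static int blockIndex(int x, int y, int z) {
        return (y << 8) | (z << 4) | x; // a 16x16x16 chunk-section-style index
    }

    // After: a Position per call; if escape analysis can't prove the object
    // stays local, this costs an allocation plus three dependent loads.
    static int blockIndex(Position p) {
        return blockIndex(p.x, p.y, p.z);
    }

    public static void main(String[] args) {
        System.out.println(blockIndex(3, 5, 7) == blockIndex(new Position(3, 5, 7))); // prints true
    }
}
```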


Indeed. They are adding value types though. It'll take years but once done, this kind of problem shouldn't crop up again. They've also been doing work on better escape analysis.

The JVM can do a lot of optimisations C++ apps don't benefit from, but I think the loss from not having value types more than drowns them out.


That's the one, thank you.


Sometimes you just have to get things done. The world probably wouldn't have Minecraft if the author had worried about "the proper way to architect Java code" that he read on some ranty blog somewhere.


Version 1.0 is always more sloppy than version 2.0, because with version 2.0, you have the luxury of 20/20 hindsight.


The real reason for chuckling here is the implication that anyone would use O365 for personal use. I'm stupid enough to use it, and I have almost decided to write off the money I paid and go elsewhere. This is what $70/yr gets you:

1. ActiveSync! This is the only reason I'm still putting up with this garbage.

2. My company also uses O365, extensively. We have a wiki of sorts in OneNote. I can't contribute to it. Logging into the corp account and clicking "open in desktop" results in an error that claims that I don't have a license. I do. On an on-premise licensed domain-joined machine. I don't on my own O365 account but I didn't log on with that.

3. No layman's interface. Want to disable password expiry for your 89-year-old father? You'll need to download and install a PowerShell plugin before you can even start reading the documentation.

4. Inexplicable login problems. There was no UI to enter my login email because I had logged into a temporary SP Online site I was debugging. Later I found a "forget this login" link under the temporary login, restoring my access to my other accounts.

5. "Live login? Microsoft login? We don't know, tell us every time. Oh! You can't use Live with this service anyway!" I no longer know which of the two accounts my stuff is on.

TL;DR: O365 is great for corporate/business use; I'd recommend it. For personal stuff, stay the hell away. They really want to give O365 to kids?


They're giving it to /schools/. Schools have pretty much the same needs as corporations/businesses, in terms of security levels, etc.


But usually only 1/10th the budget for people to run it. I've set up both O365 and Google Apps for schools, Google LDAP sync is way easier to work with, although the syncing of passwords is harder. MS's AD sync is complex, and ended up filling one server with debug log files.


What the author means, and what is significant for this market (Minecraft Education Edition), is that it will work with any "Azure Active Directory" credentials. Every school I've worked with either uses Active Directory or has another directory service that exposes endpoints (usually SAML-based).

Read that line instead as:

Good news! Your teachers and their students won't need yet another password to be able to use this application.


I read somewhere that not providing AOT compilation was a religious issue.

It always saddened me not to have this option in the reference JDK, only in commercial JDKs.

Whereas we already had Java-like safe languages, with Eiffel, Oberon and Modula-3 compiling AOT to native code, and with JITs also available before 1996.

Oh and with value types and proper generics as well.

Now maybe Java 10 will become what Java 1.0 should have been.


They're adding AOT compilation to Hotspot.

However it'll be a commercial feature. The main reason being, they think it only really matters to rich trading houses that want to hit full speed when the market opens at exactly 9am. The tricky part is that there'll still be JIT compilation happening, because otherwise the program runs quite a bit slower, as the adaptive optimisations can make a big difference.


I saw the JLS 2015 presentation about it.

Which is sad, because if I want to pay for it I don't need to wait for Oracle; almost all commercial JVMs already do it.

When I got into Java, I always expected it to eventually support AOT on the standard compiler.


Right now many trading houses with huge IT budgets are choosing Java for their high-frequency trading systems. This is an arena where microseconds matter, and the fact that Java is a viable choice says a lot about the speed and power of modern Java.


Actually, they've moved beyond that now. The trend in high-speed trading is towards FPGAs. "Java don't play that." (not in any serious way).

Java in finance is actually slowly eroding overall. Performance-critical logic is being moved into Python modules written in C; "that's how we do it," anyway. The trouble is that Java code development is too slow, and just getting your code deployed into a test environment is often ridiculously complicated (lots of middleware), whereas with Python "you write, you run, you're done."


It's still popular. For example, LMAX still uses Java. There were a number of things you needed to avoid if you wanted to use it where latency mattered though. GC pauses are fairly detrimental.

Where latency is really critical, things are implemented in FPGA though.


I guess it is telling, though, that one of the LMAX authors (Martin Thompson) has moved back to supporting C++ alongside Java in his newer product.

http://mechanical-sympathy.blogspot.de/

I guess C++'s progress in recent years has helped this.


Even before 0x, there was definitely a genuine performance need that drove a lot of stuff from Java to C++. A lot of people I know would write Java-ish code with Boost.

A lot of the stuff that makes Java great for building high level systems quickly is still kind of missing from formal C++ though (IoC, JMX, etc...), and it would have been nice to see some of the internal stuff get open sourced.

I wonder if people are starting to avoid Java because of Oracle (e.g. vs Google).


Before 0x, Java didn't matter for production code, as 1.3 with the JIT came later than that.

But in those days it was a big mess to write portable C++ code, given the discrepancies between compilers.

I don't miss those CORBA and DCOM libraries, but I miss OWL and VCL.

Java is still quite strong in the Fortune 500, and most managers don't care about Oracle vs Google, as it was caused by Google trying to avoid paying Sun anyway.

However, I sense some signs similar to when CA started losing CLIPPER's direction, or when Borland's attitude started scaring people away from Delphi and C++ Builder.

On the other hand, Java is just too big and it will take generations to replace all those systems.


The earliest version of Java I worked with in production was 1.5, but the group I was working with had been using it in production long before that. There were other groups using Java for a lot of other things (JDBC, JMS, etc...). All the stuff I worked with relied on core Java and NIO though, and I'm not entirely sure how a lot of things were done before 1.5.

Java is definitely still really popular for development overall (TIOBE 2015 #1). My personal concern with Oracle vs. Google though is that Oracle does not own the language, and the copyright claim on APIs seems pretty aggressive. The JVM and Java have open specifications which in theory anyone should be allowed to implement (practice is admittedly a bit different). Even the official JDK was open source, which Oracle seems a bit hostile toward considering what they've done with purchases like MySQL and how the community largely responded.

Claiming copyright on an API seems somewhat akin to Keurig 2.0. Although, I think courts are having trouble keeping pace with technology.


I have been using Java occasionally since it was made available in 1996.

Our university adopted it right away and made it in 1997 the official language for compiler development and distributed systems classes.

Without bothering to search the Internet, NIO was either introduced in 1.4 or 1.5.

Yes they have open specifications, but they also have trademarks and compliance certifications.

The JDK was open source, but the license prevented its use for mobile devices without a Sun approved license, that Google didn't want to pay for.

So now Google has actually created a fork in the Java community.

Just imagine, when you have Java 9 with modules, or Java 10 with reified generics, value types, a JNI replacement, an official unsafe package, and many other roadmap features, how you would keep the code portable between Android Java and the real Java.


You are right, but Google is moving to OpenJDK. This will fix those problems, or did I misunderstand the implications of this move? http://venturebeat.com/2015/12/29/google-confirms-next-andro...

edit: The article does state at the end that Google will still modify OpenJDK for its own needs, but that still means the vast majority of the implementation will be the same, so this seems to avoid most incompatibility issues.


Thanks, I actually missed this. The difference with modifying OpenJDK is that it's a common codebase as opposed to a language fork.

I know Redhat is one of the other major OpenJDK contributors, and a lot of distributions are relying on IcedTea. It will be interesting to see how this develops.


So far Google has adopted parts of OpenJDK 7.

They still haven't said anything about newer versions.


NIO was introduced in 1.4. Before that, a lot of network programming wasn't practical where performance was a concern. I would guess it was the same with Linux before 2.6.


Boost Python is great for C++ shops too.


Not that it's a fair comparison, but I wonder if you took a Java from today and compared it to a C++ compiler from then, would the optimization differences make Java faster for a larger subset of algorithms? I wonder if this was just the classic problem of assuming your competition is static, and planning to eventually surpass their current capabilities while ignoring that by that time their capabilities may have evolved as well.


Nope, it has to do with the fundamental memory models of Java vs C/C++.

In C++ you can arrange your access patterns so you don't cache miss, much harder to do that in Java.


Harder to do manually, or harder to do even for the VM (given enough intelligence) because of the memory model?


You can do it by treating everything as a ByteBuffer, but then you pay some conversion costs (int -> float is especially painful).

It's fundamentally a data structure + execution flow problem, so it's not something that a VM/compiler can help with. The fact that everything in Java is a reference (to be fixed at some point in the distant future) just means that every object fetch is a cache miss (really bad).
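A sketch of the ByteBuffer approach (hypothetical example, not from any real codebase): positions are packed into one contiguous direct buffer, so iteration walks memory linearly instead of chasing object references, at the cost of manual offset bookkeeping on every access.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Pack (x, y, z) float triples into a single contiguous direct buffer.
public class PackedParticles {
    static final int STRIDE = 3 * Float.BYTES; // bytes per particle
    final ByteBuffer buf;
    final int count;

    PackedParticles(int count) {
        this.count = count;
        // Direct buffer: off-heap, contiguous, native byte order.
        this.buf = ByteBuffer.allocateDirect(count * STRIDE).order(ByteOrder.nativeOrder());
    }

    void set(int i, float x, float y, float z) {
        int base = i * STRIDE;
        buf.putFloat(base, x).putFloat(base + 4, y).putFloat(base + 8, z);
    }

    float sumX() {
        float s = 0;
        // Sequential, predictable reads; no per-element object dereference.
        for (int i = 0; i < count; i++) s += buf.getFloat(i * STRIDE);
        return s;
    }

    public static void main(String[] args) {
        PackedParticles p = new PackedParticles(3);
        p.set(0, 1f, 0f, 0f);
        p.set(1, 2f, 0f, 0f);
        p.set(2, 3f, 0f, 0f);
        System.out.println(p.sumX()); // prints 6.0
    }
}
```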


Around Java 7, there was experimental support for scalar replacement (via escape analysis). I'm not sure how it progressed, though.


Ah, that makes sense.

So we could conceivably claim that, if that were fixed, Java might be able to JIT-compile code that runs a bit faster than C++ across a wider range of applications than currently. But then again, by the time that happens, C++ might have a plethora of other tricks up its sleeve to make execution a little faster.


You're hypothesizing something that really wouldn't be Java anymore.


Isn't this what Dalvik does? (And by extension, ART)

IIRC, a register-based virtual machine would alleviate the cache-miss behavior the GP talks about. (Because larger, more complex instructions = fewer hits to cache)

It's been a while since I've mucked around with this type of stuff, though.


Nope, you're thinking a one level too low in the stack.

The issue is that each object is a reference, and because it's a reference, its position in memory is by nature ambiguous (compacting GCs make this even worse by moving things around).

Until you can guarantee memory location, you don't know that the N+1th object you're going to access is in the same cache line (or prefetched), and any sort of cache optimization is shot to hell. Only by knowing your standard execution flow and data set size can you write code that's as efficient about cache misses as possible.

Hotspot tends to focus only on tight inner loops, where the stuff I'm talking about involves looking at the larger program (and the structures you choose to put data into). Think Structure of Arrays (SoA) rather than Arrays of Structures (AoS). Possible to do in Java, but very painful.
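A rough illustration of the SoA-vs-AoS distinction in Java (hypothetical Particle example): the AoS scan dereferences one scattered heap object per element, while the SoA scan walks a single contiguous primitive array.

```java
public class SoaVsAos {
    // Array of Structures: each element is a separate heap object, so a
    // linear scan still chases N pointers to wherever the GC placed them.
    static class Particle {
        float x, y, z;
    }

    // Structure of Arrays: each field lives in one contiguous primitive
    // array, so a pass over a single field is sequential in memory.
    static class Particles {
        final float[] x, y, z;
        Particles(int n) { x = new float[n]; y = new float[n]; z = new float[n]; }
    }

    static float sumXAos(Particle[] ps) {
        float s = 0;
        for (Particle p : ps) s += p.x; // one dereference per element
        return s;
    }

    static float sumXSoa(Particles ps) {
        float s = 0;
        for (float v : ps.x) s += v;    // contiguous, prefetch-friendly
        return s;
    }

    public static void main(String[] args) {
        Particle[] aos = { new Particle(), new Particle() };
        aos[0].x = 1f; aos[1].x = 2f;
        Particles soa = new Particles(2);
        soa.x[0] = 1f; soa.x[1] = 2f;
        System.out.println(sumXAos(aos) == sumXSoa(soa)); // prints true
    }
}
```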


Not only that, but IIRC at the beginning each ref was a pointer into a table that would store the actual address of the object. So every ref was a double pointer dereference.


Java's JIT is state of the art; in most cases the generated code is _only_ 50% slower despite totally wrecking cache/memory locality.


True, but it will hopefully change in Java 10.


Given the ~3-5 year release cycle trend(plus the fact that android is still Java 7) I'm sure we can expect it by 2025 ;)


Most likely Google will never update beyond Java 7, if their attitude at Google IO is anything to go by.

For me what counts is the official Java, not forks.


They've already updated to Java 8 with the switch to the OpenJDK so whenever Java 9, 10, 11, etc come out they'll be in sync.


I have news for you.

Check the Android source code.

They updated to selected parts of OpenJDK 7!


> So it really doesn't surprise me that a C++ version of Minecraft would outperform a Java version

I am not sure it is a fair comparison. Had Minecraft been written in C++ with the same dev cycle, a complete rewrite in Java would also improve its performance...


Compiled versions of Java libraries and apps would ruin the whole ecosystem, which is basically the best thing about Java. The CLR is a good compromise, but in the end it's not that much better.

Hindsight is always 20/20 so it really doesn't surprise me any rewrite is faster than the previous version.


Not that I prefer Java where performance is important, but profile-based optimization combined with optimizing across translation units is non-trivial. C/C++ compilers have been increasing support for these more recently. Jan Hubicka's blog has a good example [1].

I have to wonder if they plan to keep the codebase portable though. One of the major benefits of Java was that nearly everything has a JVM.

[1] http://hubicka.blogspot.com/2014/04/linktime-optimization-in...


> One of the major benefits of Java was that nearly everything has a JVM.

This was a big deal in the late 90's as C compilers were still catching up with ANSI C and ANSI C++ wasn't done, with each compiler supporting a different flavour of the ongoing work.


I would guess a lot of stuff wasn't completely POSIX either (still kinda true). It's really only recently C++ added a standard memory model, portable threading, etc... POSIX is even kind of falling behind for this stuff IMHO.


POSIX is basically what should have been the official C runtime, but they didn't want to make it a richer runtime.

At least as I see it.


> That said, one of the more interesting debates I participated in at Sun was Bill Joy's insistence that interpreted Java would be "faster than C++."

Today, with JDK8, that's almost true. With JDK9 it might be.

There's no reason Java can't be faster, it just hasn't happened yet.


Stunning that Bill Joy ever thought interpreted Java could be faster than C++. I thought he knew more than that...


He may have simply overestimated the availability of Bill-Joy-like programmers.


Kindly explain the downvotes? I have nothing against Java. But C++ has direct control over memory and/or memory allocators, the option of allocating things on the stack, no garbage collection, etc.


As other people have pointed out: "Modding" in Minecraft is literally replacing the original Java bytecode with your own. This is why the idea of a Minecraft launcher took off; replacing binaries with modified versions is very dirty and very difficult to do if multiple things are touching the same file. I hope that this C++ version is the impetus needed for the forever-promised-never-delivered modding API to finally take shape. Then I don't even care what the underlying technology is.


One would hope so. I predict, however, that many end users will be slow to adopt, as modders will probably start by rioting and then replatform (if indeed there is an API), with a substantial time lag. There's potential for a vicious under-adoption cycle there, and a bifurcation of the community into "classic" Minecraft and "new" Minecraft.


Oh, don’t worry, we are already rioting.

Mojang already announced that "deep modding" won’t be supported. They’ll extend the stuff current resource packs can do, but they want to avoid modders replacing or extending any game mechanics directly.

For example, modifying the rendering pipeline will be impossible.

What modders expect: Minecraft as a whole engine, similar to the API Unity presents to developers.

What Mojang is willing to deliver: A modding API that’s glorified command blocks.


>Oh, don’t worry, we are already rioting.

Any chance of modders swapping to an opensource minecraft clone?


Minetest is very good, and can be modded with Lua.


I don't disbelieve you - Mojang, even under Microsoft, has always been rather clueless - but what's your source on this?


You can read the answers from mojang_tommo here: https://www.reddit.com/r/Minecraft/comments/3ff2dy/minecraft...


You've very much misrepresented his position.

he seems to be okay with deep levels of modification, and even says that things have been abstracted enough to potentially support it.

He merely says that nobody has written it thus far with that consideration in mind.


Assuming MC does move to a C++ main version:

I wonder if anyone is studying this and drawing parallels to the Python 3 or Perl 6 efforts. There seems to be a lot of similarities.

* Existing product is great, but limitations are being encountered

* Decision is made (or reality admitted, depending on your view) that you have to make a dramatic change

* Community is split, progress is slow

* ???

I'm a huge Minecraft fan (it's about the only computer game I regularly play), and while the community has lots of grumpy people, it's also fairly adaptable. This would be a huge shock though, and I'm more interested in what happens over, say, 6-12 months than I am in initial reactions.


On the other hand, this potentially could unite XBox, mobile and PC versions of Minecraft, which would make quite a few people happy.

As most of this thread discusses, mods are going to be a critical point, even though it is surprising how much people manage to do with command blocks, resource packs and other vanilla tools.


As a parent who avidly plays minecraft with my kids, exclusively on tablets, I tend to believe the article's notion that the C++ version (via Minecraft PE) will fairly quickly become by far the most popular version. So it will be ahead of Python in that regard.


I'd like python 3 much more if they didn't change the print statement.


As a frustrated parent: Minecraft itself isn't compatible with its own mods. The whole ecosystem is a frustrating mess, or at least it is for me, since I'm the one performing tech support for my 7-year-old.


Words I dread hearing when I arrive home from work: "daddy, can you install a mod for me?" Usually the first half hour is getting rid of all the crapware they were tricked into installing as they tried to get the mod themselves.


Tekkit client, Curse client, and FTB client. If the mod isn't on one of those, they aren't getting it.

I used to do custom installs of mods and even a little bit of patching to make mods play nice together (namely recipe conflicts), but the time it takes just isn't worth it when you can grab a Direwolf20 or Tekkit pack and play right away. I do kinda wish Mo Creatures was still included in packs because my little sis loved the horse breeding to get endgame horses.


You forgot about the atlauncher. It has some interesting packs like TechNodeFirmacraft (a TerraFirmacraft based mod pack) that actually somewhat educate users about rising up from caveman to industrial era technology (real metallurgy names and semi-believable progression; at least until the end game where you make fantasy alloys to move blocks of non-finite water and lava about).


Ha ha ha! You nailed it. I dread the same thing. I tell him to stick with servers on planetminecraft, but even those are tied to specific versions of Minecraft. Then there's umpteen managers for mods and utilities to roll between different versions. I let his 20-something uncles deal with the mess; I have had it. I guess at 44 I have lost patience for such things.

While Minecraft is a neat concept, the whole thing really does need to be scrapped and re-written from the ground up.


Similar experience here, but I have a different take: I believe that the cesspool of malware, buggy code and incorrect documentation has been invaluable in allowing me to teach my kids about computer and network security from a very early age. They now know about sandboxing, testing suspicious code on a throwaway VM, port forwarding with client IP address filtering, and the perils of interface version skew.


So much this. The only time I've ever been tricked into installing malware, ever, was from directly following a howto on installing a mod.


And you get to dodge all the shady mod download sites. I was amazed how difficult it was to install some of the ones my son wanted installed. I just gave up on some of them.

It's weird, because running different levels and mods for things like Doom and Quake was so simple.


Doom and Quake mods existed in a world before sketchy download sites. And today, most Doom and Quake mods seem to be hosted on websites dedicated to either just those games or mods in general (and I say this having spent some of this past weekend revisiting Quake and Doom mods).

Minecraft modding seems to be centered around a third-party controlled forum with ad-interstitial links to pay-per-impression downloads from some sketchy file hosts. The whole thing is super sketchy. And the fact that you're basically patching in compiled code (that can do bad things to your system, as there is no limited API or scope-limited runtime like in Quake and Doom modding) makes it really, really scary to get those mods running.


I still remember the BUILD 2015 (I think) fiasco where mods to Minecraft were used to demo VS2015's Java support.

