From: http://venturebeat.com/2015/09/24/how-john-carmack-pestered-... :
“I was willing to do just about anything,” he said. “On the phone I said that if this doesn’t happen, I’m going to cry. This will just be so terrible. This will be the best thing that we can do for the platform. But there are some problems that compilers can’t solve.”
It turns out that the solution was to get the top executives from Facebook and Microsoft together.
“Mark [Zuckerberg] and Satya [Nadella] were able to sit down and make sure that the deal happened,” said Carmack.
He's actually available in Fallout 1, 2, 3, 4, and New Vegas. :)
The game world is not only de-facto voxels. It's transparent, masked and oblique voxels that interact with each other and with other entities: farmland blocks hydrated by nearby water blocks, lava blocks igniting burnable blocks nearby, plants growing up, hostile mob spawns and lines of sight, mob pathing as they follow the player, fellow mobs, or random points of interest while wandering, etc.
The combination of using Java and being coded naively means that there's a huge amount of pointer chasing and unnecessary dynamic memory allocation that can be easily cut out without even needing to dip into the data structures and algorithms literature to try for asymptotic improvements. There's no SIMD and no concern for cache locality. Last time I checked, it didn't even look like terrain generation was done in a separate thread from the interactive game logic, let alone in multiple threads. The game only simulates dynamics within ~128 blocks of a player, but if you've got two players miles apart they're still simulated on the same thread.
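To make the pointer-chasing point concrete, here's a minimal sketch (my own illustration, not Minecraft's actual code) contrasting per-block heap objects with a flat primitive array for chunk storage:

    // Hypothetical sketch, not Minecraft's actual code.
    class Chunk {
        // Naive layout: one heap object per block. Every lookup chases a
        // reference, and 16*256*16 tiny objects are pure GC pressure.
        static class Block { byte type; }
        Block[][][] naive = new Block[16][256][16];

        // Flat layout: one primitive array indexed arithmetically. The whole
        // chunk is contiguous in memory, with zero per-block allocations.
        byte[] flat = new byte[16 * 256 * 16];

        byte blockAt(int x, int y, int z) {
            return flat[(x * 256 + y) * 16 + z];
        }
    }

The flat layout also makes linear scans over a chunk cache-friendly, which is exactly the kind of win that doesn't require touching the algorithms literature.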
I do fully understand that the dynamic modifiability of Minecraft's game world means that it is always going to be hungry for memory bandwidth and will be latency-sensitive. But that doesn't mean it has to be inefficient. The Pocket Edition (not Java) already showed that huge improvements are possible, because they were necessary to get anything resembling Minecraft onto mobile platforms. There's a lot more that can be done when you're back on a platform that has sufficient RAM capacity and a ton more CPU power relative to RAM speed. There's a lot of latency that can be eliminated and a lot more that can be hidden from the user without breaking the game. The development process meant that hardly any of the early fundamental implementation decisions were made with the much later features in mind.
But that's the absolute worst case, nobody actually plays the game like that.
Multiplayer gets much worse though, god forbid you have several people exploring new terrain on a server.
And god forbid you play with any of the mods that add faster railways or jetpacks or boots that make you walk faster, etc. It's a bummer that the optimizations are coming to the less moddable version, MC is a great platform for people to build new gameplay on.
Have a read about how he developed the app:
It's almost beyond belief what he managed to accomplish in one and a half weeks. Hacking the Netflix video decoding system, fine-tuning the UX control heuristics, working around DRM limitations, optimizing the power draw and thermals, and much more, all on top of implementing the very polished app itself.
This link will start playing as Carmack tells the "broken Netflix engineer" story, while recounting how he finished the app in that incredibly short timeframe.
John offered an interesting premise about the gap VR is trying to fill: essentially, VR fills the gaps in your real life. Hmm... would it be better to strive to improve your real life instead?
There's a repeating motif in reactions to disruptive technology, where the technology is first viewed from an oppositional mindset, which makes sense, because any disruptive technology will steal time away from other Old Activities that existed before the technology, and people who aren't early adopters only look at the decrease in the time spent on Old Activities.
We saw this motif of oppositional reaction in how people viewed the internet: people are spending so much time in cyberspace that they won't know how to effectively navigate the real world; people are having fantasy cyber-lives instead of spending time in the real world; he's meeting someone he met online, he must not know how to interact with people; etc. But social networks descended on society in an incredibly short period of time, and worked their way into the furthest corners of our lives. The oppositional mindset gave way to an integrative one, where the notion of a "cyberlife", as distinct from a "life", is simply misplaced: the internet is simply a part of Reality, sans prefix and with a capital R, instead of being boxed up in the conceptual category of "the Cyber".
The motif recurred when smartphones entered the fray. The oppositional critiques were voluminous and eloquent: we're spending so much time texting we're forgetting how to speak to each other; every crack in every interaction is plastered over with the ritualized and mutually fraudulent "notification check", signposting the way to the unravelling of the social fabric, etc.; you can find the Real World up there, when you hold your head high, with dignity, and not down there, with your head bowed, staring transfixed at a shining rectangle, face ghost-like, bathed in the soft pearlescent glow of vapidity. But at some point, the integrative mindset arrived. It's hard to maintain the oppositional mindset when you're sitting in a restaurant that you just found on Yelp, chatting to your friend on WhatsApp, only to have them sit down in front of you. The handoff between "smartphone life" and "real life" is seamless. Smartphones have woven themselves so inextricably into our lives that if you ask someone how their smartphone life compares to their real life, they'll just give you a strange look. Smartphones are just a part of life.
I think VR/AR could go in this direction, as just another arrow in our technological quiver. If we start looking at things like social VR, which has the potential to reshape the way we interact remotely, or how architects are today routinely using VR to demo to clients, it's not impossible to believe that the integrative mindset could eventually overcome the oppositional mindset in terms of how we think about VR.
> To fix this, I had to make a small change in the Netflix video decoding system so it would call out to my VR code right after it submitted each frame, letting me know that it had submitted something with a particular release time. I could then immediately update the surface texture and copy it out to my own frame queue, storing the release time with it. This is an unfortunate waste of memory, since I am duplicating over a dozen video frames that are also being buffered on the surface, but it gives me the timing control I need.
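For anyone curious what that might look like in code, here's a minimal sketch of the frame-queue idea as I understand it from the quote alone. This is my reconstruction, not Carmack's code; copyCurrentFrame() is a hypothetical stand-in for blitting the just-submitted surface texture into a privately owned buffer.

    import java.util.ArrayDeque;

    // My reconstruction of the idea, not Carmack's actual code.
    class FrameQueue {
        static class Frame {
            final int textureHandle;
            final long releaseTimeNanos;
            Frame(int tex, long t) { textureHandle = tex; releaseTimeNanos = t; }
        }

        private final ArrayDeque<Frame> queue = new ArrayDeque<>();

        // Called by the decoder right after it submits each frame.
        void onFrameSubmitted(long releaseTimeNanos) {
            // Duplicating the frame wastes memory, but buys timing control.
            int tex = copyCurrentFrame();
            queue.addLast(new Frame(tex, releaseTimeNanos));
        }

        // Called by the renderer: returns the newest frame whose release
        // time has passed, or null if none is due yet.
        Frame frameToShow(long nowNanos) {
            Frame due = null;
            while (!queue.isEmpty() && queue.peekFirst().releaseTimeNanos <= nowNanos) {
                due = queue.pollFirst();
            }
            return due;
        }

        private int copyCurrentFrame() { return 0; }  // hypothetical GPU blit
    }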
Oh, Mr. Carmack...
And I only started coding for Android last month. I guess I'm an 11x programmer. :) ducks
Also I bet they don't behave 100% the same way, in regards to timing, across devices.
In graphics programming at the level Carmack develops, every ms counts.
I consider myself one of the very good ones, but I look at people like Carmack and realize I will never be one of them... that won't stop me from trying, though!
That said, I don't dispute that some people are, at some point, 10 times more productive than others. I contend that anyone can be just as productive, and often is, was, or will be, although the manner in which they should be productive is absolutely not clear-cut. Society as a whole shapes how we act, and diversity in its composition makes possible what we rely upon.
For instance, I would not be as productive at work if I had to take care of a few children on my own. Similarly, I am extremely unproductive while gardening, and don't intend to spend the time to improve.
That seems to be either such a strong claim that it's trivial to refute (e.g. a newborn baby, or a quadriplegic with severe mental retardation), or too nebulous and ill-defined to be of any use (e.g. said quadriplegic is productive in cultivating character in their carers?).
0 - https://twitter.com/ID_AA_Carmack/status/684475417726595072
Some of them leave a disaster in their wake. They're fast because they're sloppy, and don't take the time to refactor their new code into something maintainable. The team pays the price for that later.
I'm not suggesting John Carmack is one of those bad 10x developers, but they do exist. Just something to keep in mind.
It may be wise to leave it as a prototype and build V2 from scratch after that point - taking many of the best parts from the earlier experiment. I've been on the former end enough times where I'd rather the next team of engineers didn't use it as the base. I often learn many lessons from building V1 which would be a strong base for rebuilding it properly.
Erlang creator Mike Williams: “If you don’t make experiments before starting a project, then your whole project will be an experiment”.
Unfortunately many startup founders don't see the dev process as experiment-driven, but rather as a monolithic beast that continually needs evolution and refinement. The opportunity costs of rebuilding a prototype-quality app using lessons learned from it vs. building on top of it are real; rebuilding is not always as risky as many people make it out to be. Context matters.
I don't think Carmack would have the productivity of 100 average guys if he was working on a typical CRUD business application.
"And this wouldn't be their last obstacle moving in to the lake house … [Next chapter]: Also, John stubbed his toe on a door."
Between that and Wil's enthusiastic reading — buying in to each and every one of those sure-to-be-disappointing wind-ups — it was kind of exasperating. But I'd say it's worth it, regardless. I'd agree it's probably way better as an eBook. There was lots of interesting stuff, and some glorious nostalgia :)
Masters of Doom is a great book though. I read it a couple years back. I recommend picking up a copy if the audiobook isn't working out.
Well, the main character is whiny, and I found the book to be terrible, so I don't think your opinion is too off base.
Read it. It's worthwhile.
Stay Awhile and Listen: How Two Blizzards Unleashed Diablo and Forged a Video-Game Empire 
Jordan's website with general links: http://www.jordanmechner.com/
The Making of Karateka - Journals 1982-1985: http://www.amazon.com/Making-Karateka-Journals-1982-1985/dp/...
The Making of Prince of Persia - Journals 1985-1993: http://www.amazon.com/Making-Prince-Persia-Journals-1985/dp/...
Ready Player One - http://www.amazon.com/Ready-Player-One-Ernest-Cline/dp/03078...
Carmack talks are fun to watch - I think they're worth checking out.
Shit, maybe there just needs to be a CarmackCon, where the only scheduled item is a start time for his keynote, which then sort of seamlessly blends into a hackathon.
Or maybe we just need to start a petition for no end time on his talk at Connect 2016.
The more I read about Notch, the more I'm convinced that Microsoft's acquisition of Mojang, and getting him out of the scene, was the best thing for Minecraft. He strikes me as an incredibly fortunate but oddly emotional individual who was in way over his head once his creation took off.
As far as I can tell, FB has treated the Oculus project with incredible respect, and not a single fear has come true.
I understand there was legitimate reason to worry over the Facebook acquisition, but I still strongly believe that Notch (and many others) took it too far.
Not only did he address this ("I’m aware this goes against a lot of what I’ve said in public. I have no good response to that."), his public posts about it were fairly measured after his first response (which was mostly a few negative tweets):
> I have the greatest respect for the talented engineers and developers at Oculus. It’s been a long time since I met a more dedicated and talented group of people. I understand this is purely a business deal, and I’d like to congratulate both Facebook and the Oculus owners. But this is where we part ways.
It's silly to expect consistency in people's reactions to everything -- I doubt you are "fair" with every reaction you have to things that affect you -- but, regardless, "I still strongly believe that Notch (and many others) took it too far" is an (ironically) excessive reaction to what actually happened.
I don't agree at all. Your quote concludes with an attempt to completely sever ties with Oculus. I do not believe this was in any way a measured or reasonable reaction at the time. The nature of the underlying action is not changed by the fact that it happened to be phrased in a relatively respectful way in that particular post.
While technically true, that again makes it sound dramatic for no apparent reason. Choosing to not work with a company because you're uninterested in being associated with what they do is not something beyond the pale, even if many others have no problem with that thing.
To take a topical example, did Jonathan Blow "completely sever ties" with Microsoft because The Witness is on PS4 and not Xbox One, given that Sony actually cared about indie developers at the time he was choosing a console to develop for? I guess you could say that if you wanted some clickbait (and I'm sure more than a few gaming blogs did), but don't pretend that it somehow gets at the "nature of the underlying action".
To me, this feels like an action that was intended to stir controversy, but I will admit I could be wrong. You may see it differently.
As for Jonathan Blow, I'll refrain from commenting because I don't know anything about that situation.
Facebook buying Oculus was a two-billion dollar bet on virtual reality being the next big platform, and to have that bet pay off, Facebook has to play the long game, and let Oculus and the virtual reality market grow.
Facebook isn't stupid or myopic enough to require a Facebook login to use Oculus products. It makes absolutely no sense when you consider the reasons that Facebook bought Oculus.
Many details might have been different in this alternate history: price, timeline, quality, bundled games, future games as funded by Oculus, and more.
Unfortunately I don't know if any of us really have the information necessary to make a truly correct comparison.
My excitement for and willingness to spend money on the Rift was greatly decreased by the Facebook acquisition. It is of course still quite possible that the Facebook acquisition was a good move that will benefit everyone.
They haven't even released a consumer product yet. It's awfully early to say that it's being handled either well or poorly.
If Gear VR is any indication, then the fear that Facebook is going to mess with things obnoxiously is unfounded. I don't think we'd have as polished a product as we'll get with the retail Oculus version had the Facebook deal not happened.
Also, I think the increasing pressure from HTC and Sony is showing pretty convincingly that Oculus would have been in a lot of trouble if they hadn't received solid financial backing from a larger partner like FB. The Vive has already pulled ahead of the Rift in terms of offering a much more versatile tracking solution in the form of the Lighthouse stations. PlayStation VR is the sleeping giant in the room that nobody's really talking about.
Oculus is not going to be able to rest on their reputation for long and still win the VR game. VR is slowly becoming high stakes; it's not a harmless little Kickstarter campaign anymore.
If it's true that they're not making any money on the $600 CV1, then it seems to me that they desperately needed FB or a similar partner.
Ultimately, we're faced with a dilemma: the deeper we go into the Oculus + Facebook timeline, the further we move from being able to predict what might have happened if Oculus had never been acquired. If FB does something to upset people (like requiring FB login), there might be a lot of outrage, but nobody will be able to say for certain that Oculus could even have remained competitive without FB. It won't be long before nobody can credibly say what FB-owned Oculus should be compared against in the hypothetical alternate timeline where Oculus was never bought.
The good news is that if Facebook does seriously muck things up there are great competitors who will gladly take your money instead. I just don't think there's any sense in condemning the Rift based on some hypothetical future wrongdoing from a company that has so far treated the platform quite well, especially taking into account the fact that we don't really know how well Oculus would have fared without the acquisition.
Regardless, I wasn't trying to ask why Notch sold. I meant to ask why it was not fair for Oculus to sell if it was fair for him to sell. Asking this question is fair game since Notch chose to make his condemnation of the acquisition exceptionally public.
It probably felt unfair to him because Oculus was kickstarted by gamers (like Notch), and Minecraft was self-funded.
Since the backers (gamers) felt like they were stakeholders in Oculus, they were probably disappointed that Oculus was bought by Facebook rather than Bethesda or Valve.
From personal experience, I think we often are most vehemently angry about behaviors we feel like we are dangerously close to falling prey to. The closeted gays who are white-knuckling a righteous straight life are often the most vocally opposed to people who give in to homo temptation.
I suspect Notch was wrestling with those issues for years, and trying to harden himself against temptation. Selling to Microsoft wasn't necessarily a compromise to his values. It's possible that deal was fundamentally different to the Oculus deal in some way that matters to Notch. But it makes sense to me that he'd have strong opinions about how to go about doing something he was also struggling to figure out how to do.
Notch promised Minecraft would eventually be released open source when he was raising early access dollars, and he seems to have completely reneged on it with the sale to Microsoft.
Oculus got acquired by Facebook and still afterwards released the complete hardware schematics for DK1, the kickstarter project, for anyone to use on Github.
The Facebook acquisition still probably wasn't a good thing; they seem to want to turn what are essentially monitors-plus-tracking into game consoles with exclusives etc. You'll have a headset from one company and have to buy the equivalent-spec headset from another just to get access to all the games. It's going to be like a future where Skyrim 2 is only released for BenQ monitors. You may already have a monitor with a panel made to the same spec in the same fab, but now you have to have an extra monitor you hook up only occasionally to play some of your games.
They've said they might port their store to other PC headsets, but have made no real commitments.
Essentially you are future-worrying about something that historically has not happened. The only thing that comes close is the whole Microsoft/XBOX indie parity clause, which they eventually just took away.
My guess is that Oculus will not be the only game in town (Vive, and I'm guessing many other challengers), and that other HMDs will be competitive enough that Oculus cannot do something completely unreasonable such as forcing developers to develop only for their store. Strategy-wise it would be terrible too, as it splinters a market that may or may not actually be mature enough to splinter.
PC gamers are the biggest target market for HMDs (in terms of already having the hardware and the desire to buy one) and generally don't stand for this kind of thing. Most people don't object to Steam as a store because it hasn't done anything as bad as locking out a game.
>Oculus cannot do something completely unreasonable such as forcing developers to only develop for their store.
I'm not future-worrying, they already have 20+ titles that they have paid for store exclusivity with.
I'm not saying exclusives are the solution but I'm curious how Oculus plans to profit given the competition is already heating up.
And PSVR is going to be competitive with the Oculus recommended-spec machines, which are 3-4x as expensive, due to smart choices in the panel (lower nominal resolution but more subpixels, saving ~25% of rendering work for comparable quality) and in refresh (60fps reprojected to 120Hz instead of native 90fps, saving another ~33%). Combined, that's 0.75 × 0.67 ≈ 0.5 of the per-frame cost: nearly a doubling of effective performance relative to the hardware it runs on, allowing a similar quality of experience for potentially close to a third of the total cost.
And Oculus won't even have motion controllers until potentially the end of next year.
Amusing how everyone seems to think that Vive will be the best product on earth while Oculus will be "just Oculus", "not kick ass", "not great" etc. with nobody having either one so far. The power of "We love Valve" vs "We hate Facebook" seems to be really interesting from a PR/Marketing perspective.
Until you get to input and mobility: Xbox gamepad vs motion controls. No contest. Yoga-mat scale standing without full rotation vs. room scale, walking around with 360 rotation: again, no contest.
Oculus wins on weight so far, though I haven't tried the Vive Pre yet.
From recent news, it sounds like he's now given up trying to be normal and is trying out being an over-the-top hedonistic rich guy, and doesn't enjoy that either but doesn't know how to stop.
You're very much correct, however: he himself will admit that he definitely prefers just making small games with small fanbases, and that he's not really suited to being a public figure.
That said, one of the more interesting debates I participated in at Sun was Bill Joy's insistence that interpreted Java would be "faster than C++." From what I recall of his argument, it was that understanding the semantics of the program plus just-in-time compilation would allow the JVM to run only the code that was needed, in a smaller resident set with fewer context switches. At the time I argued against that, saying that a compiled version of Java could be a useful systems language but the interpreted version would not.
And even with some really really amazing hotspot technology on the JIT compiler, I don't think Java was ever faster outside of a few synthetic test cases that did no useful work.
So it really doesn't surprise me that a C++ version of Minecraft would outperform a Java version, but it would be much more interesting if they included a JVM for the mods, so that the core was fast and the mods were portable.
I know this was meant to be a joke, but this was more an issue in wording by Schofield (the writer) than anything else. A Microsoft account is pretty much global across all their products. So it's also their email account, their OneDrive account, their Azure account... I'm assuming he picked Office 365 simply because that's the product he thinks is most likely to be already used in the school.
As for the optimization; the JIT's main optimization from what I've researched is inlining. Which is really beneficial for Java because you tend to end up with a nest of getter method calls all to get a single field somewhere, and wouldn't it be nice if the JIT could just inline that field for you? I suspect that, especially if performance is at stake, it's less typical to end up with patterns like that in a C++ codebase.
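A made-up illustration of that pattern: the loop below looks like two virtual calls per element, but once HotSpot inlines the trivial getters it reduces to plain field loads.

    // Made-up illustration; nothing here is from a real codebase.
    class InlineDemo {
        static class Vec {
            private final double x;
            Vec(double x) { this.x = x; }
            double getX() { return x; }
        }
        static class Entity {
            private final Vec pos;
            Entity(Vec pos) { this.pos = pos; }
            Vec getPos() { return pos; }
        }

        // After JIT inlining, e.getPos().getX() is just two field loads.
        static double sumX(Entity[] entities) {
            double sum = 0;
            for (Entity e : entities) sum += e.getPos().getX();
            return sum;
        }
    }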
If Microsoft can get schools to use Office 365 instead of Google apps, then $2.5 billion for Minecraft was cheap.
If you're a teacher, it's a huge boon to be able to re-use existing accounts. You really don't want to make dozens of classes of kids set up and remember new IDs and passwords for a single application.
I definitely think that most of the speed increase is coming from the fact that Minecraft is being rewritten, and not which language it's being rewritten in.
The JVM can do a lot of optimisations C++ apps don't benefit from, but I think the loss from not having value types more than drowns them out.
1. ActiveSync! This is the only reason I'm still putting up with this garbage.
2. My company also uses O365, extensively. We have a wiki of sorts in OneNote. I can't contribute to it. Logging into the corp account and clicking "open in desktop" results in an error that claims that I don't have a license. I do. On an on-premise licensed domain-joined machine. I don't on my own O365 account but I didn't log on with that.
3. No layman interface. Want to disable password expiry for your 89-year-old father? You'll need to download and install a PowerShell plugin before you can even start reading the documentation.
4. Inexplicable login problems. No UI to enter my login email because I had logged into a temporary SP online site I was debugging with. Later I found a "forget this login" link under the temporary login; restoring my access to any other account.
5. "Live login? Microsoft login? We don't know, tell us every time. Oh! You can't use Live with this service anyway!" I no longer know which of the two accounts my stuff is on.
TL;DR: O365 is great for corporate/business use; I'd recommend it. For personal stuff, stay the hell away. They really want to give O365 to kids?
Read that line instead as:
Good news! Your teachers and their students won't need yet another password to be able to use this application.
It always saddened me not to have this option in the reference JDK, only in commercial JDKs.
Whereas we already had Java-like safe languages with Eiffel, Oberon and Modula-3 compiling AOT to native code, and with JIT also available, before 1996.
Oh and with value types and proper generics as well.
Now maybe Java 10 will become what Java 1.0 should have been.
However it'll be a commercial feature. The main reason being, they think it only really matters to rich trading houses that want to hit full speed when the market opens at exactly 9am. The tricky part is that there'll still be JIT compilation happening, because otherwise the program runs quite a bit slower, as the adaptive optimisations can make a big difference.
Which is sad, because if I want to pay for it I don't need to wait for Oracle; almost all commercial JVMs already do it.
When I got into Java, I always expected it to eventually support AOT on the standard compiler.
Java in finance is actually slowly eroding overall. Performance-critical logic is being moved into Python modules written in C. "That's how we do it" anyway. The trouble is that Java code development is too slow and just getting your code deployed into a test environment is often ridiculously complicated (lots of middleware) whereas with Python "you write, you run, you're done."
Where latency is really critical, things are implemented in FPGA though.
I guess C++ progress in the last years has helped this.
A lot of the stuff that makes Java great for building high level systems quickly is still kind of missing from formal C++ though (IoC, JMX, etc...), and it would have been nice to see some of the internal stuff get open sourced.
I wonder if people are starting to avoid Java because of Oracle (e.g. vs Google).
But in those days it was a big mess to write portable C++ code, given the discrepancies between compilers.
I don't miss those CORBA and DCOM libraries, but I miss OWL and VCL.
Java is still quite strong in the Fortune 500, and most managers don't care about Oracle vs Google, as that was caused by Google trying to avoid paying Sun anyway.
However I sense some signs similar to when CA started losing Clipper's direction, or when Borland's attitude started scaring people away from Delphi and C++ Builder.
On the other hand, Java is just too big and it will take generations to replace all those systems.
Java is definitely still really popular for development overall (TIOBE 2015 #1). My personal concern with Oracle vs. Google though is that Oracle does not own the language, and the copyright claim on APIs seems pretty aggressive. The JVM and Java have open specifications which in theory anyone should be allowed to implement (practice is admittedly a bit different). Even the official JDK was open source, which Oracle seems a bit hostile toward considering what they've done with purchases like MySQL and how the community largely responded.
Claiming copyright on an API seems somewhat akin to Keurig 2.0. Although, I think courts are having trouble keeping pace with technology.
Our university adopted it right away, and in 1997 made it the official language for the compiler development and distributed systems classes.
I haven't bothered to search the Internet, but NIO was introduced in either 1.4 or 1.5.
Yes they have open specifications, but they also have trademarks and compliance certifications.
The JDK was open source, but the license prevented its use for mobile devices without a Sun approved license, that Google didn't want to pay for.
So now Google has actually created a fork in the Java community.
Just imagine when you have Java 9 with modules, or Java 10 with reified generics, value types, a JNI replacement, an official unsafe package, among many other roadmap features: how do you keep the code portable between Android Java and the real Java?
edit: The article does state at the end that Google will still modify OpenJDK for their own needs, but that still means the vast majority of the implementation will be the same, so this seems to avoid most incompatibility issues.
I know Redhat is one of the other major OpenJDK contributors, and a lot of distributions are relying on IcedTea. It will be interesting to see how this develops.
They still haven't said anything about newer versions.
In C++ you can arrange your access patterns so you don't cache miss, much harder to do that in Java.
It's fundamentally a data-structure-plus-execution-flow problem, so it's not something a VM/compiler can help with. The fact that everything in Java is a reference (to be fixed at some point in the distant future) just means that every object fetch is a potential cache miss (really bad).
So we could conceivably claim that if that were fixed, Java might be able to JIT-compile code that runs a bit faster than C++ in a wider range of applications than currently; but then again, by the time that happens, C++ might have a plethora of other tricks up its sleeve to make execution a little faster.
IIRC, a register-based virtual machine would alleviate the cache-miss behavior the GP talks about. (Because larger, more complex instructions = fewer hits to cache)
It's been a while since I've mucked around with this type of stuff, though.
The issue is that each object is a reference, and because it's a reference its position in memory is by nature ambiguous (compacting GCs make this even worse by moving things around).
Until you can guarantee memory location, you don't know that the N+1th object you're going to access is in the same cache line (or prefetched), and any sort of cache optimization is shot to hell. Only by knowing your typical execution flow and data set size can you write code that's as efficient about cache misses as possible.
HotSpot tends to focus only on tight inner loops, where the stuff I'm talking about involves looking at the larger program (and the structures you choose to put data into). Think Structure of Arrays (SoA) rather than Array of Structures (AoS). Possible to do in Java, but very painful.
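A generic sketch of that difference, assuming a simple particle system (nothing Minecraft-specific):

    // Generic AoS-vs-SoA illustration in Java.
    class Layouts {
        // AoS: an array of references. Each Particle lives wherever the GC
        // placed it, so scanning positions chases one pointer per element.
        static class Particle { double x, y, z; }
        Particle[] aos = new Particle[100_000];

        // SoA: one primitive array per field. xs[] is contiguous, so a
        // linear scan is sequential, prefetcher-friendly memory traffic.
        double[] xs = new double[100_000];
        double[] ys = new double[100_000];
        double[] zs = new double[100_000];

        double sumXSoA() {
            double sum = 0;
            for (double x : xs) sum += x;  // no pointer chasing
            return sum;
        }
    }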
For me what counts is the official Java, not forks.
Check the Android source code.
They updated to selected parts of OpenJDK 7!
I am not sure it is a fair comparison. Had Minecraft been written in C++ with the same dev cycle, a complete rewrite in Java would also have improved its performance...
Hindsight is always 20/20 so it really doesn't surprise me any rewrite is faster than the previous version.
I have to wonder if they plan to keep the codebase portable though. One of the major benefits of Java was that nearly everything has a JVM.
This was a big deal in the late 90's as C compilers were still catching up with ANSI C and ANSI C++ wasn't done, with each compiler supporting a different flavour of the ongoing work.
At least as I see it.
Today, with JDK8, that's almost true. With JDK9 it might be.
There's no reason Java can't be faster, it just hasn't happened yet.
Mojang already announced that "deep modding" won’t be supported. They’ll extend the stuff current resource packs can do, but they want to avoid modders replacing or extending any game mechanics directly.
For example, modifying the rendering pipeline will be impossible.
What modders expect: Minecraft as a whole engine, similar to the API Unity presents to developers.
What Mojang is willing to deliver: A modding API that’s glorified command blocks.
Any chance of modders switching to an open-source Minecraft clone?
He seems to be okay with deep levels of modification, and even says that things have been abstracted enough to potentially support it.
He merely says that nobody has written it thus far with that consideration in mind.
I wonder if anyone is studying this and drawing parallels to the Python 3 or Perl 6 efforts. There seems to be a lot of similarities.
* Existing product is great, but limitations are being encountered
* Decision is made (or reality admitted, depending on your view) that you have to make a dramatic change
* Community is split, progress is slow
I'm a huge Minecraft fan (it's about the only computer game I regularly play), and while the community has lots of grumpy people, it's also fairly adaptable. This would be a huge shock though, and I'm more interested in what happens over, say, 6-12 months than I am in initial reactions.
As most of this thread discusses, mods are going to be a critical point, even though it is surprising how much people manage to do with command blocks, resource packs and other vanilla tools.
I used to do custom installs of mods, and even a little bit of patching to make mods play nicely together (namely recipe conflicts), but the time it takes just isn't worth it when you can grab a Direwolf20 or Tekkit pack and play right away. I do kinda wish Mo Creatures was still included in packs, because my little sis loved the horse breeding to get endgame horses.
While Minecraft is a neat concept, the whole thing really does need to be scrapped and re-written from the ground up.
It's weird, because running different levels and mods for things like Doom and Quake was so simple.
Minecraft modding seems to be centered around a third-party-controlled forum with ad-interstitial links to pay-per-impression downloads from some sketchy file hosts. The whole thing is super sketchy. And the fact that you're basically patching in compiled code (which can do bad things to your system, since there's no limited API or scope-limited runtime like in Quake and Doom modding) makes it really, really scary to get those mods running.
I've got lots of things to say about the Minecraft modding community from end to end, having done some work behind the scenes on both sides: development, server management, and being an end user of all this.
I think that the only hope for a good API is to actually have an API. Bukkit was the correct way to go from the start; Forge is not an API.
If anyone wants to hear funny stories about Bukkit, Forge, Minecraft modding, and all that, you can get in contact with me and I'd be glad to talk about it all.
I tried Minetest but it really (!) wasn't as easy to get started modding it as with Forge-Gradle. You can see my little mod at https://github.com/voltagex/minecraft-rebridge. I want to go back to it, one day.
Then again, part of that is that I'm a clumsy coder, and part of that is Forge's "support" channels.
However, most of the more interesting mods still could not possibly exist for Bukkit, due to its server-side status, so I can't see any MS API faring any better than Bukkit.
"Sponge is a combination of a new API (based off of Spout/Flow’s APIs) implemented on top of Forge, with assistance from other parts of Minecraft’s modding community (Glowstone, Cauldron, Spout, etc.).
It will be both a server and client API, and its target user base is pretty much anyone that wants to mod their game, including server owners. However, we may focus on the server-side portion first.
We invite any developer to help out."
Development seems quite active on Github.
They are even repeating some of the VERY basic yet devastating mistakes that Bukkit made at the start. Again, there is more to go into than I can fit into the scope of this comment chain.
Bukkit failed because of copyright laws. Despite this, people still use Bukkit. It was powerful, easy, and had an intuitive API. It made programming seem fun to me.
A proper API done client side would look like bukkit, but have hooks into almost anything you could need. Look at GMod for example.
Let's hope this becomes Minetest's time to shine.
Yes! I've been waiting for this forever. The first time I ran Minetest on a crappy laptop years ago it was as smooth as butter and I couldn't help but think, "Why is no one playing this?!"
I think the primary reason very few people play Minetest is marketing and piss-poor default gameplay. If you just run Minetest without adding any of the (nearly built-in) mods, you'll get an odd-looking Minecraft-like world stuck in "creative mode" (which is boring). If, however, you take the time (lots of time) to fiddle around with it and put together a collection of decent mods, it becomes a good game!
The fact that it is written in C and has a proper modding API (in Lua!) should be drawing people in like crazy but for some reason it isn't.
I agree. "Vanilla" Minetest needs monsters, villages, villagers, fortresses, more ores, more items, etc. in order to compete with "vanilla" Minecraft.
It's so very close to being a Minecraft killer...but there seems to be strong resistance to adding more to the base game.
Unfortunately, right now, every query about vanilla Minetest being a little "barren" results in replies like, "Install some mods." sigh
Personally, modded Minecraft is the only type I continue to play. If I didn't have the choice of mods, I'd rather go with one of the better options among the infinite number of Minecraft clones.
I wonder if this going to create some bizarre fork situation, with an MS-approved-and-bound C++ version and a legally questionable, community-maintained legacy Java version.
This doesn't mean the Education Edition will be cross-platform. It does mean that the fact that it's written in C++ doesn't prevent it from being cross-platform.
Against that, how many pupils would benefit from spending that development time -- a finite resource -- on incorporating mods into the C++ code base?
It's not a pseudo-religious issue.
Do you think Microsoft is actively trying to sabotage the modding community? Or is it just that they failed to take it into consideration?
Of course, even if the plugin/mod architecture is good, getting existing mods converted will be a struggle, particularly where the original coder considered them complete and is now busy working on something else new & exciting.
> patching bytecode
There is a lot less patching bytecode going on now, and it is now handled by Forge.
> unofficial APIs ... like Minecraft Forge
Unofficial APIs are what happen when nobody else provides them. It's far too late for any "official" mod API that isn't Forge, because the massive amount of modding that already exists is just not going to be re-written.
The best thing that could happen would be to bless Forge as "official".
I also agree that, by default judgement, Forge basically is the modding API at this point. But I disagree that blessing Forge as official is the way forward, precisely because of how tightly bound it is with the existing structure. How well would Forge be able to integrate with this new C++ version? Who knows! So, yes, use Forge as a primary input or even a starting point for a true official API. But don't just blindly bless Forge.
... That's exactly what the word "completed" would mean in every other context. Things are typically not called "completed" unless there is no more pending work on them. I would maybe look at terms like "feature complete" as an alternative here, where the intent is that the only development will be in maintaining compatibility with future releases.
> good sense to build in a good API for mods
LOL. That API already exists - it's called Forge. We gave up caring about an official "mod API" many years ago.
"Where's The Modding API?" - Yogscast, Dec 20, 2013
Node.js isn't particularly relevant to their primary business, nor is keeping Visual Studio proprietary. Opening them makes developers happy at a time when fewer non-MS developers are writing for their platform (they moved to the web and mobile).
Microsoft's bad behavior with Windows 10 (including the drama involving the back-ports and strong-arm upgrades in 7/8) suggest the company's behavior hasn't changed much.
But as I said, you're entitled to your opinion. I would love to be proven wrong here. Unfortunately, experience has taught me otherwise.
The real question - that you ignored - was this:
> I suspect that "always get updates" promise I paid for years ago will be ignored.
Given that Microsoft already restricted a C++ version of Minecraft to Windows 10, do you really think they will ever ship this new C++ version of Minecraft for Linux and Mac? At the same time, not years later? Or will they split the community?
Considering that this is named and marketed as a separate product ("Minecraft - Education Edition"), I don't really see where you believe you have standing here. It certainly wouldn't be the first time a company put out a new version of something in order to move on from previous commitments. For instance, "Minecraft - XBox Edition" and "Minecraft - Android Edition" are not in sync with "Minecraft - The Original PC Edition" either.
And I certainly won't pretend to know where they're going with this, and I also want and hope for this new version to be a cross-platform product. I was more addressing the general opening statement, which is exactly what I quoted in my response.
To be fair, that's where the users are, and that's how they make their money.
They seem to have given up on the server market, but they're still trying as hard as ever to keep the desktops.
Even as open source .NET is still a huge pain to get running under Linux and development of .NET anything is extremely difficult without Visual Studio (which doesn't run on Linux). Not only that but most of the .NET libraries one would use for any given purpose only work on Windows. So even if the core of .NET is open source and cross platform any software written using that platform can only effectively run on Windows.
When Microsoft puts out a version of Visual Studio for Linux desktops ("Minecraft modding edition"!) then I might start believing that they're serious about cross-platform software. It'll also signify their complete lack of relevance because to have gotten to that point would mean a level of desperation that would indicate a sinking ship.
It's just not in Microsoft's nature to make a product that's truly independent of its operating system and/or office software.
> Even as open source .NET is still a huge pain to get running under Linux and development of .NET anything is extremely difficult without Visual Studio (which doesn't run on Linux). Not only that but most of the .NET libraries one would use for any given purpose only work on Windows. So even if the core of .NET is open source and cross platform any software written using that platform can only effectively run on Windows.
The open-sourcing and cross-platform development of .NET is an ongoing process that's still in its infancy. Yes, of course it's a pain in the ass right now. Installing git on Windows was also a pain in the ass, and even now it's still basically a prepackaged Linux emulation environment wrapper.
I don't know: there's more than enough ignorance around for completely absurd views to be popular. Look at Trump ;-)
Microsoft is actually more cross-platform than either Apple or Google. It has stuff on everything from memory sticks to mainframe class servers, and it has a cloud business (which supports Linux, as well as Windows). It has dozens of apps on iOS and Android, not just on Windows.
Office runs on Windows and Mac OS X, in browsers, and there are apps for Android and iOS.
Visual Studio Code is free and available for Linux and Mac OS X as well as Windows.
It's not everything, but who does more?