C gets a hard time, and this code base highlights some of the modern features C is missing through its use of perfectly clear workarounds. E.g. the prefix “namespacing” done here works just fine and is easy to understand. The Go-style visibility approach has been growing on me lately, and it works well here.
C does deserve some of the criticism, right enough, e.g. the same macros re-defined in a few places. It'd be nice if C had a better way than its version of macros, although it's not actually a problem here because the macros aren't really big enough to have subtle sharp edges.
Ahh a beautifully written project, just shows that maybe C is entirely workable in the right hands.
So yeah basically you want good code quality (knowing what you do) and good unit testing in any language.
C has proved totally workable on many occasions. I guess a lot of the comments in this thread were written from a Linux kernel :)
Seriously, I don't use scanf that much in C. Most of the time, it doesn't do exactly what I want, or there is a more specialized alternative (ex: strtol). Also, generally, %s is bad, even the "safe" variant. For sscanf, it makes a copy, which is inefficient (if you are writing C, you want efficient code, right?), and you have to know the size in advance, which you usually don't.
Most of my string manipulation in C involves loops, pointers and a lot of in-place manipulation. It is tedious compared to other languages but it is very efficient, the reason I picked C in the first place.
And C has libraries too, to be fair. So if you prefer the "choose between 25 competing libraries that all provide the desired functionality but half of them are deprecated or don't work" workflow a lot of people are used to from C++ or whatever you can do that. Lots of people use libraries in C. In fact, as you might be aware, your OS's package manager doubles as the C language's third-party package manager. There has been plenty of "already-written C code," because it's a language that has been ubiquitous in computing for fifty years.
I'm not sure where these same tired critiques bubble up from but I see them constantly and they don't make sense. There are a number of entirely valid and damning critiques to be made of the C ecosystem but "there aren't any libraries written in C" and "oh, so you say C is simple, huh? Have you tried using it?!" aren't among them.
You're not going to be a 20 year veteran C developer and then suddenly just today learn about some obscure built-in, and that's what is meant by "simple" here, and that's a good thing.
But you could suddenly learn about a new library feature that performs the same function as another language's builtin, which is for all practical purposes an equivalent experience. In both cases you see an unfamiliar string of characters in someone else's code, and then look up what it does.
I learned C as a first year undergrad.
I've even been a teaching assistant for the very first programming course students take, so I have taught C to first year undergrads!
About 30 years ago.
So unless you actually DO teach then don't assume :)
C definitely has many flaws, but I wouldn't say complexity is one of them. I would rather say it's too simple and one has to add the complexity all by themselves :D
If you need to do lots of string processing, C is simply not the right language.
Thankfully one isn't forced to only learn a single programming language in life ;)
Asset files, config files, shaders, obj files, CSV, XML, JSON, etc... it's not uncommon. Read the asset loading code in the posted game.
For a language that doesn't even know what a "string" is, C applications tend to involve a lot of string parsing.
You could have thrown in a bunch of functions from string(3), but even then... C string handling isn't great by any means, but it isn't vast or complicated either.
It's small and easy to grasp the entire thing though, unlike most other more modern high level languages.
That said, it's not entirely clear that matters too much. Almost any non-toy C project is going to be bringing in libraries outside libc, so you still need to go off and learn those if you want to work on a project. Doesn't really matter whether they're in the base language or not.
> It's small and easy to grasp the entire thing though, unlike most other more modern high level languages.
So ready for the question?
Oh, well it didn't address what I wrote.
> So ready for the question?
Doesn't address what I wrote.
You would have a point about knowing exactly every single corner case of the language. I don't think that's easy even for C. Fortunately I didn't claim it was either.
You said "It's small and easy to grasp the entire thing though, unlike most other more modern high level languages."
The entire thing doesn't include corners? And you entered the conversation in support of someone that literally said "every corner, nook and cranny"! If you were making a much weaker claim, it's on you to make that clear. Instead of this petty "doesn't address what I wrote" garbage.
On top of that, being able to suss out what is undefined behavior is in fact quite important for proper use of C. I wish it was a super rare corner case but it's not.
> The entire thing doesn't include corners?
> And you entered the conversation in support of someone that literally said "every corner, nook and cranny"!
So reply to that.
> If you were making a much weaker claim, it's on you to make that clear.
I did. You just didn't read what I wrote.
> Instead of this petty "doesn't address what I wrote" garbage.
> On top of that, being able to suss out what is undefined behavior is in fact quite important for proper use of C. I wish it was a super rare corner case but it's not.
Sure. Doesn't change the fact your question was stupid and didn't address what I wrote.
All these could perhaps be done in C, but why spend a lot of time building this infrastructure when one could spend more time on one's application-domain solutions?
In practice though that's simply not possible. E.g. a trivial example, but you could never deprecate #include even though we have #import now.
Instead of "fixing" the language, there's written documentation (not enforced by the compiler): https://github.com/isocpp/CppCoreGuidelines - actually the first line is the perfect summary:
"Within C++ is a smaller, simpler, safer language struggling to get out." -- Bjarne Stroustrup
This is great documentation (and many parts are relevant to other languages), but many of its recommendations cannot be enforced by the compiler, so it will always be opt-in, misinterpreted, or ignored.
I've been doing a lot of zero-copy FFI work in Haskell. It became quite mechanical to create 100% literal bindings to C libs (Pointers and all), and it's nice to use it directly in Haskell as-is. I'm hoping I can write a simple tool to do exactly what you describe.
num : I32 : 123456789
fd : c:int : c:open(c:"file.txt", c:O_RDONLY)
:: c:print("file:%d number:%d\n", fd, num)
For my Haskell ideas, generating Haskell from headers automatically feels like the best route. Especially if it's configurable to some degree. It's common that you can better type a C API with Haskell than C ever could. Usually with techniques like IO, phantom types, GADTs, and the new linear types.
Of course it's workable. It does have a lot of gotchas and seems to foment a propensity to reinvent the wheel (although that also relates to the limited contexts in which it is used).
I'd generally avoid it unless it really is the right tool, in which case, just approach it in a methodical way, like every other project.
And because it is in C, you can talk to it from Nim, Python, Rust, etc.
And scripting is a very big part of RTS games.
I'm curious, what difference does it make if it was written in, say, Rust, or C++. Wouldn't you still be able to talk to it in a similar way? What's so special about C in this context?
This practice is fairly common in Python when you're dealing with AI (I'm sure there are other examples, but this is the one that sticks out to me). SDKs like TensorFlow are written in a lower-level language (C++, I think, for TF) and then FFI-called from Python, so you can end up with a function like initTensorFlow(), and calling it is enough to bring TF online and ready to start processing other data.
And you may be thinking "Well, I can do all that in Rust or Go or <insert preferred language here>," and of course you can; ultimately it's all just a linker and compiler deciding where to put code and data in an executable. But C is the de facto standard, and more tools from more languages are going to work well with it than with, say, Rust or Go. Which, now that I think about it, is probably why Rust supports defining a C API for your code, so it's easier to call from other platforms. Rust specifically tries to play nice with being embedded in existing projects because it makes it easier to adopt Rust incrementally.
Just my $0.02
C++ constructs--not so much.
Does your language allow overloaded functions? If not, okay, you have to map that somehow (probably name mangling).
Oh, your language has a different notion of inheritance than C++, well, now you need to map vtables somehow.
And what does your system do about exceptions? Constructors/Destructors? Does it agree with the standard library about what a String is? What about memory allocation? etc.
Even if the C++ ABI was perfectly described and standardized, mapping to it is always going to be error prone and clunky relative to the C ABI (which is just a lot simpler).
If you want a good example, look at what it takes to use Vulkan via the C API/ABI--which is an exemplary implementation of an API, in my opinion. Look at the hoops you go through with loaders, structs that have type fields in them, extension fields so that you can add functionality without breaking compatibility, etc. All of that extra "gunk" is the kind of thing that C++ hides normally that gets terribly exposed when you have to cross an ABI boundary.
I agree with the general gist of this, but:
• Java has no unions
• Java has no rectangular multidimensional arrays, although of course C arrays decay to pointers when passed anyway
• Java has no unsigned integer types, I imagine this is generally easily handled though
• Java has no const modifier, you'd need to keep track of that manually
• Java has no preprocessor, although of course this happens at the API level not the ABI level
• Java lacks bitfields but they're generally avoided in C APIs anyway, for good reason
• Probably worst of all, Java references are importantly different from C pointers
You're absolutely right though that the C ABI is a pretty good lingua franca ABI largely due to C being a pretty minimal language. No overloading, templates, garbage collector, or object model to worry about.
Of course there are other mechanisms, but for direct bindings I think C bindings are by far the most common.
Then I see a post like this and know for sure that I am.
I program, rock climb, ride bikes, and write. For any given one of those I can easily find 10,000 people who are more accomplished than I am and often at a younger age. I accept that I will never be that good at any of them, not because I can't be, but because I don't want to invest that much of myself into one of them. There's nothing wrong with that.
This is a story we stop telling kids around the end of high school. We tell this story because, as a child, being told you’re dumb can stunt intellectual growth, or being told you’re not physically capable can stop kids from forming a habit of exercising.
If you're on this website and your day job is programming, your expected IQ is already way north of 100.
Speaking from the point of view of someone who spent 8 years in school learning computer science, then the next 15+ practicing it:
I cannot create an RTS game engine in C. Not in a million years.
It's all good and well to be encouraging, but this is literally our field. If we can't identify our own shortcomings accurately (after, for some people, decades of experience), then we are probably less capable than we think, not more.
Given enough time I can copy other people's implementations, but it's extremely mechanistic, to the point where you can't say that I actually 'wrote' it or learned anything. And it would be questionable whether all the functionality would fit together as succinctly as the poster's.
I hope you don't think that is goal post moving, but if I do something mathematical in order to get it correct I have to sacrifice everything else: speed of production, efficiency of the end-product, readability of the code, etc. Compared to my output in logic problems or HCI, you'd think two entirely different people were involved and one was significantly smarter.
It's a failing (of the sort where, asked "What is your biggest weakness?" at an interview, I can always answer this immediately), but it's what allows me to be impressed with work like an entire RTS game engine in C in just 3 years as a passion project.
I wouldn't call athletes "physically superior" either. That kind of supremacist talk is looked askance upon, for good reason.
But they're definitely faster/more coordinated/stronger than average, pretty much by definition. They can be proud of that.
Computer programmers are also smarter than average. If they weren't, they wouldn't stick around. It's not physics or higher maths; I figure anyone above the 80th percentile can do a good job of it, and some people in the 60-80th can get the hang of it, but will probably never be great.
That's just the way it is. Phrasing it as being intellectually superior to the rest of humanity, that's just your hangup.
You must be living on a different planet from me. I can kind of see how someone might be tempted to just say “you know what, you’re right” and let you keep on thinking that, though.
It's not what you have, it's what you do with it. I struggle with some things my parents find easy, and vice versa. Most of the 95%ile people that I know have various mental health problems or other hidden disabilities that make daily life difficult -- especially achieving something like this.
And Feynman went far with being just an ordinary person supposedly slightly lower on the bell curve.
The only thing that actually tangibly matters is dedication and sweat, and how much time and effort you're willing to put in to something.
Of course, that's not to outright say that an IQ test, or however you want to measure cognitive "capital"*, isn't valuable in some respect -- it does measure something, after all. Fewer fighter pilots died in training once they started selecting using early IQ tests (source is a psych book from the 80s I skimmed a few years back :P). But that's an extreme case. For projects like this, for most of the things you will ever, ever want to do, it outright does not matter, and the emphasis on it in programming and "intellectual circles" (on the internet; I don't think anyone in real life actually gives a shit outside of college admissions) is massively overblown.
* - I'm phrasing this in capitalist terms explicitly because "cognitive capital" is an explicitly western construct that to be honest seems to be more detrimental than it has been positive. Also, outright racist in the historical use and implementation.
"Feynman was universally regarded as one of the fastest-thinking and most creative theorists in his generation. Yet it has been reported, including by Feynman himself, that he only obtained a score of 125 on a school IQ test.
I suspect that this test emphasized verbal, as opposed to mathematical, ability. Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test. He also reportedly had the highest scores on record on the math/physics graduate admission exams at Princeton.
It seems quite possible to me that Feynman's cognitive abilities might have been a bit lopsided — his vocabulary and verbal ability were well above average, but perhaps not as great as his mathematical abilities."
Anyone can botch a test. You can't fake winning the Putnam.
He was a very smart man. Also very sociable, a man of the people. I think Feynman demurring about his 125 IQ test was a way of presenting himself as relatable. He could have easily done the opposite by slamming home the point that he aced the Putnam and intimating that he was one of the most intelligent men alive, but he was smart not to.
This may be a selection effect. Perhaps the members of the 95%ile who are successful don't have the bandwidth left to join MENSA and see no value in doing so.
As a personal anecdote from ~15 years ago, which was the last time I affiliated with anybody who talked openly about being in MENSA, their activities there frankly sounded a bit like a self-therapy group which turned me off from even attempting to join.
That doesn't matter for the purposes of this discussion: It still shows that there's room for a lot of non-over-achievement in those top five IQ percentiles.
Situation: “We have a broken company. Let’s hire one of these superhumans to fix everything.” Reality: the person hired is the same as everyone else, so they can’t apply their superhuman strength to fix the situation. Lesson: People are more similar than they seem.
Situation: Complex project seems to be so involved and difficult that it must have been done by someone possessing powers on a different plane than the rest of us. Reality: The creator spent a very long time learning about the concepts and technologies, then spent a shorter, but still long, time building the project. Lesson: Even those who accomplished great things did not do so effortlessly, and not overnight either. Unlike a god, they did not download the “desire to build great things” program and simply execute it.
Judgment of the claim: this appearance of higher kindedness is (mostly) an illusion. Most people are largely the same.
In your first example, the superhuman cannot fix everything because he has to work with other average humans, who drag him down.
In the second case, since he is not forced to work in a team, his superior skills are shown in the result.
Here is a much better summary of the argument:
Seems like begging the question.
I don't think this is true. People vary wildly in how long it takes them to learn and understand concepts. Some people are better at this and so they achieve more with the same amount of time devoted to a particular endeavour. We can be honest about this without feeling worthless. I could devote my entire life to mathematics and I wouldn't become Euler.
Thankfully people vary just as wildly in their affinity for things, and that's why we see impressive projects like this. But I agree with your general premise that seeing other people's endeavours and achievements shouldn't make you feel depressed about how you spend your life.
Most of the people on the street? Yes. Most of the people in a history book or at the top of their fields? Not remotely. The author of this is in the second category.
They mostly got there through the trick of being a good leader and then getting assigned all the credit for their group's work in the history book.
> "There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex.
Stage magicians can spend years conceiving, preparing and rehearsing a "trick". But on stage, when the audience experiences the illusion ... it is magic.
Similar to some workplace stuff. There can be many man-hours spent propping up repositories, testing, debugging, logging, and other capabilities in general.
And when the shit hits the fan and the capability solves the problem ... magic.
Then the audience says "Thanks for solving the problem 10x Dev."
I see what you did there.
In this sentence, the first 'you' is a different person/reality than the second 'you'. Our priorities are integral to our identity.
So what difference does it make if you can't be? Realistically speaking, the very best pour both their soul and their genes into an endeavor, and most people literally can't match them even if they tried.
1. The author didn't write this overnight. The oldest commits go back to Oct. 2017, three and a half years ago. Imagine what you could accomplish if you worked on something for three and a half years.
2. There are many things we can do with our time, and ultimately we do them all to be happy. Don't measure your own worth by how much product you leave behind. Measure it instead by how happy you are today. Most likely the OP wrote this code because working on it made them happy, and that's great. For many of us, though, the activities that make us happy don't have byproducts. But that doesn't detract from how happy those activities make us. Value the things that you do because they make you happy, not because they appear impressive when held against some external grading scheme.
Edit: list formatting
One problem is assuming that the author is trying to prove something with a clear display of superiority. The code itself is of high quality, from what other people are saying. But that's probably only because so much effort was put into it for three years.
Also, I have to wonder how many other doors the author left closed because he spent so much time on this project. Was there anything else of a similar impact that he contributed to for all those years? And what of the other things he may have passed up?
Some projects will consume you. They will be the tantalizing activity pulling you away from Friday night dinner parties with people you don't understand and other miscellaneous outings that are comparatively less interesting. You constantly think you're leaving something unsaid or undone away from a keyboard. In some cases you feel obligated to continue for no reason other than having already come so close to accomplishing the thing, sunk costs and all.
With some projects, hypothetically, if someone is to see me in a cafe with nothing better to do, I'm almost certainly going to wrap up my current conversation after a point and take out my laptop to continue working. That is how my life is structured with those projects. That is how they affect my actions day after day. It isn't necessarily a life of glory.
Working on projects and having the finished product in my hands is not what brings me happiness; it merely staves off misery. It fills a void in my life that was agonizing up to that point. My theory is that if your goal is to avoid misery and despair at any cost, and have the means to do so, you can be driven out of desperation to accomplish substantial things. Other people could see what I did and make their own guesses as to what it took. But really I was only trying to prevent myself from becoming undone. It was simply too irritating of a problem for me to leave alone. I'll probably never be able to fully explain why. It certainly wasn't because I was trying to prove something to other people.
But I'm only speaking from my personal experience. It's not like I actually understand why the author did what he did. People just don't seem to know at the moment, and maybe that's where the air of sadness over suddenly seeing this accomplishment come out of nowhere originates. There's nothing to counterbalance the impact of learning of its existence within a split second with what actually went on behind the scenes for many orders of magnitude longer.
Your post has a sad undertone, one that it feels like you are trying to apply to OP in order to justify not achieving the heights others have. Some people are insanely good at things. Most people are barely good at anything. Don't think of OP as a bastion of despair, reframe the project as an achievement and I believe you will feel better.
I'm not feeling less discouraged.
It's not that I worked on the wrong project, it's that most months I worked on no project.
I feel like that's pretty normal but it's still bad. I could be just as happy and make more nice things.
> It's not that I worked on the wrong project, it's that most months I worked on no project.
Will you start working on projects in future months?
It doesn't seem to be.
So if I take your advice and imagine what I could accomplish... that just makes the feelings of discouragement worse.
You listed two things to keep in mind. For me the first one actively makes the negative feelings worse and the second one doesn't really apply.
That's a good point, and I didn't take a healthy perspective on it. Imagining what we can accomplish in a long span of time can be distressing. The road ahead always appears longer than the road behind; that's why it's so easy to look at the accomplishments of others with awe and then look towards our own intentions and feel anxiety.
A better approach is to not imagine what we could accomplish in three and a half years, and instead just dedicate the next three and a half years to doing the things that make us happy and pleased with what we're building. Projecting our intentions can be daunting, but taking one day to do a bit of code or design is not a big step. And doing it again the next day is no bigger a step. Taking it each day at a time, and taking care to enjoy each of those days at a time, is key to keeping our motivations high. And then some day we look back and realize just how long we've been crafting our craft, and how far it's come. But it all starts with one day's work.
I mean, this is definitely a gem, but nothing is exactly high tech or hugely difficult. And if you don't think you can make it, you can lower the target and create, say, a 2D RPG engine that emulates Ultima IV or Pool of Radiance, those pioneers of the modern RPG. For art assets you can use free resources or purchase inexpensive ones.
I really think that's achievable for pretty much every programmer. After all in some universities they give you a similar task as a project for advanced game programming classes.
And then you go from there, polish the game and make it fun to play.
My takeaway is that the most difficult part of building something like this isn't the code, it's the life skill of sticking with it. I think that's both a more valuable and difficult skill to acquire.
2. Seek self-fulfillment and purpose in other aspects of your life. I am 100% sure the people around you - family, community, neighborhood, etc. - could use someone who devotes some time for doing good. Sure, it might not be what you had assumed you would gain fame and glory doing, but so what?
I'd be more specific but obviously I don't know anything about you or where you live or what you do.
Starting things is easy, too.
Finishing stuff is hard.
The trick is to gradually work your way up from small, completed projects. If you start big, you probably fail big.
The hard part is justifying the time spent on a project. You often start a project and feel like it's not the best use of your time. That you may be the only one to use the project or find value in the project and that's enough to kill motivation. Why start a business if the vast majority of businesses fail? Why start a business when there is already so much competition? Why write a novel when there is no hope of selling it? Then there are the "I'm too old for..." thoughts.
These thoughts eat away at people.
If someone pointed me to my purpose project and told me if I were to keep at it that I would find guaranteed success, then I would have an infinite well of energy at my disposal. But of course, no one is there to tell you that.
I on the other hand can die happy knowing you answered one of my comments!
> Investor, husband, father, ex-Microsoft. Creator of vuepressbook.com.
Being a good husband and a father, while common, still involves effort and is an accomplishment in itself. Also, being an investor is a way to have a big positive impact on the world! So is writing that little book on Vuepress! And you can still do more creative stuff in your life (even if you're close to retirement), especially if you have saved up a FIRE level of money.
A polymath (Greek: πολυμαθής, polymathēs, "having learned much"; Latin: homo universalis, "universal man") is an individual whose knowledge spans a substantial number of subjects, known to draw on complex bodies of knowledge to solve specific problems.
The latter is made by the developer, but I wouldn't call those assets "pro quality". This isn't to diminish the astonishing amount of effort that went into the project though!
I'm also building an open source game in my spare time, although it's nowhere near as far along as this (and I'm also a massive noob at game dev) - it's a civ-type game in three.js designed to showcase bottom up organising behaviour; think age of empires 2, if the villagers determined their own behaviour. I shared it on HN years ago but shelved it until recently when I was looking for a fun project to pick back up - https://github.com/ajeffrey/civarium.
Most error handling just consists of checking the return value of a call and propagating it up to the caller if necessary. Sometimes I also set an errno-like variable with an error message to check it in the top-level code. It's a bit wordy but obvious and sufficiently good for all my use cases.
I don't think C limits the size of the project. It's all about good organization and coming up with the right higher-level protocols/conventions. This, IMHO, is what allows or prevents the code size from scaling gracefully.
If you want to re-create the STL, maybe. But you can make custom data structures tailored to your task at hand instead.
For example, instead of a std::map or std::unordered_map that allocates and initializes each node separately, you could preallocate some of them in a big chunk of memory, hand them out via a bump allocator scheme, and later free them all at once. Instead of a std::sort algorithm, you could use a bucket sort if it's possible in your situation, to improve your asymptotics from O(n log n) to O(n). Etc, etc.
During the development of the project, I had a thought that it would be nice to have a RAII/defer mechanism to get rid of repetitive code for freeing resources at the end of a function. But I'm not sure if that's really necessary since you can just put the 'free' calls at the end of the function and insert some labels between them in a kind of 'stack'. This perhaps is more in the spirit of the language - a bit more wordy, but having less voodoo done by the compiler.
#define CONCAT_(a, b) a##b
#define CONCAT(a, b) CONCAT_(a, b)
#define itervar CONCAT(i_, __LINE__)  // token pasting __LINE__ needs a two-level macro
#define FOR_DEFER(pre, post) \
    for (int itervar = ((pre), 0); itervar == 0; (post), itervar++)

// Each "loop" only runs once, and the defers can be stacked:
FOR_DEFER(foos = alloc(Foo, 1024), free(foos))
FOR_DEFER(bars = alloc(Bar, 1024), free(bars))
{
    // ... use foos and bars ...
}

#define FOR_MEMORY_ARENA(name) \
    for (Arena name = make_arena(), *itervar = &name; itervar; \
         free_arena(&name), itervar = NULL)

FOR_MEMORY_ARENA(arena)
{
    Foo *foos = arena_alloc(&arena, Foo, 1024);
    Bar *bars = arena_alloc(&arena, Bar, 1024);
    // all allocations automatically freed at the end
}
Thank you for your response.
This way if you accidentally create a new language, programmers will still be able to load existing C header files without having to manually write and maintain FFI bindings.
That being said, I did come across some discussions (ex: https://stackoverflow.com/questions/34724057/embed-python3-w...) where it is not possible to strip the standard library from Python 3. I think the use case of embedding strictly the interpreter without any "batteries" is not popular and thus has not been that well-maintained. I've not tested this in practice, however.
Thank you so much for making this in the open and documenting everything!
As someone who comes from a web dev background and who's completely unfamiliar with developing large projects in C, I'm curious about your setup. What does it look like? In terms of what kind of IDE, OS, interesting tooling, etc. you use on a regular basis to work on this.
Also, I've seen the first commit goes back to Oct 2017. Did you work on this full time during the whole time / how many full-time months you would say you devoted to this project? Also, if I may ask – what do you do for a living?
Thanks again and congrats on the project!
As a web dev, I feel a moment of satisfaction when I manage to make a heading look right on both desktop and mobile. Lol, what you've done is several orders of magnitude more difficult... I could spend a lifetime coding and not even know how to start drawing the first pixel. Kudos!!
I just really appreciate people who can code a clean, elegant app, because I can't remember the last time I saw one in the web world!
I have to admit that would be quite the single feature I would be curious to learn about.
All I know about RTS netcode is that if there are N players, each player "commits" an order in a turn-by-turn manner, and all the game code, physics, behavior, etc. is entirely deterministic, which guarantees that all clients are always synchronized and don't need to share state (well, there is some hashing system to detect a bad sync, and I don't know if it's possible to rewind and cancel things).
RTS netcode doesn't seem difficult to implement, but making a deterministic RTS doesn't seem so easy, I think, especially if you have boids and other cool pathfinding things.
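The lockstep scheme described above can be modeled in miniature: clients exchange only commands, each runs the same deterministic update, and a cheap hash of the state catches desyncs. The "game" here is entirely made up, just enough to show the shape:

```c
#include <stdint.h>
#include <stddef.h>

#define NPLAYERS 2

typedef struct { int32_t x[NPLAYERS]; } State;       /* toy game state */
typedef struct { int player; int32_t dx; } Cmd;      /* one committed order */

/* Deterministic tick: commands are applied in fixed player order,
 * regardless of the order they arrived over the network. */
static void tick(State *s, const Cmd cmds[NPLAYERS])
{
    for (int p = 0; p < NPLAYERS; p++)
        for (int c = 0; c < NPLAYERS; c++)
            if (cmds[c].player == p)
                s->x[p] += cmds[c].dx;
}

/* FNV-1a over the raw state -- the kind of cheap hash used to detect desyncs. */
static uint64_t state_hash(const State *s)
{
    const unsigned char *b = (const unsigned char *)s;
    uint64_t h = 14695981039346656037ull;
    for (size_t i = 0; i < sizeof *s; i++) {
        h ^= b[i];
        h *= 1099511628211ull;
    }
    return h;
}
```

Two clients that feed the same commands through `tick` (even received in different orders) end up with identical hashes; any divergence, e.g. from floating-point nondeterminism, shows up as a hash mismatch.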
Although, this is less crazy when you think about how heavily you can use macros, and sort of roll your own high-level language yourself.
See also Engelbart & Co. Amazing how productive this approach can be.
This is a most awesome thing; I wish more projects would use a linking exception!
Just in case someone reuses the code on the server side: why not use the AGPL instead of the GPL?
Every time I've attempted even the basics of graphics programming, I very much get scared off when it doesn't click quickly (if anyone has tips on how to get into it, it would surely be appreciated :))
I couldn't find any tests in the project. How do you test things and maintain it?
I'm a bit curious about the choice of OpenGL 3.3. When OpenGL 4.0 came out in 2010 it was a game changer for me and I remember it made everything (apart from quick debug drawing) so much nicer. OpenGL 3.3 was released simultaneously and was essentially as much of 4.0 as could be backported for those unable to move over right away.
Is this choice a hint at how long you've been writing this, is it comfort and familiarity? I'm not criticising the choice, it obviously works for you. I'm just interested in the reasons.
In some parts of the code, I did make use of OpenGL 4.xx features like glMultiDrawIndirect, but this is put behind a check to see if it's supported by the driver, with a slower fallback for OpenGL 3.3.
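The check-and-fallback pattern described above, with the actual GL calls stubbed out as counters so the dispatch logic itself is visible (the function names and flag are illustrative, not the project's):

```c
#include <stdbool.h>

static int fast_calls, slow_calls;

/* Stand-in for glMultiDrawElementsIndirect: one driver call for the whole batch. */
static void draw_multi_indirect(int ndraws) { (void)ndraws; fast_calls++; }

/* Stand-in for a single glDrawElements call. */
static void draw_one(int i) { (void)i; slow_calls++; }

static void render_batches(int ndraws, bool have_multi_draw_indirect)
{
    if (have_multi_draw_indirect) {
        draw_multi_indirect(ndraws);      /* GL 4.x fast path */
    } else {
        for (int i = 0; i < ndraws; i++)  /* GL 3.3 fallback: one call per draw */
            draw_one(i);
    }
}
```

In real code the flag would come from probing the context version or extension string once at startup; the point is that the renderer pays for the capability check once, not per draw.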
Plus, Vulkan is not really "faster" than OpenGL. It just gives you a different API for programming the same graphics hardware, which in the hands of the right person can be used for writing code which is "faster".
I guess my surprise was mainly because it's the kind of extra bother I personally like to get away from in my personal projects. Then again, I typically don't write things meant for adoption by a wide audience. Kudos for making the necessary effort.
But does that mean low FPS too?
Regardless of how you feel about Python 2 vs Python 3...one is officially EOL and the other isn't...
Newbie: “I need to start programming!”
Veteran: “I should stop programming.”
How did you stay focused? And how did you segment the work? It requires progressive refinement to coordinate all of the systems. Wondering what order you took.
Back2Warcraft are the guys that stream and cast most of the competitive games. I suggest you watch them live on twitch when there's some tournament, and/or on youtube for past games.
It's a great time for warcraft 3, thanks to the great effort put in by fans. If only the game hadn't been killed by Blizzard :(
But there's still plenty of people playing, with classic graphics and W3Champions servers (you still need reforged), and plenty of pros to watch.
Enjoy your warcraft 3!
You'll see this in some places in Linux's source too.
Thank you for open sourcing it as well. I've already learnt a thing or two by skimming the code.
I recently looked at making a really simple RTS using something like Unity and I was surprised how little love that genre gets from a lot of engines.
I like the Babba Yaga as a unit, lol!
The engine uses Python as a scripting/config language. You can use it to hook into a lot of events pushed from the engine core (unit got selected, unit started movement, unit started harvesting, etc.) and customize or change the unit behaviours.
Over the duration of the project, I really did learn to appreciate why Lua is embedded into games and Python isn't. Lua is really small and you have full control over everything. And you're eventually going to need that control when you implement features like reflection, pausing the game, etc. CPython is this big shared library that does its own thing, and you have a lot less control over it. The parts where it just doesn't expose enough through its API to do what you want are a real huge pain. I ended up writing a bunch of code to serialize all the internal data structures, and this was a massive chore. Also, you have a lot less control over CPython's performance and memory allocations.
I didn't really appreciate these things when I started the project, which is why I went with Python. But since I ended up doing the work anyway, I guess you can still enjoy the benefits of it.