Back in the day I did most of my dev work on Solaris. I then spent 4 years as CTO at a startup that was pretty much only Windows.
When I subsequently went back to working at a unix shop I initially struggled with vi as I tried to read some of the C++ code. I couldn't remember commands and was having to refer to the man pages every few minutes. It was torture.
A couple of days in, I was writing up some notes in vi when someone walked past my desk and started chatting. When we finished talking I looked down at the monitor and I'd written more than I had when I was concentrating, nicely formatted, the works. Turns out "my hands" had remembered a load of what I thought I had forgotten.
For the next few days I had to keep finding ways to distract myself so that I could work efficiently. Eventually it all came to the foreground but it was the most bizarre experience while it was happening.
The problem was he had cleaned his keyboard a couple of days earlier and put some keys back wrong. When he was sitting down he logged in by touch typing. Standing up he looked at his keyboard when he typed.
(Applies also to those government warrants forcing you to divulge information.)
Sit me in front of the piano and I'll mindlessly bang out the song I've been practicing. Make me think about it in the middle and I'll forget what comes next.
Once he starts working on the Linux port he'll regret that. Every developer who starts with their own platform-specific code ends up using SDL2 anyway. Don't make that mistake.
- On Windows, DPI scaling is broken (ask for a 1280x720 window, get 1600x900)
- On Mac, mouse locking is broken
- On Linux, my Xbox gamepads don't work at all
- Required several hacks before it could be used as a CMake submodule
This stuff is under active development, which is shocking for such an established and widespread library. I'm glad it exists, but it doesn't "just work".
Can't really blame it though, it has a very boring and nearly impossible task.
PS: What OS X version did you have problems with?
It's probably because I'm on a random Git commit rather than a stable release, but I had to do that due to another issue. It's surprising how many things are in flux, given how long it's been around.
Really? I use Linux full time at home and I've never had problems with 360 controllers either when developing with SDL2 or playing games built with it. Are you talking about original Xbox and/or Xbox One controllers?
There's code in SDL specifically for high DPI, but it's confusing and practically undocumented.
It's just something I'll have to mess around with a while, and then remember not to breathe on anything once it's working. :)
As a result, these people usually don't even consider any alternatives, as all of them are even worse from their standpoint.
I would use SDL in Linux ports of things because it is the closest to a reasonable native API on Linux (which says more about Linux than SDL actually). But even having done so I would then use native APIs in Windows, OSX, etc.
If your standard of quality is high enough, it won't really be possible to reach it using a blanket API like SDL everywhere.
Though when it's a relatively new game with its own engine and a small team, the maintenance cost of your own cross-platform code is going to be high. Even on Windows there are tons of small problems that are already solved within SDL. It's really not fun to debug problems on XP, Vista and some never-updated systems.
PS: Also, as far as I'm aware, SDL2 is currently used by all Valve games on all platforms. I'm pretty sure they wouldn't be using it if it wasn't working well.
Besides, there is more than rendering for a decent engine. Managing controls in an agnostic way is nice too. I am actually working on adding better control events so it can wrap Windows, Windows Phone, Apple TV and external controllers more easily.
That would make sense, last time I touched it was back in 2012.
> "Besides, there is more than rendering for a decent engine."
For sure, but Cocos2d-x is a complete software framework and SDL is meant to be a library. Although I would grant that it would make sense if the author chose to use a game engine for his game.
> That would make sense, last time I touched it was back in 2012.
Cocos2d-x has never used SDL for anything.
For v4 the renderer is vastly different. Going from v2 to v3 there were major changes here as well.
I wish the author told me more about it than just this. Can somebody comment on how it compares to recent VS editions these days? About 5 years ago I also looked into using OSX as main OS. As I've always been using non-commandline graphic text editors and IDEs for most coding that made XCode the go-to environment but I just couldn't deal with it even though I tried. I don't remember all details but in general it just felt inferior to VS on like all fronts, with no advantages of any kind (for C++). Again, IIRC, but it did annoying things like opening the same document in multiple windows, one for editing and one for debugging or so? Anyway, what's the state today?
- I don't spend a lot of time setting up the IDE, so it's important the defaults are sensible. For instance, it might be possible to change this, but XCode's code completion seems to be less useful than VC2015's. I really need it to be near immediate, and I also need it to match strings that may appear in the middle of a function name instead of just the start. Especially since NS libraries have strange and long names for things. As I'm writing this, I just found out you can have tabs in XCode. Why isn't that a default?
- XCode crashes maybe once or twice a week for me. VC doesn't.
- Unspecific weird things happen in XCode way more than VC. For instance I was unable to see variable values in XCode for a while. Eats up my time looking it up. Hasn't happened to me on VC yet.
- XCode is highly integrated with the Apple environment. You can build stuff for the app store and send it right there.
- XCode has a less than complete Git integration. You need a bit more detail than what it gives you. I use SourceTree anyway, but it might matter for some people.
- Compile times are hard to compare, as I'm doing different things on the two environments. VC2015 is definitely a lot faster than a few years ago though.
Only once per week? Lucky you. Sometimes XCode has crashed for me several times per hour. Sometimes it works for a while. Sigh, Apple. The fact that XCode uses clang as its compiler absolutely rocks, though.
That said, Visual Studio 2012 just crashed earlier. I guess it was some bug in window splitting or something; I did some unusual things with that just before the crash. VS2012/2015 seems to be generally stable. Visual Studio's C++ IDE context operations, like finding references, just don't work well at all. They find so many totally irrelevant items.
Excited to try VS2015 with official Clang support for Windows applications.
When it comes to personal experience, Apple's recent software quality seems to be shoddy. For example, USB3 mass storage is stable only for a few minutes before a forced USB stack reset on my El Capitan Macbook Pro retina 13" 2015. Sigh. Impossible to do things like run virtual machines off USB storage on a relatively new $2k machine... at least without booting into Linux or Windows. Things like these make me seriously consider no longer using OSX as my primary machine.
I have the same MBP, and while the right port will drop anything more intensive than a flash drive, the left one has no such issues. There are lots of reports of the same "solution" working floating around.
VS is okay, not perfect. But I'd say it's better than searching for strings when you really want a type or function.
Where Xcode is better than Visual Studio for C/C++ dev (IMHO of course):
- C++ compiling and linking is easily 5x..10x faster out of the box than Visual Studio thanks to clang
- the static analyzer has a really nice 'arrow'-visualization of the steps that lead to the warning
- clang provides more useful error messages
- compiler warnings and errors are directly overlaid into the text editor view
- built-in support for clang address sanitizer (just a checkbox to tick)
- support for iOS development is really slick
- better out-of-the-box support for command line builds either through xcodebuild or the gcc-compatible toolchain
- Xcode comes with a lot of profiling and analysis tools where Visual Studio has only slowly caught up (but VS2015 seems to be mostly on par).
Where Xcode falls behind compared to VS:
- Xcode has that strange 'Scheme' feature for build configuration
- the debugger's variable inspection has usability issues
- working on source files with a couple thousand lines of code feels laggy
- before El Capitan, the whole UI felt slow on a Retina MBP, but I guess that's because of general optimizations in the OS
- it crashes or freezes about once or twice a week on me
- probably a number of smaller annoyances which I have learned to ignore
I usually don't touch any of the UI builder tools in both IDEs, only straight C and C++ stuff so I can't comment on the more platform-specific features.
Would be interesting to hear your opinion again, once clang is fully supported through Visual Studio.
One strange thing I see with clang running on Windows (in the form of the emscripten fastcomp backend) is that clang runs a lot slower on Windows than on OSX or Unix. So maybe it is some underlying IO problem? I'm not sure, but I hope this can be fixed.
There is a lot of head scratching with XCode project configuration (for me at least). It's very unintuitive.
clang (though XCode) is much, much faster compiling my projects than cl.exe (through VS) even though I'm using precompiled headers in the Windows build and no PCH on the clang build. VS2015 is supposedly much faster but I still have a few compilation issues I need to work through before I can switch to it.
The debugging experience on Visual Studio is much more pleasant.
Visual Studio feels a lot more "enterprise-y" than Xcode and offers a lot more advanced features and tools on pretty much all fronts, especially in the UI (e.g. Xcode doesn't even have file tabs, but something I'd describe as big "whole-UI-tabs"). But...
Xcode has llvm. This compiler and its tools (i.e. the analyzer, debugger, etc.) just make VS' compiler look like it's from the 90s. Really.
-> Have you ever heard of llvm's "address sanitizer"? Forget the days of endless debugging! This little helper has revolutionized my debugging productivity and solved so many little subtle bugs for me...
So in the end you'll lose a lot of nice UI gimmicks and additional tools, but the compiler suite makes up for that.
And even if you don't need those llvm features, you still get a unix environment, which makes working on many fronts a lot easier. E.g. I'm primarily working on different kinds of web servers: to test everything I can just install whatever I need... brew, curl, netcat, wrk, ... and it will just work. And let's not forget all those "standard" unix tools like find, xargs, grep, etc.!
I think this is an intentional interaction design. When you are always working with hundreds or thousands of source files, tabs kind of lose their meaning. The fuzzy-search quick open panel and project-wide find become your main dependency for quickly jumping around your codebase (with the benefit that you don't need to touch the mouse).
Preferences → Navigation → Double Click Navigation → Uses Separate Tab
Because that's what they are.
There's a lot of attention on .NET Native. Given that LLILC went from nothing to being able to build and JIT Roslyn in 6 months...
Let's just say I have some speculation in mind.
I'm primarily a Python guy and with the 2/3 split, I've had my eye on .Net Core to migrate my business platform to. Pretty much been looking to dump Python for anything over a line count of 500 and keep it that way.
I'm open to any speculation as to where it's going because I'm finding the .Net platform to be more attractive than ever. A few years ago it was looking pretty sad but MS really turned it around and I'm interested in a permanent migration.
I really can't find much that I enjoy using that approaches Python's broad use cases the way C# on .Net does. With .Net Native and Xamarin, I'm very seriously considering the plunge.
I prefer VS because it performs better when editing and navigating the code base ( it is actually faster to run in it inside VirtualBox than XCode natively ).
Also VS has code navigation and editing features that XCode lacks. For instance if you want to do a find/replace in VS you can double click on a word do Ctrl+H and the word you selected populates the search box. You can then populate the replace box with what you have in the clipboard or just type in what you want.
In XCode you need to copy the word to replace into the clipboard, do a Cmd+F, paste the word into the search box and then type in the replace box. This is much slower.
In VS you can setup bookmarks in your code with Ctrl+F2 and jump between bookmarks by pressing F2. This is great when you need to have present multiple parts of the code base to accomplish some task.
I don't know how to do this in XCode.
In VS you have a stack of source code windows that you can easily move about via Ctrl+W+n where n is the stack depth.
This is incredibly useful to navigate between multiple files.
Again, I don't know how to this without using the mouse in XCode.
Then there are other issues with XCode mentioned by other comments like the incredibly confusing build settings and the instability of the IDE.
I often wonder if Apple actually uses it to develop their software.
> For instance if you want to do a find/replace in VS you can double click on a word do Ctrl+H and the word you selected populates the search box.
Your workflow sounds nicer than Xcode's here. While I don't often use find+replace, I do use project level find constantly. My muscle memory shortcut for this is:
Cmd+C, Cmd+Shift+F, Cmd+V, Return
I'm pretty sure I use this hundreds of times a day. Especially when analysing unfamiliar code to trace its execution paths.
> Ctrl+F2 code bookmarks
These sound really cool. A bit like numbered unit groups in Starcraft.
The only similar mechanism in Xcode is quick open. Cmd+Shift+O then start fuzzy-typing the name of a file, method or declaration and hit return to jump to it in the editor (option+return for assistant editor). I have actually changed my quick open shortcut to Cmd+Shift+D because it's easier to trigger one-handed.
> In VS you have a stack of source code windows that you can easily move about via Ctrl+W+n
Unfortunately, while a stack of source locations is maintained in Xcode, you can't jump to a direct location within the stack.
What I do here is use Ctrl+Cmd+Left and Ctrl+Cmd+Right to navigate back and forward in the history stack for an editor window. So, for example, if you Cmd+Click a symbol to jump to its declaration, you can press Ctrl+Cmd+Left to go back.
The one thing I like about Xcode's version of this is that it keeps track of source locations rather than files or windows. So you actually navigate back/forward within the same file (if you were jumping around within the file) as well as between files.
Cmd+E, Cmd+Shift+F, Return
This sequence seems to work in many other apps, I don't know if it's a system command or just a common convention.
As far as the file stack goes, I'm not really sure what that does; but Xcode has other navigation options, like tabs, and files can be switched with the fuzzy finder.
You are absolutely right about the instability—that's the reason I switched away from Mac/iOS development. Too many weird bugs, in Xcode and in Swift. I haven't found it too slow on an SSD but when I used an HDD the speed was horrible.
> ( it is actually faster to run in it inside VirtualBox than XCode natively ).
Is there anything special you're doing with VBox? In a Windows VM, Eclipse is mostly usable, but disk accesses (or something) have incredible latency. It makes opening a new tab take a few seconds.
This only performs a rename in the current file though, not across all files in the current project. (At least in Xcode 5, maybe this has been changed in more recent releases.)
Still it is really useful.
I'm fairly sure Xcode has a command "enter search/replace string" that puts the current selection into the search/replace box.
Lack of refactoring support for Swift is currently baking my noodle, as is the lack of a supported plugin API (with which I could solve many of my own problems). Very crashy.
All that being said, it has a lot of good stuff. The analysis and profiling tools are great, there's a bunch of really powerful tools for games and 3D, view debugging is a thing now, the UI builder is super powerful (once you get used to it) and the unit/performance/UI testing tools are (IMO) pretty awesome. Lately integration with all things App Store (provisioning, entitlements, etc) has got a lot better and most of my pain points there have gone away.
It's hugely subjective though, some people would disagree with much of this. Like any IDE, you end up in a love/hate relationship. The main difference being that if you hate VS you can hit up the extensibility API and make your pain go away, whereas Apple don't really care if you hate them.*
*I'm aware of Alcatraz and it's neat, I'm just not quite sure I want to introduce unsupported hackery into my production environment.
You can also change the assistant editor to manual control: I frequently alternate between automatic, UI, and manual control of the right pane. There are some great keyboard controls to make it easier.
I don't use (need?) 'watch' in my debugging, but I thought the UI exposed a command for doing so (is that what you mean by integrated?). I think right clicking a value/variable has 'watch'. And I'm sure that LLDB has commands for it.
Yeah, I hear the refactoring tools are great, but they don't work with C++ either, which is what most of my source-base is.
>as is the lack of a supported plugin API (with which I could solve many of my own problems).
I don't know if this would solve your problem, but you can tell Xcode to compile (or maybe handle is a better word) any type of file with a script. So if you want to compile your Haskell source with Xcode, for example, just go to the target settings, click on the "Build Rules" tab, and add a rule that says "Process files with names matching: " *.hs, and Using "Custom Script:" with a pointer to a script that runs the input file through ghc, or whatever you need.
A bugbear of mine for XCode is the absence of C++ refactoring tools, which CLion certainly has.
You still have to revert to XCode for that crap that is Interface Builder. JetBrains tried to write an IB clone inside AppCode, but they abandoned it.
AppCode lets me code 80% of the time without having to use that horror that XCode is. Alcatraz helps a bit alleviating the other 20%, specially the XVim plugin for Xcode.
As an alternative opinion, I use Interface Builder every day to visually create my interfaces, add layout constraints to it, hook up actions to buttons and define the navigation flow of the app. I very much like the fact that I have small view controllers.
In VS, Control-; is a pretty good way to navigate. I turn off all the toolbars anyway.
Not for me. ReSharper's "Go to Everything" is so much better.
I find the UI in VS2013 very "unstable" - I constantly manage to drag stuff away and hide windows I need to use. I miss the .h/.cpp side by side view when I'm in VS.
It's very easy to setup a color scheme in Xcode that looks nice. I've used hours in VS to get something that is ok. And then it resets to the default about once a week randomly.
Debugging C++ template code actually works in Xcode. Running debug build of stl under win32 is extremely slow in my experience.
I almost never have crashes in Xcode, but I think this probably depends on your code base and project setup a lot.
LLVM's compiler errors are much easier to understand. That said, VS has found bugs that LLVM doesn't see, so compiling on both helps keep the code base in a good state.
Our game code compiles in half the time on Xcode/LLVM compared to VS2013.
Set "_NO_DEBUG_HEAP=1" in your system environment variables.
Things I like about Xcode:
- the OpenGL debugger and profiler are top notch
- clang is a great compiler
Things I dislike:
- it's incredibly unstable. I've had it crash 8 times within an hour.
- I can't do a find-in-files without freezing the IDE for 30 minutes. (Visual Studio can handle the same code base in seconds.)
- a lot of ui design decisions are "different" for seemingly no reason. Most devs can jump between vs/IntelliJ/eclipse easily, but almost every Xcode design decision is just weird. Like why do compile errors show up where my file navigator is supposed to be? Why are important tabs just shown with tiny incomprehensible flat icons? It's all arbitrary, but it seems like every ide has settled on conventions that Xcode breaks to no obvious advantage
I'm missing Jetbrains code completion so hard and all those useful shortcuts like CMD+E, CMD+W, CMD+ALT+V, CMD+ALT+M, SHIFT+F6, CMD+SHIFT+UP/DOWN, CTRL+N, CMD+1..9, CMD+F9, ALT+SHIFT+F9/F10.
Guess I have to buy an AppCode license.
But I do not do much iOS/OS X GUI development, so I am happy with the terminal/vim.
On top of that, I think Xcode (like Eclipse) compiles your code as you type, leaving no surprises until you need to link...
He put the OS glue in place in 1 week and that sounds about right for this sort of effort, given some prior experience with writing portable code. The bulk of effort was spent earlier on abstracting principal code from the platform specifics, and sounds like he did all the right things there. Good stuff.
The level of quality for a one-person team is phenomenal.
I'm curious about his opinions on OSX after he's gotten the game to run with sound and above 1 FPS.
Correct data organization is critical for getting this right. The PS3 was particularly brutal in this respect since it forced you to segment your computations into 256 KB chunks (to fit in the SPU's local store).
OpenGL on multiple monitors - this was much more difficult to do on MacOS. I had to create a separate window for each monitor, create a rendering context for each window, make sure my graphics code was issuing the drawing commands to the proper context, then have each context queue/batch "pending" rendering commands and issue them all at once at the end of a frame on a by-context basis. Whereas on Windows you can pretty much create a window that spans multiple monitors and draw to it with a single rendering context.
Input - I used DirectInput on Windows and wrangled a suitable implementation using HID Utilities on Mac, which was not easy given my lack of previous USB programming experience. A major annoyance was the lack of a device "guid" that you can get via HID Utilities to uniquely identify an input device - I had to manually construct one using (among other things) the USB port # that the device was plugged into. Not ideal.
SSE intrinsics - my experience was that Microsoft's compiler was MUCH better at generating code from SSE/SSE2 intrinsics than clang - my Windows SSE-optimized functions ran significantly faster than my "pure" C++ implementations, whereas the Mac versions ran a bit slower! My next thought was to take this particular bit of code gen responsibility away from clang and write inline assembly versions of these functions, but I took a look at the clang inline assembly syntax and decided to skip that effort. (I did write an inline assembly version using the MS syntax and squeezed an additional 15% perf over the MS intrinsic code.)
Pretty much everything else (porting audio from DirectSound to OpenAL, issuing HTTP requests, kludging up a GUI, etc.) was pretty straightforward and did not have any nasty surprises.
I don't know whether this is still the case – or something like tweaking the target CPUs would help – but assuming it is, did you report it to either the open-source Clang project or Apple? The developers have seemed to be quite responsive to reports like this.
> Shining Rock Software has only a single developer doing all the software development, artwork, and audio.
I'm blown away. Really cool.
If you want to expand further, all you're doing is making exact, self-contained replicas of the same town in other places on the map. There's not much variety because each town needs the same resources, and each map has those same resources.
Like many games of this genre, the game has a reverse difficulty curve. This is especially true here because of the focus on survival. That means that the first few winters will be spent micromanaging every single resource to ensure everyone has sufficient materials, but after that initial period is over, it's impossible to fail because the town basically runs itself.
The game is relatively deep, and has tons of room for clever tactics. Also, it was developed by a single person, which is pretty impressive for a game like this.
What can make such a considerable difference?
However, there is one major difference: the system timer. The Windows sleep() call has a granularity of 10ms, while the OSX one has 1ms. So if you (or your game library) are using sleep to return unspent CPU time to the OS, it's easy to write code that runs fast on one OS and very slow on the other.
Most developers treat OSX version as a checkbox on their "required features list" and only care whether the game works or not when porting. They do not want to spend time looking at the intricacies of OSX and just tell you that it's graphics driver's fault.
I develop games on Linux and OSX and later port to Windows before releasing (because that's where the players are), so I have to write specific code for Windows to make sure it runs faster. For example, my latest Steam game was really lagging and slow on Windows until I figured out the timer problem I wrote about above.
Other developers do it the other way around, and OSX performance isn't top priority.
Every driver can negotiate its capabilities with the client app.
Apple has a software renderer that takes hold if the client asks for an extension the GPU doesn't have; you have to request a render surface in a specific way to get an error back when you ask for an unsupported feature.
At least that was the case when I last worked on an OSX app.
Found some references for the curious:
"Since the OpenGL framework normally provides a software renderer as a fallback in addition to whatever hardware renderer it chooses, you need to prevent OpenGL from choosing the software renderer as an option."
Also note that (1) this might not be Banished's problem and (2) this may be an outdated document.
To put it shortly - you can't just assume "it's all OpenGL so same code will work the same".
Basically, there's a lot that could possibly go wrong during his first days of porting, and after all that he just saw "the title screen" and observes 1 FPS rendering. So it can be anything, it's too early to know, I hope the author posts later what was missing.
The existing OSX drivers are surely good enough for games, so it's a problem of usage, and surely not the case that "the platform doesn't allow it" or "the drivers."
Also note that it's not about DirectX vs OpenGL. He has OpenGL on Windows too (see the first paragraph here for the quote).
Though it's interesting that it's an Iris Pro; MacOS used to have the best Intel drivers, as Apple pulled them in-house. ATI and NVidia have always treated MacOS as a secondary target.
If it were a 10% performance hit I could blame the driver, but comparing 60 FPS (I presume) with 1 FPS, when people are playing the game on Mac OS X using WINE, it sounds like something is probably not quite right in his code.
To say that there is a recipe for slow OpenGL on Mac is a bit misleading: Mac is capable of fantastic OpenGL projects.
OS X's window manager is optimized for fullscreen windows on top of everything else (including menu bar and the "Dock"): https://developer.apple.com/library/mac/documentation/Graphi...
I also found that even with a GPU-heavy fullscreen application, having another (smaller) window on top doesn't noticeably degrade rendering performance. It seems the window manager is doing some clever things there with regard to compositing.
I assumed this is because Game Engines are built usually targeting Windows/DirectX. Some say that DirectX is more mature and powerful, although this might be subjective. And so the games perform better on Windows.
With the advent of Vulkan, maybe the subjective opinion that DirectX is better than OpenGL will die down, as Vulkan is supposed to be very good - kind of a rewrite of OpenGL.
And with DirectX 12 on Windows 10, Microsoft has done a lot of good stuff.
That is how the industry has survived the 1983 crash due to race to bottom quality.
That's nonsense. Tell them that doing double the work is a good thing (especially on a limited budget). No sane developer likes walled gardens and lock-in, because they always translate into complications (not caused by real technical reasons) and into doing the same work multiple times to address the stupidity of vendors who push said lock-in.
> That is how the industry has survived
Saying that industry survives on lock-in is like saying that technology survives on the lack of progress. I.e. it's a completely backwards thinking.
It is how many in the industry make a living, by doing consulting as experts in porting games between platforms.
Arguing about the beauty of FOSS and OpenGL, and the dismay of them being ignored by professional game developers, only reveals lack of knowledge how the industry works.
Once upon a time I learned the hard way that being too focused on that, made me lose the picture how the inner workings of the industry are. Back when I still cared about game development and had the privilege to visit a few well known studios. One of them apparently owns a black console that is selling quite well.
Professional game developers don't care about FOSS, 3D standards or whatever.
The only thing that matters is getting their vision of game out in the hands of their fans, regardless of what the systems their fans might have available.
There are plenty of companies selling middleware and consulting services for porting activities.
Gatekeepers help prevent a flood of low-quality games and copycats like the one that caused the 1983 crash.
Industry makes a living on technology progress. If someone makes a living on pushing lock-in (MS), they are doing a disservice to the whole industry. It has nothing to do with FOSS - it's about progress in general (lock-in is the opposite of it).
You didn't disprove what I said above. Duplication of work costs more money, and no one likes to waste money, especially when reasons for that duplication aren't even technical, but are caused by crooked vendors who force that extra expense on others with lock-in.
> Professional game developers don't care
They care about their budgets. Your idea that duplication of work is welcomed is complete nonsense.
Just walk around the corridors at GDC or attend a few IGDA meetings, and then you will see who is right.
I have done it several times and still have the badges, did you?
Economics work as usual here and gaming industry is no exception. If someone forces duplication of work on others, that increases costs, which ends up being passed to some party. And for the end user it can translate in lower availability, slower time to the market, higher prices and so on. So far you didn't manage to demonstrate that it somehow magically comes for free.
TL;DR: lock-in taxes the whole industry and slows down progress.
I do not intend to demonstrate that something, whatever it might be, magically comes for free.
You made several mistakes in that statement.
1. You claimed that only publisher funded developers are professionals, while those who are self funded or backed by other means (like investors or crowdfunding), i.e. independent (=indie) developers are not professionals. That's an insult to many truly professional people. There is no dependency on publisher funding to be a professional.
2. You assumed that publisher funded production doesn't care about this issue. Do you think they don't have to balance their budgets? Just because they are publisher funded doesn't mean they have infinite resources and doesn't mean that those publishers are happy about extra costs.
I.e. everyone cares about it and nobody normal likes it. The only ones who like lock-in are those crooked vendors who push it on everyone else. Also, if someone doesn't care about the industry progressing - they can't be called professionals.
Those are mistakes from your point of view, not mine.
Of course there are production and development costs, like in any other business, but the industry doesn't go crazy about FOSS and stuff like that.
My point of view stems from having had the opportunity to get a glimpse of how the AAA game development industry works.
Have you ever been there, instead of just advocating the "everyone cares about costs" and free-software mantra?!
Just go attend a GDC, ask around how many devs care about your point of view.
Based on my experience attending them, I bet the answer will be very few.
The term AAA is ambiguous. Please define it. If you mean publisher funded (a common meaning), then see above. If you mean big budget, then your remark about independent developers is invalid as well (there are independent studios with big-budget games). Anyway, I don't see how any of that is related to professionalism. Funding method or budget size has nothing to do with it.
> Of course there are production and development costs, like in any other business
Yes, overcoming lock-in and duplication of effort add extra costs. That's exactly what I was saying above. It equally affects big and small budget projects, as well as publisher funded and independent studios. Saying they don't care about extra costs is simply ignoring the reality.
> How the AAA game development industry works.
Still ambiguous, but let's assume you mean AAA = publisher funded (since you contrasted it with independent studios before).
Simple example - most legacy publishers don't even release games for multiple APIs (such as OpenGL), because of costs. I.e. they are hostages of lock-in. That exactly demonstrates the issue above, and the fact that it has a direct impact.
So saying that no one cares about it (or no one is impacted by this tax on the industry) is completely wrong.
It is not how it works in the industry.
They focus on one platform because game programming is more than the graphics API; the hardware architecture and OS are also part of the whole equation, and of what it means to be able to extract every single byte and ms for a few extra FPS.
The talks done by Naughty Dog are a good example of how much it matters to be an expert on a specific platform.
Then they leave the ports to other game studios that specialize in porting to specific platforms, which is another way how money flows inside the industry.
There has been a whole industry specialized in game ports since the days when Atari ruled the world.
A publisher that targets PC, XBOX, PS4 and Nintendo already supports, by definition, four graphics APIs, not counting the additional OS and hardware differences.
You can tilt at windmills about how bad lock-in and duplication of effort are, like Don Quixote did, but no one will care until you switch to the language and mentality that reign in the game industry.
What matters is IP, licenses and getting the games into the hands of users.
The technology used comes a few bullet points down in the priority list.
Not according to experts who actually work on cross platform games.
> A publisher that targets PC, XBOX, PS4 and Nintendo has already by definition supported 4 graphical APIs
That's exactly the point. You can't claim they are happy about spending 4x more to support their engine on each system with a very limited ability to share code. It's always extra cost. They do it because the vendors of those walled gardens limit developers' choice and artificially force incompatible APIs on them.
> you can shout to the windmills how much bad lock-in and duplication of efforts are
They are bad and everyone knows it.
> no one will care
Those who care more, work on breaking that lock-in. See what Oxide Games developers have to say about this lock-in idiocy, and don't claim they aren't professionals.
And how it improves the user experience of their games.
Once more, grasp the culture of the video game industry.
^ That is how I know you're BSing. Reading between the lines, you seem to be very concerned with promoting Microsoft... and if you think game devs are extracting "every single byte and ms for a few FPS", then given the buggy, unoptimized nature of many games, you are quite the comedian.
Most of these had to do with templates that expected the code inside them not to be compiled until they were instantiated. The Microsoft compiler has that behavior, while clang does not.
TL;DR: If a type, variable... depends on template parameters, code using it is checked when the template is instantiated with concrete template arguments. Otherwise it is checked when the template is defined.
I don't quite understand: are you saying templates would be near-useless if two-phase name lookup didn't exist?
It took two weeks to get the code compiling and running. That turned out to be the easy part. Getting the application performing well, feeling "native", and getting the bug count down took another six months.
I love Banished and I'd like to see a completed OS X port. But I'm not expecting this to be done, like, tomorrow!
OpenGL on OS X is still behind the times, and so far it's not even clear whether Apple will add Vulkan support when it comes out.
To this day I still don't see how it is useful.
Not just unix-like, OS X is certified UNIX.
For example, http://pubs.opengroup.org/onlinepubs/9699919799/
However, just like C, any certified implementation is free to add extra behaviors and there are certain parts that are actually implementation specific, like how signal handlers behave in certain situations.
BTW, Linux is UNIX too!
Unfortunately, some people will only sign big cheques if the given useless certificate is present.
First make the iOS version. Then, port it over to Java. Then, port it over to C# or maybe ActionScript3/Flash.
This way, I can recursively update previous versions as the 'best solution' to interesting problems becomes clear by the end of the 2nd or 3rd port. This gives the Objective-C/iOS version the attention it needs, and I can use the rapid application development features for each new port.
The code that is completely different on the platforms is stuff like HTTPS requests, open file dialog, create/delete folders.
Did the author buy a MacBook Pro just for this purpose? I'd assume this is his personal laptop, but his "Using a Mac" section sounds like he's not a Mac user even in his free time.
This comment breaks the HN guidelines by being both uncivil and unsubstantive (i.e. it adds no information). Please only post comments that are both civil and substantive.
If he's worked in the Windows world, the most likely comparison was probably Cygwin or Microsoft's old Services for Unix, both of which are much clumsier experiences, obviously taped onto a different base.
Anyway, I don't really care anymore; I bought a ThinkPad instead. Cocoa is just something I can't even.
My experience has been pretty different. I'm not a professional developer though.
I tried multiple times to contribute OS X fixes to the Ogre code base. They're not hard things --
1. Use SDL not OIS.
2. The symlinking is broken.
3. Link the frameworks properly.
There was one more thing, I think. Just minor issues that probably have crept into the codebase because no one is able to contribute the fixes.
It's Ogre. It's not OS X.
What boggles my mind is that OS X is a unix underneath, so I don't understand why it would do anything differently and force developers to learn new habits. That's not how you attract devs. Apple has made a habit of breaking backward compatibility, something neither Linux nor Windows tends to do.
I don't think OS manufacturers should differentiate themselves from the competition by making even their development tools work differently. The only objective of that is to keep developers loyal to Apple because they can't have their app running on both Windows and Mac. Not to mention I had to redo everything with each new Xcode version.
So in the end, keeping my project running on both Xcode and MSVC cost too much time, so I just sold that aging laptop. Apple is just so special, and I guess I wasn't good enough for it.