And I don't believe they only started using Unix operating systems other than their own "nowadays" - while OS X was still immature during its development and long maturation, what do you think they were using?
I thought they were using NeXTSTEP, hence all the NS prefixes in the Obj-C APIs. Back in 1989 I'm guessing NeXT would have built on some kind of Unix system first. Considering that OS X is a descendant of NeXTSTEP, I would think that before OS X they would use it to run code, servers, etc.
"combined with the fact that it would be simply stupid for a company who makes their own OS to run anything but that OS"
Why would you assume that? There are a ton of things that Linux does better than OS X - and it would be extremely stupid for any company, regardless of size, not to use the right tool for the job. For example, even IBM sells and supports Linux on its line of servers and mainframes, not just AIX or z/OS. I think your assumption is just really old-fashioned.
Internally Apple does use Linux, just as Microsoft uses a blend of OSes - supporting Linux on Azure, for example. I read that they actually use Linux as the host for their Hadoop service on Azure.
At its core OS X is Unix. In what way would Linux be a better choice? I'm not saying Linux is a worse choice, but for a company that writes an OS as one of its core businesses, it only makes sense to run that OS in as many places as possible. For one, by running OS X as a server OS they would necessarily spend more time on development and improvement of the OS core. That would pay off in the long run by further improving the stability and reliability of OS X.
I am not arguing that OS X is the perfect solution in most circumstances, but it can be a good solution in many situations - especially if you are Apple and have the full source and the ability to adapt the OS as necessary.
Microsoft, especially nowadays, tries to be very cross-compatible, so it's not surprising that Azure supports Linux apps and guests. But Azure RUNS on Windows Server 2008, not Linux, not Unix.
Because it isn't really about the OS, it's about the software. OS X is fine as a server platform, but it doesn't have the same software and support ecosystem for data center usage. Apple dumped that market with the Xserve because it didn't work for them.
Red Hat/SUSE/Oracle etc. all sell tailored solutions for that usage built on Linux-specific technology (mostly; some of it gets ported to other Unix derivatives, but most doesn't). Sure, Apple could do all that too, but they don't want to. It isn't their market, so why sink money and engineering effort into making OS X do it when they can just buy high-quality products ready to go?
What tools is OS X lacking? In my experience most development and server tools are available natively on OS X. It lacks support for containers, but that would be a worthy addition - and, I would say, worth spending money and time on. The rest is already there for the most part. Developing their server infrastructure further would let Apple make a play for the corporate market. Anyway, it's a silly argument. I thought they ran most of their backend on OS X; it looks like I was wrong.
It's not small server stuff like Apache that they're missing. It's things like distributed failover, exotic driver support, SAN, and management tooling - big data center stuff, the kind of thing companies like Red Hat make.
Those kinds of products are huge investments. Sure, Apple might be able to market to the enterprise, but they simply don't think there is any money to be made. They used to have, for instance, the Xserve, which tried to stay afloat in that market but made little money. Since they cancelled it, Mac OS has only been developed as a small-to-medium server (which it isn't half bad at). But big-time data centers are a different world.
For instance, as a very basic example, does Mac OS support InfiniBand or the more exotic high-speed Ethernet network interfaces? For InfiniBand the answer is no, and in the other case the answer is "kinda, but not really."
My background: 7 Xserves still in production here in K-12 education, 1000+ users in Open Directory
In the pipeline: migrating to the new shiny Mac Pros along with OS X Server
Reasons: Thunderbolt 2 connectivity is amazing and works fine for connecting Fibre Channel RAIDs.
OS X Server: It's correct that the GUI got simplified a bit, but it's the same server package, as complex as it has always been, yet easy enough to support. And if configured correctly, it's a solid workhorse for many scenarios: network accounts for lab use, a calendar and contacts server; with some helper tools it works fine in heterogeneous environments and supports huge numbers of users via LDAP, just to name some reasons. For 20 bucks it's the best server OS for supporting Mac and iOS clients. And because the underlying foundation is UNIX, it plays nicely with networking infrastructure such as RADIUS for your WPA2-Enterprise Wi-Fi needs, just to name a few.
One thing that is not quite right in the post above: SAN support exists via Xsan.
That's the real mission for those unmanned Boeing test spacecraft - they are the precursors and work-in-progress test craft for the classified military project to reach Mars 10 years before current estimates of civilian efforts (including NASA's), in order to retrieve the generator from the rovers.
I think you're thinking about the problem the wrong way.
There is a cost to postponing the launch, but compared to having their rocket blown up by lightning on launch, it's nothing.
As far as I know, the budget for these launches already includes x number of retries - since scrubbing a launch and retrying in another window is actually standard procedure - so they are not "losing" as much money as you think. It's only if they exceed the number of retry attempts they've budgeted for that a solution like yours becomes viable, and even then, you're not fighting against "the cost of getting things set up."
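The retry-budget point can be made concrete with a toy model (all numbers here are hypothetical, not real launch statistics): if each launch window is scrubbed independently with probability p, the number of attempts follows a geometric distribution, so the expected number of attempts is 1/(1 - p).

```python
def expected_attempts(p_scrub: float) -> float:
    """Expected number of launch attempts when each window is scrubbed
    independently with probability p_scrub (geometric distribution)."""
    return 1.0 / (1.0 - p_scrub)

# Hypothetical 30% scrub rate per window: about 1.43 attempts on
# average, so budgeting a couple of spare windows covers most campaigns.
print(expected_attempts(0.3))
```

With that kind of estimate baked into the budget up front, an individual scrub is an expected cost, not a surprise.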
You're fighting against the cost of having the rocket blown up on launch because of bad or unforeseen weather. That's why they scrub - not because the weather is too bad to fly, but because an unforeseen event could happen, like an anvil cloud (which carries lightning) merging with another large storm formation at 10 nm. Basically, it's insurance against the rocket blowing up.
If you can back your weather-detection solution with insurance in the high hundreds of millions against a bad launch call, then there would be a market for it.
And to be successful, your product will need to cut that 3-minute window down, not push it further out.
That way mission control can make the scrub call with better confidence, say within tens of seconds of launch time instead of 3 minutes.
You get what I mean? Better weather detection lets mission control delay the best-guess decision to hit the STOP button for longer; it's not so they can plan ahead and schedule a launch at xyz date/time just because a weather prediction system says so.
First of all, I absolutely reject your claim that houses a priori offer a nicer place to live than flats. Sure, there exist certain combinations of price point, geographic location, and life situation where that holds, but it certainly isn't a given.
Secondly, how are you going to build nice affordable houses for 1.4 billion people without sprawl? Are you suggesting completely giving up on urbanization and going back to countless tiny villages?
There are many very nice places to live in the world that are not afflicted with suburban sprawl and provide houses to live in, yet are not tiny villages. I'm not going to sit here and fill in the gaps in your imagination; I suspect you've already made your mind up.
There are many very nice places to live in the world that are not afflicted with suburban sprawl and provide houses to live in, yet are not tiny villages
Absolutely, but they either suffer from rapidly rising house prices or no appreciable population growth. If you stick to a population ceiling of, say, 50k or so, then many problems become easy.
But we are talking about places that see up to a million new people show up in just a few years. Building everybody a nice house is neither realistic nor, in my opinion, desirable. We either have to re-think the whole "everybody gets a nice house" idea or try to reverse urbanization in favour of lots and lots of small towns capped at around 50k population.
Diffing is just one small part of version control. Keeping binary files in version control is the number one step away from messy file-name-based versioning.
Many design studios are forced to use project1.ver1.psd, project1.ver2.psd, project1.ver3.psd, and so on to version their files. Single PSD files can be on the order of high hundreds of megabytes to low gigabytes for high-resolution, ready-for-press work.
Not being able to diff the files is not a problem from an organisational point of view. Of course, in an ideal world there would be diffing of large binaries in a way that makes sense, but thinking there's no use in versioning binaries is very short-sighted.
This exactly. We keep our game's assets in SVN at the moment purely for the convenience of versioning. Artists still lock files to ensure no conflicts happen (and therefore no merges are needed).
It's much easier to naively do a repository checkout/update than to detect changes manually (or roll your own solution using rsync or similar).
Especially when you consider that the game has over 100 GB of raw assets. SVN, Git, or Perforce might not be the best tools for such a task, but it works great.
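The lock-edit-commit workflow described above is built into SVN itself. A minimal sketch (the file paths are hypothetical, and this assumes an existing checked-out working copy):

```shell
# Take an exclusive lock so no other artist can commit this binary
# while it's being edited (avoids any need to merge .psd files).
svn lock art/hero.psd -m "reworking hero texture"

# ... edit the file in Photoshop ...

# Committing the file releases the lock automatically by default.
svn commit art/hero.psd -m "hero texture, pass 2"

# Or back out without committing and release the lock explicitly.
svn unlock art/hero.psd
```

Setting the `svn:needs-lock` property on binary files also makes them read-only in working copies until locked, which helps prevent accidental edits that would later collide.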