IBM shifts remaining US-based AIX dev jobs to India – source (theregister.com)
117 points by jandeboevrie on Jan 16, 2023 | 95 comments



IBM over the last two decades really seems to be a story of managed decline.

Its proprietary systems were riding high in the 90s (even if their market share wasn't the absolute largest, their "Big Iron" had a good reputation amongst 'serious' IT folks), but were superseded by Linux and commodity hardware at some point in the 00s. They sold off the ThinkPad business as non-core, and they sold off their commodity server business (x-series) at some point too.

Both hardware and software solutions have been de-emphasised in favour of 'services', and while that's fine in a business sense, it's so sad from the perspective of all that Big Blue has done for our industry over the years.

Yes, they now own Red Hat, but large acquisitions are part of this story. Each one stems the decline for a while, but cost-cutting and streamlining inside Big Blue eventually manage the new addition into a shadow of its former self. Maybe this one will be different ... I hope so.


> They sold off the ThinkPad business as non-core, and they sold off their commodity server business (x-series) at some point too.

In a sane world, they would have been spun off and left to prosper as their own entities.

> Both hardware and software solutions have been de-emphasised in favour of 'services', and while that's fine in a business sense, it's so sad from the perspective of all that Big Blue has done for our industry over the years.

It looks like MBA philosophy screwing up everything by optimizing 'numbers' as if those numbers had no connection to real life. Very optimal in the short run, but catastrophic irrelevancy in the long run. But hey - at least IBM shareholders got maximum returns for some time, and that's all that matters, right...


I think Linux has been the great proprietary Unix killer. Not that I'm bad-mouthing Linux, but I blame it for the death of both Solaris and IRIX.

Looking forward to seeing PREEMPT_RT merged. This will certainly put a lot of pressure on QNX and VxWorks in the future.


For me, one of the main reasons that Linux replaced proprietary Unix in a lot of places is ease of learning.

It's simple and cheap to get started with Linux and as a result there are lots of people who know about it, so it's very easy to hire people with Linux skills.

In comparison, getting started with Solaris/HP-UX/AIX can be expensive: you might need a physical workstation, getting patches without paying might be tricky, etc.

Mainframes have the same problem. I tried to learn more about mainframe security back in the early 2000s, and it was really difficult to get any access to a mainframe to practice on and look at things, despite working for a large bank which had multiple mainframes.


This is what I think. It's the old business model, which worked when everything was expensive, only really needed by big corps with deep pockets, and any machine a teen would have at home had nothing in common with the big iron at some insurance company. IBM would make big money, consulting firms would make big money, you being good with AIX would make big money.

Then with Linux, you could have what could run on any big server, for free, as a teen in your bedroom. You could poke at everything, look at the source code, and ask around how to do this or that, since knowing how to do X wasn't some well-guarded secret kept for an advantage over the competition, but something fun to share. Eventually those teens would get older and look for jobs or go to university, while at the same time Linux kept on maturing. Now if you as a company want to build some system from the ground up, or just replace something ancient, you can pick that expensive well-established system from IBM, requiring expensive experts to maintain it and program for it, plus expensive software ... or go with that free OS that a lot of people know their way around, who ask for a much lower salary.

Of course, this didn't happen overnight; especially the "it's free but there is nobody to yell at if it breaks" aspect of open source was very strange to $BIGCORP and seen as an unacceptable risk. But there was a steady shift towards it, in large part because it was pioneered by all those late-90s/early-2000s tech startups that were created by exactly those "Linux teens". Because that's what you tinkered with in college, not some proprietary OS that you couldn't even afford, or get updates for, or ask anyone for help with if you got stuck.


> when everything was expensive, only really needed by big corps with deep pockets, and any machine a teen would have at home had nothing in common with the big iron

Some of us who belonged to Unix User Groups were fortunate to have dialup access to many of those proprietary Unix varieties and their respective dev tools, but access to the OS source code was not at all common or expected. It was a great time for testing ANSI C on disparate platforms.


> Then with Linux, you could have what could run on any big server, for free, as a teen in your bedroom.

It took a long time to get there, though. Prior to the mid-to-late 1990s Linux was seen as a toy, and the upstart competitor to big proprietary Unix was Windows NT.


The era you are describing had dozens and more OS, hardware, and software stacks. Invention was ordinary. The massive user base of a few platforms today is the reverse situation.


As far as general purpose computing is concerned, Windows did win. However, Unix still dominates the most profitable consumer segment through Mac OS X.


Meanwhile Solidworks, MATLAB, Mathematica and the like watched it happen and made sure they got their mindshare and user base ready with scads of educational offers and giveaways to make sure that's what everyone knew and had used.

No need to mess about for hours with a half-baked FOSS mess when all you need is an academic email for a license. Which then keeps the mess, well, a mess, due to the lack of network effects.


Not so sure. I learned commercial Unix back in the late 1990s on discarded SPARC hardware which was available in skips and Yahoo auctions for virtually no money at all. In fact it was generally cheaper than the boxed Linux distributions you had to spend money on because you only had dialup.

The killer feature of the commercial Unixes was that the documentation was orders of magnitude better. That is still true today. Most Linux knowledge I have to sift through today comes from dubious-quality manpages, incomplete or out-of-date documentation, and random blog posts.


I'm not sure free/cheap Unix workstations were a ubiquitous experience (indeed, I don't think I've ever seen a working Unix box in a skip!). The late 90s were good for SPARCs, but I spent a load of time looking out for AIX kit and it was seriously hard to come by without spending a lot of money.

Also, even if that works for individual hobbyists, it doesn't scale to things like university courses. There, having Linux means it's easy and cheap to teach Unix-like setups. Whilst top-end universities might be able to kit out labs with Unix workstations, it's been much cheaper to set up labs with PCs and Linux for a long time.

So universities churn out thousands of people familiar with Linux tooling every year, leading to easier hiring, leading to more companies adopting Linux.


Having gone from Linux to HP-UX and now partially back to Linux as the OS I make my money supporting, I'm not sure the learning gap between Linux and Commercial Unix(tm) is big enough that there was ever a real problem training/recruiting Unix(tm) admins.

I think the real reason why Linux supplanted commercial Unix(tm) lies in the hardware market: around the time when Linux got good enough to compete directly with commercial Unix on stability, we also saw x86_64 systems getting good enough to compete with Power/SPARC/Itanium-based systems on most workloads.

And as the hardware vendors challenging the commercial Unix market with cheaper Linux boxen were often the same vendors who sold commercial Unix boxen, the transition was often managed more than fought.


If you were willing to take Linux admins and cross-train them, sure, I could see that. But a lot of companies tend to ask for x years of actual experience with the product, and there are far more people with that in Linux than in any proprietary Unix.

The hardware is another factor too, of course; the cheap x64 server being "good enough" inevitably moved people in that direction.


Wouldn't it be in these companies' best interest to release free "dev" versions of Solaris/HP-UX/AIX/etc.? Though even then there's a moat to begin with: the company would have you fill out various info to get a license, whereas you can download Ubuntu/Debian/CentOS without any such restriction.

I also wonder about mainframes and why IBM hasn't come out with some sort of "emulation layer" for x86 machines. Yes, mainframes are expensive, but wouldn't you want to do everything you can to get mainframe software that people can learn with into as many hands as possible?


In the context of the 1980s and early 1990s, there was an x86 version of Solaris 2.x that had drivers for a few of the commonplace PC hardware devices of the time, so perhaps the home hacker might have been able to take that up, but probably only through a school licence.

The "real" Solaris OS as well as HP-UX, AIX, IRIX, OSF/1, etc. were written and developed for the proprietary hardware platforms of Sun, HP, IBM, Silicon Graphics, and DEC, respectively. The cost of those hardware devices kept them out of the home hacker's reach until used or EOL units started popping up many years later.


There is mainframe emulation on x86: IBM gives you a free three-day emulated development environment on their cloud if you sign up, or else you can buy an ADCD license and install it locally. The other alternative is to use Hercules as the emulator (it only works with old ADCD images).


Linux didn't kill Solaris alone. Sun Microsystems started doing that (as much as we loved Sun), and then Oracle put the lid on the coffin.


Once Linux was mature enough to do all the shared hosting tasks Solaris previously owned, using dirt cheap commodity PCs, Sun was finished.


They had some good tech they could have leveraged if they wanted to.

OpenSolaris just came too late, was released too slowly, and used an organisation which was too complex and made little sense to the community.

Sun's demise is actually a brilliant case study in what a good engineering company shouldn't do. Great products all around; awful commercial strategy and corporate leadership.


> Great products all around

It would have helped them a lot if their own products had not commoditized their other products as well.


Aren't QNX and VxWorks intended for hard RT workloads? I'm not sure the PREEMPT_RT patchset goes that far.

One thing that Linux is doing, albeit only in makeshift, uncoordinated fashion, is making userspace μkernel-like implementations possible for things that used to be exclusive to the kernel. Combine this with full "containerization"/namespacing of all kernel interfaces, live snapshotting and migration of containerized workloads, and maybe distributed shared memory allowing even multiple threads of a single process to run seamlessly on the same or different nodes with full location independence. This gives you pretty much everything that network-distributed OSes were designed to do in the 1990s, and allows Linux to extend seamlessly from small embedded systems to datacenter-scale workloads that used to be exclusive to proprietary OSes.
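
To make the namespacing point concrete, here is a minimal sketch (my own illustration, not anything standardized) of detaching a single kernel interface - the hostname - into its own namespace on Linux:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Detach this process into its own UTS (hostname) namespace -
           one instance of the per-interface namespacing described above. */
        if (unshare(CLONE_NEWUTS) != 0) {
            perror("unshare");            /* typically needs CAP_SYS_ADMIN */
            return 1;
        }
        /* Only this namespace sees the new name; the host keeps its own. */
        sethostname("sandbox", 7);
        execlp("hostname", "hostname", (char *)NULL);  /* prints "sandbox" */
        return 1;
    }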


And to add: the footprint of devices running Linux with the PREEMPT_RT patches is generally much greater than the footprint of a device running something like VxWorks. Wind River actually has its own PREEMPT_RTed distribution[1].

[1] https://www.windriver.com/products/linux


There’s room in the market for a commercially supported *Nix.

Linux is ‘good enough’ for a lot of people, but its interface is inconsistent (something a lot of OSes suffer from). A set of tools where all of the commands used the same argument structure in the same order would be hugely beneficial.

This is just my opinion.


Something like Red Hat, cough, IBM?


It exists: it’s macOS.


The market segments changed when businesses realized you didn't need to spend $$$$ on expensive UNIX hardware when cheap x86 systems for $ could do the same thing. It's just that market dynamics changed.


Ironically, it only got there because many UNIX vendors saw in it a way to reduce their own UNIX development costs, thus helping to kill their own products in the process.


HP-UX too. Though I think IRIX and HP-UX were both mostly dead by the time Linux was ascendant in the enterprise, i.e. 2004-ish.


HP-UX is kind of an interesting tale in that it's always been underrepresented in the enthusiast community, perhaps because HPE never tried to push HP-UX on Itanium as an alternative to Linux on ProLiant, or maybe because its core market seems to have been the manufacturing sector, where it supplanted HP 3000 minicomputers and ran boring logistics software and factory control systems, and it never really made it big in education, research, or web hosting.

By 2004 HP-UX was actually pulling off a successful migration from PA-RISC to Itanium, and it lived on profitably for over a decade after that, until HPE finally published a roadmap for ending HP-UX development, currently scheduled to happen around 2025/26, with the last new Itanium systems sold around 2018.

IRIX was killed off by Windows around the time the PC industry caught up with SGI's graphics capabilities. And I can't recall what happened to Digital's Tru64 (but it never really survived the merger of DEC into first Compaq and then HP).


Tru64 fully survived the merger of DEC into Compaq and then HP; the enterprise I work for had one of (if not the) largest TruClusters in the world until not that long ago, and HP fully supported it. Migrating folks off it to Linux wasn't easy, as the system had wonderful properties.

Oracle took the cluster tech and added it to Oracle.


Itanium was a dead platform at launch. When HP didn't port HP-UX to x86_64, it decided HP-UX was dead... They do occasionally put out OS updates for those unlucky enough to have bought into it, but if your software can run on another OS, I can't really think of a reason to keep the hardware around.


Nvidia killed SGI, and incidentally killed IRIX.


Rick Belluzzo killed SGI, then went "home" to Microsoft a hero. /s


That’s a name I hadn’t seen in a while. When you browse his Wikipedia page and see “succeeded Ed McCracken,” the initial impression is: well, how bad could he be?


Linux didn't kill Solaris; Oracle and its greedy execs did. Stop trying to rewrite history.


Solaris was fading into irrelevance before Sun got bought by Oracle.

Sun's open-sourcing of Solaris probably extended its lifespan but Oracle isn't what turned it into a niche platform.


> I think Linux has been the great proprietary Unix killer

For the server market, I completely agree. But for the workstation/desktop, not so much, with macOS being the last viable alternative that is still being developed.


It's worth noting that IBM i has a kind of dependency on AIX. IBM i has an AIX binary compatibility environment called PASE. Think of it like WSL, but for AIX binaries and on IBM i. Of course, if they created this compatibility layer today they'd probably choose to make it compatible with Linux rather than AIX, but they chose AIX and now they're stuck with it.

This means AIX, or at least the AIX ABIs as supported by IBM i, has to be kept alive for as long as IBM i is alive. So either this bodes badly for IBM i, or they consider the amount of ongoing maintenance that PASE needs small enough that it can be handled by the IBM i team or the new skeleton AIX team. I suspect the latter, rather than them canning IBM i.


Why can’t they just emulate the ABI on Linux?


IBM i PASE is just running (some of) the AIX userspace on top of a radically different kernel - the same basic idea as WSL1 (as opposed to WSL2). They could have it run the Linux userspace instead, but that would be a lot of work to emulate its differences from AIX, and it would break backward compatibility with existing PASE applications. It isn't clear what the benefit of making that change would be as of today (as opposed to back when they first developed it - we can't change the past). What it does mean, though, is that IBM i PASE is reliant on AIX userspace development for its own progress (if there is to be any).


> “… AIX 5L was where it started to feel "legacy" and unloved”

AIX 5L was released in 2001 and the ‘L’ stood for Linux.

So the writing has been on the wall for a very long time.


I remember a fellow student in the late 2000s who suggested she was being recruited by IBM to do kernel work, presumably on AIX.

I wonder. Is there any enterprise that looks to shift work onto non-Linux, non-Windows systems in 2023?


In the industrial manufacturing world, the documentation, stability, and relative smallness and comprehensibility of FreeBSD are attractive. A public-facing example is Beckhoff, who moved from WinCE to FreeBSD. https://www.beckhoff.com/en-us/products/ipc/software-and-too...


Yep, can confirm. I spoke with some folks from Beckhoff in Vienna during EuroBSDCon 2022.


> I wonder. Is there any enterprise that looks to shift work onto non-Linux, non-Windows systems in 2023?

Does Serverless (FaaS/CaaS/WASM) count?

There might be organisations looking to move some workloads to *BSD (for instance storage or networking - famously Netflix run FreeBSD for their networking).

With regards to Windows, is there anyone switching workloads to Windows? I was under the impression that doesn't really happen anymore, Windows Server being kind of a legacy product (MS retired the slimmest deployment, Nano, and the features in new releases are nothing special), with Azure supporting Linux well and .NET Core supporting Linux well.


There is still a fair amount of small-business .NET software being written that kind of requires Windows Server, but that's mostly targeting standalone desktop applications or spreadsheet abuses; for anything that needs high-end servers and advanced storage to meet performance requirements, Linux is pretty much all of the market right now.

There are some niches in the network space where xBSD plus custom ASICs plays a role, due to licensing concerns, but more and more vendors are finding a way to do something similar with Linux.

And as FaaS/CaaS in practice depend on a set of Linux kernel APIs, those deployments are still Linux clusters underneath all of the obfuscating complexity layers that 90% of people don't actually need or benefit from.


> Does Serverless (FaaS/CaaS/WASM) count?

I'm not sure it does, because the servers that "serverless" programs run on are overwhelmingly Linux.


That's a good question. I think it's fair to call some of the true cloud-native services an OS, in that you are loading programs into a framework, storing state in a particular cloud-specific API database, and storing data in cloud-specific API objects (S3).

I was thinking more about the lower level though; that is, the OS of the bare metal.


Funny how I'm sure this will catch some IBM customers "by surprise", but the only thing slower than them is their customers.

Yeah, maybe now it's OK to move to Linux. But hey, sure, you do you.


Last year we signed one of the large Indian outsourcing firms as a customer. The team we worked with ran fleets of AIX boxes for their customers, running their legacy systems.

There was a strict requirement not to change the disk image the boxes were generated from.

Our most popular integration method is a cross-platform golang binary. Unfortunately, we used some key dependencies that would not compile for AIX, so we had to abandon that route.

We ended up extending our shell-scripting integration to use the OpenSSL http client instead of the usual curl. It means that when sending requests we literally have to prepare and concatenate all the headers ourselves, but it works, and we are monitoring all the background jobs on those very old machines, giving the team a way to address operational problems without waiting for reports from their customer.


Seems the plan is to milk captive customers as long as possible with minimal investment.

Also, a few years ago they announced that the XL series of compilers were being rebased on LLVM/clang. Of course they claimed it was to enable innovation or some similar PR mumbo-jumbo and not cost-cutting, but, well..


Why not both innovation and cost cutting? Reproducing LLVM requires some crazy motivation?


> Why not both innovation and cost cutting?

Ideally yes, but has anybody seen any indication that this is actually happening in this case? Until we see such a thing, I think people have reason to be skeptical.

> Reproducing LLVM requires some crazy motivation?

Oh, absolutely. While a LLVM monopoly isn't desirable either, unless you have some different vision of how to architect a compiler, reinventing the LLVM wheel probably isn't particularly useful.


They bought Red Hat and have been pushing it. So this shouldn't be a surprise to anyone.


> but newly migrating to AIX is increasingly more trouble than it's worth paying for.

Was this a thing even a decade ago? AIX felt like it had been in maintenance mode for much longer than that to me. From my limited perspective, it looked like the vast majority moved to just using Linux, maybe FreeBSD, and buying some batteries-included appliance for anything they couldn't easily self-host and maintain.


I don't think I have ever even heard of a Unix-to-FreeBSD migration being done by a large commercial organization, and there is virtually no commercial enterprise software certified for FreeBSD.

It's one of those things where nostalgic old Unix admins who aren't really ready to embrace the anarchy of the Linux ecosystem kind of pretend some clean, old-fashioned alternative exists, and cling to the myth that FreeBSD is more of a Unix(tm) successor than Linux.

And yes, I have also not really heard of greenfield Unix(tm) deployments since maybe 2008, and even then Linux was clearly where most people went when their old custom minicomputer systems (think DEC PDP 20, AS/400, and HP 3000) had to be replaced by something a bit more standardized.


We were still doing 'tons' (relatively speaking) of new Solaris on SPARC deployments at Oracle when I was there 7 years ago. They sold an appliance called the SuperCluster that was all Solaris. Someone was buying them lol


I think SC was built on top of LDoms if I recall rightly. Been too long.


AIX is still the go-to platform for a bunch of financial institutions (banks and insurance companies). Although a lot of companies within that sector are slowly moving to Linux on x86, I know from first-hand experience that there are also still a lot of enterprises that swear by running their Oracle/Db2/SAP workloads on AIX on Power.


I can fully believe that (having no experience in the banking/insurance sector myself), but that sounds like just staying with a legacy setup that just works, not "newly migrating to" AIX. That part just sounds insane to me, but maybe someone can share a counterexample?


Those enterprises that I mentioned often not only maintain their current workloads on AIX but even add new environments to it. An example: an internal strategy defines that every mission-critical application needs to use Oracle or Db2 as its data store, and another strategy says that all Oracle/Db2 workloads need to run on AIX. In companies with such a setup, even today new applications are deployed on AIX. Well, the databases for those applications, at least ;) The applications themselves indeed only very rarely land there, as they are usually hosted on Linux in some VMware environment.


And this is just foolish.

SQLite is developed to DO-178B quality standards, which neither Oracle nor Db2 will ever meet.

You can trust SQLite in avionics, not so with the others.


Well, those companies don't do avionics or care about them, and SQLite would do nothing for their use cases (heavy financial analytics, reporting, OLAP, etc).


Really? That's fascinating.

Avionics certification is an incredible milestone for FOSS in general.


This is the reason why SQLite will never be forked.

https://www.sqlite.org/qmplan.html


I know that when I dealt with AIX back in 90s banking, it was merely used as a gateway to the AS/400 systems. Nothing more.


The Linux support was awful anyway. They provided an AIX version of RPM. Just RPM, not yum, nor any repos, so installing anything beyond the stuff they gave you was a nightmare reminiscent of the late 90s.


They have Yum and repo support now. I don't know when it was added though.


I'm not seeing comments here about the quality of India's customer-facing, pro-grade IT staff. I've been out of the Unix sysadmin game for a few years, but I'll just go ahead and say that IMHO they seemed to be excellently educated and trained, with most workers being multilingual and multicultural. Is IBM making a mistake moving AIX dev jobs there? I am certain that they would not make such a change without considering the quality of the staff there.


There are some excellent engineers based in India - but they are in high demand, and you have to be willing to pay above average (by Indian standards) to retain them. And some people aren't keen on doing that, because they think "India was supposed to be cheap; why pay this engineer so much, that isn't really that 'cheap'!" But if that thinking wins, one ends up with an India team without any excellent engineers…

I have no idea what IBM is doing though. I know that when Microsoft moved SFU/SUA (the legacy pre-WSL Unix compatibility subsystem) development to India, the product quality went massively downhill - probably because they didn't budget enough to hire and retain sufficiently skilled developers.

That’s the big risk with moving stuff to a low-cost country - if you do it “on the cheap”, it will be a disaster. But since the whole idea was to cut costs and pursue “cheapness”, the temptation to do that is there.


I’ve worked on AIX (Mohegan Sun Casino in the late 90s). Switching between AIX and Linux as a developer is not difficult. They are both just different flavors of Unix.


I am curious, for people who are familiar with AIX and Linux, what are the main differences day to day?


Best summarized by this quote from a coworker: “AIX is what you’d get if aliens implemented a UNIX.”

Utilities exist, but they tend to be different.

There are also oddities like the compiler (xlC) emitting code that is able to dereference null pointers, thanks to the zero page. It’s a valid but odd choice of undefined behavior. [1]

[1]: https://groups.google.com/g/comp.lang.c/c/OJwY1pWFYhE
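
To make that concrete, a tiny hypothetical example (mine, not from the linked thread). Since AIX maps a zero-filled page at address 0, a read through a NULL pointer typically returns 0 instead of segfaulting, even though it remains undefined behavior per the C standard:

    #include <stdio.h>

    int main(void) {
        const char *p = NULL;
        /* Undefined behavior everywhere; on AIX/xlC the zero page is
           mapped read-only and zero-filled, so this read typically
           prints 0, while on Linux/x86 it would crash with SIGSEGV. */
        printf("byte at NULL: %d\n", *p);
        return 0;
    }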


This is an old quote, and IIRC it referred e.g. to the way AIX handles services (SRC / System Resource Controller) and stores configuration data (ODM / Object Data Manager). It felt 'alien' to admins used to rc scripts (which are possible too) and files in /etc (used in AIX too).

Now, nearly three decades later, we have an even more automated and integrated mechanism, systemd, and I think there are still features worth adopting from AIX. It's not the first time I've had such thoughts about an OS slowly vanishing.


> “AIX is what you’d get if aliens implemented a UNIX.”

The first version of AIX, for the IBM RT PC, actually ran on top of a microkernel written in PL/I, called VRM - how’s that for alien.

Subsequent versions moved closer to the Unix mainstream by dropping the PL/I microkernel.


Yes, storage and virtualisation, as was said, but that sounds less different than it actually is. It feels more like a mini-vs-mainframe difference. AIX 7 is vastly larger in that respect than e.g. AIX 5L; IBM seems to have invested a lot of effort into making AIX huge in VM, clustering, storage, and more.

As for standard Unix tools - or rather "Linux" or OSS tools these days - they are not that difficult to find and are for the most part readily available. IBM used to supply a lot of that directly, but these can be found elsewhere, and I myself work on AIX more or less as I would on Linux.

That said, if you don't actually need the AIX-specific features, you might as well run Linux. But if what that site indicates is correct, then I'm a bit surprised - IBM seems to have invested a lot in those "differences" over the last couple of decades, and it's strange if they're abandoning that, or planning to. Yes, you can run Linux on IBM hardware, but then.. why? It sounds to me like they'll lose hardware sales if they abandon AIX. Their strategy has been to make the system more and more Linux-like in many ways (the compiler has gcc-compatible options, as one example), but at the same time add large-scale features which Linux doesn't have. Which is clearly why AIX has survived while none of the others have (IRIX, Solaris, Tru64..) - those Unix systems didn't really bring anything more than a Linux system could. AIX does.


> you can run Linux on IBM hardware, but then.. why?

Don't know if this counts, but you get a lot of memory with fewer cores. For software where licensing is coupled to cores, this quickly gets cheaper (to pick hypothetical numbers: if the database is licensed per core and a Power box serves the workload with 8 cores instead of 32, that licensing bill drops to a quarter).

And from what I see in production, the hardware is more reliable in comparison to two large x86 vendors. This is of course based on a small number of systems (n≈100).


Storage, Virtualization, and general UX are just completely different. Standard unix utilities are either not available or just different enough to make life hard.


The standard Unix utilities on AIX are a pain as well.


I remember years ago having to port some C code I wrote to AIX.

Hit this really weird bug - on AIX, errno is not thread-safe by default!

The solution is simple: add -D_THREAD_SAFE or -D_THREAD_SAFE_ERRNO to CFLAGS. It took me a few hours of head-scratching before I worked that out, though.
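
Roughly, the failure mode looks like this (a contrived sketch, not my actual code): with a single global errno, one thread's failing call can clobber the value another thread is about to read.

    #include <errno.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>

    /* With AIX's default single, shared errno, the value printed here
       can come from the *other* thread's failed open(), depending on
       timing. Building with -D_THREAD_SAFE (or the xlc_r wrapper)
       makes errno resolve to a per-thread location instead. */
    static void *opener(void *path) {
        if (open((const char *)path, O_RDONLY) < 0)
            printf("open(%s) failed: errno=%d\n", (const char *)path, errno);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, opener, "/no/such/file");
        pthread_create(&b, NULL, opener, "/also/missing");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }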


> Hit this really weird bug - on AIX, errno is not thread-safe by default!

Isn't that a conforming implementation?

(IIRC[1], errno is thread-local, not necessarily race-safe. i.e. it's visible to interrupts, which could change it)

[1] No doubt someone will correct me if that is wrong.


Many old-school Unixes were developed in the single-thread days. When they added support for threads, it was an opt-in feature, via macro definitions like the above, using libraries like libc_r.so instead of libc.so, and so on. AIX even had compiler wrappers like xlc_r which set the relevant preprocessor macros and linked in the correct runtime libraries.

When the Linux glibc 2.x/NPTL ABI came into use, it was clear that threads were not a passing fad, and stuff was made multi-thread-safe by default (to the extent the API can enable that, of course), so there was no need to carry around both a single-thread ABI and a multi-thread ABI.


> IIRC[1], errno is thread-local, not necessarily race-safe. i.e. it's visible to interrupts, which could change it)

On AIX, errno is not thread-local by default; it is a global variable shared by all threads. Only if you set one of those two defines does it become a thread-local variable. On every other platform I’ve ever written code for, errno is thread-local by default and no special define is needed to make it so. Hence, when you port to AIX, if you don’t know about those defines, you can get all these weird race-condition bugs related to error handling, because suddenly errno isn’t thread-local any more.
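
Roughly, AIX's <errno.h> does something along these lines (paraphrased from memory, not the verbatim header):

    /* Sketch of the AIX <errno.h> logic, paraphrased from memory: */
    #ifdef _THREAD_SAFE
    extern int *_Errno(void);      /* returns a pointer to this thread's slot */
    #define errno (*_Errno())
    #else
    extern int errno;              /* one global, shared by every thread */
    #endif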


The AIX command line was so cryptic that everybody used SMIT to figure out the commands, then copied/pasted from there. Linux never had such a tool.


A big difference is that the toolchain is somewhat related to how Windows works, as they are both COFF-based.

So you also get shared libraries whose symbols are private by default, and import libraries, for example.

It also supports lazy loading of dynamic libraries, where the OS implicitly loads dynamic libraries when it hits an import stub.


Complete lack of standard tooling and GNU utilities. Even the tools that seem to have been "ported", like ps, looked like works in progress rather than something intended for production use.

AIX was a steaming pile of shite for a Linux or FreeBSD power user.


Duplicate submission. I submitted this exact same article 4 hours prior to this submission.[0]

[0]: https://news.ycombinator.com/item?id=34395790


> I've run personal installations of AIX as my primary personal server since 1998, first on an Apple Network Server 500 and now on a 8203-E4A POWER6 p520

I wonder what the idle power consumption of a p520 is, but I guess he doesn't have to pay for it...


With Edge and Microsoft's latest OS, there might be some difficulty linking with Windows computers. Trust IBM to make this a smooth transition in the computing world.


Presumably because their domestic US Unix support -- and focus -- will be on Red Hat, cuz IBM owns RHEL.


To me this makes a lot of sense these days.

American tech workers are demanding full-time remote positions at rates 4x-6x what some Indian workers are willing to work for, and those workers are educated, professional, and show up with a good work ethic.

If a company is going through the hassle of remote workers anyway, it might as well optimize its labor costs.


The quality of work is, and has been, subpar in my experience


Both the comments are generalizations :). AIX development and support have been happening at IBM Labs India for a long time (I had some former colleagues who were into it), and if anything, platforms like AIX are legacy Unix flavors that probably won't attract a lot of people who'd be excited about them.


You get what you pay for.



