I used to work at Sun, and the Solaris codebase is the most amazing C code I've ever worked with. I'm probably going to be accused of bias, but the Linux code is really messy compared to Solaris.
Sun was already on the way down by the time I left many years ago, but what has happened since Oracle bought them has been nothing but depressing.
Oracle is nothing but a cancer. Everything they touch turns into goo. This is not the first company to be killed by an Oracle acquisition, and it won't be the last.
And don't get me started on the ridiculous range-check trial: it sums up the disgusting state this company has fallen into.
Once something gets bought by them, you know it is done. Slowly, but surely.
They perform a function akin to the maggots that destroy cadavers in nature. Part of the overall ecosystem.
ORA stopped being a tech co a while ago; now it's a finance play. Use cash to buy a business for its locked-in customers, gut it to squeeze maximum money out of it until the last customer is gone. Rinse, repeat.
Aurea does the same in mid-tier by the way.
The circle of life...
I think that's a more apt description of CA, BMC, or Symantec. Places where tired old software goes to die a quiet death. What Oracle does is worse: kill software that still has plenty of life in it. I've seen them do it by acquisition, and I've seen them do it by stealing code or ideas from partners (personally, twice). So they're not so much a graveyard as a slaughterhouse for software.
Oh and don't forget to tell the truth in your next job interview: "CA is a tremendously successful company and excellent employer, unfortunately, the role was not the right fit for me".
Yup. Precisely what happened to Stellent, a company that used to produce great document filters. After the ORA buyout, employees fled like there was a plague epidemic, and prices spiked to such a level that you'd either slit your wrists or consider migrating to another technology.
> ORA is the elephant's graveyard of software.
Well, except that some of the elephants were still alive and perfectly fine, and ora worms started eating them before they were dead.
Of course, these businesses don't make enough money to cover the massive shareholder draw, so it's all a stage play to convince Wells Fargo to loan them enough to pay those dividends.
It's an untenable and irrational position in the long run, but markets can remain irrational longer than individuals can remain solvent. And many businesses are in this same situation, needing bank loans to pay never-reducing dividends.
This is true also of companies that are taken over by private equity firms, although that is probably common knowledge by now. E.g., every single person on my team at Rackspace left for a different role (all at different companies, all at different times) after we were acquired by a PE firm.
Sometimes I wonder how the engineers can't see this coming. Sure, there are cost savings to be made by streamlining product offerings, cutting the "recreational budget" (i.e. money for office parties), etc. But the biggest cost center for a tech company is its employees (probably that and real estate).
I am actually surprised they invested in Solaris for so long, considering its long-time half-dead status in the market.
I suspect if the takeover hadn't happened, the FOSS systems landscape would look a whole lot different. Like what, I don't know, but definitely different.
NeXT was acquired by Apple. 6 months later... Jobs was in charge, and history unfolded.
But does/did Solaris have anything worthwhile to offer over BSD/Mach?
Sun was in a bad financial situation at that point; there is a high chance all these engineers would have been laid off 8 years earlier if the takeover hadn't happened.
It's possible IBM would have been a significantly better steward. Of course, it would have been hard to be a worse one.
That's a bit unfair. IBM would have probably canned hardware (and hence Solaris) much earlier than Oracle did, because they already had their own offering in that space and they were as invested as Oracle in Linux (ok, they don't have their own RedHat clone, but they are definitely big on supporting Linux).
IBM was buying Sun because of Java, and planning to chuck the hardware out one way or another. Oracle bought Sun because of the hardware and basically kept everything else going, albeit in a reduced fashion where there was overlap (MySQL) or they were not interested in the niche (OpenOffice).
I think Oracle had some decent ideas and didn't execute well on them, partly because of cultural issues. I don't think IBM would have done better, certainly not on the hardware/Solaris side. They would have probably done a bit better with Java/OpenOffice but that's about it. Solaris was doomed by the Linux boom, there is very little anyone could have done about that by the time the acquisition became inevitable.
They were interested in OpenOffice ... just not sustainably, as it turned out.
All of the really talented Oracle people I know are enterprise sales people, and they're _really_ good. All the good tech people I know "joined" via acquisition and jumped ship as soon as financially sensible.
I worked at a place where I saw two renewals for Adobe CQ licenses paid at ~$600k/year while I worked there, all while we were actually using Alfresco underneath (and I see now that the site is running on Sitecore...)
Ironically, it's Oracle itself that is busy self-destructing. The move to cloud-based subscription services, where switching to competitors can be so much easier, looks good in the short term, but it's pulverizing their stranglehold on partner ecosystems and making their long-term outlook more fragile.
Yes, bean counters. When the only salesperson involved is now the AWS Architect, the bean counters put Postgres and Oracle side by side and it becomes a no-brainer. You're going to see lots of enterprise development moving away from Oracle as companies move to AWS.
I'd say it's more the "Contract Signers" who're at fault. The devs and AWS Architect are perfectly happy to use inexpensive AWS options, but somebody _else_ goes golfing and drinking with the Oracle sales team - and arrives at work hung over the next day with a shiny new half million dollar a year Oracle licence which everyone else is now required to use.
The bad part of running Oracle is absolutely everything else, especially the bit where you ever have to talk to Oracle. When we moved our stuff from Oracle to Postgres, the best bit was never ever having to think about licensing.
My stodgiest, most risk-averse (Oracle-using) clients are beginning to migrate to Postgres. I'm so shocked by this it still sounds like a lie as I type it. For most of my career it's been "nobody got fired for picking Oracle". Now that nobody gets fired for picking AWS, the penny pinchers are looking at the RDS pricing delta between Postgres and Oracle, and all of a sudden it's a no-brainer. Maybe because the only salesperson involved is from Amazon? IDK, but Oracle will slowly die by its own worms.
Must be a horrible job even if they get paid well for it.
Enterprise software sales can be a lot of fun (I am in sales engineering, not at Oracle), as it is well paid and ultimately about enabling customer success. But I have to look at myself in the mirror at night, and can only sell something I believe in: things like open source, cloud computing benefits, etc. The Oracle database used to be worth believing in maybe 10 years ago: boring but valuable software. But these days the company is such a blight on the industry that it would be hard to work there.
Is it just so they can say "not my fault, our project is only using big name frameworks, DBs, etc so I did all I could?"
Or is it because they don't even begin to understand what's going on in the tech scene and just buy what appears to be the most shiny, expensive solutions?
Text files would have been enough to cache the data. But you never know how a project can change over time so I proposed PostgreSQL or IBM DB/2. The latter in case they absolutely wanted to pay for a database and they are already using an IBM AS/400. (We were using PostgreSQL and Informix.)
At the meeting they said they have already bought a license for an Oracle DB. Without consulting us. So we were forced to use it without prior experience.
Expensive parties, gifts, etc.
A vendor once sent me a mug. I had to open the box in front of a witness.
It is. It happens all the time. I get similar offers regularly in my growing startup. It's usually couched better than that, but it's not rare.
Solaris lives on in illumos and forks of it.
I'm also glad that ZFS has managed to find its way to other operating systems. I've been a huge fan of it ever since it first arrived in Solaris.
Solaris was competing against free, without much to justify the large added cost. It's been a very long time since I heard of anyone buying new Solaris installations.
A coworker who used to work at Sun maintains that they really needed to go private to avoid years of chaos from waves of layoffs when they were profitable but not enough to satisfy Wall Street.
I have a legacy production environment that is Debian based OpenVZ and Nexenta. Containers and ZFS were in heavy use a decade ago, the marketing just wasn't there to make it "cool" like Docker.
Sun was great at many things, but they were never great at marketing.
It still surprises me that Dell was able to somehow go private before Wall Street killed them too.
Short-sighted capitalism is IMHO a threat to society. But finding an acceptable solution is not going to be easy.
Google is a Linux shop that doesn't care at all about product continuity. Why would they have cared about Solaris? I'm sure it'd have got the chop instantly if Google had bought Sun. Sure, the engineers might have been able to find other opportunities within the firm, but Solaris would have been scrapped instantly (maybe open sourced, maybe not).
If Google had bought them for the IP, then at least that IP would live on, which is what everyone really wants.
From the article: "Finally, and perhaps most significantly, personal egos and NIH (not invented here) syndrome certainly played a part. I'm told by folks who worked at Apple at the time that certain leads and managers preferred to build their own stuff rather than adopting external technology, even technology that was best of breed. They pitched their own project, an Apple project that would bring modern filesystem technologies to Mac OS X. The design center for ZFS was servers, not laptops—and certainly not phones, tablets, and watches—and the argument was likely that it would be better to start from scratch than adapt ZFS. Combined with the uncertainty above and, I'm told, no shortage of political savvy, the anti-ZFS arguments carried the day. Licensing FUD was thrown into the mix; even today folks at Apple see the ZFS license as nefarious and toxic in some way, whereas the DTrace license works just fine for them. Note that both use the same license with the same grants and same restrictions."
It's still not too late for Linux to merge ZFS:
There is nothing stopping Linux from mainlining ZFS at the source level apart from kernel developers' reluctance to give in to "layering violations." Somebody can correct me if I'm wrong.
To wit: I am not sure that linking and terms of binary distribution matter after sources have merged for open source projects. If you have the source code and the right to modify, merge, and distribute it, arguing about static or dynamic linking and binary distribution is like arguing about the color of your car door after the car has been made. It's inconsequential compared to the amount of IP and resources put into the source code, and easily changed by any user (to a different architecture, let's say).
Modifying has no meaning when it comes to binaries, but it's core to copyleft and open source. You would only care about binary licensing if you were a closed source product and had to have ultimate control. If somebody had complete copyright over a ZFS binary, they could say how it can or can't be used, the way an EULA would restrict you. Since no such copyright holder or binary exists for ZFS, and only source does, I don't think most people would stop collaborating on source code once the licenses are compatible.
Linking exceptions are for those who do own all of the copyrights and want to distribute alongside open source software; having a distinction otherwise in the open source world adds to license proliferation and makes no sense. ZFS doesn't have a single copyright holder acting on its behalf, so it has lost certain privileges because of this. I'm sure Oracle would troll about this too, but they would probably be wrong given FreeBSD and OpenZFS.
I do think the GPLv3 fails a little bit because of the same argument. But I'm no lawyer.
First, there is intent by everyone involved, such as the author, the accused, and the lawmaker. If the author intended that the work be used in one way, and the accused knew this but decided to go against it, then that carries a lot of weight. Similarly, if the lawmaker intended the law to address a specific situation, that also carries weight.
Then there is precedent from cases that involve derivative works. There is a fuzzy line where two works merge to create a third. Music has a large legal history here, parts of which contradict each other.
And last there is the law itself. Modification, for example, is an explicit exclusive right in some places (such as the US). One case involved a person who bought a painting, cut it down into squares, and rearranged them into a mosaic version. The painter sued and won the case, arguing an exclusive right to create modifications. Whether something is binary or source code should be irrelevant to the question of whether the "work" has been modified from what the author originally created.
As this timeline and some Googling shows: https://en.wikipedia.org/wiki/OpenZFS#History
Sun did work in good faith with Apple, and the Linux community to get them to adopt ZFS, unsuccessfully (successfully with FreeBSD). Additionally, the fact that Sun did successfully open source quite a few things (virtualbox, jenkins, openam, solaris, staroffice, netbeans, etc..) and relicense Java from SCSL to GPL, makes their intentions towards the open source community pretty clear. Yes, they wanted to make money, but they probably open sourced and created more open source communities than any other company in SV history.
Now, about modification: any open source license listed by the FSF will grant modification rights to users. I don't think compiling is making a derivative work. It's like unfolding a chair to sit on it; it's just a part of normal usage of software. You can decompile a binary and learn from it; that's also normal usage. The compiler is a tool, like a screwdriver or paint gun that will let you assemble a chair or paint your car. Reading and learning from source code is usage too. Modifying the actual source code would be a real modification, and could be making ZFS work on a Raspberry Pi, which is allowed by open source. Given that Sun wanted ZFS to be widely adopted in open source, they adopted the CDDL to let people modify ZFS so it could be used by OSes other than Solaris. This is what the OpenZFS community enables, and it is completely compatible with GNU/Linux or Apache open source norms. Oracle might come knocking for money, but that's not the history of Sun or of the current ZFS contributors, who are just out to make better software using the open source process. They would probably not disagree with what Netgear or Canonical did, and if they did, it would be on the OpenZFS mailing list and in a news story or two. It's not.
You can't copy books and sell them, and I can understand you can't modify an original artwork and not affect the copyright owner's rights. You can correct an error in a book or claim inspiration from a painting to make another. You can't claim copyright if someone uses a binary in a VM when you didn't intend it. You can give others the right to modify source code, and ask that others do the same. That is open source and the GPL. OpenZFS, FreeBSD, have as much standing as Oracle, which is really none, to actually stop someone from porting ZFS to anything they would like and distribute it along side proprietary or open source software.
The other side is of course each one of the Linux developers, each holding the full power of copyright. To cite the SFLC, no free software developer has ever sued another free software developer over license incompatibility, so it's very unlikely to happen with ZFS. Such court cases really only happen between companies.
So to sum up, a case over ZFS is very unlikely, but I would not bet on what would happen if Android suddenly started to use ZFS.
I wonder what Linus Torvalds thinks about merging ZFS into Linux now; he wasn't too keen a decade ago. Sun is no longer around, and someone worse, Oracle, has taken their place. A couple of lessons for the open source community here, I think. And Bryan Cantrill nails it on the head in the YouTube video linked.
ZFS will need to be on Linux first before it can show up on Android or media centers or gaming consoles, and I don't doubt Oracle's ability to find a way to patent troll anything. But it will be just that: patent or copyright trolling.
Canonical, Debian, and the SFLC have really done the right thing by distributing ZFS on Linux, using AFS as a precedent. I hope more merging like this can happen in open source in the future.
Even if they win Oracle v Google, it's been a huge distraction, a huge cost in lawyers (who knows how much they'll recoup), a big unknown and the search for an alternative has to be costly and time-consuming.
$8.3 billion almost seems like a bargain in hindsight.
I doubt Google was ever an alternative. Realistically it was a fight for power over the enterprise customers between IBM and Oracle, with most (non OS) products built or integrated with Java.
Maybe it was even a hot potato: someone needed to keep their strategic bet alive, since Microsoft also shares the same strategy in the enterprise app segment. (Sell complex but underperforming applications to ill-advised customers using long feature lists.)
I remember when we ripped out our 3 6800s and replaced them with 9 Dells, for way more power at a fraction of the price. Would have loved to recompile on Solaris, but the hardware savings easily covered the cost of a port...
Except they did; anyone working in Solaris engineering at that time or even now could tell you that x86 was just as important as SPARC. From a technological perspective, they are completely equivalent.
For example, the ZFS Storage Appliance is x86-based, not SPARC-based.
SmartOS for example only supports i86pc; there is no sparc port.
Unfortunately for the free software world, IBM's lawyers got cold feet when it turned out that Sun were mired in bribery claims. Oracle didn't care. Pity, because actually having Java properly open sourced, and likewise with various other Sun technologies (e.g. an OpenSolaris that didn't rely on a couple of binary-only libraries, for example) would have been quite a net win.
I think Sun was (mostly) acquired to prevent Java IP being sold to other, more nefarious parties (such as MS or patent trolls). Both Oracle and IBM were (and still are, though to a lesser extent) heavily invested into the Java ecosystem. Sun OS/Solaris has historically also been the reference O/S and platform for big-time installations of the Oracle RDBMS.
Enemy #1 for Oracle at the time was SAP, which they couldn't force out because it's another cancer (the lock-in is huge); so they developed a strategy of buying loads of ecosystem apps to "surround" it. At that point, they could sell the database and apps in one package, and then slowly erode SAP away. Hardware was a natural addition to that strategy.
Unfortunately they borked execution. They didn't invest properly in making solutions ready-made, so after you bought a (very expensive) box, you still had to pay tons to consultants to set it up, making it uncompetitive on the whole. The industry shift to cloud did the rest. Now they're way too busy turning into "bigger Salesforce" to care about metal.
Solaris on PPC existed at various times, IBM could have provided a convergence path for the Fortune 500, AIX/Solaris/Linux/Java all on PPC.
It was actually a close thing, but someone else will have to write that story.
At that point, unless the SPARC hardware has some definitive cost/performance advantage, we'll buy x86. SPARC is for legacy.
Same applies to POWER, BTW. How many new apps have you seen in the past 10 years that were designed for POWER?
And the 6900's features with regard to HA in the field fell well short of advertised.
Now you can get a Linux box with 4TB of RAM, so no one should buy Solaris over Linux.
And you could cram even more x86 cores with up to five Xeon Phi coprocessor cards, while Intel's supplies last.
IBM Services do a lot of support work for Solaris around the world.
But yeah, other than that it doesn't make a lot of sense.
Red Hat (if they could have afforded it) would have been an awesome home for Solaris/Sun.
Red Hat gets a nice check from us once a year. It isn't free by any means.
We had a bug in our installer recently where it wouldn't work at all on RHEL (I stopped testing on RHEL a few years ago because compatibility was so reliable that it never behaved any differently from CentOS). It took almost 24 hours to get a bug report about it (we see about a hundred new installations a day). So...it may be even less than 1/10th.
That said, we mostly operate in the low-end web hosting market: solo web developers, small design shops, web hosts selling to solo web developers, small businesses, etc. Our software rarely ends up in huge enterprise deployments. Even for our customers that are big businesses that use RHEL on the backend, they might have CentOS on their web server, because it's just a rental and that's what their web host installed for them. So, our numbers are certainly skewed toward CentOS because margins in web hosting are razor thin.
Usually in the same kind of companies that use tons of FOSS software and never give anything back.
Is that true? Don't companies interested in Solaris pay for their OS? Weren't they competing against companies like Red Hat?
I came up during that middle era when the shift was happening. I was an early adopter of Linux, but all of the real training I got (my employer at the time paid for it) was Solaris-based. But, even with the training and access to Solaris, I preferred Linux. I just had more comfort with it because it was my daily driver. When people I worked for were making decisions about an OS, the recommendation they nearly always got from me was "Linux". And, I believe that played out millions of times to get us to the world we're in today.
So, yeah, Solaris was competing with free, but not always at the business level...the part that mattered was "how many people with influence are using this OS as their daily driver?" And, Linux was/is a phenomenon. People love Linux. People loved Solaris, too, but it was a much lower number due to lack of access...early days of Linux, you couldn't even get Solaris without a SPARC box to run it on. Later, they made x86 Solaris free, but it was too little too late, and by that time Linux was better than Solaris on a number of extremely important metrics (package management and package selection, for example, but also in terms of just plain fun).
At work, GNU/Linux was just an internal server; all real work was being done on Solaris, HP-UX, and AIX servers.
So fast forward to modern times, and even Microsoft uses Linux kernel syscalls in their new POSIX personality subsystem, instead of actually supporting POSIX.
POSIX support is in the API, not the syscalls. musl libc places great emphasis on POSIX conformance. So if you ran a musl-based distro atop WSL, that should give you what you want, or at least something closer.
What do you mean? Using Linux kernel syscalls is just one of many ways to "actually support POSIX".
Do you know that by default GNU tools don't respect all POSIX-expected behaviours, and that this needs to be explicitly enabled?
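For instance (a sketch assuming GNU coreutils with glibc-style option parsing; other implementations may differ), the `POSIXLY_CORRECT` environment variable is the usual switch for strict POSIX behavior. One visible difference is argument permutation: by default, GNU tools parse options even after operands, which POSIX forbids.

```shell
# Demonstration, assuming GNU coreutils: POSIXLY_CORRECT switches
# GNU option parsing into strict POSIX mode.
cd "$(mktemp -d)"
touch -- a b -l          # three files, one literally named "-l"

# Default GNU behavior: arguments are permuted, so the trailing "-l"
# is still treated as an option (long listing of a and b).
ls a b -l

# POSIX behavior: option parsing stops at the first operand, so "-l"
# is treated as a filename and all three files are listed.
POSIXLY_CORRECT=1 ls a b -l
```

The same variable affects many other GNU utilities (and glibc's `getopt` generally), which is why scripts written against POSIX semantics can misbehave on GNU systems until it is set.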
It's not bias. I'm a (mostly) C/C++ dev, and very curious about some open source implementations, and the Solaris C source code is the most beautiful complex C codebase I've known.
Linux is what it is now because of the amazing number of man-hours put into the codebase. But given the quality of the Solaris code compared to Linux, I bet that if we could measure how many hours/engineers/dollars it would take to bring features to parity in both OSes, Solaris would not only require fewer people and fewer hours, it would probably be less buggy too.
If Solaris had been open sourced at the right time, and not too late as it was, I'm sure it would probably be the top Unix flavor by now.
better bsd's > not as clean bsds > solaris > linux
with the not-as-clean BSDs and solaris being on roughly the same footing. much like SMF requiring XML, solaris code, while very clean, seems very overengineered and not as elegant to me.
that said, i really haven't hacked enough kernel code to deserve to be commenting, so yeah..
Linux has fewer bugs per line of code than any other major OS in use.
Linux code is the "benchmark of quality": http://www.pcworld.com/article/2038244/linux-code-is-the-ben...
I did my internship at Nortel. A decade later and having worked from small shops to IBM and beyond, nothing comes close to the quality of people/code that I saw at Nortel.
From another perspective:
It is up to you to carry the torch now.
What is the incentive for quality?
You aren't looking at the average workaday C program from that era. I guarantee you that OpenSSL, for example, does not look cleaner than code of today.
The challenging part for those of us in software maintenance jobs is balancing the need to refactor with the need to add features. It colors your opinion about a lot of things. You start to evaluate methodologies, technologies, frameworks, and even library choices by their impact on long-term maintainability.
I frequently find myself at odds with primarily greenfield developers over tool choice, because I'm looking into the future and it doesn't look pretty.
I came to the same conclusion and for most projects there's not really a lot of good choices there outside of those two.
Even more disappointing was that RIM/Blackberry had every opportunity to take that mantle (business/enterprise IP telephony) and didn't even try.
I'd hate to think Sun's demise, in the alternative, was due to their hippy open-standards approach, which is very appealing to engineers...
I wonder what this means for their big iron, such as SuperClusters? Those still run Solaris.
But then Oracle doesn't seem to have the organizational capability to start major new successful product lines anymore. They grow through acquisition.
Also, some of those "firings" come with a decent chunk of money; maybe some of the folks who stayed made a rational choice of waiting until fired, then will move to a prearranged job somewhere else.
Oracle is expert at slowly bleeding teams while suppressing pay to milk products for all they’re worth. They are developer-hostile (including to employees). It is career death.
If Oracle acquires a partner you depend on, you have 12-24 months to find an alternative before they cut your legs out from under you and steal every last drop of profit from the relationship you have with your customers.
Don’t believe any promises to the contrary. Oracle promised ours would be different. They gave us pay raises to stick through the transition. It was all a ruse. Once we were in the jaws of the machine, stack ranking took over, raises and bonuses were crap, and a lot of architecture astronaut garbage rained down from above. They increased the price of our product by two orders of magnitude, which led to massive revenue gains. They simultaneously shrank the team and claimed there was no money for bonuses or equipment. Developers have a 5-year laptop replacement policy.
I repeat: get out!
I'll admit that the way they've handled the recent layoffs is atrocious, with most employees finding out via FedEx notification and a pre-recorded concall message. Rumors of this major cut have been circulating for months. I've lost many good friends with 10, 20, 30+ years in Sun/Oracle. But I think Oracle gave hardware a fair shake.
Full disclosure: I worked in a Solaris dev/sustaining group until this past week.
I’m telling people forcefully because Oracle has been doing the acquisition game for a very long time; they’ve figured out how to string people along to get the maximum value out of the acquisition. I personally lost out on thousands in pay by sticking around for too long.
Oracle as a company does not value engineers. A software engineer is scum compared to sales. If you want to be an engineer and make the real money (and get any respect) work in Sales Engineering. You’ll be away from home for 40 weeks a year but you get decent hardware and a small commission from the deals.
For those with career ambitions or self respect my original advice applies: get out.
As for SPARC, Oracle does seem to have invested heavily, in part because of the elaborate self-delusion that Ellison seemed to have that he could develop magical database hardware that would somehow repeal the laws of physics.
As for the warning, it is indeed apt; Oracle is a mechanized and myopic profit-maximizer -- a remorseless and shameless corporate sociopath that lacks the ability to feel anything at all for its customers. Yes, your products will die of asphyxiation and incompetence and so on, but the much more acute damage will be to one's sense of purpose in the world: working for Oracle is a nonstop trip to either an existential crisis or a mercenary's existence (or both). And as many discovered on Friday, working for such an entity out of a noble (if misplaced) sense of duty or loyalty is pointless; Oracle feels nothing for you, its employees, for the same reason it feels nothing for its customers or its partners or the domain or the industry or society writ large: because it feels nothing at all.
Guess who that "bcantrill" person is that you replied to :-)
Any idea what (if any) academic or other foundations this delusion had, and how far Oracle got before cutting their losses? Rock seems to have been suffocated before the ink on the acquisition was dry, so I'm assuming that's not it.
I love reading about the dead-end roads of computer engineering, especially those that had a few gigadollars driven down them.
I wish it were more funny but it's not. Oracle has a special way of decimating open source projects.
Actually, all three of the mentioned open-source projects Oracle has "decimated", have gone on to live happy, healthy, productive lives.
Perhaps Oracle just pushes baby birds out of the nest to see who can survive on their own, and who falls to their death.
Haha, what? Oracle (silently!) closed-sourced Solaris again seven months after its acquisition and much of the core talent walked.
Went to a bunch of architecture meetings, and saw that nobody had a hint of a clue. "Project Fusion" was supposed to fix everything... As far as I know they're still working on that, some 12 years later.
Then I tried to get myself laid off; there was supposedly a list you could put yourself on to be laid off with severance. After one month of waiting I'd had enough and quit. So for one month I "worked" at Oracle. Best decision in a while.
That said, I do have some engineering friends who work at Oracle, and they generally like it, so your mileage may vary.
We are planning to move to another CPQ in the next few months, but that's not because of Oracle at all, or because the product got worse under Oracle.
But then we also use the Oracle DBMS as part of our product, and we are moving away from that because we hate Oracle's licensing/support costs, and while Oracle can do a lot, we only use a limited subset of its functionality.
We merged with another company that already uses Apttus and has a good process around it.
I haven't seen it live, nor seen anything on the configuration, implementation, or support side yet.
That's pretty stupid advice since you're most likely vesting some very profitable stock options.
Most people won't really have that kind of stock.
Retaining the employees of the acquired company that you want to retain is a very important part of the process.
>>... most of the recent innovations in Solaris's core technologies (DTrace, ZFS, Zones, etc) have all happened in illumos.
> As a core Solaris dev at Oracle, I can tell you that's not true. I just can't prove it to you. :-(
Among the more interesting topics Roch wrote about were some enormous changes to the ARC and L2ARC, the ZIL, encryption, spa_sync, sequential scrub & resilver, which LBAs to choose when writing, and the scalability of rw locks. OpenZFS is still reinventing several of these (e.g. sequential resilver and persistent L2ARC are in GitHub PRs now), albeit in generally very different ways.
If he is able and willing to participate in OpenZFS development, the whole project and its close relatives (e.g. ZFS on Linux, OpenZFS on OS X aka macOS) will benefit from his having explored the invention and development of similar wheels.
I do not know if he is still with Oracle. Either way, if you move quickly, the blog is likely to survive until after Labour Day.
(Someone should archive it for posterity!)
Sad to see the loss of diversity in the operating system space. Thank you SunOS & Solaris for all the goodies over the years - Zones, ZFS, NFS, AutoFS, dtrace, etc.
But such reactions happening now seem to indicate that people missed the attempted reproprietarization of OpenSolaris, or at least missed the marketing for its successors? Well, here's what I see as kind of the canonical video detailing everything up to that drama point: https://www.youtube.com/watch?v=-zRN7XLCRhc As far as I understand it, Oracle has been pretty irrelevant to anything Solaris-related since then.
For those who have not worked at Oracle or have little understanding of Oracle's internal culture, I recommend this nice article about why James Gosling, the creator of Java, quit Oracle: http://www.eweek.com/development/java-creator-james-gosling-...
If there is code in there that turns out to be copy-pasted from somewhere else, open sourcing makes it more likely that people will find it. That could be expensive (e.g. when some BigCo owns the copyright on code they have been selling for decades) and/or have even more serious consequences (e.g. when there's a GPL-licensed code fragment in there, and they linked it with a part they want to keep commercial)
Answering that question conclusively can be very expensive. They may not have a full history of the code, and even if they have, it may not contain all the metadata needed.
They may fear releasing the source opens them for patent lawsuits or may have patents they aren't willing to give up, and fear that open sourcing it without any patent clause will not give them much goodwill.
> some telecom stuff they wanted to get out off.
I'm guessing that it might well have been?
Copyright. That intern who worked at SGI or NetApp who (un)knowingly reused some of a project they had on their laptop at their new place of employment that was actually software owned by their old place of employment. Scrubbing that or getting proper (open source) licensing for that bit, along with all the legal headaches that entails. Remember that we're talking about code that has been around since effectively 1982 with Sun UNIX 0.7 (or SunOS 4 as Solaris 1.x in 1991).
Licensed from others. I recall that various parts of non-Sun operating systems had licensing for parts of NFS. It wouldn't be surprising to find that parts of Solaris had licensing from other companies too. Including directly licensed code likely wouldn't be compatible with the license from the other company. Removing the licensed code to make it linkable is an option, but a time-consuming one that diminishes the value of the overall project ("what do you mean I need a license from HP for something that DEC wrote?").
Your patents. Some open source licenses have patent clauses in them. Sure, you can do the work to license those patents under the terms of the open source license... or choose one that doesn't have them. The former isn't at all in the interest of Oracle; the latter is "Here's some BSD code... we don't know what patents are in there, but if you use them we will sue you."
Other patents (part 1). Surprise! In open sourcing the software, it is discovered that some intern reused the methods learned at another company in part of the product that has made its way to today. Now you've got the lawyers looking for blood for a decade of royalties.
Other patents (part 2). Recognizing that NetApp didn't want ZFS to be open source, and that there were some cross-licensing aspects with Oracle in the ZFS settlement... they'd probably have something to say about it. There are probably some WAFL patents in it now.
Competitors. There are things in Solaris that would help competitors to Oracle products. Yes, open sourcing with a copyleft license would mean those competitors would have a harder time using the software, but it's out there.
Partners. There are things in Solaris that help partners. Open sourcing Solaris, while generating goodwill with the community, reduces Oracle's leverage with those partners when negotiating deals. Some of those partners may also have an interest in parts of Solaris not being open source.
So... nope. There are lots of legal issues and many business reasons to keep it closed source.
Consider also that this covers development from March 2010 (the first closed-source release under Oracle - prior to that it was CDDL) until October 2015 (the last release), and weigh the amount of value that would be added against the amount of effort required for all of the above.
The way Oracle behaves as a company is quite common in that universe.
I will miss having Solaris around.
However those with heavy use of external consulting and off-shoring, seem to come pretty close.
Using consultants is a business decision which helps the company hire people for the short term of the project and also offload risk. Not all companies have the capability to handle all kinds of risk. For example, software companies don't specialise in financial models and investing. So they don't take on financial risks by investing in derivatives and other instruments to make profit. Whenever possible, risk outside core competencies is outsourced. This is good for the company as then it can focus on the core business and make money.
Off-shoring is bad in the sense that jobs are lost in the local economy. But this is again similar to having a factory in China as opposed to San Francisco. It brings in more expertise at a reduced cost, and helps make things cheaper in the end. For example, when your insurance company uses off-shore consultants to build its software, the software is cheaper, which translates directly into lower insurance premiums. The same goes for many other products.
While I understand that software is a different beast to build than toys or other products, once built, the normal theories of economics still apply.
Outsourcing software is like burning all the design documentation for hardware you're having someone else build. Even something as well known as injection molding tends to work vastly better if you have experienced staff as part of the design process. And software is worse than that, because the design process is part of development, so outsourcing means you don't even understand the problem space.
If I wouldn't pay the consultants in the office across the street to work on my core IT (outsourcing), why would I pay someone separated by oceans, timezones, language, culture... (offshoring)?
While working in healthcare, our corporate overlords repeatedly rammed the "blended shore" model down our throats. Which never worked. (Got to know some nice people, though. So there's that.)
The easiest part of our job was the coding. Requirements gathering, analysis, project management, customer relations, QA/Test, etc. Working shoulder to shoulder with our clients, it was hard enough. No way our work could be further delegated while still delivering something useful.
Preaching to the choir, I know, apologies.
Just like any other short-term strategy, could be appropriate sometimes, but very rarely is.
Not having in-house expertise has the really big problem you mentioned because the only way to avoid these problems is if you have experienced oversight and that's almost inevitably the first “expensive” staff cut.
Off-shoring takes all of those problems and amplifies them with a big communications latency hit. I've never seen that go well except when an entire product can be handed over, including the management.
A decade ago, at least in my experience, Chinese products were synonymous with bad quality and were not considered reliable. Nowadays, while those kinds of products do exist there also exist very good products designed and made in China (eg: DJI, OnePlus, etc.).
I reckon it's still early days for software and, going forward, we will learn how to build higher-quality standard stuff and even high-quality novel products. Generating this stigma against consulting and off-shoring before we have had time to properly analyse the cost-benefit tradeoff doesn't help anyone.
They might be and are probably bad in many cases, and cases where they did not work should be publicised and studied. But making broad statements and correlating them with bad work environment is not helpful.
In the short term it can often work really well. Somewhat more rarely it can also work long term.
The trick IMO is to be hands on. Apple's approach to manufacturing is to be aware of what's going on even if they are not actually doing it.
The pace at which Oracle develops software, the flagship Oracle Database for example, is ridiculously slow. The Database team takes about 3 months to 6 months to develop tiny changes (say 10 to 20 lines of code). In other Fortune 500 companies I have worked for, I have seen such changes taking about a few days (5 days max!). I am not kidding! What takes say 3 days to develop in a normal software company may take about 3 months to develop in Oracle. And mind you, Oracle Database is one of the premier departments of Oracle; other departments are even worse!
Oracle is also remarkably apathetic toward its employees. The link shared by foo101 has a few anecdotes that highlight this apathy. In fact, when I read James Gosling's account of Oracle in that link, I thought, "Wow! This is so accurate. Even someone of the name and fame of James Gosling had to face the same lowly problems at Oracle that relatively unknown developers there face."
Disclaimer: I worked for Oracle for 2 years.
It's no different with SQL Server or DB2. Each vendor has its captured market, with very little power to enlarge its share and commensurately little need to innovate. Customer demands amount to operational window dressing. Why spend money on engineering when customer and vendor are both satisfied?
> Oracle lives in a market protected by high barriers to entry and low customer expectations.
This is false. Oracle has a very tough competitor in Microsoft SQL Server. In fact, Oracle tries to catch up with existing Microsoft SQL Server features with every release. Like it or not, NoSQL database servers like MongoDB, Elasticsearch, etc. are also competitors of Oracle. That is why Oracle was forced to introduce support for storing, indexing and querying JSON in their database. This was a completely new feature that required Oracle to develop new querying syntax, new querying mechanisms and new constraint-validation syntax. There are many such examples where Oracle is forced to improve due to competition.
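For concreteness, the JSON support being referred to looks roughly like this (a sketch in Oracle 12c-era syntax; the table, columns, and paths are invented for illustration):

```sql
-- Constrain a column so it may only hold well-formed JSON documents.
CREATE TABLE orders (
  id  NUMBER PRIMARY KEY,
  doc CLOB CHECK (doc IS JSON)
);

-- Query inside the documents with JSON path expressions.
SELECT json_value(doc, '$.customer.name') AS customer_name
FROM   orders
WHERE  json_exists(doc, '$.items');
```

The `IS JSON` check and the `json_value`/`json_exists` operators are exactly the kind of new constraint and querying syntax the comment is describing.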
> And no one, from CTO to DBA, makes any demand to substantially improve the product, such as by rectifying theoretical problems with SQL that have been recognized for decades.
This is true. But everyone from the CTO to the DBA makes a lot of demands to substantially improve the product in other ways, such as by providing new features for scalability, robustness, security and auditing. This is why Oracle Database has seen a lot of enhancements to multitenancy in the last few versions.
> Why spend money on engineering when customer and vendor are both satisfied?
Oracle does spend a lot of money on engineering. Why? Because it has a lot of development to do on its database to remain competitive. If anyone thinks that the field of RDBMS is mostly stagnant and no new development happens in this area, that is a gross misunderstanding of this market. The database market is still very competitive, especially as Microsoft SQL Server leads the game with modern features and as open source databases and NoSQL databases eat away at the market share.
See the following two URLs for example to see how Oracle has been adding new features in the last two releases:
While all this looks good on release notes, it is only someone like me, who has had the misfortune of being an Oracle developer, that can vouch that the development process and the development pace within Oracle is hopelessly archaic and painfully slow. Oracle still follows the waterfall model of development, for example. There are probably hundreds of reasons and contributing factors to this. A few of those hundreds, off the top of my head:
* Management that does not care about absolutely anything apart from their own promotion.
* A culture that rewards talking out loud rather than actual work.
* Top-down heavy handed management that provides zero autonomy to engineers, thus no motivation in engineers to innovate and improve engineering practices.
> This is false.
You have no further to look for Oracle's captured market than the US federal government. When Snowden talks about the realtime interception, tracking, and monitoring of any and all electronic communications, worldwide, he's talking about the NSA using Oracle databases to do it. Ellison made the company successful by selling the as-yet-unproven technology of a relational database to the FBI (IIRC), and it's just continued from there. This is the environment that led Scott McNealy, then CEO of Sun, to famously quip, "You have no privacy. Get over it." He knew that the NSA was collecting everything it could, and storing it in an Oracle database running on Sun hardware. Well, the commodity hardware caught up, but Postgres is still struggling to match the features Oracle had 20 years ago, so Oracle DB is still the king of enterprise databases, where cost is no object. Ellison owns government IT, which is what leads him to be so smug about his success. Even if all Fortune 500's would cut Oracle off, Oracle would continue to rake in piles of cash from the government. It is the definition of a captured market.
(And I don't mean golf.)
> Also, asked whether in hindsight he would have preferred Sun having been acquired by IBM (which pursued a deal to acquire Sun and then backed out late in the game) rather than Oracle, Gosling said he and at least Sun Chairman Scott McNealy debated the prospect. And the consensus, led by McNealy, was that although they said they believed "Oracle would be more savage, IBM would make more layoffs."
OpenSolaris is a discontinued, open source computer operating system based on Solaris created by Sun Microsystems. It was also the name of the project initiated by Sun to build a developer and user community around the software. After the acquisition of Sun Microsystems in 2010, Oracle decided to discontinue open development.
I believe they should consider this because Btrfs, which Oracle itself started, is going nowhere fast; and because Oracle customers who run Linux will benefit as well.
I'm imagining, and as things stand, my imagination is hinting at crappy battery life due to lack of tuning.
But if you want to try, Solaris was forked long ago --
Illumos is out there.
Sun's purchase of Cobalt was badly timed, since the dot-com bust happened soon after.
And the bizarre quasi-portable-but-definitely-not-laptop SPARCstation Voyager:
That said, consider the flip side of the heterogeneous aspect. You were unlikely to be able to run software on two different platforms that could communicate in a meaningful way. It was duct tape everywhere. There was no "cloud" that one could get significant computing resources on. You could pay (much more) for time on a shell at uunet or another isp... or buy your own for $$$.
A 250MHz Octane MXE with 128MB RAM and a 4GB disk had a US list price of $47,995 in 1998. That's $72k in 2017 money. Consistent technology stacks have reduced costs to the point where we think very little about the hardware anymore - and by making those decisions unnecessary, they have allowed for improved portability of skills and freedom from worrying about the hardware abstraction (until it leaks).
Now, when most stuff is settled-upon, it's like cars. There are differences, but not really. Turn lights to your left, wipers to your right, wheel turns left and right, there's a manual stick or automatic, pedals... it's all there, where you expect them to be. And that's good! Times were a bit more pioneering back then, naturally.
Sounds like you can't see the wood for the trees. :P
It just didn't make sense that Sun kicked AT&T's ass with BSD Unix, and then capitulated to them by switching over to SVR4.
Yeah, yeah, I'm sure there was some business reason, but it was a bitter pill to swallow.
Good times, good times...
And on a related note, I suppose Oracle won't open their diverged Solaris even if they plan to shut it down? In the past, Sun also planned to open their Sun Studio C/C++ compilers. That never happened because of Oracle.
Makes you wonder where Linus stores his old socks.
With the exception of Red Hat I totally concur that Linux has moved very far because of financial infusions from industry. But at the same time that 'toy operating system' was already quite usable before any of that happened.
As for Red Hat, they exist because of Linux, not the other way around.
And rightly so, the vendor on the other end needs to know they've got an actual live person to work with on troubleshooting. Without Redhat I have no doubt Linux would still be alive and well, but it would NEVER have gotten the foothold it has in the enterprise today (coming from someone who worked at one of those hardware vendors back in the day and tried to push for support of other distributions).
On a not completely unrelated note, there was something I read in the Kubernetes Steering Committee bootstrapping process that sounds really logical in the context of this news.
In Kubernetes Steering Committee, there will be no more than 33% membership from any given company. So if Docker, and CoreOS, and Weave, and Google, and Microsoft, and Amazon all come to the table and somehow get equal representation, which seems possible given how I understand the voting process, ... that's great, and no one company can "silent EOL" the product of Kubernetes.
And even if one of those companies is significantly over-represented within the list of members of standing that will vote for the Steering Committee members, and the second of those companies significantly eclipses any of the remaining nominees, the steering committee will still probably be in the hands of at least 4 companies.
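The committee arithmetic above can be sketched quickly; the 33% cap comes from the thread, while the seven-seat committee size is my assumption for illustration:

```python
import math

seats = 7            # assumed steering committee size
cap_fraction = 0.33  # "no more than 33% membership from any given company"

# Largest whole number of seats a single company may hold.
max_per_company = math.floor(seats * cap_fraction)

# Fewest distinct companies needed to fill every seat under the cap.
min_companies = math.ceil(seats / max_per_company)

print(max_per_company, min_companies)  # → 2 4
```

With those numbers, no company can hold more than 2 of 7 seats, so at least 4 companies must be represented, matching the "at least 4 companies" claim.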
I'm really quite miffed about a few well-liked, community-driven things suddenly getting shut down by their owners lately. Not going to name any names, but in meetings to determine our organization's future direction in software, it's going to have to come to everyone's attention that, in general, overall momentum is a whole lot more important than corporate backing.
We shall remember Solaris for all the good things that came out of it!
ZFS, one of the best file systems, with copy-on-write snapshot functionality.
Solaris Zones. Proper containers before Linux and LXC/Docker existed.
DTrace for application and kernel performance.
And the Sun hardware workstations and servers that Solaris powered. I still remember watching 4th of July fireworks being live-streamed remotely on a Sun workstation running Solaris.
I feel the same way as a client. Everything I've used that they've purchased has turned out for the worse. Be it neglect or price increases the promises always exceed what's actually delivered.
Moreover they're transparent about their desire to lock you in and then press that to their advantage.
There are few companies I actively avoid, but they're at the top of the list.
I came here to reminisce about the beauty of Solaris from a long time ago, and your comment struck a nerve.
Quote from American Gods:
“A single product manufactured by a single company for a single global market. Spicy, medium, or chunky! They get a choice, of course! OF COURSE! But they are buying salsa.”
I (thankfully only) used to do Linux BSPs in a former life. In the last year or so of doing that, I think we spent about 15-20% of a project's time debugging systemd problems and working around it being too smart for its own good. 20% for the bloody init system sounds fine until you realize the rest of the time included stuff like writing or expanding device drivers.
It would be great to see in-depth experience reports for systemd, good and bad. The overwhelming majority of anti-systemd commentary has just been noise for so long, and as somebody who is very much in favor of systemd, I'd love to see some real discussion and actual informed criticism.
They keep having embarrassing security exploits (like remote code execution in the DNS reimplementation, or handing root to strange usernames by design).
Many people say they broke logging because the binary files create administrative nightmares and are flaky by design. Ubuntu LTS's systemd logging subsystem definitely broke a bunch of production machines I work with by stealing control from rsyslog during a botched update. We have a bunch of tooling for log processing and shipping. The systemd binary format is a usability nightmare compared to .gz files.
Being a member of the "video" group is no longer enough to use DRI or the new rootless X11 stuff. One of the crucial system calls has been hardcoded to only work when invoked by UID zero. The kernel maintainer rejected a one-line patch to fix it. The argument is that systemd can launder the call through its own authentication subsystem, so the kernel doesn't need to implement workable permissions for /dev/ anymore. I have no idea how far that brain damage has spread. Just "chown root:root /dev/video; chmod og-rwx /dev/video" if you want systemd! Don't proactively break every non-systemd distro out there by intentionally crippling the kernel API!
I've noticed that systemd debian-derived desktops age poorly -- uninstall a bunch of packages and reinstall, and you will find you can no longer log in correctly. I never managed to root cause it. It looked like init issues with multiple repros across multiple OS vendors.
If I were choosing a "most interesting alternative to systemd and classic init" award, it'd go to GNU Shepherd which is the init system of GUIX as well: https://en.m.wikipedia.org/wiki/Guix_System_Distribution
All the talk about BSD and launchd has me thinking Shepherd might be on to something. Launchd is XMLed up from here to Sunday, whereas Shepherd can have all the benefits of Scheme's S-EXPRs being tree structures while also having a great scripting language (Scheme) at your disposal.
It was kind of like learning Unix after coming from a DOS background.
If FreeBSD switches to systemd, I'll stick with OpenBSD. OpenBSD is as likely to switch as they are to rewrite the kernel in rust and go.
TrueOS just started using OpenRC and seem happy with it so far.
I suggested to them back in January 2017 that since OpenRC has s6 integration, they might do well to add s6 to that to gain full service management. I never received a reply. I haven't heard that Laurent Bercot was contacted, either.
I personally use the nosh system and service managers on FreeBSD and TrueOS, of course. I just wrote up a more detailed account of how I used them on TrueOS to run the PC-BSD desktop login and chooser utility under proper service management, and to improve several parts of that subsystem.
I don't think Linux would have gotten this far if its core hadn't been influenced by the design principles of Unix, and if the kernel project weren't run by a person who is careful about incompatible changes. Look at ReactOS or Wine. I'm worried that systemd might prove to be a major headache in the future.
In my experience, systemd fails on both points. For example, an understanding of user permissions under systemd is probably beyond 99% of developers' expertise.
As an application developer and hobby sysadmin, systemd is a godsend over the misconfigured and broken stuff distributions have delivered for years.
systemd has made my work a lot more effective, and I’ve gained massive productivity.
And that is roughly the size of the problem. If you're an application developer or a hobby sysadmin then probably systemd is good for you, but if you're an experienced sysadmin it spells 'fixed what wasn't broken' and it re-introduces many issues that were already thought about, taken care of and laid to rest.
Note the rise of "devops", which is basically about getting a straight line from devs to management so devs can sideline ops and their naysaying of the latest shinies devs want to sprinkle the projects with.
Another thing is that there is less and less interest in maintenance, because maintenance is not fun. The GNU generation is slowly leaving, and is being replaced by the "fun" generation that is hell-bent on rewriting working, if crufty, systems using the latest language fads over a caffeine-fueled weekend...
Dumbing these down to make it possible to run these highly skilled and specialized jobs as part-time job without relevant training is one of the main reasons the state of software is what it is.
P.S. this is why, contrary to graybeard whinging, boot time matters and sysvinit cannot possibly keep up with systemd in the cloud. The faster your instances boot, the less capacity you lose while they're down.
Even then, I think you'll find you want the bare metal the instances run on to have high uptimes (on the order of years, not minutes), since the hardware with optimal $/perf can fit more and more workloads per machine (I think this is all Moore's law is doing to help compute these days). That means you need a decreasing number of physical machines to hold your workload. At some point your "cloud" has 10 nodes instead of 1000.
Fun exercise: "cloud scale" code is typically 5-100x slower per node than single-machine scale-up code.
How much money would you save by consolidating smaller workloads to big machines? More importantly, how much developer productivity would you gain by eliminating network latency / marshalling for internal requests?
I think you'll see an increase of developers "coding around" devops over the next few years. I could be wrong, of course.
As an experienced sysadmin that's a really sweeping claim to toss out without details or supporting evidence — the latter being especially important given the amount of hyperbole bandied about.
The next largest SysV replacement was Upstart, which solved many problems but had curious oversights (e.g. restarting with a delay or backoff, and needing many releases before adding stdout/stderr logging or launching as a user other than root), and SMF/launchd, which weren't compelling enough to overcome their respective platforms' drawbacks. Yes, you can install alternate init systems or run things under something like supervisord, but supporting that was quite tedious compared to a solid standard init.
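For concreteness, the features listed above map onto a handful of unit-file directives in systemd; a minimal sketch (the service name and paths are invented for illustration):

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=Example service

[Service]
ExecStart=/usr/local/bin/myapp
# Run as a non-root user (an early Upstart pain point).
User=appuser
# Automatic restart with a delay between attempts.
Restart=on-failure
RestartSec=5
# stdout/stderr are captured by the journal without extra tooling.
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

Each of these took Upstart several releases (or never arrived), whereas here they are one declarative line apiece.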
As a software developer, being able to target one init system which has all of the features I need and no real drawbacks is similarly a very nice change from the past, when I needed to support variants for each major Linux distribution while wishing they'd hit feature parity with Windows NT 3.1 (1993!).
The fact that every major Linux distribution has adopted systemd suggests that the reintroduced issues, where they aren't gross exaggerations, aren't as important as claimed; similarly, the features commonly dismissed as unnecessary inevitably turn out to be useful to part of the larger Linux community, even if a particular detractor doesn't share those needs.
> As a software developer
So which will it be?
There may be a No True Scotsman fallacy creeping in here, but it seems to me that anybody who has enough time to be a developer likely isn't a full-time sysadmin. Now, of course there are some miracle workers out there, but I've met enough sysadmins to know I'm not one of them, even though I can probably hold my own on the UNIX command line and manage to get through a working day without feeling I've wasted my time.
> The fact that every major Linux distribution has adopted systemd suggests that whatever reintroduced issues aren't gross exaggerations aren't as important as claimed
It might simply mean that when RedHat moves the crowd follows because it is impossible to sustain the parallel development of two init systems.
And I'm all for that, I'd rather have one system than yet another fragmentation but it feels as if in this particular case that decision was not arrived at in a way that takes into account all the criticisms leveled against the 'upstart' new init system. (Pun intended.)
I won't claim that systemd is perfect or that I'm happy with every detail of its development history but in practice I find it's not something I need to think about very often. That was true of later Upstart releases, too, so I mostly don't get the bitterness some people have: flip a coin and either way we have a nice quality of life improvement over SysV. Yes, Red Hat carries a lot of weight but they also employ a ton of open source developers so it's not like that's unearned.
We need a system that people can install at home, and that never needs someone to configure or maintain it.
We need a system that people can throw on a VPS, and that needs no maintenance or configuration.
We need a system that a company can deploy over clusters of tens of thousands of machines, and that just works.
If you need a human to manually configure this stuff, it’s broken. The only situation where systemd isn’t useful is when you’re a small company, but large enough to be able to afford an ops guy for every issue there is. Generally, if you need ops to configure the base OS, you’re doing stuff wrong.
This whole point about devops and systemd is about automating sysadmins away, and this is a very necessary and worthy step.
> The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.
If you want that every child can run linux, that you can run linux on physical Internet of Things devices that are supposed to run for decades without maintenance (because you cannot access them), then you either have to build something so this can work,
or you end up with Windows 10 IoT and Windows 10 Cloud running everything.
No one’s gonna hire a sysadmin so they can manually upgrade every lightbulb and fire alarm on the planet, and so they can upgrade all of the servers running your containers manually.
Do you think Google has sysadmins manually pulling every update for every server, writing every config? Do you think they will just because you eliminate automation?
And that’s exactly what systemd does. It provides a baseline that just works, but you can always dive in and modify everything.
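The "dive in and modify everything" part usually means drop-in overrides, which layer your settings on top of the vendor unit without editing it; a sketch (the service name and command are invented for illustration):

```ini
# /etc/systemd/system/myapp.service.d/override.conf
# Created by hand, or via `systemctl edit myapp.service`.
[Service]
# An empty ExecStart= clears the vendor-supplied command first.
ExecStart=
ExecStart=/usr/local/bin/myapp --verbose
Restart=always
```

Package upgrades replace the vendor unit but leave the drop-in intact, which is the "baseline that just works, but modifiable" property being described.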
No, this has some basis: http://assets.csom.umn.edu/assets/71516.pdf
You never hear anyone post "oh wow I love systemd/my dell laptop/this website" unless specifically asked, or when answering seemingly biased or outright incorrect "bad" reviews. I personally enjoy managing my computers much more since systemd.
It's like that with other technology stacks, too. For instance, some people were burned by CMake in 2008, never tried to understand its logic, and you still see them complaining on forums to this day, so you might conclude it's a shitty, unused build system; but when Microsoft and JetBrains ran actual surveys it came out very strong (more used than make) and growing:
Then say what they are. Simply posting that systemd is bad, appealing to your position as an experienced UNIX user without giving specifics, is signalling at best.
I probably count as a Veteran System Administrator myself, and personally I am tired of anti-systemd people coming into conversations that have little to do with systemd, and trying to hijack the conversation with content-light posts.
Are you aware of the feature creep and the takeover of every sane unix/linux utility, usually with very questionable results, changing su or the DNS resolver for the worse?
Do you know about the number of vulnerabilities introduced by systemd, then dismissed as not-our-bug or wontfix?
Are you aware of systemd tying itself into every other program they can get their hands on? That makes it harder and harder to run a properly secured machine that does not have systemd. Not saying this to you, since you know system administration, but: this is a BAD thing.
You probably know about all the vulnerabilities introduced by systemd? Coupled with the inability to opt out of systemd, its huge attack surface makes Linux much less secure and more monolithic.
Back to my initial intention: there is a huge trend throughout the industry that kills variety and ties systems into a monolithic mess. Systemd is just one of the data points. I care in the same way about Oracle buying Sun, with one of the main reasons being MySQL. Thanks to the MariaDB folks that lock-in didn't happen, but I knew they would try it as soon as Oracle got its hands on MySQL.
You probably know about all the vulnerabilities introduced by openrc or sysvinit?
Right, you can't, since every script itself could have a vulnerability.
Arguments like this really don't help systemd and show either ignorance or worse.
And in a language without linters or typecheckers.
It’s literally easier to read the entire systemd codebase than to debug the interactions between these init scripts, all slightly buggy, interacting in slightly unexpected ways, and mostly working, even though they never should have.
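The "slightly buggy, mostly working" pattern is easy to demonstrate. Here is a sketch of the classic stale-PID-file status check that countless init scripts shipped (the daemon name and path are made up for illustration):

```shell
#!/bin/sh
# Hypothetical init-script fragment: the traditional PID-file status check.
PIDFILE="${PIDFILE:-/tmp/mydaemon.pid}"

status() {
    # Bug: if the daemon crashed without cleaning up, the PID file lingers.
    # Worse, the kernel may recycle the PID for an unrelated process, so
    # `kill -0` succeeds and we report "running" for a daemon that is gone.
    [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
}
```

A supervising init (systemd, runit, daemontools) sidesteps this class of bug entirely: the supervisor is the daemon's parent and knows its state directly, instead of inferring it from a file that may be stale.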
There is something wrong with those numbers. Can you upload them somewhere or let me know what distro it was? I am curious to take a look.
If there are 1,400 of them, each over 800 lines long, then you have 1.1M lines of init script code, which doesn't seem right.
How many of them did you use daily? All 1,400? 5? 10? Because checking the sanity of ten of them, even if they are a whopping 800+ lines of code each (I'll have to see that), can't be compared to over 300K lines of monolithic systemd.
All of them, it was over several servers, running different distros, with similar services, all with similar but slightly different init scripts for the same packages, and all having to interoperate. All scripts from the distro’s packages.
All of them were running daily, and many of them were constantly causing issues.
In fact, I have a single sysvinit script left on my systems, and it’s exactly that one that doesn’t work reliably.
What is this about?
(And I think Oracle MySQL is still alive/supported)
Supported, yes. Given Oracle's history of handling projects, Solaris being the latest example, I am very glad that we have an active MariaDB.
Care to elaborate on the 'mess' created? I highly doubt you've even used it yourself; it sounds like something you've heard and are repeating for some cheap karma. The init scripts used before were the real mess, if you ask me. It's getting really tiring to hear these systemd rants with no good reasons to back them up. I guess ranting against systemd is the cool thing to do, just like calling Apple users sheep once was: no need for facts, just hyperbole.
While systemd isn't all bad and in some ways for sure an improvement over init (not that that is hard), there are also very questionable architectural decisions that have been made in systemd.
Next up you have the attitude of the two lead developers, Lennart Poettering and Kay Sievers and the way they handle community interaction and bug reports. If it doesn't fit into their rather limited view of how you should use your system and you break it, it's most likely not a bug they're going to bother fixing.
Last but not least, systemd is forcefully being pushed down our throats, not with well-reasoned technical arguments, but with mostly emotional arguments about what they think is best for everyone. And since more and more independent functionality is being integrated into and replaced by systemd, it becomes ever more tedious to maintain software without also hooking into systemd.
* https://github.com/ServiceManager/ServiceManager/ (https://news.ycombinator.com/item?id=10212770)
My explanation for the systemd hate is that it isn't for technical reasons; it's for cultural ones. Specifically, people dislike change. Older people especially don't want to relearn fundamentals that have been reliably stable throughout the years. Young people, OTOH, lack that connection and are more open to change, provided they agree with the rationale.
Nice way of saying that young people need to repeat the errors of the old.
As for systemd: I wouldn't mind if it were executed properly. Boot speed is the worst argument you can make for systemd; with ubiquitous SSDs it doesn't really matter.
If you do want to talk boot speed, compare a sysv install versus a systemd install on an 850 Pro. There are a notable few seconds of difference!
Doesn't mean for one hot second I think I can just puff my chest up about a topic and blow people off when they ask a perfectly good question.
For me the jury is still out on systemd. My biggest concern is that it seems to be slowly taking over everything, and thus violating the core unix philosophy of "do one thing really well". It feels like systemd was started to "improve startup times" for desktop users so they'd be happy. Meanwhile servers were sort of forgotten about, and many of the complexities added by systemd make getting things done harder for day-to-day system admins. I've written some units myself, and I like the many options I have, but honestly it's not a daily job. Writing three lines of bash in /etc/init.d was way quicker and easier to reason about in the heat of "getting shit done".
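For comparison, the minimal systemd counterpart to those three lines of bash is also fairly short. A hypothetical example (the `myjob` name and path are made up):

```ini
# /etc/systemd/system/myjob.service (hypothetical example)
[Unit]
Description=My background job

[Service]
# Equivalent of the old start line in the init script; systemd handles the
# backgrounding, logging, and restarting that the bash version left implicit.
ExecStart=/usr/local/bin/myjob --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Whether that is easier to reason about in the heat of an outage is exactly the open question: the unit is declarative and terse, but it hides the control flow that a script makes explicit.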
All this said I have been using systemd on my machines for a while now, at first I would back it out the second I created a new install image but now I'm trying hard to learn it and understand.
Again... the Jury is still out in my mind.
The unix core philosophy of "do one thing really well" is such a cliche though. "Do one thing really well", yet using a monolithic kernel (Linux, Solaris, *BSD). Microkernels like OpenVMS and GNU Hurd allow one to restart (including hot patching) a part of the kernel. The same is true for running something like Qubes. Apart from the kernel debate there's tons of monolithic software. Software statically linked on a commercial UNIX? You bet. Plus you use a full-blown DE, a web browser, Emacs. Vi? Sure, Vi. Yet people use Vim with all kind of plugins.
The larger attack surface, reduced "git 'r done"-ness when you're in the midst of a hot outage, and increased complexity in tracing what happens give me concerns. Some are touched upon in this StackExchange thread, but there are lots of good threads elsewhere on the Net along these and other lines. Personally, I'd rather see the entire idea of "booting" be looked at again.
The reason sysadmins value the "git 'r done" aspect of System V init is because servers are not booted frequently. But init scripts are changed more frequently than servers are booted, and business application teams forbid booting the server more often than utterly, absolutely necessary; and the sysadmins wanting to boot to test a modification to the init script doesn't count. Dev/QA/Pre-prod change control environments help, devops-based source control discipline helps, but the next time a server is booted is always at least a "sideways-glancing-to-see-what-breaks" moment for many a sysadmin. When the startup sequence breaks somewhere, it becomes a hot outage, especially if correcting it requires application-specific domain expertise outside the OS. In the middle of such a hot outage, the ability to get closer to the problem domain within the shell script is appreciated. Systemd's init compatibility indirection layer helps, and hopefully some thought is given in the future to streamlining this layer.
The entire notion of "booting" has rubbed me the wrong way for an increasing amount of time, though. Microkernels tried to address this, but they never caught on. Solaris and AIX try to address this, and Linux is exploring this, with their live kernel migration features, but they don't really do much to help higher up the stack. The best I can do to mitigate this itch for the time being is highly-available three-node clusters, and regularly moving the application to one of the opposite nodes, booting the inactive node, and testing changes to that boot, and the aforementioned devops-orientation and source control. Having an OS that lets me "re-home" a running application Tandem-Kernel-like/VMWare-Live-Migration-like, to a newly-"booted" state of the OS though, would be the bees' knees.
So all you have is using your age as an appeal to authority? Pretty sad. All that's telling me is that you're used to doing things a certain way and are now upset that your knowledge is being uprooted and you need to learn something new. Otherwise you'll use facts instead of insults to push your argument.
If you belong to the silent but great majority (lol) of users that enjoy and cherish systemd -- just carry on by all means.
If anything there is A LOT of aggression any time somebody says anything 'unwelcome' about systemd.
Can you point to an article or blog post pointing out the bad decisions?
> If anything there is A LOT of aggression any time somebody says anything 'unwelcome' about systemd.
The downvotes came when you stated your age, "believe I know what I am talking about" instead of listing arguments and calling the parent poster 'kid'.
The parent poster did clearly state that I am making it all up and that I have zero experience with systemd. Cultural differences? English is not my main language, but I prefer clearly stating what is what; I'm not really skilled in the underhanded 'polite' attack. 'Politely' saying you are lying = fine; responding that I may be too ancient or experienced to even be able to explain all the fallacies = bad. Gotcha.
Cliché, but I am pointing at the moon and people are yelling at the finger. So be it.
Unfortunately, I never managed to get single node docker compatibility to work, and then there was a design flaw in the inexpensive atom server processors that it runs well on that leads to failure after a year or so.
Faced with a >>$1000 hardware expenditure to get a reliable replacement NAS that's compatible with SmartOS, I jumped ship to Synology and haven't looked back.
My Synology box is way more available than my ISP or Amazon Cloud Drive, and I reproduced months of setup work from SmartOS in an afternoon with the Synology.
I just wish Google had bought Sun for its Java and MySQL; personally I do not want to have anything to do with Oracle as far as tech goes.