"I worked on Solaris for over a decade, and for a while it was usually a better choice than Linux, especially due to price/performance (which includes how many instances it takes to run a given workload). It was worth fighting for, and I fought hard. But Linux has now become technically better in just about every way. Out-of-box performance, tuned performance, observability tools, reliability (on patched LTS), scheduling, networking (including TCP feature support), driver support, application support, processor support, debuggers, syscall features, etc. Last I checked, ZFS worked better on Solaris than Linux, but it's an area where Linux has been catching up. I have little hope that Solaris will ever catch up to Linux, and I have even less hope for illumos: Linux now has around 1,000 monthly contributors, whereas illumos has about 15.
In addition to technology advantages, Linux has a community and workforce that's orders of magnitude larger, staff with invested skills (re-education is part of a TCO calculation), companies with invested infrastructure (rewriting automation scripts is also part of TCO), and also much better future employment prospects (a factor that can influence people wanting to work at your company on that OS). Even with my considerable and well-known Solaris expertise, the employment prospects with Solaris are bleak and getting worse every year. With my Linux skills, I can work at awesome companies like Netflix (which I highly recommend), Facebook, Google, SpaceX, etc.
Large technology-focused companies, like Netflix, Facebook, and Google, have the expertise and appetite to make a technology-based OS decision. We have dedicated teams for the OS and kernel with deep expertise. On Netflix's OS team, there are three staff who previously worked at Sun Microsystems and have more Solaris expertise than they do Linux expertise, and I believe you'll find similar people at Facebook and Google as well. And we are choosing Linux.
The choice of an OS includes many factors. If an OS came along that was better, we'd start with a thorough internal investigation, involving microbenchmarks (including an automated suite I wrote), macrobenchmarks (depending on the expected gains), and production testing using canaries. We'd be able to come up with a rough estimate of the cost savings based on price/performance. Most microservices we have run hot in user-level applications (think 99% user time), not the kernel, so it's difficult to find large gains from the OS or kernel. Gains are more likely to come from off-CPU activities, like task scheduling and TCP congestion, and indirect, like NUMA memory placement: all areas where Linux is leading. It would be very difficult to find a large gain by changing the kernel from Linux to something else. Just based on CPU cycles, the target that should have the most attention is Java, not the OS. But let's say that somehow we did find an OS with a significant enough gain: we'd then look at the cost to switch, including retraining staff, rewriting automation software, and how quickly we could find help to resolve issues as they came up. Linux is so widely used that there's a good chance someone else has found an issue, had it fixed in a certain version or documented a workaround.
What's left where Solaris/SmartOS/illumos is better? 1. There's more marketing of the features and people. Linux develops great technologies and has some highly skilled kernel engineers, but I haven't seen any serious effort to market these. Why does Linux need to? And 2. Enterprise support. Large enterprise companies where technology is not their focus (eg, a breakfast cereal company) and who want to outsource these decisions to companies like Oracle and IBM. Oracle still has Solaris enterprise support that I believe is very competitive compared to Linux offerings.
So you've chosen to deploy on Solaris or SmartOS? I don't know why you would, but this is also why I also wouldn't rush to criticize your choice: I don't know the process whereby you arrived at that decision, and for all I know it may be the best business decision for your set of requirements.
I'd suggest you give other tech companies the benefit of the doubt for times when you don't actually know why they have decided something. You never know, one day you might want to work at one."
I feel sorry for the Solaris engineers (and likely ex-colleagues) who are about to lose their jobs. My advice would be to take a good look at Linux or FreeBSD, both of which we use at Netflix. Linux has been getting much better in recent years, including reaching DTrace capabilities in the kernel. It's not as bad as it used to be, although to really evaluate where it's at you need to be on a very new kernel (4.9 is currently in development), as features have been pouring in.
Also, since I was one of the top Solaris performance experts, I've been creating new Linux performance content on a website that should also be useful (I've already been thanked for this by a few Solaris engineers who have switched.) I've been meaning to create a FreeBSD page too (better, a similar page on the FreeBSD wiki so others can contribute).
FreeBSD feels to me to be the closest environment to Solaris, and would be a bit easier to switch to than Linux. And it already has ZFS and DTrace.
BTRFS/ZoL doesn't beat illumos ZFS. FreeBSD ZFS is pretty standalone in the OS; it only recently gained the ability to deal with a hot spare drive, and that's it. IO scheduling on FreeBSD is spartan; it will always favor large IO and starve small reads/writes.
LXC doesn't beat FreeBSD jails or Solaris zones since LXC is not considered a security boundary.
Open vSwitch can perhaps measure up to illumos Crossbow.
Systemd doesn't beat SMF on illumos. I think SMF really nailed it (systemd is overkill, and plain RC scripts in FreeBSD are a pain).
So IMHO Solaris/Illumos/SmartOS sits nicely between Linux and FreeBSD.
Just a reminder, we also run FreeBSD on our CDN servers at Netflix.
I have no information, but there aren't very many dots to connect here.
Adrian Chadd did most of the FreeBSD RSS work, and gave a good talk about it at BAFUG: https://www.youtube.com/watch?v=7CvIztTz-RQ
The RSS in Linux was just used for load spreading (the last I checked, I haven't used Linux much since I left Google 1.5 years ago). If this has improved, I'd love to hear about it.
Linux RFS depends on the packets being dispatched to the correct CPU for the connection by the interrupt handler running wherever the packet happened to land. This has cache & memory locality implications, especially on NUMA.
Linux aRFS lets the NIC do the steering. Unfortunately, each connection requires an interaction with the NIC to poke it into the steering table, and most NICs can't steer 100,000 connections.
So, to sum up, Linux has a lot of cool tech for steering individual connections and support for that varies greatly by NIC. Windows and FreeBSD use standard RSS to predictably steer an unlimited number of connections. For a large CDN server, the latter is more useful. However, for low-latency / high bandwidth applications, I can see the advantage to aRFS.
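The scaling property of RSS-style steering can be made concrete with a toy sketch. The hash below is a stand-in for the Toeplitz function real NICs use, and `rss_queue` is a name invented for illustration; the point is that the queue is a pure function of the connection 4-tuple, so no per-connection state is needed anywhere.

```python
import hashlib

def rss_queue(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
              num_queues: int) -> int:
    """Map a TCP 4-tuple to a receive queue.

    Real NICs use a Toeplitz hash over these fields; a stable digest of
    the same fields illustrates the property that matters: steering is
    deterministic, so an unlimited number of connections can be spread
    without programming a per-flow table into the NIC.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()  # stable across runs, unlike hash()
    return int.from_bytes(digest[:4], "big") % num_queues

# Every packet of a connection lands on the same queue (and thus CPU).
q1 = rss_queue("10.0.0.1", 40000, "10.0.0.2", 443, 16)
q2 = rss_queue("10.0.0.1", 40000, "10.0.0.2", 443, 16)
assert q1 == q2 and 0 <= q1 < 16
```

The trade-off follows directly: this scheme cannot place a specific connection next to its application (as aRFS can), but it never runs out of steering-table entries.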
Linux is the platform of choice for bufferbloat research, although FreeBSD isn't far behind in adopting the results of it:
Netflix gets nearly 100Gbps from storage out the network on their FreeBSD+NGINX OCA appliances. Some details in the "Mellanox CDN Reference Architecture" whitepaper at http://www.mellanox.com/related-docs/solutions/cdn_ref_arch..... The closest equivalent I've found on Linux was a blog post on BBC streaming getting about 1/4 of the performance.
Chelsio has a demo video (with terrible music) using TCP zero copy of 100Gbps on a single TCP session, with <1% CPU usage https://www.youtube.com/watch?v=NKTApBf8Oko.
At SC16 NASA had a "Building Cost-Effective 100-Gbps Firewalls for HPC" demo, using FreeBSD and netmap: https://www.nas.nasa.gov/SC16/demos/demo9.html
Another interesting optimization we've done (and which needs to be upstreamed) is TLS sendfile. There is a tech blog about this at http://techblog.netflix.com/2016/08/protecting-netflix-viewi....
We don't have a paper yet about the latest work, but we're doing more than 80Gb/s of 100% TLS encrypted traffic from a single socket Xeon with no hardware encryption offloads.
I was very sad when alpha got axed, but I agreed with killing it. FreeBSD is about current hardware.
I work directly with both of the gents who gave this talk about 100G networking (on Linux) and still find that much of the actual cutting edge research is done on Linux. Perhaps I'm biased! I've also been to one of Mellanox's engineering offices (Tel Aviv) to speak with their engineers at my previous employer 7-8 years ago. They told me they do almost all of their prototyping and initial development on Linux, and RHEL to be specific. They then port to other platforms.
Maybe I was wrong on some of this, but my use case (due to my employer's industry being finance) is lower latency, where Linux absolutely and positively crushes anything else.
Actually, while we're on the subject, SmartOS with CPU bursting from illumos is the leader in low latency trading:
Additionally, I don't believe (experts please correct me if this is wrong) SmartOS has an equivalent to Linux's isolcpus boot command line flag (or cpu_exclusive=1 if you're in a cpuset) to remove a CPU core entirely from the global scheduler domain. This prevents any tasks from running on that CPU, including kernel threads. Kernel threads will still occasionally interrupt applications if you simply set the affinity on pid 1, so that doesn't count.
These two features, along with hardware that is configured to not throw SMIs, allow Linux to get out of the way of applications for truly low latency. As far as I'm aware, this is impossible to do in Solaris/SmartOS. I'm not even getting into the SLUB memory allocator being better, or the lazy TLB in Linux massively lowering TLB shootdowns, etc. There is a reason why virtually every single major financial exchange in the world runs Linux (CME in Chicago, NYSE/NYMEX in New York, LSE in London, and Xetra in Frankfurt): it is better for the low latency use case.
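The userspace half of that recipe can be sketched in Python. `isolcpus` keeps the scheduler off a core at boot; the latency-critical process is then pinned onto it explicitly. The `os.sched_setaffinity` API below is real (Linux-only), but the scenario is illustrative: here we just pin to the lowest CPU we are allowed to run on.

```python
import os

# On a box booted with isolcpus=2,3 you would pin the latency-critical
# process to one of the isolated cores, so neither other tasks nor (most)
# kernel threads preempt it. Sketch of the pinning half:
if hasattr(os, "sched_setaffinity"):     # Linux-only API
    original = os.sched_getaffinity(0)   # CPUs this process may run on now
    target = min(original)               # stand-in for an isolated core
    os.sched_setaffinity(0, {target})    # pin this process to one CPU
    assert os.sched_getaffinity(0) == {target}
    os.sched_setaffinity(0, original)    # restore the original mask
```

The boot flag does the part no userspace call can: with isolcpus (plus nohz_full on newer kernels), the isolated core is excluded from the scheduler domain entirely, rather than merely avoided by this one process.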
On timers: we (I) added arbitrary resolution interval timers to the operating system in 1999 -- predating Linux by years. (We have had CPU binding and processor sets for even longer.) The operating system was and is being used in many real-time capacities (in both the financial and defense sectors in particular) -- and before "every single major financial exchange" was running Linux, many of them were running Solaris.
One final question while I've got you that your response didn't seemingly address: does the cyclic subsystem allow turning off the CPU timer entirely, a la Linux's nohz_full? If so, I stand corrected.
I've done a great deal of reading and research on OS ethos; IMO a thriving and production-worthy operating system can be maintained with as few as 40 people in total. The superiority of Linux feels exaggerated, and systems innovation has chilled because of it.
I'm not sure what you mean. Linux has led TCP implementations for a decade now.
The Linux network stack is great. It's the preferred system of choice for nearly every researcher in the networking field. I don't know what Facebook meant in their case.
The main remark seems to be:
> The predominant difference is that the FreeBSD network stack was much more carefully designed. The Linux stack was less careful and thus is much more haphazard. Also, more work has been put into optimizing the FreeBSD stack.
It is not my area of expertise, but the Linux sk_buff seems to fit the description of haphazard, while the FreeBSD mbuf seems to fit the description of more carefully designed. The same could be said about epoll versus kqueue.
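On the epoll-versus-kqueue point: the two APIs implement the same readiness-notification idea, and Python's stdlib `selectors` module wraps whichever one the platform provides (epoll on Linux, kqueue on the BSDs). That makes a small portable sketch of the pattern possible:

```python
import selectors
import socket

sel = selectors.DefaultSelector()      # epoll on Linux, kqueue on *BSD

a, b = socket.socketpair()             # a connected pair of sockets
sel.register(b, selectors.EVENT_READ)  # watch b for readability

a.send(b"ping")                        # make b readable
events = sel.select(timeout=1.0)       # blocks until b is ready
assert events and events[0][0].fileobj is b
assert b.recv(4) == b"ping"

sel.unregister(b)
a.close(); b.close(); sel.close()
```

The design differences the comment alludes to live below this common surface: kqueue is one syscall family covering sockets, files, signals, and timers, while Linux grew epoll, inotify, signalfd, and timerfd as separate interfaces over time.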
The remark about more work in optimizing the FreeBSD stack also seems to be true. While I cannot speak for everything in FreeBSD's network stack, I do know that FreeBSD's netmap far exceeded anything Linux could do at the time and while it is available on Linux, I never hear of it being used anywhere but on FreeBSD:
Development of FreeBSD's network stack had plenty of innovative things in development at the time Facebook's post was made:
That included additional contributions from a major network equipment vendor that had made many contributions throughout the years. If I checked the commit history, I imagine I would find performance work done by said vendor. From what I can tell, FreeBSD's network stack is improving regardless of whether the rest of us hear about it.
Lastly, there have been multiple things discovered to be wrong in the Linux network stack since that Facebook job listing. Two prominent ones that I recall offhand are:
They both could fall into the category of stability problems to which Facebook had alluded. The second one more so, though:
> The end result is that applications that oscillate between transmitting lots of data and then lying quiescent for a bit before returning to high rates of sending will transmit way too fast when returning to the sending state. The consequence of this is self-induced packet loss along with retransmissions, wasted bandwidth, out-of-order packet delivery, and application level stalls.
This is covered by my previous team's page:
Note: "On newer Linux OSes this is no longer needed." (IE, it's already set properly).
For the second one, they fixed a bug in Linux's TCP cubic implementation. FreeBSD didn't get cubic until 8.2, which was around 2009. So, you're criticizing Linux for having a bug in a feature that FreeBSD didn't even add until 7 years ago.
Again, I will repeat: I worked on a team that did multi-OS TCP/IP optimization. What you're describing in terms of oscillation is a well-known problem in many implementations. All of the people doing research on this are now using Linux as their platform for research and development.
Not implementing cubic in FreeBSD when there was a bug in the only implementation of it in the world could have been an advantage in certain situations, including Facebook's.
There seems to be hubris among many Linux users that Linux is the best solution in the world for everything, and it is not. There is always someone who does something better. Maybe not in everything, but the same applies to Linux. No matter how good it becomes, it is not the best in everything. Networking is a broad topic. I don't think Linux is the best in every area of networking. I am not even sure if it is the best in many of them, given that many platforms do things very well, and at some point it is hard to be better.
Subsystems are now done with up front design and some degree of consensus in the BSDs, closer to the cathedral and commercial development than the bazaar of Linux. This necessarily means we are not usually at the forefront of cutting edge features. It doesn't necessarily mean we don't have features before Linux; if the idea exists in academia or other OSes enough to reason about it's reasonable to propose, design, and build. Netmap is a good example. The new FreeBSD selectable TCP stacks are another, where we avoid incremental growing pains and baggage. When these designed features hit, they tend to be coherent, usable, obvious, and lasting.
My opinion of Linux features is that little due diligence was done, especially public acknowledgement of inspiration and why one route was taken over another. For instance, the Linux KPIs are littered with questionable decisions made in isolation. epoll and the various file notification calls are examples. That attitude manifested strangely up to userland through IPC/DBus with the continued systemd drama.
A little bit of logical inference: there are financial drivers behind vendors fleeing the Linux kernel in favor of userspace (i.e. Intel's DPDK and SPDK). One is licensing, which is not an issue with BSD nor userland. The other is the rate and quality of KPI churn. Linux KPIs break all the time, switch licenses all the time, and it is a general nuisance to maintain a vendor tree, whether it is open or closed source. The good side is that hopefully drivers and products end up open source. The bad side is that, in many modern usages, that does not happen, because the GPL is not relevant to hosted services, and because of low motivation/quality/incentive/license violation for IoT type things. The BSDs start with no pretense of GPL nor flippant APIs, so it is a lot more comfortable to consume and build great products.
This remark seems more to me like a statement of belief that no one else can do good things other than Linux. That is far from true.
"In linux, buffers in the tx queue hold a reference to the socket so completions can be used to notify sockets. Implementing the same mechanism in FreeBSD should be relatively straightforward. "
"We don’t have software TCP segmentation, we have to carry information in the mbufs.
Performance was doubled, without hardware support, by doing segmentation very low in the stack, right before input into driver. (Student project.) Linux calls this approach GSO, pushing large segments through the stack; the hardware can do segmentation if supported, otherwise we do it at the bottom layer. Simplifies TCP code since you can send arbitrarily large segments. "
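The quoted approach (push one arbitrarily large segment down the stack, chop it into MSS-sized pieces just above the driver) reduces, in sketch form, to the following. The `segment` helper is illustrative, not FreeBSD or Linux code:

```python
def segment(payload, mss):
    """Chop one large TCP payload into MSS-sized segments, as GSO-style
    software segmentation does right before handing packets to the driver.
    TCP above this point never has to think about the MSS at all."""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

data = bytes(4000)          # one "arbitrarily large" segment from TCP
segs = segment(data, 1448)  # typical MSS for a 1500-byte MTU
assert all(len(s) <= 1448 for s in segs)
assert b"".join(segs) == data   # nothing lost in the chop
assert len(segs) == 3           # 1448 + 1448 + 1104 bytes
```

The win described in the quote comes from traversing the stack once per large segment instead of once per MSS-sized packet; if the NIC supports TSO, even this final chop is offloaded to hardware.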
"Linux has their standard ifnet interface, with a single pointer to the extensions; if the interface does not support them, the system still runs. If it does, have interfaces to configure numbers of queues, numbers of buffers, etc.
All of this is slow-path (configuration) code.
Think we should go for a similar route — ease configuration of 10gig interfaces"
The rest of the stuff in there is just low-level optimizations to update the design that was written out in the original FreeBSD book.
I never said that people can't do good things in OSes other than Linux. I said that Linux's networking stack has been better than BSD's for ten years. I can cite numerous factual arguments and research papers to support this, along with my extensive experience with Linux (my experience with BSD is less, but enough to know its stack isn't magically better).
Linux does have plenty of nice things and plenty of nice work, but I am not going to dismiss everything being done elsewhere by declaring Linux to be "better". At best, I would say that it is ahead in some areas, behind in other areas and the same in many areas. As for what some of those "other areas" are, I recall Adrian Chadd implementing time division multiplexed atheros wifi support in FreeBSD that Linux does not have. Netflix also contributed a rather nice thing to FreeBSD that Linux did not have:
There are plenty of nice things in both platforms. Labelling one as "better" just doesn't do justice to either of them. It ignores opportunities for the "better" one to improve by denying that opportunities for improvement have been demonstrated to exist. It also denies the "lesser" one the acknowledgement of having done something worthwhile.
When I say something is "better", I mean "I've looked at the data, and integrated over a wide range of parameters".
I'm still waiting to hear about a magical BSD feature that is better. That hasn't happened in about 10 years, hence my statement.
If you are as experienced in networking as you claim, you should stop waiting to hear about magical features that are better. Nothing will ever impress you as being magical. That is a downside of having experience.
Maybe you would find talking to an actual expert on FreeBSD's network stack more interesting. I am not one, and while I could list several other things I know, I am clearly not doing it justice.
(I keep the DTrace book within reach when I sit at the keyboard. This is fan mail. Many thanks, for your work has helped me become a better computer person.)
Why not OpenBSD? I'm not an advocate of either; I'm trying to learn more about their usefulness in real world applications.
FreeBSD's SMP scalability in general is far ahead of OpenBSD's, the last time I looked, as is device support for 100G NICs, NVMe storage, etc.
Performance monitoring is also far ahead on FreeBSD, with tools like DTrace, Intel's PCM tools, and Intel's VTune available for FreeBSD.
OTOH, enterprise business workloads (SAP, OLTP databases, etc) typically serve thousands of users simultaneously. They do payroll, accounting, etc. Such workloads cannot be cached in the CPU cache, so you need to go out to RAM all the time. RAM latency is typically 100ns, which corresponds to a 10 MHz CPU. Do you remember 10 MHz CPUs? This means business workloads have huge scalability problems, because you need to place all CPUs on the same bus, in one single large scale-up server. If you try to run business workloads on a scale-out cluster, performance will drop drastically as data is shuffled among nodes over a network instead of on a fast bus.
Thus, business workloads use one single large scale-up server, with at most 16 or 32 sockets. This domain belongs to Unix/RISC and mainframes. HPC number crunching uses large clusters such as the SGI UV3000, which has 10,000s of cores.
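The back-of-envelope arithmetic behind the "100 ns equals a 10 MHz CPU" claim is worth making explicit (the 3 GHz figure below is an assumed example core speed, not from the comment):

```python
# A workload that misses cache on every access is paced by DRAM latency,
# not by clock speed.
latency_ns = 100                  # ~100 ns per random DRAM access
effective_mhz = 1e3 / latency_ns  # accesses per microsecond -> "10 MHz CPU"
assert effective_mhz == 10.0

# Equivalently: an assumed 3 GHz core stalls for hundreds of cycles on
# each miss, integer math to keep the arithmetic exact.
cpu_hz = 3_000_000_000
stall_cycles = cpu_hz * latency_ns // 1_000_000_000
assert stall_cycles == 300
```

This is why the comment argues that cache-hostile workloads are dominated by memory-system and interconnect behavior rather than raw CPU speed.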
The largest Linux scale-up server is the new HP Kraken. It is a redesigned old Integrity Unix server with 64 sockets. The x86 version of the Integrity maxes out at 16 sockets only. Other than that, the largest x86 servers are vanilla 8-socket servers from IBM, HP, Oracle, etc.
Linux devs only have access to 1-2 socket PCs, so Linux cannot be optimized nor tested on large 8-16 socket servers. Which Linux dev has access to anything larger than 4 sockets? No one. Linus Torvalds? No, he does not work on scalability on 16-socket servers. There is no Linux dev working on scalability on 16-socket servers. Why? Because, until last year, 16-socket x86 servers hardly even existed! Google this if you want; try to find a 16-socket x86 server other than the brand new HP Kraken and SGI UV300H. OTOH, Unix/RISC and mainframes have scaled to 64 sockets for decades.
Look at the SAP benchmarks. The top scores all belong to 32-socket UNIX/RISC doing large SAP workloads. Linux on x86 has the bottom part, doing small SAP workloads. The HP Kraken has bad SAP scores, considering it has 16-sockets. It is almost the same as the 8-socket x86 SAP scores. Bad scalability.
Thus, if you want to run workloads larger than 2-4 sockets, you need to go to Unix/RISC. Linux maxes out at 2-4 sockets or so. The new Oracle Exadata server sporting the SPARC T7 (same as the M7 CPU) runs Linux, and it maxes out at 2 sockets. If you want 16-socket workloads, you must go to Solaris and SPARC. All large business servers use Unix or mainframes. No Linux anywhere.
Linux = small business workloads. Solaris = large business workloads. And the big money is in large business servers. If Oracle kills off Solaris, then Oracle is stuck at 2-4 sockets (small revenue). Only Solaris can drive large business servers (big revenue).
It does not make sense to kill off Solaris, because then Oracle cannot offer (expensive) large business servers. Then Oracle will be stuck with small, cheap business servers running Linux and Windows.
Regarding Linux vs Solaris code quality:
Fortunately, Solaris was open long enough that we in the open source world were able to fork it with illumos. And because illumos became the home for many of us that brought Solaris its most famous innovations (e.g., ZFS, DTrace and zones), it should come as no surprise that we've continued to innovate over the last six years. (Speaking only for Joyent, we added revolutionary debugging support for node.js, ported KVM to it, completed and productized Linux-branded zones, added software-defined networking and developed first-class Docker integration -- among many, many other innovations.)
So illumos (and derivatives like SmartOS, OmniOS and DelphixOS) is vibrant and alive -- but one of our biggest challenges has been its association with the name "Solaris": I don't think of our system as Solaris any more than I think of it as "SVR4" or "SunOS" or "7th Edition" or any of its other names -- and the very presence of Solaris has served to confuse. And indeed, it is my good fortune to be working with a new generation of engineers on the operating system -- engineers for whom the term "Solaris" is entirely distant and its presence as an actual (if proprietary) system befuddling.
So if the rumor is true (and I suspect that it is), it will allow everyone to know what we have known for six years: Solaris is dead, but its innovative spirit thrives in illumos. That said, I do hope that Oracle does the right thing and (re)opens Solaris -- allowing the East Berliners of proprietary Solaris to finally rejoin their brethren in the free west of illumos!
The death of Solaris may well be a death blow to illumos as well. It sounds like Oracle, the owners of the Solaris code and copyrights, aren't seeing a future for it. That's an incredible vote of no confidence from the very owners of the code. And the positive energy they have put into Solaris at large for years (marketing, sales, staff) will cease.
While I loved Solaris and illumos back in the day, in the end I'm glad I left and switched to Linux and FreeBSD. I'm working on similar technical challenges with much bigger impact. It's been more difficult, but also more rewarding.
Have a piece of software which must run on GNU/Linux? No problem, it'll happily run inside of an lx-branded zone with zero performance penalty, where both it (/usr) and the illumos native commands will be available (/native), so one can have one's cake and eat it, too. Otherwise - there are 14,000 packages ready to run, something Solaris never, ever had.
It's not a desktop operating system, it doesn't have that kind of a mass adoption. But on the other hand, when one considers just how Windows-like GNU/Linux became (systemd), it's better that it doesn't: it does one thing and does it well, and that's powering the high performance, massive clouds. For desktop, there's macOS, and that's fine.
Another factor feels critical for me as well. Troubleshooting has felt much faster on SmartOS and Triton due to the quality of logging and monitoring methods. Troubleshooting feels like O(1) because one often knows where to look and the tools are there to gather the data.
Triton and SmartOS are killer technologies, but the quality of interactions with the community are no less so.
That's what makes them true open source, IMHO.
Edit: Apologies for misspelling your first name.
So hoping that you do indeed check out illumos this weekend; I think you'll find that while some of the names have changed, the spirit remains vibrant!
Big fan of Solaris and zones, though at the moment using a mix of other technologies.
One thing I did notice about Solaris, at least in the Linux 2.6.x days: Solaris is amazing at handling low-memory situations. Once I logged in via SSH to a server that was swapping continuously and had about 2MB RAM left over - it was still somewhat responsive; under Linux of that era, it would have bogged down in the same situation.
I'm mostly interested as a developer of config management tools where our support tends to look like "shrug, probably acts like solaris". I just want a rosetta stone for those distros, particularly when it comes to packaging and service management.
It'd be nice to know which ones are dead and which ones aren't as well; we're still carrying around definitions for NexentaCore that I'm not sure are useful to anyone any more.
I was a big fan of Solaris, and it had an edge over Linux for quite some time...as did the Sparc hardware over 32 bit x86.
The writing on the wall was around 2003, when AMD opteron servers came out. 64 bit Linux on dirt cheap, fast, servers.
UNIX rose to prominence because it's what everyone learned in college in the '80s. Linux, because you installed it on your laptop or a VM in high school or college late '90s. Don't underestimate the power of this.
There was a lot of commercial software that either didn't have Solaris x86 binaries at all, or only had 32 bit binaries.
It was arguably "better" from a purely technical view, but cheaper beat out better.
A video that amuses me with respect to engineering culture and organization blindness is Cantrill doing a DTrace demo at Google in 2007 https://www.youtube.com/watch?v=6chLw2aodYQ. The audience seems completely unaware of the significance of what they are seeing. The length of time it took Linux to get cogent tracing support is telling. GOOG could have single-handedly propped up an extra-Sun OpenSolaris community, and there would have been a nice symbiosis considering their early container usage and how long that took to grow as well.
The commodity always wins. Never forget that.
The reason I think it didn't rise to ubiquity in the same way that Linux did is the lack of customization.
One can easily customize the Linux kernel for their use case (e.g., embedded), compile it, add busybox/dropbear, and have a decent starting point for an embedded OS. You couldn't do the same with Solaris.
Now that I am thinking about that time, it was like overnight. Linux overtook everything else so fast. I personally used Linux and BeOS in 1999-2001 and always thought Linux was coming, and then it just happened.
Be realized they couldn't compete with Windows, so they wanted to sell dual-boot boxes. But the Microsoft EULA for OEMs banned that (similar to the OHA and Google not allowing Google-Android manufacturers to create Amazon-Fire products).
Crazy as it sounds, BeOS wouldn't run on x86 for the first few years. They were trying to lure away Apple people. Also, it was close to being the OS X successor, but Apple walked away when Be upped the price.
PS: The only desktop OS that could pull off this stunt: play a bunch of videos and music files and unplug the computer. Boot back up, and everything is playing again just as you left it.
NeXTSTEP was Steve Jobs' attempt to build an OS that fulfilled the promise of what he saw during his visit to PARC (as opposed to just the graphical interface which is what was implemented with the Mac). It was a true-multi-user Unix with a beautiful UI and an object-oriented framework that was far more influential than its marketshare would have suggested (it led to Microsoft starting Cairo, IBM building WorkplaceOS, and Apple/IBM sinking fortunes and thousands of man-hours into Taligent/Pink).
BeOS was Jean-Louis Gassée's attempt to build a successor to the Mac (including QuickTime, which came after Jobs) but built to be multiprocessor (SMP) friendly and pervasively multithreaded from the start. It was --like the Mac-- a single-user OS, but intended to extract all the performance possible from "modern hardware".
ACCESS Co. the current owner of the PalmOS and BeOS assets basically frittered away whatever potential BeOS had. So it definitely flopped.
I saw a BeOS demo of turning off processors. The GUI allowed you to uncheck all the processor checkboxes and the machine goes dead. :)
int32 is_computer_on(); //Returns 1 if the computer is on. If the computer isn't on, the value returned by this function is undefined.
double is_computer_on_fire(); //Returns the temperature of the motherboard if the computer is currently on fire. If the computer isn't on fire, the function returns some other value.
I was amazed that back then I could have four (!) windows open playing videos (albeit at low res) at the same time without hiccups -- doing it in Windows 95 on the same box would choke it up.
The determination of whether a company violates that (as in, they're getting rid of YOU, not the position) becomes a whole set of legal arguments.
redundancy = getting rid of the position and it can't come back under another name.
firing = getting rid of you as a person.
Why? Because if you get laid off, you can collect unemployment, but if you get fired for cause, you can't. But if you think the company pretended they had cause when they didn't, you can appeal, and the company will have to spend considerable resources defending their position that they had cause. As such, many employers legally classify all firings as layoffs because it's often not worth the hassle. And there's no penalties to doing so, either.
So if you get fired but the company officially considers it a layoff, it's a good thing for you: you dodged a bullet.
Of course, as these things go, it became a verb in management speak. "Yeah, remember so-and-so? He was ok but didn't really get it, so we RIFfed him last year".
Given that, I see no reason why anyone should indulge Oracle or patronize them, given their revenue model.
As far as Oracle Cloud's appeal is concerned - I can totally see the big "enterprise" type IT departments using the Oracle/WebLogic stack going for it, at least in the "paid POC" type mode to get things rolling.
As someone who works at an "enterprise" - the default is AWS. They have the consulting network, the certifications, and the list of other big companies already using them. Their biggest challenger is Azure, because Microsoft are already in the enterprise, and have good stories to tell around helping you cloudify your Office deployment model, Exchange, etc etc etc. At that point "hosting VMs" is an easy upsell for them.
The path to HIPAA compliance in AWS is just arrange to get a business agreement with Amazon.
You should probably be doing a bunch of other things to be HIPAA compliant in AWS, it's not just a box you check off.
In the past you could be HIPAA compliant and use Postgresql RDS by signing a business associate agreement and doing things like using dedicated instances in their own VPC.
At a minimum, you'd still have to sign that BAA with them. I mention that not for you, but for anyone else at home thinking "oh, I can deploy RDS/PostgreSQL and be OK with HIPAA without doing anything else!" That's (still) not the case.
In logic terms, this certification is necessary but not sufficient. It's not sufficient by itself, but it is a hard requirement because RDS hasn't been covered under their BAA up until the last day or so. That is, it wasn't covered the last time I checked, maybe a week ago, but it is now today. This was confirmed by our AWS tech reps when we recently talked to them: they absolutely did not HIPAA certify PostgreSQL the last time we asked about it. And oh, how I promise you we talked about it.
In the past you could be HIPAA compliant and use Postgresql RDS by signing a business associate agreement and doing things like using dedicated instances in their own VPC.
Citation needed. We were told multiple time by our reps and solution architect that RDS+PostgreSQL was not certified in any way. The only AWS options we had for HIPAA PostgreSQL were 1) hosting our own instance (that is, not using RDS in any way, just plain old EC2) or 2) paying a third party for managed PostgreSQL hosting.
This is no longer a matter of convincing a few huge players. You need the mid/small-size community to build vibe & hype. Oracle hasn't learned this lesson yet.
I guess it'll still live on in Illumos and its distributions like SmartOS, OpenIndiana, etc., but still... Solaris brought the computing world so many innovations (NFS, ZFS, dtrace, etc.), and it's going to take a while for it to fully sink in that it's gone.
It's been around for a while, initially started by some of the people flooding out of Sun in the immediate wake of the Oracle acquisition, in a (very slightly) less messy version of the Hudson/Jenkins split.
It's a fork. If you're interested how it came to be check out this epic talk by Bryan Cantrill https://www.youtube.com/watch?v=-zRN7XLCRhc
This is sad to see, but the acquisition of Sun by Oracle pretty much started the downhill slide.
It used to be said [...] that AIX looks like one space alien discovered Unix, and described it to another different space alien who then implemented AIX. But their universal translators were broken and they'd had to gesture a lot.
-- Paul Tomblin
Although I suppose that if you held a gun to my head and forced me to select a commercial Unix for a project, and if my stunned perplexity didn't get me killed, HPUX would be an admirable choice.
It is a shame. Many challenges we find today happen to have been solved in the 60s. Then in the 70s, then in the 80s...
EDIT: One thing that I love is the fact that they distributed apps in intermediate form and then compiled at installation time. (Sounds familiar, right?)
There were import libraries, and symbols to be exported needed to be defined in export files.
Of course, eventually they converged into the standard UNIX model for shared objects.
Contrary to what someone else wrote, it was IIRC always possible to do anything SMIT did via the command line, but frankly SMIT was often easier and faster than doing it by hand.
One of AIX's problems is that it didn't, IMHO, age well. It was designed for a different world than the one we live in now, and one thing it didn't feel designed to do was evolve.
Would I recommend AIX for any project in the future? Hell no, because it would mean inviting IBM into the project, and that is second only to inviting Oracle into a project on the list of gross management errors. But it wasn't bad software for its time, and I don't think it deserves to be hated.
Then... it stopped being great, and an x86 Linux machine running the same JVM cost a fraction and performed as well, if not better. Difficult to justify the licensing and support costs in the Oracle days.
- SCO decides to sell off its Unix business and becomes Tarantella
- Established Linux vendor Caldera buys the rights to distribute SCO Unix
- Caldera changes its name to SCO and subsequently starts filing lawsuits
Oracle does not care. Oracle thrives on badwill.
The "old" Sun: Encouraged hobbyist use of hardware, put out software under a "free unless you need to pay for support" term, open-sourced Solaris , was generous with hardware donations  to various organizations, and realized that if a sysadmin liked playing with Sun gear at home, they were more likely to recommend it at work.
The "new" Sun: Oracle flips everyone the bird with both hands, won't even communicate with you unless it's about a paid support contract.
I was lucky to be one of the 250 people picked as the OpenSolaris test/release/publicity team; still have my "xxx of 250" poster print on the wall of my home office.
They gave a Netra T1 and a disk shelf to us to run the Sun-Managers mailing list with, told me to keep a review-unit T1000 to run sunhelp.org on, and sent me a loaded Ultra 10 after a bit of a "misunderstanding". These are just three examples of many, many instances.
Given RH's compulsive open-sourcing of acquisitions, it's one of the great tragedies of the software industry that IBM got cold feet over concerns that Sun was facing violations of anti-bribery laws.
That, and the whole "Larry likes Larry's stuff more than anyone else's" thing...
@brendangregg: I'll bet ya $10 that neither Solaris nor SPARC are going away any time soon. :)
A generic reduction in force, of undetermined method. Often pronounced like the word riff rather than spelled out. Sometimes used as a verb, as in "the employees were pretty heavily riffed".
Intel video - couldn't care less either - I just buy Nvidia, download their SVR4 Solaris package, and am up and running in seconds. It JustWorks(SM), so I always buy Nvidia - go sgi engineers.
2. Why can't Linux vendors ship source code to be compiled on clients' computers, so no distribution (forgot the legal term) takes place?
3. If the patent license only covers the code they released (and they reserve the right to sue over reimplementations), what will FreeBSD (or illumos) do if there's a bug in the code? Once you change the code, you can get sued.
2. Most already do. Some even believe that point 1 makes the CDDL compatible with the GPL as well, and so ship binaries.
3. Patent protection in CDDL is extremely strong. Rumor has it that Oracle wanted to kill illumos via litigation, but never went ahead with it because they knew they'd never win because of the CDDL.
If anything, Oracle's software patents are a case against it because they could sue a clean room implementation like they did with Android's Java implementation. They would have a stronger case too due to the hundreds of patents covering ZFS. That is the elephant in the room with btrfs that no one discusses. :/
Anyway, I see no need to reimplement ZFS from scratch after consultation with attorneys of the SFLC and others.
For ZFS, they do. Debian's legal advisors say that shipping it as a DKMS module (i.e., as you describe) is fine, so they do that.
Ubuntu's legal advisors say that shipping the compiled module is fine, so they ship that.
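For anyone unfamiliar with the practical difference between the two positions: a sketch of the two approaches as install commands (the package names are the real Debian/Ubuntu ones, but treat the exact commands as illustrative, not authoritative; this is a config fragment, not a script to run blindly):

```shell
# Debian: distribute ZFS as source only; the DKMS framework compiles
# the kernel module locally at install time, so Debian itself never
# ships a compiled CDDL+GPL combination.
apt install zfs-dkms zfsutils-linux

# Ubuntu: ship the prebuilt zfs kernel module with the kernel packages;
# installing the userland tools is enough, no local compilation needed.
apt install zfsutils-linux
```

Same end state on the user's machine; the difference is purely in who performs the linking step that the license debate is about.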
DTrace is rather more closely coupled to the kernel than a filesystem, and there are good Linux-native alternatives now.
a) Bart Smaalders (with plenty of help from Glynn Foster and Shawn Walker) introduced IPS into Solaris
b) they did away with JumpStart(TM)
c) they did away with compressed Flash(TM) archives
d) they did away with sparse zones
e) Oracle closed the source code
I went to illumos / SmartOS and never looked back. I understand from a fellow engineer who still runs Oracle Solaris that 11.3 is the latest version, and I couldn't care less. I will never accept IPS because it has no preinstall / preremove / postinstall / postremove scripts, on purpose. Never, ever. I do my own builds of SmartOS from source code, PXE boot it from the network, and all is well with the world.
Joyent's SmartOS is built on illumos, and illumos is a fork of Solaris Express / OpenSolaris / ONNV, and a whole bunch of former Solaris kernel engineers, who were at key positions at Sun Microsystems, and are now across several successful companies, still commit fixes and features into the illumos source code. For example, illumos has OpenZFS and KVM, two major features Snoracle Solaris doesn't have and can't take back unless they open source the code again, not that anyone cares what they do.
illumos has been a thriving project for over six years, fully independent from Oracle. There has been zero code sharing, and little interaction of any kind.
Illumos will continue on their own just like they've been doing for the last few years. They'll probably have even more freedom to innovate, since they won't have to worry about chasing Solaris 12 compatibility.
Edit: And Joyent is one of the major sponsors of Illumos, too, so their platform definitely isn't going away.
SVR4 / UNIXWARE / SCO / AIX.
Without you, Linus would never have cloned you as his own.
Going the way of UNIXWARE / AIX /
RIP UNIX.