Triton: Docker and the “best of all worlds” (joyent.com)
202 points by bithavoc on March 24, 2015 | 100 comments



I find these to be pretty weasel words: Is Triton open source?

Yes. Triton Elastic Container Infrastructure is the commercial offering built on the top of the open-source SmartDataCenter cloud infrastructure management platform. In addition to the open source components, Triton includes support, a DevOps portal, and intellectual property indemnification only available to commercial customers.

In more plain English, "Yes. Triton is open source just like Mac OS X"

https://www.joyent.com/developers/triton-faq#os


That's a good thing to clarify. It's not an open core system like Mac OS X. SmartDataCenter is open source and Triton is our product branding. https://github.com/joyent/sdc

There is work in progress to clean up and open source the self-serve portal and its dependencies.

The proprietary bits are not ours: "The Joyent SDC USB images offered to our customers contain certain firmware binaries for use with Joyent-branded hardware. We cannot make that software generally available in the open source builds; however, it is available to all who have purchased Joyent-branded hardware and is of no use without that hardware." https://docs.joyent.com/sdc7/obtaining-software


But what you're branding as Triton in fact appears to be a PaaS GUI on top of SDC? i.e., the way people manage their containers/hardware in Triton the service is not available in SDC, which seems to be all CLI tools?


Because the Docker Engine for SDC is being developed against the Docker API and CLI, Triton can be used through any Docker UX. https://github.com/joyent/sdc-docker/blob/master/docs/diverg...

The interesting bits are not the self-serve portal. The portal is good for introducing the stack and supplementing the experience, but few use a portal for infrastructure management -- they require more automation, specific to their business.

We're working hard on getting the self-serve portal's code architecture and hygiene up to open sourcing it. From there we can get these portal plugins open as well. Our on-prem customers are hungry for this, as many of them have a mandate for OSS.


What does the bit about security patches and minor and major updates mean? The Product version includes automatic updates and the open source version you manage yourself? Or something else?


Would you quote the bits here, so I can clarify with the product side of the house?

We have not and never will let anything jeopardize the continued security of any SDC deployments.

Update channels for SDC are under active development:

'As of sdcadm version 1.5.5 and the release-20150319, SmartDataCenter has preliminary support for update "channels". A "channel" is separate stream of built SDC components with different stability characteristics.'

https://github.com/joyent/sdc/blob/master/docs/operator-guid...

I imagine that the support channel will be the most conservative.


The check-marks at the end of this page, which compares Joyent Triton to open-source SmartDataCenter:

https://www.joyent.com/private-cloud

    Maintenance and Upgrades
    Minor and major upgrades


Thanks for bringing that to my attention. Having seen how proactive our support team is in seeing customers through maintenance and upgrades, my guess is that's the context.

All the information and technology is there in SDC. Without Joyent Support, it would require riding the change logs and announcements and taking it fully into your hands. https://github.com/joyent/sdc#community


Tldr.

Triton lets you run your Docker containers in the cloud without worrying about virtual machines or setting up your own PaaS.

Signup here: https://www.joyent.com/lp/preview


Thanks, I spent a minute skimming through this post and ended up coming here hoping someone had written a tl;dr like this. Must be challenging to write effective copy for something like this (not being sarcastic).


So how is this different from the container service from google?


One interesting bit from the article: "Does this mean that I'll be able to DTrace my Linux apps in a Docker container?" (yes)

Technically you can run DTrace on various platforms, and I don't know enough to tell you specific reasons why you would rather use DTrace on SmartOS than on Linux. I got the impression a few years ago that the ports weren't as good (perhaps because the Linux port was not yet a kernel module?).


Debugging LX branded zones on SmartOS

https://www.youtube.com/watch?v=6oIBiWdh41c

"A gritty how-to on debugging LX branded zones on SmartOS presented internally at Joyent on 1/30/15. While given to an internal audience, there is quite a bit of information here that may be interesting to the broader community"


Frankly, the technical implementation.


What does this provide over running a Docker host running on a dedicated server?


I haven't heard of this before today, but it seems as though with this you could scale out to multiple servers and it'll continue to be treated as a single docker host. It also isn't linux.


Uh, so I've never really heard of SmartOS or Joyent's SDC before now...

This stuff looks pretty fucking unreal. What's the catch? It looks like it solves so many problems with clustered container deployment. Exposing a whole datacenter as a single Docker host seems like the end-game to me.

Am I missing something? I'm a little low on sleep today.

edit: To answer my own question a little: I guess the docker-sdc isn't quite fully baked yet. That's not really a huge issue since it's still a preview, I think?


The main catch (which may also be a plus) is that the Joyent set of technologies is quite opinionated about how to manage a cloud, so you more or less have to "buy in" and do things its way. Your node OS will be SmartOS, nodes will be PXE-booted, your filesystem will be ZFS, the network topology will work in a specific way, etc. More practically, it also takes some time to be proficient in operating it (or even just get it working), because there are quite a few moving parts. Getting a SmartDataCenter deployment running, and then things like Manta and now sdc-docker running on top of it, is more than an afternoon's work. But if you're willing to buy in to the system and its choices fit your needs, it's really well engineered, imo much more "done right" than e.g. OpenStack is.


Nope, SmartOS is pretty neat.


Ditto. We were early adopters at work. KVM + ZFS and practically no setup is ace.


No catch, it is a truly innovative/disruptive technology.


I'm right there with you. It feels like the beginning of skynet


Can someone explain this in words that a time traveller from 2010 would understand?


Well, since 2010 Sun has died, and OpenSolaris is no more. It lives on as an open source effort called Illumos. At one point, Solaris could run Linux binaries unmodified under a Solaris kernel and userspace. That feature became unmaintained, but it is now working again.

One of the reasons this feature fell out of use was that it could only run 32-bit Linux binaries. Now it can run 64-bit binaries too.

If you want to try it out there is a company you may not be familiar with, dear time-traveler from 2010, called Joyent, that will let you use this.

What is very interesting, dear time traveler, is that there is a format for packaging programs and their environments called Docker, which gives you isolation features much as a zone does, but is easier to use than zones. You can deploy your containers built on Linux to Solaris zones with this feature, and Joyent lets you do it.


Oracle bought Sun in 2009 and finalized in Jan. 2010, so this time traveller would have to be from a very specific slice of 2010 to not know about the demise of Sun. :P


I assume that the time traveller knows Open Solaris up to 2010.

Someone used Linux facilities that are somewhat similar to Solaris Zones and ZFS snapshots (you'll know those, since Solaris 10 from 2005 shipped with them and wasn't even the first) to build an ecosystem for lightweight virtual systems with standardized interfaces. That system is called Docker (and runs on Linux, obviously).

OpenSolaris is all but dead, but a community succeeded it, called illumos. People from that community decided to attempt to use Solaris Zones, with Linux-branded zones (that capability was dropped in the meantime, so they had to invest some effort to bring it back up), to reimplement Docker on an illumos (i.e. OpenSolaris) foundation.


Love the question. This should be the new ELI5. I think a lot of articles on here would benefit from answering that question.


The Explain it like i have been in a coma for 5 years... Elihbiacf5y or just 5 years in coma, explain! 5YICE....

Or ETTT5YA or ET5... explain to time traveler from 5 years ago


Based on your username, I'm not surprised you like the time-traveling angle.


Ditto. It was a boon to me too.


Remember when virtual machines came out? That technology gave us tools to create a 'layered' effect for compute related resources on a single, managed machine. Microservice technologies, like Docker containers, are a logical extension of that layering into the process space on the machine, and in many cases multiple virtual machines running on a single machine.

It's all about the layers, man.


Joyent actually extends this and allows you to treat an entire datacenter as your docker host, similar to how you can use a product like VMware to cluster ESX hosts together.


Joyent has solved some really hard problems to make this work. I was lucky enough to get the chance to sit down with Bryan last week and talk with him about what this took. It's essentially reimplementing Linux on top of illumos, which is no small feat.

There are still messy hurdles to running Docker in production, but it's clear that Joyent has really tried to make something awesome here.

As someone who works deeply in the future-container space, I applaud folks who are taking us deeper down the rabbit hole. I think it's clear that this whole 'back-to-the-future' isolation technology is really cool stuff. Jails/zones/containers have been around forever and I think it's great that we're finally taking advantage of this technology.

Edit: at Terminal.com, we want people to push Linux forward, and this is a great example of taking Linux to new and intriguing heights. I did not think we would have Linux on illumos in quite this way in 2015 and it's delightful to see. We are all standing on the shoulders of giants and it's great to reach new vistas.


Isn't it an example of taking Solaris to new and intriguing heights? There is no actual Linux code in the implementation.


That's a great point, but I think the point of what Bryan has been doing is to make Linux work with Zones (and dtrace).

That's a primitive that Joyent has wanted to upstream into the Linux kernel for a long time and has never been able to get the necessary consensus around it (similar to OpenVZ's troubles getting their work upstreamed).

In short, this is sort of a hack to give you zones on Linux without needing to get zones into the upstream. Yes, there's no linux code, but there is a lot of required understanding of Linux code to make something like this work.

It's kinda amazing that they got 64-bit Linux to run on top of illumos, right? I did not see that coming and maybe that's because I'm ignorant in some capacity, but it's been a pleasant surprise.


Emulations have been a Unix feature for a long time actually. NetBSD has had 64-bit Linux emulation for ages, for example, but it's not very complete because no one has cared enough to implement more. For example, Illumos is AFAIK the first system to emulate epoll. The Linux API is huge and historically the process has been just fixing stuff for a binary someone wants to run. It is very tedious work...

I don't really see it as zones in Linux. More a gateway drug for non-Linux.


Hey Justin -- do you have insight into how hard any particular remapping (i.e. epoll) is to perform? I was talking to @bcantrill about their effort at a Docker meetup and mentioned the NetBSD emulation (he said "Oh! Of course!"), but what's interesting (in retrospect) is that they (Joyent) just tried running stuff and played whack-a-mole w/ unimplemented APIs... how tough would it be for "us" (NetBSD) to occasionally implement pieces?

edit: parens


It is just tedious and you need motivation. Especially as Linux has a lot of interfaces, many of which are frustratingly annoying - there are three file change notification interfaces, of different dates. In fact there are at least two of everything!

I imagine much of the Joyent code could be easily ported to NetBSD/FreeBSD (which now has a 64-bit interface as of a few months back). epoll may well be the most difficult (it has edge- and level-triggered events and other annoyances). But a not-very-performant version should be doable.

Mostly, few people have been interested. I have a decent test suite though (rump based) so email if you are interested...


Speaking without familiarity with NetBSD, I think it depends on what kernel facilities the system happens to have; speaking for SmartOS/illumos, in many cases we were able to slightly rephrase Linux facilities as extant facilities -- saving a considerable amount of time and effort. For example, the big realization with epoll was just how naive it is -- so much so, in fact, that it actually looks very similar to a pre-port mechanism (/dev/poll) that we developed nearly 20 years ago (!!) and later deprecated in favor of ports. epoll would have been much nastier without /dev/poll -- which is likely the greatest service that /dev/poll has ever provided anyone...


Yes, NetBSD added some facilities (and general missing functions) that were Linux-like if that made sense. No one did epoll as kqueue is a bit of a mismatch and we never had /dev/poll...

A lot of the issue is just testing - NetBSD does not have any in tree tests for compat. I have some out of tree, which help a lot.


Hi Bryan -- I'm also aware that epoll may have been a bad example on my part, because isn't it subject to some nasty fork/share bugs wrt handling the (well) handle, and what file it's actually associated with the handle -- so a parent can get notifications on a handle it doesn't have, or worse, notifications for a socket that it does have that is not really the same handle that's issuing the event.

In cases like that, did you end up trying to be bug-compatible, or make a design decision to clear up the trouble ?

[edit -- spell "Bryan" correctly]


Funny you should mention that one in particular -- from our (SmartOS's) epoll(5) man page:

       While a best effort has been made to mimic the Linux semantics, there
       are some semantics that are too peculiar or ill-conceived to merit
       accommodation.  In particular, the Linux epoll facility will -- by
       design -- continue to generate events for closed file descriptors
       where/when the underlying file description remains open.  For example,
       if one were to fork(2) and subsequently close an actively epoll'd file
       descriptor in the parent, any events generated in the child on the
       implicitly duplicated file descriptor will continue to be delivered to
       the parent -- despite the fact that the parent itself no longer has
       any notion of the file description!  This epoll facility refuses to
       honor these semantics; closing the EPOLL_CTL_ADD'd file descriptor
       will always result in no further events being generated for that
       event description.

So while we do aspire to be bug-compatible, we're not about to compromise our principles over it. More details (or some of them, anyway) can be found in the talk on LX-branded zones that I gave at illumos Day at Surge 2014.[1][2]

[1] http://www.slideshare.net/bcantrill/illumos-lx

[2] https://www.youtube.com/watch?v=TrfD3pC0VSs
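The Linux behavior the man page refuses to honor can be demonstrated in a few lines of Python on a Linux box (a sketch; `select.epoll` is a Linux-only interface, and the pipe/timing details are just for illustration):

```python
import os
import select
import time

# epoll on Linux tracks the open file *description*, not the descriptor.
# So if a parent closes an epoll'd fd while a fork()ed child still holds
# a duplicate, the parent's epoll keeps delivering events for the fd it
# no longer has -- exactly the semantics SmartOS's epoll refuses.
r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)

pid = os.fork()
if pid == 0:
    # Child: inherited duplicates of r and w. Make the pipe readable,
    # then linger so the description stays open while the parent polls.
    os.write(w, b"x")
    time.sleep(1)
    os._exit(0)

# Parent: close its copy of r. The description survives in the child,
# so on Linux the event for the now-closed fd is still delivered.
os.close(r)
events = ep.poll(timeout=2)
print(events)  # on Linux: an EPOLLIN event for the closed fd r
os.waitpid(pid, 0)
```

On SmartOS's LX emulation, per the man page above, closing the `EPOLL_CTL_ADD`'d descriptor would instead stop event delivery.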


Agreed. The application is what matters... in the end most people don't care about what Operating System their apps run on.

I've felt for a long time that with the right tooling, smartOS/Illumos would make an ideal Container OS. Glad to see that they're moving hard in this direction.


+1 on terminal.com. Love it.


Here's the signup link for Joyent's hosted container service: https://www.joyent.com/lp/preview.

We're seeing the initial explosion of the microservices ecosystem right now. I've been spending most of my time over the last few years thinking about trusted decentralized infrastructure and have decided that microservices, including hosted ones, could be one possible solution for instantiating trust as a proper 'knob' of the cloud. This Intracloud, if I'm allowed to use a new buzz term, will be smeared across 100s (1000s?) of datacenters world-wide. Have a high-trust use-case? Run it in a German or Dutch datacenter. Have a high-efficiency use case? Run it on a friend's cluster for free.

I believe in this idea of the 'trust knob' enough that I sought out a job with the folks at https://giantswarm.io/, where I am now a dev evangelist. We are a German hosted microservices stack which provides a Docker platform. Alpha signup is here: https://giantswarm.io/request-invite/. Demo of it in action here: https://github.com/kordless/swarm-ngrok#ngrokn-giant-swarm

I also hacked together a SF Microservices meetup last Friday: http://www.meetup.com/SF-Microservices/. 139 people have already joined. Would like some feedback for content! Planning on mid-April for the first event.

I'm excited to see what happens with this market in the next 6 months!


    > When Docker first rocketed
I thought this was an interesting way to start off the post.


"You can run your Docker containers across entire data centers without ever creating a "cluster" as other IaaS providers would have you do."

This is a little unclear. I get that you don't have to create a "cluster" because you're going to use the API to launch containers on existing dedicated hardware, but how is that a WIN for your finance team? I guess I'm missing something.. but wouldn't you have to already have systems online that you're paying for that these containers can be launched on?

From: https://www.joyent.com/blog/docker-bake-off-aws-vs-joyent


The difference for the operator is: bare metal -> SmartOS -> container vs. bare metal -> hypervisor -> virtual machine -> container.

There's root safety in the SmartOS implementation of Docker, so they can do multi-tenancy.

For the customer, you don't have to provision entire virtual machines to run Docker containers.


Is it multi-tenancy of various Joyent customers on the same hardware? So you get containers placed somewhere in their data center on shared hardware with other customers? Or do you already need to have dedicated hardware provisioned at Joyent to launch containers on? I'm still unclear on that point, which makes me question how it's saving the end user money.


Yes, you are running on shared hardware, securely, but with no hypervisor.

The biggest money-saver is in performance. IO-heavy applications can see anywhere from 5-10x performance improvements[1] by switching from hardware- to OS-virtualization.

[1]: http://dtrace.org/blogs/brendan/2013/01/11/virtualization-pe...


Yes, the containers are running on bare-metal multi-tenant hardware. Some of the savings comes from the performance gains as noted elsewhere, but additional savings comes from not having to pay for and manage an additional layer of virtual machines.

There's no need to provision anything other than the containers.


Very cool. Is the next step integration with Manta so you can specify an environment of Linux executables to map over your data?


Yes, that's definitely the direction that we're headed.

Manta allows users to spin up large numbers of transient containers around their data (i.e., without moving data around) in order to do map-reduce operations, and without having to think about the container management. Today, these compute tasks are specified as scripts that run inside a well-stocked SmartOS container, plus some optional assets to download (for custom binaries or other data). The obvious next step would be to let people specify they want to run an LX zone (rather than a SmartOS zone) so that they can run GNU/Linux binaries. Then we can consider whether it makes sense to phrase the task as a Docker image. That might well be a more natural way to incorporate a larger bunch of binaries and other shared data. Architecturally, all of this should be relatively straightforward (famous last words!).
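Today's flow looks roughly like this with the node-manta CLI tools (a sketch; the storage path is hypothetical):

```shell
# List objects under a Manta directory and run a map phase over each one.
# Each 'wc -l' executes in a transient SmartOS container spun up next to
# the object itself -- no data movement, no container management.
mfind /myuser/stor/logs | mjob create -o -m 'wc -l'
```

The step described above would let `-m` name an LX zone (and eventually perhaps a Docker image) instead of the default SmartOS environment.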


Didn't see the actual repo linked: https://github.com/joyent/sdc-docker

"A Docker Engine for SmartDataCenter, where the whole DC is exposed as a single docker host. The Docker remote API is served from a 'docker' core SDC zone (built from this repo). ... Disclaimer: This is still very much alpha. Use at your own risk!"


To me it seems like Triton is about shoehorning Linux into Solaris/SmartOS containers. While this seems great for accelerating adoption, why not just use SmartOS-specific Dockerfiles/repos in conjunction with sdc-docker (Not directly addressing the parent, more of a general question)? Imagine all of the crazy bugs and edge cases that will arise when trying to run Linux binaries on UNIX.

I'm surely making some assumptions, but the primary languages/developers that something like this targets are probably Java, JavaScript/Node.js, Ruby, Python, C... which are probably all portable enough to run on SmartOS with little or no modification.


Docker multiarch support isn't finished yet, and getting all the world's Dockerfiles rebuilt on SmartOS is a big task.

There are also different philosophies about whether container images should be distributed as source (Dockerfiles) or as binaries. It looks like Joyent decided to run Linux/x86-64 binaries to be on the safe side.


Thanks for the link. I've updated the "current state" notes to better reflect reality. https://github.com/joyent/sdc-docker#current-state


Is there any pricing information for Triton yet?



Thanks for finding that link. That is a very attractive pricing model. I am currently running about 10 VPSes with about 25 containers. The cost of the VPS hosting is affordable, but managing those VPS hosts, deployment, balancing, discovery, and scaling are the hard parts. I'm looking forward to learning more about Triton.


Is there any further work going on to support AMD in the KVM driver? I remember the codebase having sat idle for a few years last time I checked...


While sadly not merged upstream yet, it works very well. I've been running custom SmartOS "eait" builds from http://imgapi.uqcloud.net/builds (which include AMD support) for some time now without any issues. The source can be found at https://github.com/arekinath/smartos-live


This looks like a very nice way to deploy containers. I see the per hour prices (https://www.joyent.com/blog/expanded-container-service-previ...) but is there any pricing on bandwidth? I imagine there will be eventually, right?


I'd suggest looking at this page as an indicator (https://www.joyent.com/public-cloud/pricing -- Bandwidth tab): all inbound traffic free, outbound is 12 cents/GB after the first GB, cheaper at higher tiers.
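As a toy estimate of what that rate structure means (assuming a flat $0.12/GB beyond the first free GB; real pricing is tiered and cheaper at volume, per the page above):

```python
def outbound_cost(gb_out: float, free_gb: float = 1.0, rate: float = 0.12) -> float:
    """Toy outbound-bandwidth cost: flat rate per GB after a free allowance.

    Assumes a single flat tier; the actual pricing page lists cheaper
    rates at higher volumes, so treat this as an upper-bound sketch.
    """
    return max(gb_out - free_gb, 0.0) * rate

print(outbound_cost(101))   # 100 billable GB at $0.12 -> 12.0
print(outbound_cost(0.5))   # under the free GB -> 0.0
```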


I feel like perhaps I'm the only one concerned about this: (Somehow, “SmartOS + LX-branded zones + SmartDataCenter + sdc-portolan + sdc-docker” was a bit of a mouthful.)

Not only is it a mouthful, it means there is a much wider space for things to go wrong.


Compared to say "hypervisor + Linux + kubernetes + weave + etcd + lxc-docker" ...? Not to mention all the pieces that work together for lxc-docker.


Excited to see this. Congrats to the team on the release!


Nice!


I saw the name and my desire to read it plummeted. The same person wrote http://www.joyent.com/blog/the-power-of-a-pronoun which led to one of the most talented developers of node core and libuv leaving the project for a while.


That "most talented developers of node core and libuv" took time out of his day to revert a commit to take a gender neutral pronoun and make it back into a male one. Talented people can be shitbirds also you know.


Just because someone is "one of the most talented developers" doesn't mean that it's good for them to be part of the community.

One of the best things that ever happened to glibc was Ulrich Drepper, far and away glibc's most talented developer, leaving. glibc is a better project for having multiple good people run it instead of one, brilliant, impossible-to-work-with developer in charge.


What relevance does that have to this? It's sad that your post is even being upvoted.


The power of a pronoun is a great article, if anything him writing it should be a plus.

The only way it is a negative is if you feel gendered pronouns don't further sexism, but I don't see how you could support that argument.


Does Joyent refuse to hire speakers of Spanish, French, Italian, Hindi, or any of the other dozens of languages that have gender throughout the entire grammar?


You are missing the entire point. It is one thing to not allow gendered pronouns (what you are suggesting) versus punishing someone who specifically set out to change gender-neutral pronouns into male ones (the Joyent case). REAL BIG difference.


Rolling back a commit is punishing, but saying you'd fire someone who did so (if they actually worked for you) isn't?

That's just not reasonable. Going straight from "teaching moment" to pitch-forks is not going to help anyone accomplish anything productive.


Quoting Bryan[1]:

> It's not that he rejected the pull request...it's that when he was overruled by Isaac some hours later, he unilaterally reverted Isaac's commit. (And, it must be said, sent a very nasty private note to make clear that this was no accident.)

Rolling back the commit or disagreeing is and was not the issue. It was his attitude and behavior after reverting the commit.

[1]: https://news.ycombinator.com/item?id=9041086


Where's that note? I thought it was all public.

Either way, it's still just an issue of someone being stubborn about a PR that neither helped nor hindered the actual code.

If that's a fireable offense in your company, your HR processes are broken. Especially with no management in place to set any expectations on the topic.


Wow, bloody hell - hadn't read that before.


Me either. Wondering how I missed all that drama but not particularly sad I did.


The fact that he wrote it actually wins him points in my book but it's not relevant to this discussion. Kudos to the Joyent team, i've been curious about dtrace for ages but never looked into it thinking i couldn't use it anyway (only use debian based distros at work) - maybe now is the time :)


This is running Linux binaries on Oracle Solaris (which they call SmartOS), and not the real thing that Oracle is still developing, but a fork from 2010 that is maintained by 1/10th of the developers it once had. https://www.openhub.net/p/illumos.

This would have been pretty rad 10 years ago, when the world still cared about Solaris.

On a minor note: the post doesn't credit Oracle or Solaris, from which more than 90% of their SmartOS code came before Oracle closed their code in 2010.


This is cool RIGHT NOW because WHO THE HELL CARES WHAT THE UNDERLYING OS IS? For a user trying to bring something to market, here's what matters:

1) it's cheaper (theoretically, due to performance savings from not dealing with hardware virtualization), and 2) it looks like a docker host

I honestly expected Docker Hub to expand into this kind of operation before anyone else, but Joyent, from out of nowhere, has done it instead. Why should anybody care that they're doing it with a Solaris-based kernel, Linux, on NetBSD, as MAME machines, VIC-20 cartridges, or with 1402 machine images and druidic spells?

To the user, it's just a magical world where your Docker host always expands to whatever size you need it to (and you of course get a bill for the usage).

The real test will be whether or not this is cost-competitive with deploying Docker swarms on other public cloud providers. If the performance and tenancy numbers are as claimed (or even the same order of magnitude, which seems reasonable), and they're not insanely greedy with pricing, this could be the biggest tech since Linux itself.


I was wondering how this would compete with docker swarm. Is my understanding right that docker swarm basically lets you run an agent on individual docker hosts whereas triton will let you run a single virtual docker host?


Docker swarm basically lets you say "I want to deploy this image somewhere, you choose from the available actual machines" based on constraints that you've set up. You also have to actually set up the machines themselves (though docker-machine is intended to take some of the pain out of this).

sdc-docker (the tech behind this, Triton) basically lets you just keep deploying containers without any regard for the actual underlying machines (or EC2 instances, DO droplets, Linodes) that you are actually deploying to. It's like docker-machine + docker-swarm but all automatic and managed by guys on the other side of the curtain, so you can just write the checks and keep the Docker goodness flowing.
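Concretely, the contrast shows up in how you point the Docker client at things. A hedged sketch (the hostnames, the swarm setup, and the Triton endpoint are all hypothetical; sdc-docker's actual setup steps may differ):

```shell
# Classic Docker Swarm (circa 2015): you provision the hosts yourself,
# join them into a swarm, then schedule with constraint filters.
docker -H tcp://swarm-manager.example.com:2375 \
    run -d -e constraint:storage==ssd nginx

# sdc-docker/Triton: point the stock Docker client at the datacenter's
# single virtual Docker host; there are no individual nodes to pick.
export DOCKER_HOST=tcp://docker.example.joyent.com:2376
export DOCKER_TLS_VERIFY=1
docker run -d nginx    # placement is handled on the other side of the curtain
```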


>sdc-docker (the tech behind this, Triton) basically lets you just keep deploying containers without any regard for the actual underlying machines (or EC2 instances, DO droplets, Linodes) that you are actually deploying to. It's like docker-machine + docker-swarm but all automatic and managed by guys on the other side of the curtain, so you can just write the checks and keep the Docker goodness flowing.

Not sure I get this -- is this any different from Mesos/Kubernetes? If I am running this software and deploying my own datacenter, someone still needs to provision the machines.


Actually, because the container runs on bare metal, there's no need to provision virtual machines before running containers.

Yes, the data center operator needs to install the hardware compute nodes, but after that there's no additional step to provision anything else but the containers themselves.


I'm still fuzzy on this also, but it sounded like it would fill a very similar role as kubernetes except that they are claiming superior performance.


You've got it. This is a complete stack (sdc + lx-brand, sdc-docker, vxlan/portolan) fully benefiting from Illumos's architected OS virtualization.


That's because Oracle deserves no credit. The illumos project was created as a derivative of OpenSolaris, which was created by Sun. The lineage is actually sort of mind-boggling. If we want to be pedantic, we could complain that nobody is giving credit to BSD.


And Bell Labs.


The author worked for the Solaris team in Sun back when it was actually Sun and back when Solaris was actually being developed. Among other things, he was behind the invention of DTrace. He left the company shortly after the Oracle acquisition, after working for Sun for 14 years.

I sorta feel like that gives him the right to credit or not credit Sun (and especially Oracle) as he sees fit.


Solaris is actually still being developed; I have no idea why you think otherwise. If anything, Oracle's been on a hiring spree since the acquisition and has grown the systems business considerably compared to Sun.

If you think Solaris isn't still in development, just wait for the Solaris 12 announcement details.


I meant that with a bit of snark, which might have been completely misplaced. I did realize after writing that that Solaris is still under development. But most, if not all, of the Solaris that SmartOS incorporates (and DTrace and zones, specifically) was developed in the past at Sun, which is what 'redwood631 was referring to. That's also the only Solaris that I use personally (via OpenIndiana).

I will keep an eye out for Solaris 12 though, but I probably won't be using it; I have enough closed-source OSes in my life already. :(


Companies don't innovate, people do.

All of the technologies that illumos (and Solaris) is now known for (ZFS, DTrace, Crossbow, zones, RBAC, SMF, FMA) were developed by small teams of engineers taking charge. And nearly all of the developers of those technologies are now in the illumos community, and notably not at Oracle.

Specifically, DTrace is primarily the brainchild of Bryan Cantrill of Joyent (co-developed with Adam Leventhal of Delphix and Mike Shapiro, who left Oracle in 2010), and Jerry Jelinek of Joyent was one of the primary developers of zones. Matt Ahrens and George Wilson, co-developers of ZFS along with Jeff Bonwick, are now at Delphix with Adam. Though, as far as I know, Jeff is no longer involved with ZFS development.

All major features of Solaris 11 were developed in OpenSolaris. Solaris 11.1's major features were improvements to SMF, the installer, and the addition of ASLR. The major features in Solaris 11.2 are "kernel zones", OpenStack, and SDN [1]. That's very little to show for five years of development.

[1]: http://en.wikipedia.org/wiki/Solaris_(operating_system)


Your response seems to imply that there aren't still teams working on Solaris that can deliver large new pieces of functionality.

You also seem to be unaware that many of the "major features" you talk about in past releases were designed and implemented by people that are still working on Solaris today and sometimes in those same areas.

Also, I have no idea why you place kernel zones in quotes -- kernel zones was actually a massive project. Not a simple variation of existing zones technology. It provides true virtualisation of Solaris on Solaris with minimal overhead compared to alternatives.

Solaris has lots to show for five years of development if you understand the engineering effort required, and even more is coming -- just wait until 11.3 and Solaris 12. You'll see things from Solaris you never expected. It's a bit insulting to imply that bringing things like OpenStack to Solaris wasn't a significant effort. Many of these technologies are Linux-centric and required significant engineering effort from an architectural and technical perspective to provide an integrated solution.

Solaris also has interfaces and functionality that are not available in other Solaris-based distributions, especially in upcoming releases.


And illumos has interfaces and functionality not available in Oracle Solaris. That's Oracle's decision, not ours.

I didn't mean to imply that there aren't talented and smart engineers working on Solaris at Oracle. I am, however, underwhelmed by 11.1 and 11.2, which I see as a management problem, not an engineering one. But the point I was making is that when the illumos community talks about DTrace, zones, ZFS, etc., you can't discount that and say "no, that was Sun", because the people who were the primary developers of those technologies are now with illumos. Saying that Bryan, Adam and Mike can't take credit for DTrace is just silly.


> Solaris has lots to show for five years of development if you understand the engineering effort required and even more is coming -- just wait until 11.3 and Solaris 12. You'll see things from Solaris you never expected. It's bit insulting to imply that bringing things like OpenStack to Solaris wasn't a significant effort. Many of these technologies are Linux-centric and required significant engineering effort from an architectural and technical perspective to provide an integrated solution.

I think that I can hold the following opinions simultaneously without contradiction:

- It's an impressive amount of technical work.

- It's not really impressive technical work, per se: OpenStack already works well on Linux, whereas DTrace, zones, ZFS, etc. were and still are innovative. This makes them categorically different. (I admit that my use of the phrase "still being developed" was definitely wrong, but I was replying in the context of the article and of SmartOS in general.)

- If Solaris were still free software, there may well be interest in porting OpenStack to Solaris from anyone other than Solaris engineering. Which, unfortunately, means I'm less inclined to take the hiring rampup positively: I now wonder how much of that work could have been done in the community.

- It's cool for your customers that you're doing this. (I admit I don't understand why someone would be a Solaris customer for any use case other than running other Oracle software, but that's not really relevant; there are quite a few customers, whether or not I understand them.)

- It's not really relevant for people who aren't your customers. Even keeping Solaris closed-source, it's still possible to deliver innovative features. OpenStack isn't one.

This may be less true for other features, but it's why I look at Solaris' current marketing, which is heavily touting OpenStack, and it doesn't cause me to be impressed with Solaris' pace of innovation.

I'm having trouble figuring out what kernel zones are. (Which might be part of the reason that they're not getting the respect they may deserve in general, or why 'bahamat put the term in quotation marks, specifically.) It sounds like... KVM or lguest (both using virtio), but plugged into the zone framework and management tools? If so, then again it'd certainly be an impressive amount of work, but less-than-impressive work compared to Solaris' past glory (DTrace, zones, ZFS, etc., none of which had even remotely comparable features on other OSes for quite a while after their invention by the Solaris team). And since SmartOS has had KVM anyway since its inception, I'm curious how kernel zones in fact stack up.
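For what it's worth, Oracle's docs suggest kernel zones do plug into the existing zone tooling: they're just a zone with the solaris-kz brand, managed with the same zonecfg/zoneadm commands as native zones. A rough sketch of the documented workflow (zone name "kzone1" is my own placeholder):

```shell
# Configure a kernel zone from the stock kernel-zone template
zonecfg -z kzone1 create -t SYSsolaris-kz

# Install and boot it like any other zone
zoneadm -z kzone1 install
zoneadm -z kzone1 boot

# Log in to the zone's console
zlogin -C kzone1
```

So the management surface is the familiar zones one; the difference is that the zone runs its own kernel, which is what distinguishes it from a traditional (shared-kernel) zone.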

To be fair, I also work in enterprise software and specifically in systems/OS stuff, and I spend a good chunk of my time doing hard, low-level systems work that's cool for my customers, not really relevant to anyone else, and very rarely innovative in a global sense. I'm reasonably happy with what I do, but I'm also okay with the fact that nobody outside my management or our press releases will ever call 90+% of my work innovative, even though I put a lot of high-quality work into our product. A lot of enterprise software work is making a great product for people who aren't using a different, also-great product because of unrelated reasons. It's an honest and fun way to make a living, but we shouldn't call it more than it is.


> until Oracle closed their code in 2010

At that point, why credit Oracle instead of Sun? Oracle finalized their deal to buy Sun in January 2010. How much did Oracle contribute between then and the announcement that they were closing their code?

Edit: Oracle announced closing OpenSolaris on August 13, 2010. The last "release" of OpenSolaris was June 2009.


Oracle never publicly announced it. The last commit that I see copyrighted by Oracle is https://github.com/illumos/illumos-gate/commit/ea10ff14a02f7... on August 18, 2010. After that, updates from Oracle silently stop.

But your point stands well. Oracle only had a paltry 7-8 months of contributions.

In my opinion (and I really don't know who would share this view), since Sun opened Solaris, illumos is now what Solaris was intended to be: it's not the illumos community that forked Solaris; it was Oracle.



