IncludeOS: a minimal unikernel operating system for C++ services (includeos.org)
380 points by lelf 25 days ago | 181 comments



The description on GitHub is much clearer:

> IncludeOS is a minimal unikernel operating system for C++ services running in the cloud and on real hardware.
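For flavor, here is roughly what "a C++ service" means in practice: a minimal sketch along the lines of the project's hello-world demo (the exact entry point and build glue vary between IncludeOS versions, so treat the details as assumptions):

    #include <os>      // pulls in the unikernel instead of a libc-on-Linux
    #include <cstdio>

    int main() {
        // This binary *is* the bootable image: no init, no shell, no other
        // processes. The VM boots straight into this function.
        printf("Hello world\n");
    }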


This is quite helpful, thanks. What would prompt someone to go this way, I wonder? There are some pretty minimal OS options out there these days.


The first example is of a network service, which is what came to mind first for me. Traditionally, the kernel logic handling sending/receiving the packets and the app run in different security contexts, and there is a cost to hop back and forth. There has been a large amount of work over the years to optimize the data exchange for this use case, but this eliminates the problem entirely (while also being more minimal than any minimal conventional OS could hope to be).
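To make that hop concrete, here is a minimal sketch of a conventional userspace echo service; every marked call is a syscall, i.e. a user/kernel crossing that an in-image network stack avoids (error handling omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);   // syscall
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(srv, (sockaddr*)&addr, sizeof(addr));   // syscall
        listen(srv, 16);                             // syscall
        for (;;) {
            int c = accept(srv, nullptr, nullptr);   // syscall, blocks
            char buf[4096];
            ssize_t n;
            while ((n = read(c, buf, sizeof buf)) > 0)  // syscall per read
                write(c, buf, n);                       // syscall per write
            close(c);                                   // syscall
        }
    }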


You can also come up with more benefits by thinking along these lines. For example: the kernel doesn't need to keep track of which packets go to which process, because there is only one process (the kernel itself). There could be a performance benefit thanks to this.

> more minimal

Which carries other benefits: boot time, memory overhead, etc. You could probably treat IncludeOS VMs like containers.


It’s mainly for use in stronger isolation (i.e. VMs instead of just containers). In a container, the kernel is already up and the application just has to start. In a VM, that’s not the case. By making the application “the kernel”, very fast startup times are possible.


See the recent ccc talk on the co2 footprint of services.


And how much of that is from the OS? It seems ridiculous to optimize something that is already so optimized rather than just optimizing your services. Maybe you don't need a dozen VMs and containers to do the job of one server. Maybe you should use efficient algorithms and tools.

If you're using runtime languages on the backend (node, python) then you're already failing the environment. An efficient compiled language can perform the same functions much more efficiently.


> If you're using runtime languages on the backend (node, python) then you're already failing the environment.

Not necessarily. If you make a living running a typical CRUD software-as-a-service offering, your code may really just be glue mapping HTTP requests to database requests. There, your choice of language doesn't count for much; your code isn't doing heavy lifting, the DBMS is.

It's quite possible your hardware requirements could be lower by writing in Python and tuning your database, rather than spending the same time writing non-performance-critical code in C++.

Frameworks like Tornado, for Python, are able to handle highly concurrent workloads quite efficiently, despite using a single-threaded, interpreted language.

For computationally intensive code, sure, a language like Python will need far more hardware horsepower to get the job done than C++ would.


>If you're using runtime languages on the backend (node, python) then you're already failing the environment. An efficient compiled language can perform the same functions much more efficiently.

This post is about an optimized environment for C++ services though - not node or python services.


Less code, smaller attack surface.


I can’t believe this is a thing. Are software engineers in the “ban straws” phase of global warming now?


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


As the scale of data infrastructure rapidly grows, the carbon footprint of that infrastructure grows as well. It is already non-trivial. Consequently, the widespread use of excessively wasteful software implementations that require several-fold the hardware infrastructure of a more efficient design are becoming material contributors to total carbon emissions, more so than many things we focus on for the sake of climate change.

Unlike some other methods for reducing carbon emissions, which require subsidies to be competitive, massively reducing server infrastructure footprint often improves the absolute economics through radical reductions in OpEx/CapEx for data intensive businesses.


I'm skeptical that you couldn't get similar reductions in carbon footprint through investing the money you gain by delivering early on renewable energy infrastructure and carbon sequestering charities. Of course, how that cost-benefit analysis works out depends entirely on required engineering hours and the growth of your business, which are both notoriously difficult to predict.

That said, if your goal is reducing expenditures, reducing your carbon footprint is a great cherry on top.


A better question is: why not? Is it really so inconceivable to try to apply energy conservation to computing as well?


Just as importantly, energy use loosely correlates with resource use => cost. If we save the planet, great, but it's also good for our bottom line.


[flagged]


You're sliding down the slippery slope argument you've created. No one said anything about laws.

namuol 25 days ago [flagged]

What matters is that you found a way to sound smarter without doing anything.


Please don't reply to a bad comment with another. That only makes the thread worse. Especially please don't cross into personal attack.

https://news.ycombinator.com/newsguidelines.html


I think people confuse two important things:

1. Energy consumption of the lower stack versus the total of a single implementation (could possibly be very low)

2. Energy consumption of the lower stack multiplied by everywhere it is deployed

1. Yes, optimizing to lower this consumption as an app developer can definitely be an ill-advised endeavor economically (you're saving 0.1-10% of your cost by adding a huge investment of time, possibly larger than the application development itself)

2. Optimizing across all deployments can definitely move the needle, in a very significant way, relative to the effort required, since you're automatically deployed in 1,000-1M places.

It's the same reason library developers (especially of system, language, and heavily-used libraries) have a very good economic (and therefore ecological) reason to optimize. Competition here is a huge benefit to everyone, including the environment.


I was still trying to figure out what the co2 spelling meant when I read this light grey text.

If you want to see what real computing energy looks like, look at Bitcoin.


The difference is between a clean slate approach vs. a retrofit adaptation.

Technically speaking, a clean-slate solution always offers better quality for the targeted usage scenario.

The problem, however, is how much users would be willing to pay for the difference in quality.


OK, we changed the title to that from "IncludeOS – Run your application with zero overhead".


The industry is certainly moving towards less and less stuff included by default in a production environment. What used to be a full install of Debian with your app checked out in /var/www is now a Docker container containing only the packages you need. The even more careful people don't even use Debian, they use a stripped down distribution like Alpine. The even more careful people don't have a distribution, they just have their application and the one or two support files it needs (ca-certs usually). It is intriguing to apply this approach to OS features itself, because less dependence on the OS means that more of the kernel can be safely shared. And, of course, there is less surprise at something that is enabled by default that you don't need. (I feel like 99% of security is disabling stupid things that are on by default. With Linux, you never know if you got them all.)

Being able to share the kernel safely means potentially better utilization. (The progression is dedicated machines -> hypervisor-based VMs -> multiple customers sharing the Linux kernel. gVisor exists for that last step, but I don't see anyone willing to sell me the ability to run my random container next to their mission-critical security-sensitive app, whereas people are perfectly happy to share CPUs with me if there is a hypervisor in the middle. But we're certainly moving in that direction.)

It is also intriguing to imagine a world that moves the other direction; just have a giant computer with thousands of power-efficient CPUs in it and no virtualization. When a request comes in to your app, your CPU is just powered on, your app boots in a millisecond, handles the request, and shuts down. I am not sure even the big cloud providers have solved the time-of-day utilization problems; being able to turn most of the computers off at night has some potential to save money. (I would turn my workstation off at night if it booted in 1 millisecond in the morning, for example. I can also imagine some power savings for mobile devices if they could go hard off for 59 seconds out of every minute.)


"Safely" sharing a Linux kernel?

There's no reason to believe Spectre and Meltdown were a one time thing, especially since the response has been to apply ad-hoc fixes just good enough to break the proofs-of-concept. On the contrary, they demonstrate that such vulnerabilities are possible and we should expect to see them again.

Even barring CPU bugs, the kernel is one giant attack surface, all written in C, all running with full ring 0 privileges. Find ONE hole to pwn.


It's not mainstream yet but it's the direction people want to go. Like I said, making the OS smaller means less attack surface, so it's certainly something worth investigating. But a lot of work has been done to make Linux safe enough, so that is a likely direction for the mainstream to go.


Using containers/Docker doesn't reduce anything at all, neither from an energy-efficiency nor a bloat PoV, since what's in the container comes on top of a full OS install. Moreover, Docker almost always runs on k8s (or Mesos), with its multiple networked re-implementations of base infrastructure such as DNS, NTP, schedulers, etc. I think Docker and "microservices", with their heavy network use, can't be counted as a win for energy efficiency; at best they have been portrayed as benefiting developer convenience.


Suppose you have ten microservices. Running them in containers would surely be more energy efficient than in full blown VMs or dedicated machines?


Then don't use ten "microservices" to accomplish a single task?


Hurr durr. I fully agree that the microservice architecture is abused and overused to all hell, but it is still valid in a lot of cases. It is a separate discussion, and I believe you know that.


What about using ten "microservices" to accomplish ten different tasks?


> I am not sure even the big cloud providers have solved the time-of-day utilization problems

Do AWS spot prices fall at night? That would be one way of increasing utilization - assuming, of course, that there’s sufficient demand at any viable price. There must only be so many jobs people are willing to run unobserved overnight in order to save money.

Or perhaps AWS (and others) are actually powering systems up and down to meet demand? That makes a lot more sense, I suppose.


This looks great, but I'm wary of projects that sound this great but don't include a "downsides" section. I'd love to know what the tradeoffs are, so I don't have to spend time discovering them myself. Please consider adding a section like that.


There was a damning security paper (not peer-reviewed) about IncludeOS, a few months ago. It turned up on HN: https://news.ycombinator.com/item?id=19738905

I don't know whether things have improved since then.


They link to one:

> A longer list of features and limitations can be found on our documentation site.

But unfortunately it looks like the linked page doesn't actually talk about limitations :) Unless you consider a warning about hlt-ing yourself in the foot a list of limitations I guess.


Some downsides then:

- You can't just port anything to IncludeOS. Since you have direct hardware control and can schedule work directly onto numbered CPUs, you can take advantage of things, but if you don't do that, then you are just living in a limited C++ environment.

- The network stack hasn't been built with multiprocessing in mind, and some parts need to be rebuilt because of that.

- There are code paths that can lead to C++ exceptions, such as running out of memory, which kills performance.

- There is a paging and memory-allocation system in the OS which doesn't have to be there (for example, paging can be "burned in" to the image for the most part, leaving room only for adding pages for stacks, and memory can be handled by donating everything to the C standard library) - the goal being to keep the attack surface small.

- Debugging is hard - you will have to connect GDB while pausing somewhere in the OS.

This is really just me thinking out loud. Add salt.


It would be interesting to run a NodeJS clone or QuickJS (https://bellard.org/quickjs/) on/with this.


This is why I love HN, I never knew about QuickJS. So yeah, that and includeOS...dude...!


Duktape is also cool: https://duktape.org/


Googled it in the morning and got this: http://runtimejs.org/

Also a big collection of info on unikernels: https://github.com/cetic/unikernels


So basically you now run your app in ring 0, with none of the security mitigations the community developed in the past ten years, and force everybody to code in C/C++. Is it just me or does that sound like a disaster waiting to happen?


It’s a single process running inside a sandboxed VM. That process might not have code written for reading files from disk (let alone writing them) or a full TCP/IP stack (let alone the ability to forward SSH sessions), and it certainly wouldn’t have any capability of launching a remote shell. So you’re not going to be susceptible to the vast majority of RCE vulnerabilities that happen with even a minimal GNU/Linux port. Even if sloppy C++ coding did make you vulnerable, you’re still stuck inside the VM with literally nothing available aside from that process.

The bigger risk is DDoS attacks, but that’s a risk when writing sloppy code in any language, runtime and hosting environment.


What security systems do you need if it's the only thing running on the computer/vm/container?


All of them if any external party has access to that "only thing running".


Do you really need something like SELinux if the “only thing running” is a network service that doesn’t have access to a file system? Even if that service was compromised, what else could be gained? Especially if the unikernel doesn’t spawn other processes.

Am I missing something obvious?


Nope, you are right on the money. It’s just new and different, and people don’t like new and different.


Not being able to spawn other processes is actually a huge security and performance benefit when you know you're deploying to a virtualized environment to begin with. (eg: cloud).


No, not missing anything. Trendiness wars with actual benefit.


"the computer industry is more fashionable than the women's fashion industry" - Stallman


But that's the same either way, isn't it? If an attacker compromises your application, they own the app. It doesn't matter if the app is under an OS or a VM.


Most (all?) attackers don't care about apps. They want access to your server to do other things. That could be downloading a Monero miner or doing something as simple as mysqldump. Either way, that's another program, which isn't possible in a single-process world such as a unikernel.


On the other hand this is a single application running in its own VM, so it also makes isolation easier/lighter.


I don't actually have a lot of experience/knowledge about this, but here goes.

Two ways to address those concerns:

- Your app runs on bare metal and security is implemented at the network layer. For example, something external to the app handles authentication, and you have something in front of the bare-metal box examining and approving requests (like other apps).

- Your app runs in a VM and the hypervisor handles security.

The local user-based security model has been insufficient on the server end of things ever since widely accessible public clusters of HTTP servers with load balancers became the norm. Of course you want to protect the local server, but we're much more interested in authenticating and applying security beyond the boundaries of any individual OS. There is LDAP and there are numerous standards, but they require implementation beyond the OS: your own, or reliance on a third party like Google or Microsoft. If you have to reimplement this with microservices anyway, then who cares about the local OS?

A microservice that is only spoken to via HTTP GET or POST requests could get away with removing a lot of what's in a traditional OS: shells, TTYs, console support, all drivers unrelated to CPU features or networking (no SCSI, floppy, etc.), everything unrelated to sending/receiving data through a NIC (no xtables support, no NAT/routing, no weird TCP options no one uses, no weird protocols no one uses, no conntrack), no dnotify/inotify support, and maybe even go so far as to remove the code behind syscalls that the app doesn't actually use. No TTY support means no one's logging in, so who cares about users at this point?

A non-microservice app, like CGI scripts invoked via Apache, is probably a poor choice for unikernel, but a front-end caching webserver may not be.


There is no such language as C/C++.

The alternative for people using this approach is running in a kernel module, or a kernel-bypass setup with root permissions. Grown-up stuff. Eliminating most of the kernel, and using C++ rather than C, is a very substantial improvement over the status quo.


> There is no such language as C/C++.

This is a really weird hill to die on; I see you've made the same comment several times in this thread. C and C++ fit hand in glove, and lots of folks use them in conjunction. I don't think anybody here has the misconception that C/C++ is a single language, unless perhaps they don't know either.


The point is that the practices that almost unavoidably lead to bugs in C code are not even tempting in modern C++. So, if you are interested in correctness and memory safety, stepping to modern C++ provides more value for time invested than the sum of most other choices.

Equating the two is extremely common, even by people who are certainly equipped to know better. When you feel a need to tar C++ as if it were C, you signal the weakness of your argument before you have even expressed it.


"Modern C++" is a term that mostly just means "C++ sans the C parts". So yeah, it's different than C.


> you signal the weakness of your argument before you have even expressed it.

Please turn down your flamethrower, that style of conversation is not welcome here.


Writing "C/C++", in contexts where it is implied they have the same failure modes, is provocative. Don't like response, don't provoke.

It is fine when talking about ABI or object-code generation.


C/C++ means either C or C++. Most compilers allow your project to contain both C and C++ source code at the same time.
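For instance, one header can serve both sides of such a project; a small sketch (file names hypothetical):

    /* add.h - shared by C and C++ translation units (hypothetical example) */
    #ifdef __cplusplus
    extern "C" {        /* C linkage when compiled as C++ */
    #endif

    int add(int a, int b);   /* implemented in add.c, callable from main.cpp */

    #ifdef __cplusplus
    }
    #endif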


Or Rust and C. Rust calls C libraries easily. So, is it Rust, or is it C/Rust? You choose.

The failure modes of Rust code are very different from C's, but calling C libraries, as is usually a practical necessity in production, exposes you to all of them.

Switching to modern C++ nets 90% of the benefit (plus other benefits Rust lacks, and may always lack) for 10% of the cost and 0% of the hump.

So, is it C/Rust and C/C++, or Rust and C++?


Obviously it's Rust/C/C++/Python/Ruby/Java/Kotlin/Scala/JavaScript/TypeScript/Perl/Haskell/OCaml/Lisp/Racket

The "/" symbol is an infinitely extensible shorthand for a union.


Well, this reminds me of "premature optimisation is the root of all evil". This is not an optimisation, but it has tripped up many parts.


This is an old concept in the HPC world; most ancient supercomputers had the ability to compile compute tasks into a unikernel and spawn them across compute nodes. A few years ago the concept of Xen-based unikernels circulated on the Xen mailing lists, but it looks like there was little interest in the community. IncludeOS can be a “fat lambda” for those who want to control everything, but how many such tasks do we have?



Would it be possible to build a C/C++ app with #include <os> at the top, compile it to a fully bootable image, and then boot a real server using that?


They claim that it works on real x86 hardware.


"A C/C++ app ...". No. There is no such language.

What you are describing is an embedded system, which is done very commonly. This thing is effectively running like an embedded system within a larger system.


I'm really surprised this isn't in Rust, considering security is a priority. However, I think something like this will become increasingly common for high-performance cloud applications. Many programs do not need a full operating system with process isolation and hardware drivers if they're going to run alone on a VM anyway.


> I'm really surprised this isn't in Rust considering security is a priority.

The first commits to IncludeOS were 2014, when Rust was in a high state of flux; before the 1.0 release. So I'm certainly not surprised given the historical context of the two projects.

Rust has some cool features, but believe it or not, C++ is still a popular language and people continue to develop projects that are older than Rust. So even if you're of the opinion that C++ is entirely deprecated by Rust, we're not going to stop the entire world and rewrite every C++ project in Rust.

But good news! You can run your Rust apps in a Rust-based unikernel OS [1]. Since these OSs are meant to run in containers, the ecosystem is big enough for both and we don't need such shedpainting conversations.

[1] https://hermitcore.org/


I'm really surprised this isn't in SPARK considering security is a priority. Rust's type system still isn't capable of formal verification of lack of run-time errors... You know, considering security is a priority.


The most urgent goal for this effort is usefulness. Geek trend box-checking has to come second.


Rust is not the right choice because it's trendy, but because it provides memory and thread safety properties that are hard to enforce in C++. For systems software the advantages are huge.


Rust is also years away from maturity (which it probably will achieve, on schedule). Systems built today need to work with tooling that is mature today.

Memory and thread safety properties can be achieved in C++ by operating at a higher level than C with mature, well-debugged, well-optimized libraries. No pointers means no pointer errors.

Integer overflows are equally possible in Rust and C++. Both offer debug build modes that can watch for those.
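To illustrate the style I mean (as a sketch of the claim, not a proof of it): ownership and bounds live in library types, so raw pointers never appear in application code:

    #include <memory>
    #include <string>
    #include <vector>

    struct Connection {
        std::string peer;   // owns its storage; freed automatically
    };

    int main() {
        std::vector<std::unique_ptr<Connection>> conns;   // explicit ownership
        conns.push_back(std::make_unique<Connection>());  // no naked new/delete
        for (const auto& c : conns)        // range-for: no index arithmetic
            c->peer = "10.0.0.1";          // no malloc/strcpy anywhere
    }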


> No pointers means no pointer errors.

Use after free can be perfectly well expressed with C++ references, or invalidated iterators to standard library containers.

> Integer overflows are equally possible in Rust and C++.

In C++, unsigned integer overflows can often be leveraged into out-of-bounds access; in safe Rust, they cannot.
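Both failure modes fit in a few lines; a sketch (the undefined behavior is the point):

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int& first = v[0];     // reference into the vector's heap buffer
        v.push_back(4);        // may reallocate; 'first' now dangles
        int x = first;         // use-after-free with no pointer in sight

        unsigned a = 0, b = 1;
        int y = v[a - b];      // 0u - 1u wraps to 4294967295: out-of-bounds read

        // Safe Rust rejects the first pattern at compile time (borrow checker)
        // and turns the second into a panic (bounds-checked indexing).
        return x + y;
    }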


Use-after-free can be expressed in C++, but there is no temptation to do it.


> Memory and thread safety properties can be achieved in C++ by operating at a higher level than C with mature, well-debugged, well-optimized libraries. No pointers means no pointer errors.

Eh, not to the extent that Rust does. It’s still reasonably easy to get use-after-frees by using a value that has been moved or destroyed.

> Integer overflows are equally possible in Rust and C++. Both offer debug build modes that can watch for those.

In Rust the behavior is defined regardless of signedness.


Defined is not the same as correct.


But undefined is the same as incorrect. Any signed overflow in your C or C++ program is a place where you’ve opened yourself up to some quite problematic consequences.


Yes, just as in Rust. Pretend correctness does not become correctness just because it is not UB.

I have the same discussion with people who insist unsigned types in C++ or C are safer than signed types because of the UB boogeyman. They only demonstrate their own limited understanding.


So where is it being useful? A quick tour around the website was not enlightening in this regard.


It is mostly used in bespoke, proprietary projects that have extreme performance requirements, and specialized deployments. Not your garden-variety portable app. So, not typically visible. But 99+% of production software in use is more like that than what you can find online.


Is this project still active? I don't think it is any more.


> Sorry, but no. There are things happening here and there, for example pthreads just got merged, but otherwise development is very slow. There are plans to run on RPI4 soon (TM), if that's of any interest, otherwise the general development is low-frequent.

https://github.com/includeos/IncludeOS/issues/2183


The Pi 4 would be cool. I've always wanted to mess with a unikernel but they either seemed to be dead, barely started, or written in an unusual choice of language.

Anyone else have a unikernel recommendation besides IncludeOS + rpi 4?


If you are interested in Rust, there's HermitCore and you can run them on the Raspberry Pi via Firecracker.

https://hermitcore.org/


What caught my eye was the openness for vulnerability disclosures. Right on the front page. Good stuff


I feel like most commenters in this thread didn’t read this page, which explains the goals of the project in far more detail:

http://www.includeos.org/technology.html


It's like sticking the program floppy into the Apple II disk drive and powering up. Or is that too simple an analogy?


I like that analogy. I guess these images are more single-purpose minded, and often will do things read-only. Perfect for immutable infrastructure, but it's not a requirement.


A viable alternative that supports more languages is OSv: https://github.com/cloudius-systems/osv


I think the biggest bummer is the lack of threads. Everything seems to be running in a single loop:

> Node.js-style callback-based programming - everything happens in one efficient thread with no I/O blocking or unnecessary guest-side context switching.

Unless I'm missing something.
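For anyone unfamiliar with the style, here is a hypothetical sketch of what callback-based service code looks like (the names are illustrative, not IncludeOS's actual API):

    #include <cstdio>
    #include <functional>
    #include <string>

    // Stand-in for a unikernel's TCP listener; everything runs on one thread.
    struct Listener {
        std::function<void(const std::string&)> on_read;
    };

    int main() {
        Listener http;
        // Handlers must return quickly and never block; there is no other
        // thread to pick up the slack while this one waits.
        http.on_read = [](const std::string& request) {
            std::printf("got %zu bytes\n", request.size());
        };
        // A real service would now hand control to the event loop forever.
    }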


Article blows it around the third para by asserting that publishers could/should make ebooks much cheaper than paper ones (because obviously paper costs money, right?).

Then asserts that the retail price for the paper edition is the point ebooks should undercut by, oh, 25%.

This ignores the fact that publishers are not retailers, they're manufacturers; the retail sector takes 50-70% of the price the customer pays for a book, and this includes Amazon's kindle store. In actual fact, the physical cost of goods for a paper book is less than 10% of the price the customer pays. Another 10% is editorial/typesetting costs which are exactly the same for the ebook, maybe 10% goes to the author, and 50-70% to Jeff Bezos and co.


Sir, this is an Arby's.


I think you may have commented under the wrong post.


D'oh, I think you might just possibly be right!


Had a few Hangouts meetings with the original IncludeOS team.

Great technology, top-tier team. Although I believe their distance from Silicon Valley and its relatively cheap capital causes their difficulty.


This is pointless. Run Linux and isolate it to the first core. Lock your own app to the other cores, with no context switching. GDB works. SSH works. Awesome.
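For reference, a sketch of the pinning half of that setup, assuming Linux was booted with something like isolcpus=1 so the scheduler leaves that core alone:

    #include <sched.h>
    #include <cstdio>

    int main() {
        // Pin this process to core 1, leaving core 0 to the kernel and
        // everything else (pair with the isolcpus=1 boot parameter).
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(1, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            std::perror("sched_setaffinity");
            return 1;
        }
        // The hot loop now runs on a core with nothing else scheduled,
        // so kernel-induced context switches are rare.
        for (;;) { /* poll NICs, run the app, etc. */ }
    }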


It'll be interesting to write a container system using this and build an orchestrator with this in mind. Would give you a truly minimal container OS.


“Container” and “OS” don’t mean anything in the context of unikernels. Unikernels are applications with everything they need to run (on bare metal or a hypervisor) baked into the application. So there is no OS to speak of, and the “orchestrator” is probably something like AWS EC2.


OP likely meant service orchestration


I think that’s what I meant as well, but I’m not sure what you mean by “service orchestration”. If Kubernetes is an example of a service orchestrator, then I think something like EC2 would be the unikernel analog.


There is no need for containers in this architecture.


Using IncludeOS with an application would require installing it as the OS, whether on a VM or bare metal. Either way, it would claim a lot of resources. That would be great if the application is resource-heavy and thus would utilise most or all of them, but for most apps this isn't the case. Hence they're better off in containers, so that multiple applications can be run.

I'm thinking of it as an alternative to CoreOS or COS. Those OSes will likely have a relatively high overhead in comparison to IncludeOS, should such an OS exist.


I don't quite understand. Certainly if you install it on bare metal there isn't any sharing... but I assume with a virtual machine you can multiplex a large number of instances to share the resources of the underlying machine?


Yes, that is indeed how it is mostly being used right now.


I keep hoping for a model like this to displace Docker.


Is there a similar thing like this but for a Node.js server? I think the direction is totally right. Just hope we can do this for the JS world.


I couldn't find a mention of which hardware drivers it supports.


It's basically only meant to be used on hypervisors:

https://includeos.readthedocs.io/en/latest/Features.html


Good luck turning your debugging tools on anything this produces.


You debug it the same way you debug embedded systems. GDB stubs, and a mapped, shared page.

Advantage over embedded being you don't need JTAG hardware.
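For a QEMU-hosted image it's the stock GDB-stub flow; a sketch (image and symbol names assumed):

    $ qemu-system-x86_64 -drive file=service.img,format=raw -s -S
      # -s: GDB stub on tcp::1234; -S: freeze the CPU until GDB attaches

    $ gdb service.elf              # unstripped build of the same image
    (gdb) target remote :1234
    (gdb) break my_handler         # any symbol from your service
    (gdb) continue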


that's rather my point.


I debug regular apps, via gdbserver, routinely, in exactly the way one would debug embedded apps, and give up no convenience at all, but gain quite a lot. GDB runs on my full-featured development environment, and the target process runs under Docker, or on some other machine with the funny network hardware. I hardly ever start a new GDB session; I just point it at binaries that I run wherever is convenient. Debugging a unikernel is trivially different.


The good news is you can do easier debugging than through a single LED /s


Hardware support is fairly limited


Sounds like torrents for apps


[flagged]


> A shame it is done in C++ tho. We all know this kind of project should be done in Rust.

I'm taking this and your other remarks as sarcasm. But still, here we go again.

> When will there be a decent Rust unikernel as a drop in replacement for the standard lib/async-std/tokio/whatever ?

You seem to be knowledgeable and interested in such a project. Perhaps you care to create or contribute to such a unikernel project to test against the IncludeOS project?

If not then sit back and keep on waiting.


Nanos (https://ops.city) has had Rust support for a while now, as does OSv.

I'm involved with nanos/ops.


First time hearing about this, looks nice! But:

- is the Nanos unikernel open source? A quick (maybe too quick) Google search tells me no.

- can you customize the components you want à la MirageOS, e.g. is it possible in the future to add a fail2ban to your app with a single command line? Same for the filesystem and so on.

- is performance at least on par with other Linux systems?

- does it support Rust? Not bare-metal Rust, but Rust with at least the std lib?


Little late here, but yes, Nanos is Apache-licensed and found at https://github.com/nanovms/nanos .

It also runs Rust very well.


Found a license link at the bottom of the page https://ops.city/license.txt

To your Rust question it seems like the standard library is supported https://github.com/nanovms/ops-examples/blob/master/rust/02-...


Ops is merely an orchestration tool to deploy these to Google Cloud and AWS. The actual kernel is Nanos, and we decided against the recent trend of custom licenses and went with Apache.


> When will there be a decent Rust unikernel as a drop in replacement for the standard lib/async-std/tokio/whatever ?

When someone writes it? Why not you, even?


> We all know this kind of project should be done in Rust

As a user, I don't care whether the tool is in Go, C++, Rust, or whatever else. If Docker was written in C (or Rust, or whatever else), it would have zero effect on my usage of Docker.


As proven by the ongoing CCC talks, if Docker were written in C, security patches would be coming out daily.


> As proven by the ongoing CCC talks, if Docker were written in C, security patches would be coming out daily.

SQLite is written in ANSI C. What's your point?


Which includes a battle-tested suite of unit tests, and yet:

https://www.cvedetails.com/vulnerability-list/vendor_id-9237...

This is my point.


To have only so few entries accumulated over more than a decade, for a project as well-known as sqlite3, which is in the 100Ks of lines of code, is an incredible feat, and would be so regardless of the implementation language.


Not really, given that in 2019 they are still fixing heap corruption and use after free exploits.


What are the ongoing CCC talks? Sounds interesting.



[flagged]


> Your brain is being polluted. This is just wrong.

This isn't even an argument here. How is he/she wrong?

It's true that the majority of C/C++ programs are riddled with memory corruption vulnerabilities. Just look at the countless CVE's of WebKit, Linux, Chrome, V8, VMWare and the list forever goes on.

There is a case for Rust to reduce or possibly eliminate these age old vulnerabilities which C/C++ find it hard to do.

You are free to change my mind. :)


There is no such language as C/C++. You undermine your argument right out of the gate.


OpenVMS was promoted as ultrasecure back in the day. Hint: exploits happened too.

Here's your hard slap in your face.

https://medium.com/@shnatsel/how-rusts-standard-library-was-...


> Here's your hard slap in your face.

Not really. I said 'reduce or possibly eliminate'. I already entertained the fact that even Rust has bugs. But I can't see how one example of a documented Rust vulnerability, which even requires a tricky exploit chain in practice, would change my mind, given the tens of thousands of memory corruption CVEs in C/C++ software. The effectiveness of Rust's claim to 'reduce' these cases of memory corruption vulnerabilities has been proven over the years.

Rust is no doubt already being tested with confidence in production by many companies, and has been for years, making it possible to complement or replace C/C++ based programs. You too are more than welcome to change my mind. :)


You are comparing the number of CVEs in hundreds or thousands of low-level codebases vs a few Rust libraries; you might want to wait until you have a few popular OSes, full browsers, drivers, and VMs made from scratch in Rust, then compare, or do a fair one-to-one comparison.

Just wanted to let you know that your comment feels very misleading, because you contrast the large number of bugs in, say, web browsers with the absence of Rust bugs in web browsers, when nobody uses a Rust browser (I know Firefox has some parts in Rust, but those parts are new, so you can't compare them with years-old code that has had time to accumulate bugs and people to find them).


This. Let's see if Mozilla rebases the entire browser onto Rust, except some dependencies such as FreeType and Mesa, and let's see how many bugs arise. If Chrome did the same, the same shit would happen; there is 20x more code in Chrom{e,ium} than in the OpenBSD base with LLVM.


Probably significantly fewer memory corruption vulnerabilities.


Let's wait and see how "significant" it will be.


Easy: according to Microsoft and Google security reports on C-related memory corruption exploits, 68% less.

This applies to any memory-safe programming language, not only Rust.


The Microsoft report wasn't organized by language, it was organized by vulnerability class. It was memory unsafety, across projects, not C specifically.


Then let's solve how Rust will save us from wasm running at full speed.


I’m having difficulty understanding your comment. Would you mind expanding on it? What’s the concern here?


Rust and WebAssembly implementations will clash somehow in the near future, if Mozilla tries to rebase almost all of Firefox onto Rust.


How, and why would this have anything to do with memory corruption vulnerabilities?


Surely they did happen, but they were a tiny set; meanwhile, the C flaws that allowed the Morris worm 30 years ago are still present in the C11 being written today.


Current compilers and OSes have some protections to avoid precisely that.

It won't be that easy.


Apparently it is, given the exploits presented at CCC talks.

Or the ones given by Google and Microsoft at Linux Kernel Security Summit 2018 and 2019.


I use OpenBSD; other OSes focusing on performance over security don't interest me. Except 9front, but that's because of the huge networked grid as a bare computer.

But in OpenBSD even plan9port-compiled software is built with Retguard:

    $ nm $HOME/Docs/c/p9p/9.aecho | grep retpo
    00003150 t __llvm_retpoline_r11

https://undeadly.org/cgi?action=article&sid=20170819230157

It makes using acid(1) in OpenBSD a bit more difficult, but I almost always omit that.


I advise you to watch the CCC talk about validation of OpenBSD exploit mitigation features.

Yes, it is great that OpenBSD values security over performance, but as the talk shows, not every security feature is as secure in practice as OpenBSD sells it.


Ok, will do. Still Rust is not magical, even if it helps a lot.


Rust is not the only option to get rid of C, there are plenty to choose from.

Neither of them are magical.

None of them suffer from memory corruption, UB, and use-after-free the way C does.


There are a number of attacks that retguard can’t help with.



[flagged]


You don't do security by switching programming languages. If you think you do then you have more serious problems to deal with than picking a programming language.


The security teams from Microsoft, Google and Apple think differently.


Until the Rust runtime gets a vulnerability and hilarity ensues. Well, not much. Everyone trusted Rust blindly; shit happens x200 everywhere.


C folks are so afraid of Rust taking their baby away.

When we security-minded people talk about secure languages, we have a panoply to choose from: Swift, D, OCaml, Java, .NET, ATS, Ada, Go, Nim, Zig, Crystal and many more, yet you always think that we speak only about Rust.


> C folks are so afraid of Rust taking their baby away.

This tribal way of thinking makes no sense at all. I qualify as a C and C++ developer and I'm very excited with Rust. The only baby there is is the software project we are working on.


It makes sense in the context that every time one mentions safe systems programming, most people of that background only think about Rust.

Meanwhile I, and others have been replacing C and C++ written applications by safer stacks long before Rust was born.

And on mobile OSes, the alternative languages that are being pushed are not Rust.

Yet the replies are always as if we are only talking about Rust as alternative.


I like Go. It's a logical step up from the C and Unix portability philosophy, especially the C from plan9/9front, where cross-compiling is dead easy.


(Rust has about as much runtime as C and C++ do)


No.


OpenBSD -stable base errors:

     anthk:~>syspatch -l | wc -l     
     
     15
Who cares about your toy language backed by Mozilla. Do you think the Go runtime from Docker is bug-free?

https://www.cvedetails.com/vulnerability-list/vendor_id-1418...


I don't use any toy language from Mozilla, and neither do Google or Microsoft, for that matter.

We are counting memory corruption and UB bugs here.


But why is Microsoft experimenting with the toy language then?

https://www.infoq.com/news/2019/11/microsoft-exploring-rust-...

70% of Microsoft's errors would be gone in the future if they started using what you demean as a "toy" language.

https://www.zdnet.com/article/microsoft-70-percent-of-all-se...


Go talk to anthk, please. I am not the one talking about toy languages, other than using sarcasm on my reply to anthk.


This is not a "tool" for "users"; this is for writing your own programs, in a funny way, in C++.


It already exists, with several deployments in production, including libraries used by Docker on macOS and Windows: MirageOS, written in OCaml.

Yes, it isn't Rust and it uses automatic memory management; however, as proven by its deployments in production, it is quite usable.


For zero overhead, I have this weird idea of shipping source code, recompiling it with all the optimizations needed, and executing it without virtualization but under a different user id.

So I stick to ./configure ; make ; make install


A user id means user management, context switching, etc., which means overhead.


Imaginary overhead doesn't count. Show me the benchmarks.


Context switching is expensive when you’re writing high-performance code. Conventional OSes require that you context switch to the kernel to perform I/O (disk or network), to say nothing of other processes vying for CPU time as well.

All this is well-documented and -understood. The overhead is in no way imaginary.

Unikernels don’t need to do context switches, by virtue of not having processes.


Since io_uring, this isn't the case for most IO on Linux anymore.


I’m not terribly familiar with io_uring, and haven’t used it myself yet, but does it not still require a syscall to drive things, even though the buffers are shared? Brief investigation suggests the need for that might go away in the future? But all this io_uring stuff seems fairly immature yet and not widely used, though still definitely of interest.


Syscalls to set up; otherwise just watch for atomically updated pointers to change.
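That matches the liburing flow; a minimal sketch (with IORING_SETUP_SQPOLL, even the submit syscall can drop out of the hot path):

    #include <liburing.h>
    #include <fcntl.h>
    #include <cstdio>

    int main() {
        io_uring ring;
        io_uring_queue_init(8, &ring, 0);    // setup syscall, done once

        int fd = open("/etc/hostname", O_RDONLY);
        char buf[256];

        io_uring_sqe* sqe = io_uring_get_sqe(&ring);   // writes to the shared ring, no syscall
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);              // syscall; avoidable with IORING_SETUP_SQPOLL

        io_uring_cqe* cqe;
        io_uring_wait_cqe(&ring, &cqe);      // may sleep in the kernel; polling also works
        std::printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);
        io_uring_queue_exit(&ring);
    }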


If only context switching were an imaginary cost then we wouldn't have cared about spectre/meltdown impact.


That's true. So on my own servers, software I write gets a pass and is run as root after a while.

As for context switching etc., I need some Unixy basics. Instead of reinventing everything, I've found in practice that the Linux kernel running my programs on bare metal (no virtualization) keeps the overhead to an acceptable minimum.


So basically your point is that the overhead of Linux is already low enough that you don't care about this project?


If you’re happy running on Linux (root or otherwise) you have no need of a unikernel.



