NixOS Linux (nixos.org)
231 points by MarcScott on June 12, 2015 | hide | past | web | favorite | 100 comments



NixOS has been discussed many times on HN already. However, Nix 1.9 was released today (or yesterday, for some):

Release Notes: https://nixos.org/releases/nix/nix-1.9/manual/#ssec-relnotes...

Downloads: https://hydra.nixos.org/release/nix/nix-1.9


> nix-shell can now be used as a #!-interpreter. This allows you to write scripts that dynamically fetch their own dependencies. For example, here is a Haskell script that, when invoked, first downloads GHC and the Haskell packages on which it depends:

Wow, that sounds super useful! I'm finding more and more default.nix and shell.nix files scattered through my hard drive, along with wrapper scripts for invoking nix-shells.

I can probably replace a lot of these with appropriate shebangs :)
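For reference, a minimal sketch of what such a shebang script might look like (the interpreter and package names here are illustrative choices, not taken from the release notes, and it only runs on a machine with Nix installed):

```shell
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p jq
# nix-shell reads the second shebang line, fetches the listed packages
# (here jq, a hypothetical choice), and then runs this script with bash
# in an environment where they are on PATH.
echo '{"hello": "world"}' | jq .hello
```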


This is a great idea. It would be cool if Docker supported something like this.


I have been using it for a while as my main OS. So far it has been great. There are a few warts, like any distro, but with NixOS the fixes tend to get written into the main configuration file instead of scattered among many files. That meant a lot when I had to install it on another machine. 1) Install NixOS with my configuration file. 2) Copy dot files to home. 3) There is no step 3) !


How have you found package support? The quality/breadth of packages available in ArchLinux AUR would stop me from using any other distro.

Arch's primary package set contains the standard set - which I assume is similar to NixOS's - but all of the packages requiring tricky/custom configuration tend to be in AUR. That has saved me on a number of occasions from the pages of tutorials you'd normally have to follow to install things when using Linux on a desktop.

For example, setting up nice fonts with great default settings in Arch is a single AUR package away.


Arch is great. The only reason I switched away from it was the occasional manual-intervention update, which could just mess up your system if you weren't paying attention.


Eventually I kept the install ISO on a USB key for those situations. My main issue is how often I felt like I had to update.


I do have a rescue USB like you. We should write something for Arch to get a rollbackable system.

ps: while googling for it, I discovered https://wiki.archlinux.org/index.php/Arch_Rollback_Machine


And here I am on Debian stable, always complaining I can't update often enough. But thanks to comments like this I'm happy again :)

The good news is that it will be some four years until I have DRM in Firefox... I mean Iceweasel.


Arch on USB was great for me, until I started trying to boot to a full UI from the image, and I'd start getting I/O errors from the ramdisk halfway through a 'pacman -Sy plasma-meta' (last three ISO releases). That was a dealbreaker for me.

I switched to KaOS (because they used pacman), but their install image fails before creating users and installing grub, so I'm back to a Gentoo LiveUSB, which doesn't fail.

Every time I leave Gentoo, I'm eventually forced back. Maybe Nix will be better.


I currently use Arch, but would like to switch to Nix.

A main practical issue is that Nix packages are often built with all possible options, versus Arch's minimal packaging philosophy. In practice you install something as innocent as mutt, and Python comes in as a dependency!


I can't imagine a scenario where having python installed on a Linux desktop with non-ancient hardware would be a drawback.


For a traditional package manager, the drawback is that your version of Python is now tied to your email client and can't be upgraded separately. As I understand it, this is the value of NixOS: one can have multiple versions of e.g. Python side by side...


I don't see your point. On Debian, which has the most traditional package manager in the world, I can still easily install several versions of Python at the same time.

I'm not even sure what you mean by "your version of python is now tied to your email client and can't be upgraded separately". As long as the deps for each package are satisfied, either one can be upgraded.


My current Arch install doesn't have Python (because I don't need it), and I explicitly installed the same packages as in Nix.

This is just an example of how some of their packaging policies are a bit odd, but they are working on this. Imagine using Nix on an embedded device; all those extra dependencies might make a difference in terms of resources used.


>My current Arch install doesn't have Python (because I don't need to),

That still doesn't explain why installing python is such a big deal for you, or why your desire to not install python is stronger than your desire to install mutt.


Cool! Can you share your NixOS configuration file?


I just commented on one IT segment not learning from the past. NixOS deserves praise for doing the opposite: seeing the various problems involved, identifying proven ways of solving them, and implementing them in a usable solution. Combining declarative and transactional properties in the package system as effectively as they do is very smart. I hope to see more distros follow suit.


I agree wholeheartedly. However, I sincerely doubt most distros will adopt it, for the simple reason that unifying the packaging infrastructure means more-or-less obsoleting the concept of a distribution.


A "distribution", at its core, is simply a (nominally curated) collection of software which works well together. The packaging system is, or should be, a red herring.


Would it, though? Wouldn't a distribution just become a specific configuration of source, binaries, and other data?


Many people pointed out that NixOS has been discussed here many times already, but nobody mentioned Guix, so here it is, for reference: https://news.ycombinator.com/item?id=9127679


See also GNU Guix and the associated distro GuixSD, which is based on the same underpinnings as NixOS: https://gnu.org/s/guix


I'm much more interested in guix since it has all the same upsides but uses scheme instead of a weird NIH language, and it also has some standards when it comes to the licenses of the packages. (Nix disappointingly gives you the Adobe Flash plugin when you ask it to install Firefox, etc.) Still grateful for the solid foundation Nix provides.


> it has all the same upsides but uses scheme instead of a weird NIH language

The packaging language is great, but I agree that it would make more sense to embed it in an existing language like Scheme.

Is guix compatible with nixpkgs, or do all package definitions need to be translated into scheme first?

> Nix disappointingly gives you the Adobe Flash plugin when you ask it to install Firefox, etc.

Really? I've had Nix refuse to install stuff, telling me to override the "allowUnfree" option if I want it to work (this happens when a Haskell project doesn't specify a license in its cabal file, for example).

I use NixOS, so maybe the defaults are different from standalone Nix.


> I've had Nix refuse to install stuff, telling me to override the "allowUnfree" option if I want it to work

Oh awesome. My experience with this was quite a while ago; it sounds like they may have fixed it since then. Glad to hear it.


>Is guix compatible with nixpkgs, or do all package definitions need to be translated into scheme first?

They have to be translated into Scheme, but we have a 'guix import nix' tool to assist in that.


Come join us for a hack sometime. It would be cool to get some of your elisp projects packaged.


A similar effort in Debian: https://wiki.debian.org/ReproducibleBuilds

(One of the project leads, https://people.debian.org/~lunar/blog/, is a Haskeller, so no surprise.)

Read his blog; it's very interesting to see how tiny things matter.


It's worth clarifying this a bit because there are a lot of subtle implications about what it means to be 'reproducible' or 'deterministic'.

To make the terminology clearer, I distinguish 'deterministic' from 'reproducible'. A deterministic build is one that, if it works, will always work and follow the same steps. A reproducible build is one you can reproduce identically, down to each individual SHA1 hash. The latter is what Debian has spearheaded, and what they mean by 'reproducible build'.

At the moment, Nix packages are deterministic - but they are not reproducible.

If I have a specific git revision of nixpkgs (the repository containing all package descriptions), and I say 'install mutt', I'm guaranteed that I will always get the same build results. No matter when/where I do it. That means I'm always going to get mutt version x.y, with dependencies A, B, and C, all with their own exact versions, and features enabled. The same optimization levels. The steps the compiler uses to build everything will be the same, etc.

You can essentially think of a 'nix expression' as a program that gets compiled to a shell script, which does a build, and you run the shell script. So if you have git revision DEADBEEF of the 'nixpkgs' set, and you 'run the compiler', so to speak, to generate a 'builder' from a Nix expression - it seems very obvious it will do the exact same thing, every time, regardless if you run it today or next week.
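To make that concrete, here's a hedged sketch of pinning the package set, where DEADBEEF stands in for the placeholder revision above and the commands require a machine with Nix installed:

```shell
# Hypothetical invocation: check out one revision of nixpkgs and build
# mutt from exactly that package set. Same revision in, same build
# plan out, whenever and wherever you run it.
git clone https://github.com/NixOS/nixpkgs.git
git -C nixpkgs checkout DEADBEEF
nix-build -I nixpkgs=./nixpkgs '<nixpkgs>' -A mutt
```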

The thing is, this is a pretty nice property. In Debian, for example, if I 'apt-get install' something, I may not get the same program today as tomorrow, because it may get updated. In practice this is kind of a big deal. It means it's impossible (unless you run your own mirror) to keep things like production and development environments exactly the same without imaging them, Docker-style. And of course it makes rollbacks impossible, because the set of packages on your system is really a bunch of mutable variables.

For example, I used to do security research, and it was a massive pain when I would 'apt-get install' mysql and get an already-patched version. I needed to write code to find a vulnerability, and I wanted to regression-test that code in the future. But I couldn't. That makes it very hard to write automation to detect whether mysql is vulnerable (where will you get the pristine package you originally tested against?).

Similarly, because of this property, you can be sure a developer's computer is exactly the same as a production or staging environment. The 'state' of your whole computer is really a function of two arguments: your configuration.nix and your nixpkgs package set. This is what it means to be a 'purely functional' distribution!

But the builds aren't reproducible yet. There's all kinds of actual 'non-determinism' in the build process for any one specific project that makes bit-for-bit reproductions difficult. For example people use `__DATE__`, or they do weird things like rely on build mtimes, or other crazy stuff. Debian has tracked tons of these down - so it's doable!

There is a branch of Nixpkgs that aims to fix this. Hopefully it will be solved for NixOS 15.10.


So 'reproducible' mostly means bitstream equality? One thing I liked in the Debian effort is that, IIRC, they tried to define some form of equivalence classes to express binaries with or without debug symbols, etc.


I have been thinking about switching to NixOS because of the great things I keep hearing about it. I would enjoy reading reviews from HNers, has anyone used it as a desktop?


Nix is awesome. The last time I ran it stuff like gnome-shell wasn't packaged, which made it hard to use for a desktop (they did have a number of window managers packaged though).

That's the hardest part about nix right now (the lack of packaged software). It is pretty easy to write your own packages for basic software though (and the Nix project is good about taking pull requests).

Looks like gnome3 is packaged now, I'll have to give it another shot: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/s...


NixOS is really nice, what I enjoy most about it is it gives you the tools to be explicit about the state of your system.

We know that implicit state and mutation are terrible in the programming world, so why not extend that philosophy to your entire OS?

You might be missing some packages so make sure to check this:

http://nixos.org/nixos/packages.html

before installing.

It's still early days for NixOS; the OS is solid, but it's likely you'll need to package up a program or two (it's not difficult, though).


Yep, I echo this experience. It is my main desktop OS, and I am very happy with it. Another developer at my company and I both use it, and it's starting to gain interest from others, seeing as it's mostly a panacea for all sorts of environment and configuration issues.

I was pretty unhappy learning how to write the strange config language at first, but I've made my peace with it. You will almost certainly need to write a package or two, but there's a lot of active development to learn from, and lots of examples to cargo-cult from.


Other developer here. I've been running nixos as my main OS since about November after running Funtoo for a year or so before that. My tolerance for hacking on my distro is maybe slightly above average. ;)

On the cool side: Nix is the first linux distro I've contributed to, because it's the standard "hack, fork, pull request" process that most github-hosted open source projects use these days.

The biggest thing is that you absolutely have to drink the nix koolaid if you're going to run nixos. You can't really do the normal ./configure && make && make install, because paths to libraries are all non-standard. But once it all really clicks, it's pretty great.

As far as desktop goes, the rest of this thread rings true for me too: most of the important things are there, like desktop environments, window managers, browsers, etc. There are occasionally niche/older packages that are not available. If you're willing to learn a little bit of nix language, it's relatively straightforward to add most packages.


I've been using it for nearly a year. One comment I'd make is that learning the nix package language will make life much easier.

I've written a few thoughts about nix at http://chriswarbo.net/essays/nixos/


I've tried to install NixOS on an old laptop for a minimal installation; there were some things I did not like:

* Default packages are compiled with all their optional dependencies

* The configuration language is a functional programming language, and that's a plus, but the syntax is quite weird: not similar to ML, Haskell, or Lisp

* Some command-line tools are cryptic at best; for example, searching for available packages is done with nix-env -qa \* -P | fgrep -i "$1"


Getting it running on actual iron took more effort than I expected, but lately distros like Ubuntu have spoiled the Linux crowd.

The hardest part about going all-in is if you find some of the software you rely on isn't packaged. There's just no way to "cheat" by ./configure && make'ing your way around it: you have to learn the Nix language and package the software yourself. (Which can arguably be called a viral feature for getting more software packaged.)

To me this was too much work to fit in an otherwise busy weekend and I just had to give up. Had I had more time to do things properly, I would probably have stuck with it.

The concepts it introduces are quite nice and well executed.


We're using it for servers, laptops, and desktops. I'm using NixOS now on bare metal to compose this message. The system is built like a rock. And oh, managing just a single text file to configure everything is a big plus.

I highly recommend it. :-)


> just a single text file to configure everything is a big plus

What ever happened to Linux folks mocking Windows' registry from back in the day :)


Heh. At least we don't get Carpal Tunnel Syndrome (CTS) anymore navigating down the registry tree. It's a completely different story, though, if one is going to edit the complete .reg dump and import the settings from there.


The Windows registry is a tad more complex. It's funny; a while back Arch's main selling point was a single point of config too. Now it's systemd's fine-grained graph of .service files. But Nix being built by language designers, I guess abstraction and reuse will stay baked in.


They recreated it 20 times over in the name of modernity.


Didn't even install for me. It's for tinkerers currently, I think. I basically wanted an Ubuntu that didn't deteriorate with time, so I guess I'm not the target audience.


Looking for an ubuntu that didn't deteriorate, I switched to debian sid two years ago. Sid is a rolling distribution, so there are no releases. It's not that it does not deteriorate. Instead, it deteriorates at a rate that allows me to fix things as they break.

Mind you, I used Gentoo for many years before ubuntu, so I'm used to rolling breakage. Sid is a lot more stable than Gentoo (circa 2010)


That's interesting. If I never fixed anything, how often would I need to reinstall? I'm currently considering turning off updates on Ubuntu and just reinstalling every time there's a new release. I'll probably switch to Windows, though, since I don't expect to code much on my home computers now that I'm no longer a student. Coding on Windows is hell and the DE sucks, but Windows is stable and has lots of software available.


I use Debian stable, dist-upgrading only a few weeks after each release.

Kernel modules are the main things that deteriorate, mostly the GPU support. It's normally a matter of changing the kernel version (I freeze the kernel whenever I remember to) or removing some old package that isn't permitting a clean upgrade.

Last time I reinstalled a desktop was in 2007 (I had 3, now I only have 2), because a Windows machine on the same LAN got a virus, and I wanted to be sure it didn't get anywhere. Last time I reinstalled a laptop was during the setup of my new one, this year, because I got a pretty messed-up set of kernel packages and decided it was easier to just start from scratch.


No idea. The thought of having something break, and not fixing it immediately, escapes me. A properly maintained Linux install goes down with the hardware. It was like that with Gentoo and my Thinkpad R40, then the X61T used various installs of Ubuntu, and now the X1 Carbon has the same sid install for a couple of years, and it will last a couple of years more.

Ubuntu seriously annoyed me by botching every upgrade and forcing me to reinstall every half-year. It was Windows-esque...


I ran a NixOS desktop VM for a little while to familiarize myself with it and eventually be able to tackle learning the server side of things. It's very different from a normal GNU/Linux, that's for sure.


I'm intrigued by the "atomic upgrades and rollbacks" claim. Let's say that deploying a package requires placing two files somewhere, e.g. vmlinuz and initrd. How can this be done atomically?


The handwavy answer is, each install (even later versions of the same package) gets installed into a unique directory. There is no /bin or /usr/bin.

bit more info here: https://nixos.org/nixos/about.html


Literally, there is no /bin or /usr/bin? So most shebang scripts won't run at all? That is indeed a radical departure.


Sorta buried in that link:

"A big implication of the way that Nix/NixOS stores packages is that there is no /bin, /sbin, /lib, /usr, and so on. Instead all packages are kept in /nix/store. (The only exception is a symlink /bin/sh to Bash in the Nix store.) Not using ‘global’ directories such as /bin is what allows multiple versions of a package to coexist. Nix does have a /etc to keep system-wide configuration files, but most files in that directory are symlinks to generated files in /nix/store."

So you get bash :)


There's a /bin/sh because libc needs it, but we're going to remove that one too in the future somehow.

/usr/bin/env is in NixOS, though, to make it easy for people to run scripts. Packages never use /usr/bin/env; it's just there for convenience.


Ok, if env is supplied then scripts can at least use it to locate Python, etc. which makes a lot of sense. The text made it sound like these directories did not exist at all.


So what does PATH look like?


If you as a user wish to have a normal-ish Unix commandline experience, you "install" into your personal nix profile various nix packages. I put "install" in quotes because there are two steps:

1) Adding the package to the nix store, /nix/store/checksum-coreutils-version

2) Linking this into your nix profile: ~/.nix-profile/bin/ls will be a symlink to /nix/...coreutils.../bin/ls

Your PATH is then ~/.nix-profile/bin.

Now if you're packaging a shell script, and it calls coreutils and rsync, then inside the build scripts for that package, you might do something like this to wrap the script with the required PATH:

  wrapProgram myScript --prefix PATH : ${coreutils}/bin:${rsync}/bin

That way, the script will still work even if the user running it does not have coreutils or rsync in PATH. It will also work if the user has rsync-1.0 but the script requires rsync-2.0, because the custom PATH used by the script includes the required rsync.

In this situation, both coreutils and rsync would be present in /nix/store, even if no user has linked them into their profile.
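For the curious, wrapProgram (from nixpkgs' makeWrapper) roughly replaces the script with a small wrapper along these lines. This is a sketch only: the store paths are deliberately abbreviated with "...", and the exact wrapper shape may differ between nixpkgs versions:

```shell
#!/bin/sh
# Generated wrapper (sketch): prepend the exact store paths of the
# declared dependencies, then exec the real, renamed script.
PATH="/nix/store/...-coreutils/bin:/nix/store/...-rsync/bin:$PATH"
export PATH
exec "/nix/store/...-myScript/bin/.myScript-wrapped" "$@"
```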


They use atomic file operations involving symlinks. Moving a new symlink to overwrite an old symlink is one way to atomically switch out a directory for another. I don't know if there is any special handling of the kernel packages, but that's basically how it works as far as I know.
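That symlink flip can be sketched in a few lines of plain POSIX shell. The file names here are hypothetical; on NixOS the real system generations live under /nix/var/nix/profiles:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

echo "generation 1" > system-1
echo "generation 2" > system-2
ln -s system-1 current            # old state: current -> system-1

# Create the new symlink under a temporary name, then rename(2) it over
# the old one. rename() over an existing path is atomic on POSIX, so a
# reader always sees either the old link or the new one, never a
# half-updated state.
ln -s system-2 current.tmp
mv current.tmp current

readlink current                  # prints: system-2
```

The key point is that the switch is a single rename, not a sequence of file overwrites, so a crash at any moment leaves you on one complete generation or the other.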


I am not sure if this is the same, but Red Hat's Project Atomic does a similar thing using rpm-ostree:

http://www.projectatomic.io/


It's a different approach. That's basically an implementation using ostree. Nix is way different than that. Neither is better in my opinion, but I use Nix happily.


The package manager could just flock() a specific file before making any changes.


From svn's red book:

By atomic transaction, we mean simply this: either all of the changes happen in the repository, or none of them happens. Subversion tries to retain this atomicity in the face of program crashes, system crashes, network problems, and other users' actions.

Does Nix mean the same thing? Is it atomic in the face of program or system crashes?


Yes. In many cases, switching to the new version of the package is nothing more than rewriting a symlink. Sometimes it's more involved, like restarting a service. Either way, the cut over happens after building the new package has completed successfully. If building the package fails, it has no effect on the running system.

I suppose there is still a window in which an error could leave the system in an inconsistent state, but it's pretty narrow, and the error would have to be a kernel panic or something like that.


Err, are there any systems that try to cut over to a new package before it's been built successfully?


Yes. Almost all of them, actually. By "build" I mean the whole package installation process, including moving the compiled files into their final destinations.

If you're upgrading an existing package, and that involves overwriting one binary in /usr/bin and another in /usr/lib, a crash in the middle of that process leaves the package partially upgraded, and very likely broken.

Nix never overwrites an existing package installation. Instead, it installs the new version in a separate directory, then overwrites a symlink that pointed to the old version with a symlink pointing to the new version.


I guess traditional package managers could install a helper script or a config file early in the install process, before the main package is finished.


Things you must be afraid of when using Nix:

1) If you install a bad grub, you cannot roll it back using Nix. That means not just a wrong grub line, but grub itself being broken.

2) Anything that is not under Nix's control, i.e. the data: desktop configurations, databases, etc.


But after the first successful install you shouldn't have to worry about GRUB anymore. I don't know what it's like on NixOS, but on GuixSD we just change the grub.cfg.


You also upgrade grub versions I hope :) Or maybe improve the grub installation scripts.


The contents of any package are immutable; once they're written they're never modified again. New versions of the package get installed in a different directory tree.


NixOS would be awesome on the server, but I'm worried about security updates. Debian/Ubuntu have a lot of resources and protocols set up for this; for users like me, I can follow the mailing list where every package updated with security fixes is announced, and the updates seem to meticulously follow announcements about known vulnerabilities. How is NixOS in that regard?


Has anybody here got experience running NixOS in production or on the desktop (as the main OS, not in a never-booted partition)?


At LumiGuide we use NixOS on our development machines as well as on all our production servers. I used ansible before but NixOS is on a whole different level.

See the following for a story about a system we just deployed which uses NixOS:

https://news.ycombinator.com/item?id=9690683


Yeah, I've been running NixOS on production servers for about a year. I'm at a small startup, so it's not huge scale (yet!) but it works great.


Got any problems with package availability?

I imagine those should be very easy to fix; is that true in daily use?


All the packages I need have been part of nixpkgs, except for node packages from npm. For those, I run npm2nix to generate nix expressions from the package.json files. That's occasionally tricky when npm packages do "clever" things that don't work when translated to nix. Generally, though, npm2nix works pretty well.

I also have about 3K lines of nix code for packaging my apps, configuration, cluster definitions for nixops etc. There's a bit of a learning curve, but I find it really pleasant, now that I have the hang of it.


I've been using it on my HTPC for a while. Getting it to work for a relatively exotic setup (boot into Kodi, still allow admin logins, mount NFS media shares on boot) was a bit fiddly, but once it worked it's been great. The one concern I have is that it was pretty happy to recompile Chromium all the time, but switching to Firefox made it more bearable.


Does this not lose the major advantage of a traditional package manager? Namely, that I update openssh in one location and all my packages are now secure, rather than having to update every package that depends on openssh separately?


Generally, all packages depending on that library will need to be rebuilt. But that is automatic when you do an update; it just may take time (either for packages to be rebuilt locally, or by the build farm if you're using channels). You can also manually do a hack to substitute a patched version when you want the fix right now: https://nixos.org/wiki/Security_Updates


I'm really intrigued by NixOS, but I have to wonder if it's too much too fast, and the Atomic project isn't a better stepping stone.


I have been writing a guide on how to set up a desktop system with NixOS, but I’m not quite finished yet.

I wonder how high the interest in something like that would be? I can throw up an unfinished beta version of it if people are interested.


Is it better than ArchLinux?


Is this proprietary? I can't find a link to the source anywhere.



That doesn't have the source code to the operating system on here. So, again, I ask: is this damn thing GPL'd or is it on some prop. bullshit?


https://github.com/NixOS/nixpkgs/tree/master/nixos

If you had bothered looking in the wiki, everything is listed there.


It does, but it's not in its own repo; it's in nixpkgs at

https://github.com/NixOS/nixpkgs/tree/master/nixos


At the start of this year, I tried to install various Linux distros on my laptop using a USB drive. Only Ubuntu and OpenSUSE worked. Why is this? Is the desktop really that irrelevant? This is not a bug report. I'm wondering what the institutional reasons are.


This isn't really the best place for the question, and besides that you'd need to provide a hell of a lot more information for anyone to help you.


[flagged]


You are looking at it the wrong way: things do not work by default (imagine you got the world's first computer; someone has to write the code for it), hence someone has to put in the work for it to work.


I thought Linux distros had enough in common that the low level stuff would mostly work the same. It's not the world's first computer. NixOS has the Linux kernel, x11, kde and systemd. Maybe that covers less than I think, but it sounds like a lot.


The high-level stuff is the same and works out of the box. But if you're asking why Linux doesn't work on some hardware, it's about the low-level stuff having slightly different optimizations and changes that break older code.


Because you haven't fixed them.


Don't worry about me; I got a refund.


Distros have to choose which kernel version they want. This can be a tradeoff between stability and new features, etc. If they pick an older kernel, they decide which newer drivers they want to try to backport. Then they decide which drivers they feel like supporting and putting in the repositories, and which of those get loaded onto the installation media. Some distros might have closer relationships with specific vendors and have information about configuration details or workarounds. Laptops are notorious for using nonstandard hardware from niche suppliers and taking a lot of work to configure.


Thanks


First, it's really not the case that only Ubuntu and OpenSUSE work on the desktop. You may have run into issues which YOU were unable to solve, but those issues may also have been specific to your laptop hardware configuration. It is well known that certain laptops tend to work better with Linux than others; in fact, I'm running Fedora on a Lenovo right now and it works great.

So, is the desktop really irrelevant? Depends; different distros have different priorities. To date, Linux on the desktop has not shown mainstream appeal, but it does seem to gain more and more users every year.

If you really insist on being obtuse regarding the reasons for poor Linux desktop support, the primary answer is that Linux has a long history of being a very good server OS. Security, stability, speed, price, and customization are all reasons that is true. There is limited time in the day, so one thing must be prioritized over another. Most Linux distros choose to prioritize server development over the desktop experience, because that is where their primary user base is. Other teams made different choices.

Why, at the start of this year, I tried to install various Windows versions on my rack server using a USB drive. Only Windows Server 2012 worked. Why is this? Is the server really that irrelevant to Microsoft? This is not a bug report though, I'm just wondering what the institutional reasons are.


Yeah, that makes sense. I was just under the impression that most of the hardware-facing software (kernel, drivers, etc.) was the same across distros, and that keeping up with Ubuntu would therefore almost be a given.

I was about to express my shock that Windows Server had immediately obvious bugs, but then I sensed the irony. I would guess that Windows Server on any given x86 server works a lot better than NixOS on my laptop. So I'm not sure I get your point there.


Ubuntu has a lot of non-free/proprietary drivers available for laptops. Other Linux distros don't, especially new ones with innovative aspects to the underlying system that are just becoming stable.

Try running the lsmod command on your laptop to see which kernel modules are loaded. Google a few of the module names - especially ones that look related to the make of your laptop or its video card.




