
If the RAM is user-upgradable, this will make for an excellent dev machine. Given it's a Chinese laptop, it will surely not be locked, so Ubuntu should run well too.



Pretty naive, especially since it has dual graphics with Nvidia, which is really awkward to install "correctly" on Linux. And then there are the drivers. Pretty sure there won't be much that works on Linux out of the box.


In my experience nvidia has always been much easier to install than amd (fglrx).


That there are even worse alternatives isn't helping. Avoid third-party binary modules. Sure, they might work right now, but you'll have no guarantee they'll work in the future. You also risk interactions with other subsystems (sleep etc.). It would be like Windows, where your printer might prevent you from upgrading your operating system.

If you intend to run Linux on the thing, buy things that Linux supports out of the box. That probably means sticking to the working Intel stuff for now. You'll thank me later (or not, because you won't miss the problems you don't have).


Go Nvidia all the way. I've got a GT 460 and it still gets up-to-date drivers (look how old that thing is!)


While your theoretical point of view is solid, reality kind of disagrees with it.

The binary blobs provided by Nvidia have been quite reliable over the years (and I think they've been available for more than 10 years), while the open source ATI/AMD drivers have frequently failed to deliver.


Nvidia drivers do fail quite regularly, especially on laptops. Brightness control randomly stops working, suspend/resume is extremely unreliable, there are login issues on systems with both Intel and Nvidia graphics (supposedly solved with Mesa 12, which can only be found on Arch and Gentoo right now.) It's certainly not a panacea.

In my experience, the open-source radeon drivers are significantly more stable. Fglrx was a stinking pile of crap that has officially been abandoned - the whole driver team has switched to the open-source drivers, now.


Binary blobs provided by Nvidia are extremely unreliable. I've had to ssh to a machine because the graphics don't work at all after some kernel upgrades.

Linking to Theano-CUDA is always a nightmare.

Forget about Optimus. That has never worked correctly on Linux.

The only reason I can see someone wanting nvidia on linux is for CUDA. And in that case, you can't just use the ubuntu pre-built version of the drivers.

If you are not doing neural nets locally on Linux, you shouldn't get a discrete graphics card. And if you do get a discrete one, make sure it's not Nvidia since their open source drivers are much inferior to AMD's.


Quite reliable with Optimus? I'd bet your experience is based on desktop GPUs, but Optimus is painful with Linux.


Extremely so.

My current (and only) working solution is to disable the Nvidia GPU on the host and use PCIe passthrough to assign it to a Linux or Windows VM. That's the only stable solution that doesn't break the world when enabling/disabling the discrete GPU.

We shouldn't have to revert to such extreme measures...


At the moment my only two issues with Optimus are that the nvidia modules crash X on wakeup if they're loaded, and that DPMS doesn't seem to disable properly after the screensaver ends. For the former you can add a suspend hook to unload the nvidia modules, and for the latter you just need to switch to a tty and back to get the screen working again.
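On a systemd-based distro, such a suspend hook might look roughly like this (a sketch, not my exact setup; the script path and name are arbitrary, and non-systemd setups would use pm-utils hooks instead):

```shell
#!/bin/sh
# Save as e.g. /usr/lib/systemd/system-sleep/nvidia-unload.sh, chmod +x.
# systemd invokes it with "pre" before suspending and "post" after resuming.
case "$1" in
    pre)
        # Unload in reverse dependency order so nothing holds the GPU
        # across suspend (X must not be actively using it).
        modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia
        ;;
    post)
        modprobe nvidia
        ;;
esac
```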

Both are somewhat annoying, but far less than only being able to use it in a VM. That said, I'm curious about how you manage the VM output, considering that PCIe passthrough requires a dedicated screen.


I normally run the GPU headless for machine learning. In the rare case I need to access the desktop shell of the VM, I just connect via Chrome Remote Desktop.

In the even rarer case that I wish to play a game, I have an HDMI connection between the GPU and my projector, which I can enable with a remote control.

Learning how to use qemu is a bit of a pain (hint: use qemu directly, libvirt is a huge waste of time) but after the initial learning curve the setup is seamless for my use case - and I feel safer without the GPU drivers having access to my normal desktop. I much prefer this setup to dual-booting Windows for gaming. The VM spins up in a few seconds and shuts down when not in use (turning off the GPU in the process).
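For the curious, a direct qemu invocation for this kind of passthrough looks roughly like the sketch below. The PCI addresses, image name, and sizing are placeholders (find your GPU's addresses with `lspci -nn`), and both GPU functions must already be bound to the vfio-pci driver instead of nvidia:

```shell
# Headless VM with the discrete GPU passed through; any display output
# comes out of the GPU's own ports (HDMI/DP), not a qemu window.
qemu-system-x86_64 \
    -enable-kvm -machine q35 -cpu host \
    -m 8G -smp 4 \
    -device vfio-pci,host=01:00.0 \
    -device vfio-pci,host=01:00.1 \
    -drive file=guest.qcow2,if=virtio \
    -display none -vga none
```

(01:00.0 is the GPU itself, 01:00.1 its HDMI audio function; passing both through together avoids driver complaints in the guest.)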


Not anymore, these days radeon is far better than fglrx and works great straight out of the box.


For what GPUs?

This is an honest question, and I appreciate your answer in advance.

I have a desktop with an HD 6850 and the radeon driver has always worked well, I used to play DOTA2 on it.

But this is a very old GPU now, and all newer models (HD7xxx and up) were having performance issues with the radeon driver last time I was reading the forums.


R9 270X in my case. The Rx 3xx and newer (and they're slowly porting it to older devices) use the newer AMDGPU driver stack, which is supposed to be even better, but I haven't tried that myself yet.


Install nvidia, install bumblebee, add yourself to the bumblebee group. Not really that hard...


Until you realize you want to connect a monitor or projector: https://github.com/Bumblebee-Project/Bumblebee/wiki/Multi-mo...


I've connected both of my Optimus-running ASUS laptops to projectors multiple times with no issues. But, as the article says, YMMV.


That's actually stupidly hard. Why do you need to do anything at all?


As opposed to the Windows approach (unless you enjoy being stuck with preloaded malware, such as Superfish, McAfee, or Mac OS X):

* WTF, why isn't my NIC registering? I thought we had a standard for this stuff by now.

* Download NIC drivers from secondary computer

* Put NIC drivers on a USB stick

* Install NIC drivers

* Nothing happened, turns out you found the drivers for the wrong revision of the chipset, go back to square one

* Download Nvidia drivers

* Try to install, nope, you need to find the Intel drivers first

* Download and install Intel drivers

* Install the Nvidia drivers again

* It's 2AM already, go to sleep

* You forgot to turn off Windows Update, and now you're stuck with Windows 10

* Reformat again, and start over

As opposed to:

$ sudo pacman -S nvidia bumblebee && sudo usermod -aG bumblebee $(whoami) && reboot


You are comparing a 7 year old version of Windows to a recent Linux release. Obviously Windows 7 will not have all the drivers.


Windows 7 may have been released 7 years ago, but it gets updated pretty frequently. It has had the ability to install drivers from Windows Update for a few years now.

The Linux comparison isn't recent either. That was the exact same procedure I had to go through on an Nvidia-stricken laptop (thankfully not mine) four years ago.


Use Windows for the metal OS, then run Linux on top (VMware or WSL). Best of both worlds.


How is that best of both worlds? That sounds more like a Frankenstein to me.


You get Windows driver/graphics and power support. You get the applications that need Linux. No screwing with blobs and updates and whatnot.


Not really, a modern Linux will do a real boot faster than Windows does its fake-boot-it-really-was-hibernation-wake-up thing.

If you do web development it is also a much better development experience than using Windows.

Bluetooth devices work much better in Linux than Windows (at least the ones I have).

USB devices are also much better handled in Linux. I never see an 'Installing device driver for this USB stick' message.

I plug my kindle in Ubuntu, it opens Calibre. I plug my kindle in Windows, it shows a folder.

I lose all these niceties if I run Linux inside a VM.


I'm curious, do a lot of people run this setup? Is the overhead of running a VM for your daily tasks worth it?


I was running OS X in VMware every day for a couple of years, doing iOS development. With 8 GB RAM and a fast Intel SSD in my Windows laptop, the VM was faster than the hardware MacBooks I saw. Modern hardware-assisted virtualization is very efficient; BTW, that's exactly what enabled cloud computing.


Doesn't VMware have really lacking graphics support? I remember Yosemite and later, with their fancy effects, being barely usable.


VMware's virtualized 3D-accelerated GPU is IMO very good.

Not sure about compatibility with recent OS X, though. I used Lion and Mountain Lion, and those worked well. AFAIR I used the VMware tools from Fusion.


If I'd known a few weeks ago, I would probably have thought a bit harder about buying another MacBook. OS X allows VM installs?


Technically yes. Not sure about the latest OS X, but when I used it, everything worked fine for me.

Legally no. VM or not, the real hardware needs to be made by Apple.


no need to tweak kexts like you do with a hackintosh?


No need to tweak kexts.

On the host machine, you need a reasonably fast Intel CPU, hardware-assisted virtualization enabled in the BIOS/UEFI, enough RAM, and an unofficial VMware patch to unlock OS X guests on PC hosts.

In the guest OS, you need the VMware tools for OS X; they exist because VMware supports OS X guests when running on OS X hosts.


Or just wait for Windows 10 Redstone on Aug 2 and run Bash on the Windows Subsystem for Linux. No overhead and you get full speed with unmodified debs.


Not having cron is a deal breaker. Cygwin is still better in that aspect.


This image from their website would suggest it's not. http://c1.mifile.cn/f/i/16/mibookair/design/design-mainboard...


The $540 version uses Core M3, which basically has 15% lower performance than the latest iPad, so I doubt it's going to be too great of a dev machine (just because the iPad seems fast with 1 app per screen, doesn't mean the same performance will be enough for 20 tabs and 5 other programs running at the same time):

http://www.extremetech.com/mobile/221881-apples-a9x-goes-hea...


What kind of development would we be talking about? I still use my 2009 MBP for day to day development. It's ~10% slower than a Core M3, I usually do webdev on it, but it's also fine for AOT things like Rust, Haskell and even some C here and there (I recently compiled RethinkDB, no biggy).

Things that really hurt are the short battery life, which is ~1.45 hrs, and that it's missing the modern CPU extensions that will make it obsolete soon. Both of these issues obviously won't be present in a Core M3, so I see no reason why you wouldn't dev on an M3 unless you compile big native projects with great frequency.


I'm surprised you find Rust and Haskell bearable. Because for both languages slow compilation is a pretty well known problem[1][2]. On my old laptop (2011 MBP) compiling and interpreting (ghci/ghcid) took so long that it was distracting. This isn't even a big project; just a medium-sized one with a little over a hundred Haskell modules.

[1]: https://m.reddit.com/r/haskell/comments/45q90s/is_anything_b... [2]: https://m.reddit.com/r/rust/comments/2uxt46/rust_vs_c_inc_co...


Well to be fair I only do hobby projects in both languages. My Haskell project is a C compiler, and my Rust project is a game engine, both compile in seconds. Both have just a few modules. Most complexity is in the dependencies, which have some compile time consequences but not terrible so far.


There's a PR open for supporting the beginnings of incremental recompilation in Rust, so hopefully it lands soon!


Ubuntu and batteries are not good friends :-( I tried a few times and am still using Apple.


Using tlp and powertop can significantly improve the battery life.
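For example, on Ubuntu/Debian-style systems (package and service names as I recall them; other distros differ):

```shell
# Install tlp and powertop; tlp applies its power-saving defaults
# automatically once its service is enabled.
sudo apt-get install tlp powertop
sudo systemctl enable --now tlp

# Apply powertop's suggested power-saving tunables for this session.
sudo powertop --auto-tune
```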


I didn't know about powertop, I'll give it a try. Thanks


This is my gripe: unless you spend hours customizing each laptop, Ubuntu makes a 10-hour battery life shrink to around 2 hours with only the default 'power saving' features turned on.


And the bad NVIDIA support!


Was that with Apple products?


The reason I mentioned Apple products is that Macs have poorer battery life when running non-OS X operating systems.

http://www.zdnet.com/article/the-other-hidden-cost-of-runnin...


If they use a good WiFi card, or it can be swapped easily (which should be a given if the RAM can be upgraded). I bought the 2015 XPS 13 and struggled a lot with it until I exchanged the Broadcom card for an Intel one.


Are you saying Ubuntu runs well on all laptops which are not locked?


I think he's talking about the UEFI nonsense that often happens with Western manufacturers locking it to Windows (preventing the switching of the signing keys).

I'm not sure how true his point is in relation to China, but that's neither here nor there.


I just wanted to point out that the absence of the UEFI thing doesn't automatically make a laptop "work well" with Ubuntu, as someone could read from this comment. Ah, it's edited already.


Are there any x86 laptops sold that are locked down without the ability to enroll user keys?


There is nothing wrong with UEFI. I had a laptop with it that had no keys by default.



