If the RAM is user-upgradable, this will make for an excellent dev machine. Given it's a Chinese laptop, it will surely not be locked down, so Ubuntu should run well too.
pretty naive. especially since it has dual graphics with Nvidia, which is really awkward to "correctly" set up on Linux.
and then there are drivers. pretty sure there won't be much that works on Linux out of the box.
That there are even worse alternatives isn't helping. Avoid third-party binary modules. Sure, they might work right now, but you have no guarantee they'll work in the future. You also risk interactions with other subsystems (sleep, etc.). It would be like Windows, where your printer driver might prevent you from upgrading your operating system.
If you intend to run Linux on the thing, buy things that Linux supports out of the box. That probably means sticking to the working Intel stuff for now. You'll thank me later (or not, because you won't miss the problems you don't have).
While your theoretical point of view is solid, reality kind of disagrees with it.
The binary blobs provided by Nvidia have been quite reliable over the years (and I think they've been available for more than 10 years), while the open-source ATI/AMD drivers have frequently failed to deliver.
Nvidia drivers do fail quite regularly, especially on laptops. Brightness control randomly stops working, suspend/resume is extremely unreliable, and there are login issues on systems with both Intel and Nvidia graphics (supposedly solved with Mesa 12, which can only be found on Arch and Gentoo right now). It's certainly not a panacea.
In my experience, the open-source radeon drivers are significantly more stable. Fglrx was a stinking pile of crap that has officially been abandoned - the whole driver team has switched to the open-source drivers now.
Binary blobs provided by Nvidia are extremely unreliable. I've had to ssh to a machine because the graphics don't work at all after some kernel upgrades.
Linking Theano to CUDA is always a nightmare.
Forget about optimus. That has never worked correctly on Linux.
The only reason I can see someone wanting nvidia on linux is for CUDA. And in that case, you can't just use the ubuntu pre-built version of the drivers.
If you are not doing neural nets locally on Linux, you shouldn't get a discrete graphics card. And if you do get a discrete one, make sure it's not Nvidia since their open source drivers are much inferior to AMD's.
My current (and only) working solution is to disable the Nvidia GPU on the host and use PCIe passthrough to assign it to a Linux or Windows VM. That's the only stable solution that doesn't break the world when enabling/disabling the discrete GPU.
We shouldn't have to resort to such extreme measures...
At the moment my only two issues with Optimus are that the Nvidia modules crash X on wakeup if loaded, and that DPMS doesn't seem to disengage properly after the screensaver ends. For the former you can add a suspend hook to unload the nvidia modules, and for the latter you just need to switch to a tty and back to get the screen working again.
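For reference, the suspend hook can be a small script dropped into systemd's sleep-hook directory (this is a sketch assuming a systemd distro; the file name is made up, and the exact set of nvidia modules depends on your driver version):

```shell
#!/bin/sh
# /lib/systemd/system-sleep/unload-nvidia.sh (hypothetical name)
# systemd invokes scripts here with $1 = "pre" or "post"
# and $2 = the sleep action (suspend/hibernate/...).
case "$1" in
    pre)
        # Unload the nvidia modules before suspending so X doesn't
        # crash on wakeup. Order matters: dependent modules first,
        # the core "nvidia" module last. Errors are ignored in case
        # some of these modules aren't loaded.
        modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia 2>/dev/null
        ;;
    post)
        # Reload on resume only if you want the GPU back immediately;
        # otherwise let it load on demand.
        # modprobe nvidia
        ;;
esac
```

The script needs to be executable (`chmod +x`); on non-systemd setups the equivalent would be a pm-utils hook.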
Both are somewhat annoying, but far less than only being able to use it in a VM. That said, I'm curious about how you manage the VM output, considering that PCIe passthrough requires a dedicated screen.
I normally run the GPU headless for machine learning. In the rare case I need to access the desktop shell of the VM, I just connect via Chrome Remote Desktop.
In the even rarer case that I wish to play a game, I have an HDMI connection between the GPU and my projector, which I can enable with a remote control.
Learning how to use qemu is a bit of a pain (hint: use qemu directly, libvirt is a huge waste of time) but after the initial learning curve the setup is seamless for my use case - and I feel safer without the GPU drivers having access to my normal desktop. I much prefer this setup to dual-booting Windows for gaming. The VM spins up in a few seconds and shuts down when not in use (turning off the GPU in the process).
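For the curious, a bare qemu invocation for this kind of passthrough looks roughly like the following. This is a sketch, not my literal config: the PCI addresses, vendor/device IDs, image path, and sizes are placeholders you'd replace with your own.

```shell
# Prerequisites (one-time setup):
# - IOMMU enabled on the kernel command line: intel_iommu=on (or amd_iommu=on)
# - GPU bound to vfio-pci instead of nvidia, e.g. via /etc/modprobe.d/vfio.conf:
#     options vfio-pci ids=10de:13c2,10de:0fbb
#
# "kvm=off" hides the hypervisor bit from the guest, because the Nvidia
# Windows driver used to refuse to load when it detected a VM.
qemu-system-x86_64 \
    -enable-kvm -machine q35 \
    -cpu host,kvm=off \
    -smp 4 -m 8G \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -drive file=win10.img,format=raw,if=virtio \
    -vga none -nographic
```

The two `vfio-pci` devices are the GPU itself and its HDMI audio function; `-vga none` means the only display output is whatever is plugged into the passed-through card.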
This is an honest question, and I appreciate your answer in advance.
I have a desktop with an HD 6850, and the radeon driver has always worked well - I used to play DOTA 2 on it.
But this is a very old GPU now, and all newer models (HD7xxx and up) were having performance issues with the radeon driver last time I was reading the forums.
R9 270X in my case. The Rx 3xx and newer (and they're slowly porting it to older devices) use the newer AMDGPU driver stack, which is supposed to be even better, but I haven't tried that myself yet.
Windows 7 may have been released 7 years ago, but it gets updates pretty frequently. It has had the ability to install drivers from Windows Update for a few years now.
The linux comparison isn't recent either. That was the exact same procedure I had to do on an nVidia-stricken laptop (thankfully not mine) four years ago.
I was running OSX in VMware every day for a couple of years, doing iOS development. With 8 GB RAM and a fast Intel SSD in my Windows laptop, the VM was faster than the actual MacBooks I saw. Modern hardware-assisted virtualization is very efficient - BTW, that's exactly what enabled cloud computing.
VMware’s virtualized 3D-accelerated GPU is IMO very good.
Not sure about compatibility with recent OSX, though. I used Lion and Mountain Lion, and those worked well. AFAIR I used the VMware Tools from Fusion.
On the host machine, you need a reasonably fast Intel CPU, hardware-assisted virtualization enabled in the BIOS/UEFI, enough RAM, and an unofficial VMware patch to unlock OSX guests on PC hosts.
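A quick sanity check for the hardware-virtualization part (the CPU flag is `vmx` on Intel, `svm` on AMD):

```shell
# Count logical CPUs advertising hardware virtualization support.
# Note: the flag can still appear even when virtualization is disabled
# in the BIOS/UEFI, so a nonzero count isn't the full story; on Ubuntu,
# kvm-ok (from the cpu-checker package) gives the definitive verdict.
grep -c -E 'vmx|svm' /proc/cpuinfo || echo "no vmx/svm flag found"
```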
In the guest OS, you need the VMware Tools for OSX; they exist because VMware supports OSX guests when running on OSX hosts.
Or just wait for Windows 10 Redstone on Aug 2 and run Bash on the Windows Subsystem for Linux. No overhead, and you get full speed with unmodified debs.
The $540 version uses a Core m3, which has roughly 15% lower performance than the latest iPad, so I doubt it's going to be too great of a dev machine (just because the iPad seems fast with one app per screen doesn't mean the same performance will be enough for 20 tabs and 5 other programs running at the same time).
What kind of development would we be talking about? I still use my 2009 MBP for day to day development. It's ~10% slower than a Core M3, I usually do webdev on it, but it's also fine for AOT things like Rust, Haskell and even some C here and there (I recently compiled RethinkDB, no biggy).
The things that really hurt are the short battery life, which is ~1.45 hrs, and that it's missing the modern CPU extensions that will obsolete it soon. Both of these issues obviously won't be present on a Core m3, so I see no reason why you wouldn't dev on an m3 unless you compile big native projects with great frequency.
I'm surprised you find Rust and Haskell bearable, because for both languages slow compilation is a pretty well-known problem[1][2]. On my old laptop (2011 MBP), compiling and interpreting (ghci/ghcid) took so long that it was distracting. This isn't even a big project; just a medium-sized one with a little over a hundred Haskell modules.
Well to be fair I only do hobby projects in both languages. My Haskell project is a C compiler, and my Rust project is a game engine, both compile in seconds. Both have just a few modules. Most complexity is in the dependencies, which have some compile time consequences but not terrible so far.
This is my gripe: unless you spend hours customizing each laptop, Ubuntu makes a 10-hour battery life shrink to around 2 hours with only the default 'power saving' features turned on.
Only if they use a good Wi-Fi card, or the laptop can be opened easily (which should be a given if the RAM can be upgraded). I bought the XPS 13 2015 and had a lot of struggle with it until I exchanged the Broadcom card for one from Intel.
I think he's talking about the UEFI nonsense that often happens with Western manufacturers locking it to Windows (preventing the switching of the signing keys).
I'm not sure how true his thought is in relation to China, but that's neither here nor there.
I just wanted to point out that the absence of a UEFI lock doesn't automatically make a laptop "work well" with Ubuntu, as someone could read from this comment. Ah, it has been edited already.