nVidia's GPU technology focuses on maximum performance. Their GPUs are power-hungry, can render billions of triangles per second, and expose an interesting programming interface to support operations such as custom shading. (Within the last few years nVidia and AMD have improved support for using their GPUs in computation-intensive non-graphical applications. For example, people use GPUs for Bitcoin mining, for the good and simple reason that their computational throughput on SHA-256 hashes blows CPUs out of the water.)
Intel's GPU technology focuses on low power and low cost; as long as their GPUs can run the Windows Aero GUI and play streaming video, Intel doesn't seem to feel a need to push their performance. The combination of low power and low cost means Intel GPUs are ubiquitous in non-gaming laptops.
nVidia has observed that Intel GPUs have become cheap enough that it's viable to put both a weak, low-power Intel GPU and a strong, high-power nVidia GPU in the same laptop. nVidia calls this combination Optimus.
Under Optimus, the Intel GPU is used by default for graphically light usage, meaning Web browsing, spreadsheets, homework, taxes, programming (other than 3D applications), watching video, etc. -- since it's low-power, you get good battery life.
When it's time to play games, of course, the driver can switch on the nVidia GPU -- hopefully near a power outlet.
I think Optimus technology has been available for over two years. However, nVidia's Linux drivers do not yet support it, leading to much gnashing of teeth, and to the third-party Bumblebee solution which I discuss in another comment.
Current Intel CPUs have a very small GPU built onto the CPU die. By buying the CPU, you're paying for an Intel GPU anyway.
At the same time, unlike CPU performance, GPU performance really does scale with die area; if you want more graphics performance, get a bigger chip with more ALUs/texture units/etc., and because graphics is so parallel, everything will just go faster. However, a larger chip leaks more current when powered but idle, which can make battery life significantly worse.
What Optimus does is allow the Intel GPU to be connected to the display hardware and be used most of the time (e.g., when you're looking at email or whatever) when high performance isn't called for. At that point, the NVIDIA GPU can be turned off completely, meaning no leakage and no battery life degradation. If you want high performance, the NVIDIA GPU is enabled on the fly, rendering is done on the NVIDIA GPU, and the final result is sent in some way to the Intel GPU, which can then use its actual display connections to put something on the LCD.
Prior to Optimus, there was generally a mux that switched which GPU was outputting to the screen. This was messy: everything had to be done on one GPU or the other, switching was noticeably heavyweight and occasionally required reboots, it increased hardware complexity, etc.
The biggest issue with Optimus on Linux is that the infrastructure for actually sharing the rendered output didn't really exist until dmabuf appeared -- you need two drivers to be able to safely share a piece of pinned system memory such that they can both DMA to/from that memory and be protected from any sort of bad behavior by each other. (I also think it's impossible to have two different drivers sharing the same X screen, which is why Bumblebee works the way it does.)
I was not aware of this fact.
> the Intel GPU to be connected to the display hardware and be used most of the time...the NVIDIA GPU is enabled on the fly...
I did mention these aspects of Optimus.
> rendering is done on the NVIDIA GPU, and the final result is sent in some way to the Intel GPU, which can then use its actual display connections to put something on the LCD.
I guess I missed the point that the architecture is like this:
nVidia <-> Intel <-> Display
instead of like this:
Display <-> nVidia
Display <-> Intel
I was a little fuzzy on this point myself, so I appreciate the clarification!
> the infrastructure for the actual sharing of the rendered output didn't really exist until dmabuf appeared
I'd certainly believe that the nVidia driver's current approach was enabled by dmabuf. But Bumblebee shows it's possible to use Optimus on Linux without that particular kernel feature.
Then why does the new HD4000 have twice the performance of the previous version? :)
Intel is trying its best. It just takes time to catch up to the lead of ATI and NVidia. With the HD4000 it's quite a bit closer.
This is simply false, since adequate open-source support compatible with the current Ubuntu kernel has been available in the form of Bumblebee for over a year.
I have had an Optimus laptop for a while now, and since I got it I have been running Linux Mint, with the Bumblebee PPA. With Bumblebee, applications use the Intel GPU if you run them normally:
$ wine
If you want an application to use the nVidia GPU instead, you pass it as an argument to Bumblebee's "optirun" command:
$ optirun wine
I will grant that Bumblebee's approach is very hacky, and the DMA-based approach used by the new proprietary driver is likely to lead to higher performance and/or less CPU usage.
But I have found Bumblebee to be very stable, and more importantly, it's been available since I got the laptop, unlike the proprietary Optimus support -- which is still vaporware.
Having installed Bumblebee, I can use optirun to get CUDA acceleration in VMD just by launching it with
$ optirun VMD
It's just that easy, and while it is a bit hacky, if you live on the command line it's not a major issue (especially when combined with aliases).
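Taking the alias idea one step further, a small wrapper can pick the GPU automatically. A minimal sketch (the nvrun name is made up for illustration; it assumes Bumblebee's optirun is on the PATH when installed, and otherwise falls back to running the program normally, i.e. on the Intel GPU):

```shell
# nvrun: hypothetical wrapper -- use the nVidia GPU via optirun when
# Bumblebee is installed, otherwise run on the default (Intel) GPU.
nvrun() {
    if command -v optirun >/dev/null 2>&1; then
        optirun "$@"        # discrete nVidia GPU
    else
        "$@"                # no optirun: integrated Intel GPU
    fi
}

# An alias then hides the wrapper entirely, e.g. for VMD:
alias vmd='nvrun vmd'
```

With that in your shell startup file, typing vmd uses the nVidia GPU whenever Bumblebee is available, and still works unchanged on machines without it.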
 - http://www.ks.uiuc.edu/Research/vmd/
I didn't intend to call "hacky" the method of running things by adding another command to the beginning of the invocation -- though GUI-only users may see Bumblebee that way until frontends / desktop support are developed.
I was referring more to Bumblebee's internal architecture: running multiple X servers and ferrying data between them.
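In rough pseudo-shell, that architecture looks something like this (a simplified sketch, not Bumblebee's real code: the my_optirun name is invented, the echo stands in for actually launching anything, and while display :8 and the xorg.conf.nvidia config are Bumblebee's conventional defaults, the real optirun is considerably more involved):

```shell
# Simplified sketch of what optirun does internally (hypothetical helper).
my_optirun() {
    secondary=":8"    # Bumblebee's conventional second X display
    # 1. Start a second X server on $secondary, driven by the nVidia GPU,
    #    roughly:  X $secondary -config /etc/bumblebee/xorg.conf.nvidia &
    # 2. Run the app through VirtualGL, which renders on $secondary and
    #    ferries the finished frames back to the visible (Intel) server,
    #    roughly:  vglrun -d $secondary "$@"
    echo "would render '$*' on X display $secondary"
}
```

So every offloaded frame is rendered on one X server and copied to another -- which is exactly the "ferrying" that makes the approach feel hacky compared to a driver-level solution.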
And neither of these notes about "hackiness" is a criticism of the Bumblebee developers: they've done great work making the software as reliable as I've personally found it to be, and they got it out there quickly after the release of Optimus, with AFAIK zero support from nVidia.
Even if the internals are hacky, as an end-user I have no complaints; unless you're messing with the config file, or upgrading from a very early version of Bumblebee, its complicated design is well hidden.
Doing front-ends and desktop UI integration is somewhat out of scope for Bumblebee as a project. It would be nice to have, I suppose, but I personally don't need it (I always use the command line and can figure out how to make launch icons by hand if I need to). More to the point, it doesn't require the same intense knowledge of the guts of the Linux video subsystem that writing Bumblebee itself did. So I feel it would be much more logical to have those things be separate projects, since they require a different developer skillset and don't depend on the internal details of Bumblebee.
Not all the x86 chips in Intel's current catalog contain GPUs, but most do, and ISTR all the laptop-class chips do.
At one point I had three Nvidia-equipped laptops stacked in my closet, all essentially bricked, each eventually replaced by a laptop equipped with an ATI/AMD adaptor.
I initially had nothing against Nvidia, and Dell seemed to prefer them in its offerings, but experience has forced a change in my outlook.
Yes, perhaps, but:
1. Nvidia would need to unambiguously specify extraordinary cooling requirements, to avoid difficulties in the field. Apparently they didn't do this.
2. The graphics adaptors of others, in the same laptops, had no similar problems.
Someone may argue that ... oh, wait, you do make this argument:
> High performance graphics chips are going to get hot.
Yes, but if they reliably melt down, that fact negates their impressive specifications.
I'm imagining an advertising campaign in a parallel universe where everyone has to tell the truth -- "Nvidia -- the hottest graphics processors in existence!" Well, yes, but ...
Not being able to do this transparently should be considered an important design flaw.
As someone who designed Space Shuttle electronics, of course I agree. But we're talking about a failure to deal with heat, not the heat itself.
They make powerful GPUs, but, unless they can reliably perform their functions with the software I want to run, their products are worthless.