I got mine last week, and I've been using it as my main machine since then.
I set up a small Debian-based chroot and installed Guacamole [1], so I can use my favourite editor and X11 programs within a Chrome tab.
Compiling large C/C++ programs takes a while, but the Go compilers (what I mostly use) work really well. And I can always SSH into beefier machines (Linode/University) if I ever need to.
This replaced a ThinkPad which cost 5 times as much and broke in less than 13 months. So far I'm not missing the ThinkPad. (Well, maybe the 3-button TrackPoint.)
Ugh. I read this and cry inside. Perfectly brilliant idea that I didn't even think of.
Why? I just purchased a new ultrabook burdened with pre-installed Windows 8. If I switch to the legacy BIOS, the machine won't boot up, period. The only way to get the machine to work is through Windows 8. No installing Linux, no changing firmware, no nothing. Wonderful little box that's magnificently useless for systems-level programming...
Which ones? I've tried Ubuntu and Mint but not Fedora. I tried googling it, but all I ended up with was several pages of articles talking about UEFI and Secure Boot without a single link TO a Linux that did it. I'm dying to know 'cause I can't stand another minute on Windows 8.
I haven't actually tried Arch on UEFI, but their wiki page seems fairly extensive on the topic. Ubuntu's documentation also shows UEFI support, but it's a bit vague.
Can you quantify the differences in compile speeds between Go and C++ on the device? In particular, I'm interested in using Go to replace some of my scripts.
I used Debian's Multistrap on another computer to make the chroot. I just copied it into its own directory somewhere on /home and used it from there.
You'll need to enable dev-mode, and you'll probably need to remount the partition it's on without the noexec option. It's a pretty standard chroot from there on.
I can write up the process somewhere or share the filesystem if there's enough interest.
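In the meantime, entering the chroot looks roughly like this (a minimal sketch; the /home/chroot path and mount points are illustrative, not my exact setup):

    # Remount the filesystem holding the chroot without the noexec flag
    # (assuming it lives under /home):
    sudo mount -o remount,exec /home
    # Bind-mount the kernel interfaces most programs expect:
    sudo mount --bind /proc /home/chroot/proc
    sudo mount --bind /dev /home/chroot/dev
    # Hop in:
    sudo chroot /home/chroot /bin/bash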
The comparison to the Intel D525 is really unfortunate, as that's a 2.5-year-old, 45nm Pine Trail box with a 13W TDP. The low-power 22nm Ivy Bridge cores are only 14W, so I'd really have wanted to see a comparison to a more recent CPU.
The Exynos wipes the floor with everything else on the parallel NAS tests. I'm not sure why exactly.
The Atom core has known issues with floating point relative to desktop CPUs, and the A9 was never particularly good at it. So the FFTE results showing the A15 about 50% faster than Atom and 2x faster than A9 are a surprise. It's almost what I'd expect to see from a desktop box at that.
On the other hand, the 7-zip and x264 tests are integer dominated and largely cache-bound, and the A15 doesn't show itself off particularly well here, mostly matching the A9 and Atom per-clock. Yawn. (This is an area where all these CPUs get toasted by Ivy Bridge, btw -- remember that 14W TDP?)
The "Smallpt Global Illumination Renderer" and "C-Ray" tests (which I don't know anything about per se) shows everyone about the same (and for Smallpt 2x as fast as PandaBoard). So I'm going to guess this means the tests are DRAM-bound, and thus infer that the A15 has a similar memory bandwidth to Atom (probably 2x 32 bit vs. the Pine Trail 1x 64 bit channels). That's not great, honestly, as it's still about half of what you can get with a desktop part. But then DRAM refresh draws power, so may not have been an option for the Chromebook .
7-zip and x264 make almost perfect use of additional cores, so it makes sense that a quad-core beats a dual-core. (Aside: I rather wish Phoronix actually mentioned how much a given test benefits from multiple threads; single-threaded tests are presented next to multithreaded ones when comparing chips with different architectures and core counts, without any comment or recognition.)
Additionally, x264 is heavily optimized for x86 and nowhere near as much for ARM. 7-zip is probably similar.
For floating point, the A15 has double the execution resources of the A9 (two symmetric floating-point pipelines, each capable of an FMA per cycle), so that's no surprise to me.
Exynos-5, OMAP and Atom D5xx are all dual-core CPUs, and they all get about the same performance per clock in those tests. I don't think parallelism is the determining factor -- I went with "cache-bound" as my guess. And again, this is a spot where I'd really like to see a comparison with something other than Pine Trail. One assumes the L3 on Ivy Bridge would be a big help.
The Calxeda Highbank nodes are quad-core. Relative to the OMAP4 tested, the Exynos 5 has a 1.4x higher clock but 1.9x better 7-zip performance and 2.3x better x264 performance. For reference, the A15 has double the vector execution resources of the A9, in addition to adding out-of-order execution. Integer execution resources remain the same (two symmetric pipelines), but it gains triple issue, a wider out-of-order window, and the potential for two loads/stores per cycle.
So simple integer-heavy workloads aren't expected to be dramatically faster than on the A9 -- just the ~40% ARM quotes, and these tests back that up.
Atom D5xx has two hyperthreaded cores, and hyperthreading does help a lot for x264 at least. IIRC when I tested on a single-core Atom, the hyperthreading gave x264 a 1.6x boost.
14W TDP is still too high when a dual-core A15 draws 4W at most. I doubt the IVB you're talking about is more than 3x faster than the A15. Intel also has an advantage in process right now, and I'm not sure they will have it in the future. I think their 14nm will be delayed, while others will catch up to Intel on 14nm and might launch 14nm chips at the same time as Intel's, around 2015. We'll see.
Too high for what? Note that the backlight on these devices is probably 6+ watts already. And TDP isn't a good estimate of "max power" (and it's downright awful for "typical power"). It's a thermal number saying what you should design your packaging for. The point being that the netbook being compared corresponds in the modern world to a much more performant ("ultrabook", sigh) system at the same power draw.
As far as process nodes go, I'm not sure what you're citing as evidence there. Intel has been getting farther ahead with each node in the recent past. They launched 22nm more or less simultaneously with TSMC's 28nm logic (who themselves have a good lead over the rest of the industry!). Six months on, all you need to do is walk into a store to find 22nm digital logic in PCs everywhere. I don't believe there are any sub-32nm ARM SoCs in consumer devices quite yet (the Exynos-5 in the linked article is 32nm I believe).
What's with the only 6.5 hour battery life? I would buy it if it had >=9.
Edit: it's a serious question :) I am really wondering why the Nexus 10 has 9 hours and this 6.5?
Please Google, add a 3rd 'chromebook without chrome os' which is a Nexus 10 (Android but hopefully soon Ubuntu hacked) with a click-on keyboard (with battery) for $350. Thanks!
Given the same volume, a tablet has more space for batteries than a laptop. A laptop has to use space for the hinge, keyboard, trackpad, peripheral ports, and display. Worse, you can't fit batteries behind a laptop display without ruining the balance. On the other hand, a tablet is practically a screen with a ton of batteries behind it.
Here's some real-world data: The 3rd gen iPad has a 42 watt-hour battery. The 11" Air has a 35 watt-hour battery. The Air is 25% larger in one dimension and over 50% heavier, but it still can't fit the same amount of batteries as the iPad.
And of course tablets typically have slower processors, less memory, and less flash than laptops. Their lower specs mean tablets draw less power.
With so much going against them, I'm surprised that some laptops manage to come close to tablets in battery life.
Intel chipsets and processors have gotten very, very good with power management. With reasonable screen brightness and Wifi enabled, my X220 idles at ~8.5W. With a 90Wh battery that gives me ~10 hours on a plane, which is more than I can handle in one go. A tablet like the iPad draws about 5W, which is 2x better -- but that's for a device with a smaller screen, no spinning drive, no keyboard, and a much weaker processor.
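To spell out the runtime arithmetic: t = E / P = 90 Wh / 8.5 W ≈ 10.6 h, which lines up with the ~10 hours observed once you allow some margin for not draining the battery completely.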
Intel has been improving their power efficiency dramatically in the past few years. They have the most advanced fabs and arguably the best technology in the semiconductor industry. All the talk is about ARM and low-power computing but Intel is, watt for watt, a serious threat in the high-performance computing market.
That's a 6x larger battery, and it most likely scores only 3x faster in browser tests (this dual core A15 can score under 700 ms in Sunspider). And that's disregarding the much lower price and weight of this machine.
X2xx seem to be nice laptops; I'm afraid to buy one though, because I've had quite a bad experience with Intel laptops, including my 2009 and 2010 MacBook Pros; they just conk out after 3 hours with new batteries. And I check often to see what is running, and I work almost only in the terminal (I must say, since I replaced the native OS X terminal with iTerm2, http://www.iterm2.com/, it is much better). The other Intel laptops I had never even made that (even the ones with tiny screens). And I don't buy the cheapest stuff.
On the other hand, my S2, iPad 1, Pandora and Zaurus easily make 10 hours. I don't spare them from hard work, but they do much better. On the Pandora and Zaurus I can mostly work normally (despite the tiny screens), so I don't mind a 10-inch screen at all (preferably with an insanely high res like the Nexus 10's).
Agreed about one thing, Intel is finally starting to care about power efficiency. The CPUs have been solid since the Pentium M days, but the chipsets feel more neglected.
It only has a 4,000 mAh (15 Wh) battery, smaller even than the Nexus 7's 4,300 mAh battery, and considering it also has a much larger 12" screen (more than twice the screen area), the battery life would also be shorter than the Nexus 7's.
Nexus 10 has a 9,000 mAh battery, which is also used to compensate for the 2560x1600 resolution. iPad 3/4 has 11,500 mAh and it's pushing 1 million fewer pixels for comparison's sake.
But I agree, Google needs to rectify this in next year's ARM Chromebook. They need to use a battery large enough to last 10h (and hopefully maintain, or lower, the price). If they use the big.LITTLE chip from Samsung next year, it might be easier to achieve that.
Please don't try to compare battery capacity by looking at just current × time. That's an irrelevant number if the voltages aren't the same. The Chromebook has a 30Wh battery, which is twice the size of the Nexus 7's.
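Back-of-the-envelope, with assumed (typical, not spec-sheet) pack voltages: energy = capacity × voltage, so a 2-cell ~7.4V pack at 4,000 mAh holds about 4.0 Ah × 7.4 V ≈ 30 Wh, while a single 3.7V cell at 4,300 mAh holds about 4.3 Ah × 3.7 V ≈ 16 Wh. Similar mAh numbers, a 2x difference in stored energy.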
So when I suggested that mAh wasn't the optimal unit for measuring capacity, the implication wasn't that you should measure by number of cells instead. It has a 30Wh battery, as even minimal research would show.
Coming in even or slightly faster (in most benchmarks) than an Intel Atom D525 from over two years ago is "crazy fast"? I'm impressed with the Chromebook overall and I think it's a cool product but talk about a bait-and-switch title.
I think people are reporting this relative to expectations rather than absolutely: they assumed it would be slower due to the lower cost and lower power consumption, but the results defied that expectation.
$250 is not exceptionally low cost for a netbook. Furthermore the power dissipation of the chromebook under load is only slightly better than a comparable atom system [1]. The idle performance is better, I will give them that. Still, the death of x86 in the face of ARM for performance-sensitive environments (even when power consumption is a factor) has been greatly exaggerated.
The A15 is 40-65% faster than Atom while drawing half the power, according to your link. So yes, I'd say that's pretty impressive at its level.
Also, this specific Chromebook is higher quality than those netbooks. It's more like an ultrabook in terms of design. Not to mention that it has twice the battery life of those netbooks under normal usage.
>All my development happens in ssh and firefox. So performance is not an issue.
Does that really bear out? I like the idea of a low-powered netbook connecting to a high-powered server for development, but in my (very limited) experience it's a pain.
I have two machines: an old laptop with an AMD RM-70, and a Thinkpad X41 with a 1.6 GHz Pentium M.
On both machines, using Firefox is a painful experience because the performance is so bad. Using Chrome is pretty good.
I've been thinking about buying a newer netbook with better battery life, but the performance of newer low-powered mobile CPUs doesn't seem that much greater than the systems I already have.
Depends on whether your use of Firefox is for Javascript or server-side scripting. Running heavy non-minified Javascript with caching disabled is much more pleasant on a fast machine.
I'd rather get hit in the crotch than use Firefox over X forwarding. Slow on top of slow. There is some interesting work with NX and NX-like solutions that make it much more tolerable.
Out of RDP & NX, both require extra software but RDP is more widely re-usable (virtual box, windows etc).
The bandwidth requirements aren't the problem with X these days -- a 3G connection at under 1Mbps real-world is fine for reasonable display sizes; it's the round-trip latency that's the killer. It's such a chatty protocol, but on a LAN / WiFi it's fine and works out of the box with... scratch that, you need to install XQuartz separately since Mountain Lion. I almost forgot about that.
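If you're stuck with plain X forwarding over a slow link, compression buys back a little (standard OpenSSH flags; the hostname is obviously a placeholder):

    # -X enables X11 forwarding, -C compresses the stream; it helps
    # with bandwidth but does nothing for the round-trip chattiness:
    ssh -C -X user@devbox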
Yeah, my understanding is that NX mimics the remote end locally and then translates the result and transmits that. There's an Android app called BVNC that uses X forwarding with a local X server to do the same thing. No extra software, and it's more responsive than plain X forwarding.
I should've just mentioned it before, it may be of use to you.
I think performance is still an issue. Currently there are no GPU-accelerated drivers for non-ChromeOS Linux, if I understand correctly, so it will stay that way until someone puts in the effort to make that work.
I am also starting to consider this for development, but I need more assurance that it is a viable option.
Does everything mostly work out of the box, or would there be a lot of tinkering / troubleshooting required, e.g. because of the somewhat non-standard nature of running ubuntu on ARM architecture? What about drivers, e.g. for wifi and trackpad?
After running Debian on an ARM box for the past ~5 years, my experience has been pretty favorable. If there's a solid community following (which is a good bet with this product, given the hype around it), the drivers should be a non-issue. Installation went very smoothly for me. Everything else, if it can be built from source, should be fine. On my box apt-get install "just works" for almost all of the software I use, and maybe 75% of make installs go smoothly with relatively little tinkering required.
For actually getting work done, I use an x86 laptop. I wouldn't ever go back to using a netbook or any non-mainstream notebook with iffy Linux support. It's just not worth your time to deal with technical issues. Buy the best tools that you can find (best being what works for you).
One of the reasons the Android emulator is slow is that it has to translate ARM instructions to x86. So if I'm running an ARM PC as my main dev PC... would the emulator run magically fast? Like unicorns, rainbows and such?
My guess is they would have to make a version of the SDK for ARM, and they haven't yet. But they may soon enough as these ARM chips get closer to PC-like performance, and more people buy ARM machines.
The emulator is qemu-based, and so is KVM. Luckily, the Cortex-A15 feature list includes hardware virtualization, and quick googling shows there's ARM virtualization support for Linux. This has already been done for android/x86. So all the major bits and pieces are there.
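A quick way to sanity-check whether KVM is actually usable on such a box (assumes a kernel built with KVM/ARM support; purely illustrative):

    # /dev/kvm only appears if the kernel's KVM support initialized,
    # which on ARM requires booting the kernel in hypervisor (HYP) mode:
    ls -l /dev/kvm
    dmesg | grep -i kvm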
I guess you meant "was slow." As of now, the Android emulator ships with an Intel Atom image that uses Intel's HAXM tech for native execution on an x86 machine (and is much faster than using the ARM image/emulator).
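For reference, the x86 path looks something like this with the SDK tools of this era (the AVD name and API level here are made up for the example):

    # Create an AVD backed by the x86 system image, then launch it;
    # the emulator picks up HAXM automatically once it's installed:
    android create avd -n x86test -t android-17 --abi x86
    emulator -avd x86test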
Isn't the entire point of the JVM: "write once, deploy everywhere" ???
Emulating the architecture makes sense from the standpoint that you want your development and 'production' (ie, the devices/handsets) to be the same ... but the Android emulator is INCREDIBLY slow. The juice is not worth the squeeze.
There used to be a "simulator" build of Android, where all of Android got shoved into one process, but it was flaky and broken as of Android 1.0 and never got revived.
For the things I've done on Android (games, slideshows, etc) I've been able to exclusively use GL and not the Android UI toolkit, so it's simple for me to have a "desktop" target and do all of my real work there, occasionally building for device and testing...
Have you checked out the Android-x86 project [1]? I remember they were able to deliver some nice performance gains using an x86 emulator vs. the ARM-based emulators.
I should be able to figure this out for myself, but I will ask anyway: is 16GB SSD really enough to support Ubuntu and a development environment?
At a minimum, I would need Ruby + JRuby + Clojure + editors and an IDE. I could probably live with just 4 or 5 extra GBs for project files and whatever my current writing project is.
However, it looks like it has an SD card slot as well as a USB 3.0 port. That means you could put a 32GB SD card in there and have plenty of room for projects. If every last gigabyte matters, you could probably get away with an Xubuntu install that's fairly stripped down.
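One way to make the SD card feel like internal storage, as a sketch (the mount point is hypothetical and varies by distro):

    # Keep bulky project trees on the SD card and symlink them into
    # $HOME to spare the 16GB internal flash:
    mkdir -p /media/sdcard/projects
    ln -s /media/sdcard/projects ~/projects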
Yeah, this. I've been using the Chromebook as a mobile dev device since the week it came out (lucky enough to get in on the original Play orders) and the primary frustrating thing about this box is that an inserted SD card sticks almost halfway out of the case.
I was hoping the device would allow sdcards to be fully inserted as is the norm on most PC laptops, so I could just insert a hefty 32GB card and treat it as almost internal storage. But nope, Samsung can't not take a design idea from Apple, even when it is one of their occasionally bad ones.
Does anybody know how well the Mali drivers work on vanilla Linux (non-Android, non-Chrome Linux)?
As a longtime Linux guy, these ARM machines (and the similar tablets) are enticing. I'm finding conflicting reports on the drivers, though. It appears they were closed-source, that there is a project to reverse engineer them, and also I found a website (malideveloper.com) where they appear to have source code officially available...
I don't know much, but it's a bit of a mess. You're right that there's a reverse-engineering effort, but it's against the Mali 400 GPU, not the T604 in the Exynos. Samsung has been under quite a bit of pressure to open up the GPU code, and they've acquiesced to the extent of two binary blobs. See the Arndaleboard in the ref below.
ARM have released code but it requires the full DDK which is not for general use. It's only available to partners.
BTW, that link also includes a full Linux Jelly Bean Android repository -- everything except the blobs, which are binary-only.
Only the kernel code is available on malideveloper.com though you can probably obtain binaries for the userland stuff (EGL, GLESv2) easily enough. ChromeOS is close enough to normal Linux (normal libc, etc) that it should be possible without too much hackery.
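Once you've dropped the userland binaries in place, a quick check that the GL stack resolves (library names are typical for Mali blobs, not confirmed for this exact release):

    # List what the dynamic linker can find:
    ldconfig -p | grep -iE 'libEGL|libGLESv2|mali'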
With the caveats that the application must be written in a language whose compiler/interpreter seamlessly supports ARM, and whose dependencies all take ARM into account wherever architecture-specific code is necessary.
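Go is the happy case here; assuming a toolchain built with ARM support, cross-compiling for the Chromebook from an x86 box is just environment variables (GOARM=7 targets the Cortex-A15's ARMv7 ISA):

    GOOS=linux GOARCH=arm GOARM=7 go build .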
Say, for office users: is OpenOffice the predominant office suite used on these books, or do they run M$ Office in a virtual machine? I ask because I'm currently a student.
I wonder how much more performance we can squeeze out of the compilers, since x86 has had the advantage of years of tuning. But if an Ivy Bridge or the upcoming Haswell were thrown into the chart, I guess ARM as a performance solution still has a long, long way to go.
Ah, that sucks then. Having done stuff with a BeagleBoard with everything on an SD card (even a class 10 one!), it's horrifically slow. Though perhaps the OS on the built-in NAND with /home on an SD card would give not-horrific performance...
It has 16GB of on-board flash behind a flash/SATA interface chip, basically a 16GB SSD as parts on the board (more compact and cheaper than a replaceable 2.5" drive). That is plenty big for the OS, lots of programs, and your home directory as long as you don't go overboard with pictures and videos. Those should go "in the cloud" or on a SD card.
[1] http://guac-dev.org