Breaking BIOS: enabling VT-x virtualization support on Acer Aspire One netbook (sudonull.com)
133 points by userbinator on Nov 18, 2022 | 78 comments



2012 should be added to the title.

FWIW the CPU in question is a 45nm dual-core, four-thread, 1.66 GHz Intel Atom chip that supports a max of 2GB of RAM. It was Intel's x86 alternative to ARM for phones/tablets.

Its micro-architecture was completely different, far simpler, and lower performing than Intel's Pentium or Core chips of the time, in order to massively reduce die size and save cost. It was therefore slow and under-powered even back in 2011, even without these artificial limitations put in place for market segmentation. Just ask anyone who used Atom chips of that era.

At the time you would often find these chips in cheap netbooks and tablets, usually by Asus, and running a bare-bones Linux distro was the only way to get anything usable out of them, as getting run over by a bus was a more enjoyable UX than running Windows on them.

In a way, those cheap, light, portable netbooks were ahead of their time, held back mostly by the low performance of the era's low-power chips, the bloat of Windows at the time, and the jankiness of bare-bones Linux distros that wouldn't ship with the mainstream apps people used on Windows.

So as cool as this hack of removing the artificial limitations is, I'm not sure how much this really helps to make a case for using this CPU for OS virtualization using Windows VirtualBox on all four threads, considering the already severely limited performance and RAM.


My first ever computer was a laptop with a first-gen Atom and 1GB of RAM. Single-core, no hyperthreading. I was super excited and loved it, but it was so slow that it felt unusable even to me as a young teenager who didn't know any better.

Guess I should be thankful though, in trying to get it to be faster I discovered that some Linux distros can be very lightweight, and it started me down a rabbit hole I'm still going down.


That's the same computer I learned to program with. I was forced to use vim because of system resources. And now I'm glad.


It's funny how many of us have the same story.

It was a bit like the microcomputers in that respect.


TinyXP ran very well on this class of computers, and for as long as Firefox supported XP it was a really useful platform.

Sort of like a chromebook before chromebook.

I later did a lot of C, PHP and Python programming in Linux + Emacs on such a computer, so not completely useless.


A more useful hack is unlocking AES-NI with the same method. Some laptops have the AES instructions permanently disabled at boot via the BIOS by the OEM to comply with regional cryptography regulations, such as some Lenovo ThinkPad models sold in Russia. In the 2010s, disassembling the BIOS and patching out a single line could regain hardware AES functionality. Unfortunately it's no longer possible after Boot Guard's BIOS lockdown.


Reference for that last sentence?


With Intel Boot Guard (now pre-enabled on almost all laptops on the market), the CPU verifies the digital signature of the BIOS code. The public key is burned into the chip [0] by the OEM before it leaves the factory. If signature verification fails, the CPU refuses to execute any code. Naturally, any code modification in the BIOS causes the check to fail, effectively bricking the machine.

[0] Actually the PCH, aka southbridge, not the CPU. But it's not much of a difference for end users.

See:

* Intel Boot Guard, Coreboot and user freedom

https://mjg59.dreamwidth.org/33981.html

* Why You Don't See Coreboot Supported By Many Modern Intel Systems

https://www.phoronix.com/news/Intel-Boot-Guard-Kills-Coreboo...

* Bootguard (for a technical description)

https://trmm.net/Bootguard/
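To make the chain described above concrete, here's a rough conceptual sketch in Python. This is not Intel's actual data structures or verification code; it just illustrates the idea that the fuses hold only a hash of the OEM key, the key and signature live in the flash image, and any change to the verified BIOS region fails the check (RSA PKCS#1 v1.5 over SHA-256 is an assumption here):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_initial_boot_block(fused_key_hash, oem_public_key_der, ibb, ibb_signature):
        # 1. Fuses are too small to hold a full key, so they store its digest;
        #    check the public key found in the flash image against that digest.
        if hashlib.sha256(oem_public_key_der).digest() != fused_key_hash:
            return False
        # 2. Verify the OEM signature over the initial boot block.
        #    Real hardware does this in an ACM, not in the OS.
        key = serialization.load_der_public_key(oem_public_key_der)
        try:
            key.verify(ibb_signature, ibb, padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return False
        return True

Flip one byte of the verified region and the signature check fails, which on real hardware means the machine simply refuses to boot.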


> it was slow and under-powered

As someone who used one of these laptops as a daily driver for years, I disagree. The CPU was quite powerful. I would regularly use it to take notes in class, do development work (with an IDE) & browse the web. Compilation of larger components could be quite slow.

If you didn't insist on running the latest desktop environment with a bazillion bizarre graphical effects, it worked without issue. Battery life was 'good enough' that I could get a couple hours of regular usage.

Fast forward to today & many "productivity" applications like Microsoft Teams are so bloated they consume more memory than the entirety of that system had.


Same; I had an EeePC as my main laptop for four years. Never really felt slow, aside from when compiling software (I distinctly remember installing MacPorts on my hackintosh partition and it taking three hours to compile the base set of packages).


> As someone who used one of these laptops as a daily driver for years, I disagree. The CPU was quite powerful.

The benchmarks, reviews and poor sales disagreed with you. Maybe it wasn't under-powered for your standards and needs back then, but compared to the mainstream laptop market of the time, these chips were very slow in Windows apps, which is what most consumers ran back then.


Past Windows XP I've never used a Windows system on any hardware I would consider adequate. Other than the non-stop "windows update broke thing X which worked just fine" it has a user interface that still manages to feel behind Windows 3.11.

I think the reason I consider XP to be performant is that I had less to compare it against back when it first released.


> reduce die-size and save cost.

Reducing power consumption was also a goal, and it did achieve that. Though the early models at least were paired with power-hungry chipsets that negated most of that benefit.


Being battery efficient is irrelevant if it comes at the cost of drastic reductions in performance that make the device unusable. The sales numbers reflected this consumer choice in that era.


The sales numbers for these devices were very good for at least a couple of years. They ran the then-current Windows (Vista) very badly, partly due to the CPU but mainly due to not having much RAM, so MS extended the life of XP for them, as it was scared Linux would otherwise win more mindshare. Linux and OpenOffice (IIRC this was before the OpenOffice/LibreOffice split) ran perfectly well. Web browsing was OK too as long as you didn't overdo the number of open tabs (things were mostly not as bloated as they are today in that area, by quite a margin), and of course tasks like email were fine.

My little netbook running Debian was a bargain and did sterling work for a while, and I know others that found that form factor and other properties to be rather convenient.

Sales numbers did fall off precipitously eventually, partly due to new models being hobbled by restrictions from MS (they wouldn't sell XP for higher-spec models IIRC, so no models got much of a RAM upgrade), and as that became an issue, market share was lost to tablets (and to a small extent smartphones) and higher-spec laptops.


> The sales numbers for these devices were very good for at least a couple of years

Sales numbers were good at first because consumers were drawn to the massively small price tags of these machines, along with the small form factor and increased portability. They were like 50% cheaper than a regular laptop and could fit in your purse/cargo pants. Who wouldn't want that?! Especially in developing regions like Eastern Europe, where a regular laptop would cost several months' (plural!) wages.

> Sales numbers did fall off precipitously eventually

Yes, because after purchasing them, consumers wised up and realized the low price tags came with some huge compromises the Average Joe did not expect (the Atom CPU was too slow for Windows, which, let's face it, was by far the most used OS back then; low-res displays; minuscule eMMC storage that filled up far too quickly; the Linux ones didn't ship with a sane mainstream distro like Ubuntu or Mint but with some weird bare-bones custom one that had no mainstream apps available; playing PC games was an absolute no-go; etc.), so the netbook market crashed. Unless you absolutely needed the small form factor and were willing to put up with these compromises, you were much better off buying a second-hand regular laptop at the price of a new netbook.

In a way, these slow, small-storage, weird-Linux Atom netbooks kinda poisoned the well for the rest of the market segment, even for the "good netbooks" which were more capable, until Apple came along with the Air and the PC market brought out the Ultrabooks.

The only place I saw netbooks flourish was, no joke, among the taxi drivers in my Eastern European country, who, thanks to the machines' diminutive size, would attach them to the center console of their ancient cars as a DIY infotainment system, and use them to watch movies and play games while waiting for customers (not gonna lie, a 2010 netbook with a keyboard bolted to the center console is still a better infotainment system than the touchscreen ones shipping in most modern cars today). I even saw one taxi driver playing DOTA on a netbook in his car (not while driving, although for Eastern Europe that wouldn't be out of the picture).


I seem to remember netbooks (even with the power-sucking chipsets the early ones had) being pretty popular for a short while. Ultimately they were kind of a flash in the pan, but Microsoft was sufficiently afraid of them that it brought XP back from the dead just to get Linux off the things. (Vista/7 were way too big of resource hogs to run tolerably on these, although there was a Starter edition of Windows 7 for the better-performing later models.)


> Its micro-architecture was completely different, far simpler, and lower performing than Intel's Pentium or Core chips of the time, in order to massively reduce die size and save cost.

...and yet they decided to add VT-x for some reason, but disable it?


Atom cores were getting pitched a lot for micro servers back then. Containerization tech hadn't really taken off at the time; Vagrant using full VMs was still bleeding-edge environment orchestration.


It's called market segmentation. It has been happening for decades.

You can also buy cars with included heated seats and they're disabled unless you pay extra.


The first generation Atoms were some of the worst chips ever made. By the end with the Bay Trail and Cherry Trail chips they were quite good, but Intel killed them in favor of M-series chips with the same performance per watt but five times the price tag.


Bayfail was utterly broken on Linux until two years ago or so. There is a bugzilla entry with more than 1000 replies. And it appears that some platforms with that CPU still have issues with Linux, which according to that issue is yet another bug.


The biggest problem I had with Bay/Cherry Trail is that they are 64-bit processors but have a 32-bit UEFI. So the distro has to be able to install a 64-bit version from a 32-bit UEFI. Most of the "mini PCs" that came with those Atoms shipped with a 32-bit Windows preinstalled, I assume for the same reason. Some distributions, like AntiX, MX Linux, and Void Linux, can boot a 64-bit OS from a 32-bit UEFI.

Also WiFi/Bluetooth can be problematic when they are on SDIO, in my personal experience. Which anecdotally for me was about 50% of the time.


That might be why some manufacturers decided to disable 64-bit support on them, which, as it turns out, is another MSR you can unlock with a BIOS mod.
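For anyone curious what these locks look like from the OS side, here's a minimal sketch (Python just for illustration, assuming Linux with the msr kernel module loaded and root access) of checking whether VT-x has been locked off by the firmware via IA32_FEATURE_CONTROL (MSR 0x3A), which is the kind of lock the BIOS mods in this thread flip:

    import struct

    IA32_FEATURE_CONTROL = 0x3A  # bit 0 = lock, bit 2 = enable VMX outside SMX

    def read_msr(msr, cpu=0):
        # Reads a 64-bit MSR via the Linux msr driver (modprobe msr; needs root).
        with open("/dev/cpu/%d/msr" % cpu, "rb") as f:
            f.seek(msr)
            return struct.unpack("<Q", f.read(8))[0]

    val = read_msr(IA32_FEATURE_CONTROL)
    locked = bool(val & 1)
    vmx_on = bool(val & (1 << 2))
    if locked and not vmx_on:
        print("VT-x disabled and locked by firmware; only a BIOS mod/update can change it")
    elif not locked:
        print("Register not locked; the OS itself could still set the VMX enable bit")
    else:
        print("VT-x enabled")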


What does its compatibility with Linux have to do with how good it was as a chip?


Strictly technically speaking, not much. However, having working open-source Linux drivers extends the life of the product well beyond the point where Microsoft and/or the hardware manufacturers declare it obsolete and stop any further development for that chip. There's a huge number of old systems out there that are supported by mainline Linux but couldn't run anything newer than Windows XP or 7, and they aren't just laptops. If you have put money into a piece of hardware that still works perfectly and are forced to ditch it because either its manufacturer or Microsoft suddenly says "sorry, no more drivers, but here's an offer for the new model", that definitely sucks.


Earlier Atoms were close to crap for desktops, but they still have some uses.

I used a N455 based netbook for years as a motorcycle-laptop with a stripped down Debian. Unfortunately I didn't know about Alpine Linux back then, which could likely have made it a lot faster. Also, my home NAS uses a D410 CPU which is enough to support two ZFS pools and other services.


It still makes me angry the Surface Go and Surface Go 2 have inferior screens and build quality to the Atom-powered Surface 3.


Buying an atom netbook was what helped me learn about the difference between in-order and out-of-order CPUs.

I actually upgraded from the 1.66 GHz Atom to a 1.3 GHz ULV Core 2 Duo, and it was way faster.


For nearly 10 years I used a compact netbook with an Intel Atom Z530 1.6 GHz and 1GB RAM, a Nokia Booklet 3G. I swapped the HDD for an SSD, and given its fanless unibody aluminum design and its 10.1in 1280x720 screen, it was silent and usable for basic office computing and 720p video, with a full-sized HDMI out. Even with Debian I used to get 6 to 7 hours of battery life. I used it extensively for note taking, dabbling in Python, and writing. It was slow, but more reliable and less buggy than running Debian on my older MacBook Pro (which had hardly any battery life left). I used the Nokia Booklet until its motherboard died in 2018, and I still miss that little machine's compact simplicity and utility.


The cheap netbook market just kinda died and got replaced by the more expensive "ultrabook" market. I used my Samsung N130 quite a bit; the only real thing it missed was a serial port, but sadly no laptops have one nowadays.


I would say netbooks were replaced by phones as first computing devices. Can you learn how to program on a phone? Probably not, not the same way you could on a little netbook.


Most consumers buying netbooks weren't buying them for programming; they were buying them to, as their name implies, surf the net. That made them the first victims of the mobile device revolution, as you could now surf the net much better on a phone or tablet, with a better UX and much more addictive apps.

Netbooks being used for programming was a very niche use case (maybe not on HN, but in general). Most programmers had more powerful and versatile PCs/laptops at the time for that.

That explains why they disappeared so quickly. But yeah, they were excellent as cheap Linux devices.



I dodged the Atom and picked up an Acer 1810tz: 4GB RAM, Intel U4100 CPU. It ran rings around netbooks at the time and I still use it here and there to this day. Probably the best keyboard I've ever used on such a small device.


Even for in-order vs out-of-order, there's slower and faster.

Early Atoms were really slow, compared to e.g. a SiFive P270, which is also in-order.

Out-of-order CPUs exist that are much slower. After all, in-order CPUs can still be superscalar.


I was wondering about the contents of the site sudonull.com; it seemed a mixed bag, not a tech blog. Apparently it's a translated repost from habr.com. Both sites are served from different networks, so I'm not sure they're legitimately affiliated, but they could very well be.

https://habr.com/ru/post/152056/

https://en.wikipedia.org/wiki/Habr


The article very much looks like it's machine translated from Russian because some phrases are weird in English but make perfect sense if you translate them back. Also some screenshots are in Russian.


Habr allows reposts from different media, but only by author.


I used one of these for years for programming whilst travelling at university.

It worked well, but I could pretty much only run one program at a time. It felt more like the old microcomputers in that respect!

But that forced me to use dwm and Arch Linux, and learn a lot more in the process.


I had Compiz Fusion with LXDE running on my Acer Aspire One and I was amazed at how well it performed. Wobbly windows for live video from the webcam and everything. Running Gentoo's emerge did take all day though. In retrospect, Arch would have been a better fit.


I distinctly remember having to use mplayer/mpv over VLC, and youtube-dl, etc. (definitely not Chrome!) just to be able to play any reasonable video (usually downscaled from 1080p).

And Zathura for lightweight PDF reading (with customisable colours!).

And of course vim and emacs (you're not running IntelliJ, PyCharm, etc. on that).

But it worked really well, and in retrospect was probably good for productivity as you couldn't really do anything else. So I'd download Coursera videos and watch them on the train with mpv/mplayer, etc. - it was frustrating when I had to fix some GUI code on-the-go though, and could barely use Glade with the tiny resolution and screen.


I have no idea why the option to disable it even exists in BIOSes, let alone why it's so often disabled by default.

Anyone have insight on it? What's the utility of being able to disable virtualization?

Even in the case of "corpo disallows it" (for whatever idiotic reason), it can be turned off one way or another in the OS...


> What's the utility of being able to disable virtualization ?

I consider the option to disable virtualization (or rather: to enable this function manually if you need it) quite useful, because if this feature is disabled, malicious software cannot install a virtualization rootkit, i.e. a rootkit that makes itself a hypervisor and moves the current OS instance into a virtual machine running under that hypervisor.

See for example https://en.wikipedia.org/wiki/Blue_Pill_(software)


Well, I consider this an idiotic but pragmatic reason: I've encountered laptops where the Windows Subsystem for Linux will cause Windows to bluescreen on boot (do they still call it that?) when VT-x is enabled. There's been an open bug report tracking this on WSL for a couple of years now.


Market segmentation to make more money (it needs to be configurable so it can be locked "off"), and configurable with a default of "off" because older platforms and software were really bad at dealing with virtualisation. Only years after the likes of the Q35 and Q45 chipsets did it get better. It was so bad that up to a point you were better off disabling it and using paravirtualisation instead. The same happened during the VT-d introduction, btw.


> older platforms and software were really bad at dealing with virtualisation

As far as I know, it just enables a few new CPU instructions, but older software that didn't know about them wouldn't use them anyway?


It's not about using vs. not using, but about software compiled at a time when the VT-x and VT-d quirks were not known, causing all sorts of crashes and intermittent freezes.

So for 'broken' software (and there was a lot of it, even back in the day of the turbo button) it can often be better to just not allow the software to see/use all the hardware.

Example of the weird stuff you have to first know about, then detect, and then mitigate, on a per-device basis: https://marc.info/?l=linux-iommu&m=139305749014927&w=2, and that's something that, when you don't do it, the software doesn't deal with very well. Since not all software can be influenced by its operator (be it through lack of skill, resources, or simply not being legally allowed), it can be much easier to just have a switch in the firmware that hides it from the software.

Switching code paths based on an MSR is one of the safest (and broadest) things you can do early on at startup, so in a way such a switch is merely "supplying configuration", just in a pre-boot fashion.


That example seems to be about IOMMU, which is also something that software needs to know about in order to use. I guess you were actually referring to features which the software knows about, but have errata?

As another datapoint, I have a machine with VT-x enabled and Windows 98 runs fine. It just doesn't know about the extra features the CPU has (like it also doesn't know about it having more than one core).


Yeah, there is a divide between what software will or will not crash. Essentially it's a combination of software using features and the features not being stable.

The best combination is either software that knows the issues for each implementation and can work with it, or software that doesn't use the features (either not using them natively or by making it unavailable to the software).

It's amazing how string-and-chewing-gum some hardware is with software working around so many issues to not crash. Sometimes there is good documentation, errata or usable flags (like MSR bits), sometimes it's just 'lower the pressure until it stops crashing' and you get a fat list of quirks.


> Even in case of "corpo disallows it" (for whatever idiotic reason) it can be turned off one way or another in OS...

Corpo can disallow that too via central policies


IIRC hardware CPU virtualization enabled in BIOS could have some negative impact/bugs on some SW, and also open the SW to some security vulnerabilities so it's a good idea to give the user the option to disable it and use SW virtualization instead.


It's true that Blue Pill and friends were an issue 15 years ago. I would have expected VT-x to be on by default now though given that Windows 10 and 11 both use virtualisation for various security features (Credential Guard, HVCI and so on). There's no advantage to disabling it...

(Edit: of course, that wasn't the case when this particular article and BIOS was written.)


Another reason is market segmentation. This model has xyz, but for 50 more you can have one that does this cool thing, even though the hardware is there. Pretty much all of the large OEMs do this. In the microeconomics world it is called 'perfect pricing': basically segmenting your product enough and putting the prices at the right spots so everyone wants to buy along the curve. In practice it is harder to do than that, so you get weird bits of hardware that can do things but are fused out. Car manufacturers are trying to do this with 'rent your heated seats' kinds of things. On the back end it simplifies what you buy into your pipeline, and you control it with software.


In some cases virtualization can affect performance. Mind you last time I checked this to be the case was around 2017-2018. I don't know about the situation right now (though I wouldn't be surprised if it would still apply in some workloads).


Because Average Joe doesn't use virtualization at all, while Average Malicious Hacker would use it in a heartbeat if he can.


This should have (2012) in the title.

Also, there's no breaking involved as there's no signature anywhere, it's a reasonably simple mod. For example, re-enabling CPU undervolting seems to be impossible in modern ThinkPads because the UEFI firmware is signed.


Do you have more details?

Why can't you enroll your own key in the new BIOS payload, like you can for Secure Boot?

Everything is sitting in a flash chip. Some parts may be RO, but the chip can be replaced with another one if you can't tweak it directly with flashrom.


oh well, I thought the same, but got curious because the framework laptop would be way nicer with an open UEFI...

Vendors burn a key into the system which prevents loading an unsigned BIOS. This is a one-time-programmable fuse (OTP fuse) and part of Intel BootGuard protections.

You get a laptop with fuses burned to prevent loading other firmware onto your device. Booting won't work if you fit a replacement SPI chip with anything but vendor-signed firmware.


I'm guessing Intel Boot Guard is enabled and provisioned with Dell keys, but it's just a guess.


What a strange coincidence: I was just looking at an old Aspire ZG5 that was sitting in a box of old laptops I've been hoarding, and thought to myself that it was a cool form factor, but too bad it's too ridiculously underpowered to be useful for anything.

I don't think I'll be doing this, nor, as I briefly considered, trying to wedge a raspberry pi in the case. Cute machine though.


I used an extremely weak Acer netbook — the kind that shipped with Windows XP and still slowed down — as an Ubuntu home server and it worked surprisingly well. Apt didn’t mind the small, slow hard drive and everything seemed to actually run relatively fast. (I guess it just really hated drawing a GUI?) It was mostly a printer server, serving CUPS for a USB-only printer, but it did its job well for years and years. Plus the battery had an upside of serving as a UPS, heh.

But I did learn the hard way that they’re really not meant to run like that: the fan became increasingly noisy until it sounded like a car ignition, and the heat (when the fan was working fine!) managed to slowly wear out the membrane keyboard, killing more and more keys, making in-person debugging kinda difficult.

But it was really cool. And I still have its more performant cousin Asus netbook somewhere!


Netbooks were pretty cool as far as form factor went, but indeed, they were mostly utterly useless, often running clocked down single-core Atom chips...


I used to see a lot of them in conferences where college students or industry folks had them for note taking and had a bigger, bulkier, more powerful computer elsewhere. As a primary computer they were lacking, but as a second one they were cheap and easily mobile.


The tiny keyboard combined with my oversized hands made it really hard for me to justify as a note taking tool haha. But yeah, I definitely can see them being used for this purpose. Especially since the laptops of that era weren't particularly lightweight either.


This is interesting. I hope someone will do the same to re-add PCIe 4.0 support to AMD AM4 B450 chipset mainboards (it was available temporarily until AMD took it away again in later AGESA versions). Not a trivial hack, I reckon.


It's weird; considering the PCIe lanes are wired directly to the CPU anyway, why would the chipset affect the level of support?


I think the official reasoning was that the motherboards weren't made with gen 4 in mind, so signal integrity problems could arise (as they do with many gen 3 riser cables).


Exactly. PCIe 4.0 isn't some software feature that you can turn on/off without consequences; it requires quality PCB traces to maintain signal integrity at the faster PCIe 4.0 speeds, otherwise it would introduce instability.


Don't you think board vendors tested the PCIe 4.0 compatibility before releasing the BIOS with PCIe 4.0 support?

I think the more likely explanation is that AMD wants to sell 500 series chipsets and is using its leverage on the board vendors.


Reminds me of the time (in 1990) that I hacked my 386 BIOS so I could add an IDE drive. That motherboard did not allow manually specifying C/H/S (https://en.wikipedia.org/wiki/Cylinder-head-sector) and the drive was much larger than any of the presets. Initially I tried just patching the table of preset values, but my first attempt failed because the checksum had changed and the BIOS failed POST. I ended up modifying two other unused table entries to force a checksum collision.


I forgot to mention the most "exciting" part of this hack. My (UV) EPROM programmer was an ISA-based peripheral and the 386 was my only PC (with an ISA bus), so I booted it up and then removed the EPROM from the socket on the motherboard, put it into the programmer to get an image (because I did not have confidence that the copy in RAM hadn't been altered), then reversed the image and burned a new EPROM. (If I hadn't used a new EPROM, I would have rendered my computer useless with no way to revert.) Fortunately I had a UV eraser so I could give it another go when the first iteration didn't work.


Nice! Was it difficult to find values that would force a collision? What kind of checksumming did it use?


Back in the day of 25MHz 386SX-class CPUs (16-bit bus, no FPU), the original "sum complement" checksum was still more popular than CRC, BCH, or hashing functions.

https://en.wikipedia.org/wiki/Checksum#Algorithms
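For the curious, the compensation trick is just arithmetic. A minimal sketch (assuming the classic 8-bit sum-to-zero ROM checksum rather than whatever that particular board actually used): apply the patch, then subtract the resulting delta from a spare byte such as an unused table entry:

    def checksum8(image):
        # Classic BIOS/option-ROM rule: all bytes must sum to 0 mod 256.
        return sum(image) % 256

    def patch_with_compensation(image, offset, new_bytes, spare_offset):
        # Apply the patch, then adjust one spare byte so the total is unchanged.
        old = checksum8(image)
        image[offset:offset + len(new_bytes)] = new_bytes
        delta = (checksum8(image) - old) % 256
        image[spare_offset] = (image[spare_offset] - delta) % 256
        assert checksum8(image) == old

    rom = bytearray(16)                        # toy "ROM", sums to 0
    patch_with_compensation(rom, 4, b"\x3f\x10", spare_offset=8)
    print(checksum8(rom))                      # still 0

The POST check only cares that the total is unchanged, so any spare byte outside the patched region will do.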


It has to be the classical CRC32, and it's easy because you know it's just 32-bit :D


Yeah, just a small side point: some CPUs support VT-d but for some reason the corresponding chipset does not (yet their BIOS offers it as a configurable item).

Looking at you, Dell BIOS with the Intel i7-2700 and Intel Q65 chipset.


Huh, I was under the impression that the Q chipsets were the ones that officially supported this.

I have a few Ivy Bridge era chipset boards (B75, Z77), some of which actually don't officially support this feature, but the motherboard manufacturers enabled it in the BIOS anyway and it works. The Sandy Bridge era was much messier for this feature.

It is very irritating that Intel tried to gate this behind chipsets, since I believe it is 100% handled by the processor.


Needs (2012)



