Hacker News new | past | comments | ask | show | jobs | submit login
Librebooting the ThinkPad T480 (ezntek.com)
193 points by axiologist 30 days ago | hide | past | favorite | 80 comments



> That is about as simple as librebooting gets.

I had a nice chuckle at this. Buying chip clips? A separate Raspberry Pi to wire everything together and perform the flashing process?

Is there really no chance of some kind of click-and-reboot process, same as how official proprietary firmware gets updated?


Hackability is always at odds with physical security.

The general rule of thumb for the security-paranoid is that once you lose sight of your device, you should assume it's been owned (any imaginable variant/combination of evil maid, DMA exploit thru a physical port, etc).

In recent years there has been a steady push to raise the bar (TPM, SecureBoot, etc). Whether that's effective for protecting the median user's privacy and security is a separate matter, but the side effect is of course that this is increasingly becoming a hurdle for power users, enthusiasts, OS developers, etc.

ARM Macs are at a very weird spot on this spectrum. On one hand, we have a new, bespoke, and undocumented system architecture, and keeping a macOS partition is a requirement to continue receiving firmware updates; on the other, Apple has left a clearly labeled escape hatch for OS developers, and kept it from accidentally breaking. You can't have a fully libre boot chain, but it's not like Lenovo (or most other PC vendors) would endorse that either.


Personal opinion: flash chips of all kinds should be write-protected so that even a clip flash does not work... but they should have an authentication mechanism with, say, a 64 bytes passphrase that the end-user gets on a keycard. That way you'd need a literal "evil maid" in the household of the owner to do any modifications that might compromise the device.


This is simultaneously complex, hostile to the consumer, and a non-solution to the given problem. Write protection protects the chip, not the computer. And, do you seriously expect anyone to remember where they put those cards? What happens when those computers get re-sold used, and the new owner doesn't get the card with the computer?


That doesn't solve much. I had to desolder the SOIC8 chip in an X220 because my programmer couldn't handle the extra power draw of the rest of the attached circuitry. I've also upgraded OpenWrt routers in the past by soldering on bigger RAM and flash chips.


> I also upgraded OpenWrt Routers by soldering bigger RAM and bigger flash chips in the past

Sadly this is harder than it used to be, because with devicetree the flash size is hardcoded, where before it was auto-detected (so previously you could swap the flash and continue to use stock firmware; now you need to compile custom firmware).


That just means the adversary will need to spend two dollars[0] and an extra 15 minutes replacing the flash IC, rather than reprogramming the existing device.

[0] https://www.digikey.com/en/products/detail/winbond-electroni...


What you're describing isn't that much different than using cryptographic signatures. Give up control of the chip and let em write whatever they want to it, but only use the data if it was signed by some private key. This is better for libre too, because you can manage your own keys (presuming you can get root access to whatever low level controller reads and loads the flash).
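A minimal sketch of that verify-before-use idea (hypothetical names throughout; stdlib HMAC stands in for the asymmetric signature a real boot ROM would check, where only a public key, not a secret, is baked into silicon):

```python
import hashlib
import hmac

# Stand-in shared key: a real design embeds only a *public* key in the boot
# ROM, and the matching private key stays with the owner (or vendor).
ROOT_KEY = b"owner-controlled-root-key"

def sign_image(image: bytes, key: bytes = ROOT_KEY) -> bytes:
    """Produce a MAC over the firmware image (stand-in for a real signature)."""
    return hmac.new(key, image, hashlib.sha256).digest()

def load_firmware(image: bytes, signature: bytes, key: bytes = ROOT_KEY) -> bytes:
    # The flash itself stays freely writable; the loader simply refuses to
    # *execute* anything whose signature doesn't verify.
    if not hmac.compare_digest(sign_image(image, key), signature):
        raise ValueError("untrusted firmware image, refusing to boot")
    return image
```

The point is where the trust anchor lives: write access to the chip becomes harmless because the loader, not the flash, enforces integrity, and with owner-managed keys you can still sign your own builds.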


From consumer electronics PoV, this is a complex and brittle measure; for simple on-board microcontrollers, write-protect could be established with a basic e-fuse. But even with e-fuses, we're talking fractions of a cent per unit, which adds up at scale - there will always be a cut-off point where a company will pursue margins instead.

Meanwhile, exploding pagers.


These days most such chips have write protection built in. They don't need e-fuses because they can just use flash bits and extra logic at negligible cost.


It doesn't have to be a 64-byte passphrase. You can implement this as TOTP. Look at Heads.


Then we have to implement TOTP (which requires a clock) in firmware. Complexity just exploded.


I think that is nothing compared to what modern UEFI does. There are entire hardware drivers in UEFI. To protect against software evil maid attacks, you need to authenticate the device before you use it. So it has to be some type of challenge response protocol. It can be achieved with fido type keys or it can be HOTP/TOTP.


Fair point, we're already at the level of near full OS in the UEFI. Odds are good there's already full clock and crypto libs, so maybe it's not that much of an addition.
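The TOTP check itself is tiny; a stdlib sketch of RFC 6238 (HMAC-SHA1 over the time-step counter, with dynamic truncation), just to show the footprint being argued about:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    return hotp(key, int(time.time()) // step)

# RFC 4226 test vector: counter 0 with the ASCII key "12345678901234567890"
assert hotp(b"12345678901234567890", 0) == "755224"
```

The firmware-side cost really is the clock and an HMAC primitive; everything else is a few lines of integer arithmetic.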


As long as Flash Write Protection is a thing and the default Lenovo BIOS enables it, yes that is how it goes.


There are some boards where librebooting can be done entirely via software


Subsequent updates can be. The original flashing process has to be this way for everything other than chromebooks.


It's just more straightforward and robust that way, because you won't be exploiting anything, just using a product as the (ROM chip) manufacturer intended. Software's become too complicated these days.


What else would you like to put on the wish list for Santa?


Yeah, I have a Chromebook where the process to switch to coreboot was basically unscrew a write-protect screw (AIUI newer models might not even need that) and then you just run an installer script. That's a (...kinda) different particular firmware, but the process works. It just depends on your hardware.


As an aside, I'm surprised that the author suggests 16GB as the sweet-spot configuration. I'm not sure that's true today (I don't think it is for quite a few workloads beyond heavy webengine apps), and I doubt it'll be true in five years.

This is coming from an M1 MBP user with 32GB who, even with aggressive paging in and out of an uber-fast disk, manages to fill about ~20GB on a regular basis.


And that's a really really sad story.

And then seeing people say:

> I don't understand it either, a 16 GB DDR5 stick costs like $50

50 bucks is a lot of money to a whole lot of people. Yes, actual computation, compilation, etc. take a lot of memory, but so much memory is wasted on JS bloat that it's just sad. If you put in a little effort and optimize your system, 16GB is still more than enough and "just works".


Got a refurbished X200 with 8 GB RAM and a CPU with only 2 cores. Can develop software on it just fine. Can even watch YouTube videos in acceptable quality just fine. Can do all the things ordinary people do, with all the applications I need open at the same time, just fine. It all depends on one's software choices. If macOS cannot cope with less than 16GB of RAM, that is a defect in itself.


It's not just the price, there are tons of new computers out there being sold with <=16 GB of RAM soldered onto the motherboard. It's not like one can just pop it out and put a new stick in.


Yup, $50 is definitely a lot when the whole computer was $150! Whatever it came with from the person I got the machine from on Craigslist, that's what it's generally staying with until I have to replace something - especially as I try to acquire multiples of my machines so I can continue to use them for a very long time. An entire functioning machine is more important to me than a RAM upgrade. (That said, it's really nice to be able to pop in new RAM if I happen to have some on hand, so having RAM slots is pretty important regardless!)


There's also a lot of power wasted. That extra stick of RAM will cause a hit to your battery life.


I'm hoping I haven't become an out of touch tech bro, but I believe if you're able to flash libreboot on your software development machine you're able to afford $50.

_should_ you have to spend more money to support bloated software? No of course not, especially since many users of such software _aren't_ tech bros, but as someone in tech, shelling out a little more money seems like a much more pragmatic solution vs having your computer be slow and/or waiting for the industry to change.


> manages to fill about ~20GB on a regular basis

Unused memory is wasted memory, so makes sense to always have a lot in memory. Doesn't mean that you'd have a worse experience with 16GB.


"Unused memory is wasted memory" is a meme: technically true from a narrow point of view, but it leads to bloat and encourages bad practices. A little care could shave off orders of magnitude of memory use and improve performance, which could ultimately allow for cheaper computers, sustainable use of legacy hardware, and a performance reserve for actual work. In practice, the idea of buying efficiency with more memory leads to software requiring memory that used to be optional, and to software not playing nice with other programs that also need space. And even with everything held ready in memory, software is not generally snappy these days, neither when starting up and loading from fast SSDs nor during trivial UI tasks. Performance and efficiency are also not something programmers seem to consider the way real mechanical, civil, or electrical engineers would when designing systems.

I accept trade-offs concerning development effort and time-to-market, but the phrase "unused memory is wasted memory" does not seem appropriate for a developer who's proud of their work.

Little friday rant, sorry :-)


No, unused memory should always be used as cache if it has no other use at the moment. It's wasted otherwise.


I think a lot of this comes down to semantic confusion. Intuitively one would assume "unused" memory is the inverse of "used" memory, without ever pinning down what counts as "used" or "unused" in the first place. In reality, on macOS/Windows/Linux, "used" memory means a specific type of usage (e.g. processes/system/hardware), cached data is counted separately as cached, and there are multiple ways to refer to the "unused" portion (e.g. free vs. available), plus anywhere from a half dozen to several dozen ultra-specific terms that probably don't matter in context.

Once you clear the semantics hurdle it's surprising how much people are in agreement that "used" should be optimised, "cached" should fill as much else as possible, and often having large amounts of "free" is generally a waste. The only remaining debate tends to center on how much cache really matters with a fast disk and what percentile of workload burst should you worry about how much "free" you have left after.
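On Linux the semantics are visible directly in /proc/meminfo: MemFree is the truly idle portion, while MemAvailable estimates what could be handed out on demand (mostly by dropping reclaimable cache). A small illustrative parser, with made-up sample values:

```python
def parse_meminfo(text: str) -> dict[str, int]:
    """Parse /proc/meminfo-style 'Key:  value kB' lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            info[key.strip()] = int(rest.split()[0])
    return info

# On a real system: info = parse_meminfo(open("/proc/meminfo").read())
# The sample below uses invented numbers purely to show the relationship.
sample = """\
MemTotal:       16384256 kB
MemFree:         1024000 kB
MemAvailable:    9216000 kB
Cached:          8192000 kB"""
info = parse_meminfo(sample)
# "available" far exceeds "free" because reclaimable page cache counts too.
reclaimable = info["MemAvailable"] - info["MemFree"]
```

A low MemFree on a healthy box is the cache doing its job, not a shortage.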


Is that generally how unused memory is used, and will this kind of "cache" be released if another application truly needs it to load actually vital things?


Yes, that’s the main job of the OS memory management.


Using memory doesn't have to mean badly written software, though; there are many legitimate use cases for actually using your memory to make your experience better.


My comment has not suggested that there were no legitimate cases for using more memory.

It's too easy, and happening too often on HN these days, to reply with a low-effort contrarian statement without engaging with the central point of the argument.


I think a more accurate statement is that developer time is more expensive than RAM now.


Developer time is more expensive to the company than the user’s ram is, of course.


Anyone considering a T480 is not at all interested in performance at this point in time. Modern chips are going to be substantially more capable and use less power.


I assume a workload with many Docker containers will need much less memory on Linux than on macOS.


I don't understand it either, a 16 GB DDR5 stick costs like $50

Not having enough RAM slows your work to a halt. I would always go a tier down in CPU or GPU to have enough RAM.

And it's also easy to expand later


> And it's also easy to expand later

As I pointed out in another comment, RAM is soldered on most non-desktop computers these days. It's not easy to expand later. The hardware companies are well aware of that, pushing overpriced RAM upgrades at the time of purchase and it's not like you can just walk into a store and say "I'd like this laptop but with last year's GPU model and 2x the RAM for the same price."


> As I pointed out in another comment, RAM is soldered on most non-desktop computers these days.

It's still very easy to choose computers with modular RAM, even in portable formfactors. The classic SODIMM module format seems to have run its course and is not compatible with modern low-power memory but we now have the CAMM module which at least Dell and Lenovo are shipping and major memory vendors like Crucial offer replacement/upgrade parts for.

Obviously if you prefer your computers fruit-flavored you're SOL, but that's not news to anyone either. Their memory packaging in the M-series machines has its own advantages that may be worth the tradeoff depending on your application, but for a normal user it's more of a limitation than a feature.


Don't know what it's like today, but more ram always means more power usage, so maybe makes sense for people who work on battery frequently...

Otoh, my main rig is also on 16gb today and I never run into issues. But then again I don't run electron apps and don't do webdev or microservice stuff with 30 VMs.


creator here. greetings.

32 is great for editing and such, but I do assume that one would be using Linux. In that case, I can consistently open over 100 tabs in Firefox, do programming, and have Electron apps open on the side on KDE Plasma with no issues and no out-of-memory errors. Things do get squeezy and noticeably slow under those extremely heavy workloads; for heavy tasks, of course, get 32, but if 16 can take you that far, it's fine.

I even won a hackathon with 16GB of RAM on an X230; if I can do that and be productive even at home, it's enough. macOS is just very RAM-heavy; there are always at least 50 weirdly named background processes active.


I have a T480 as my main machine, but after skimming this blog post I'm still not sure why I would want to flash Libreboot on it. What will it improve?


For this particular model, not much, other than having a partially open source bios. That can provide better security and bug fixes compared to the original bios, but that's the sort of thing that'll be mostly transparent to you. You can make this a robust system like chromebooks with verified boot or use a project like heads, but these require quite a bit of effort. For older models, there used to be more practical benefits too such as removing wifi whitelists.


https://libreboot.org/ has reasons why you would want a free BIOS.


It improves your machine by disabling Intel Management Engine, which is a back-door in your computer.


It is not possible for 3rd parties to disable Intel ME. Nobody but Intel themselves can disable ME.

The most you can do is drop it to some kind of reduced functionality mode some time after boot (through the HAP bit, or hackery which overwrites part of the flash memory). This is why dishonest vendors like Purism resort to confusing terminology like "neutralize".

https://x.com/rootkovska/status/939058475933544448 https://x.com/rootkovska/status/939064351008395264


I wrote the deguard utility that made this possible. (The vulnerability being used was found by PT Research in 2017 however.)

While yes you cannot strictly disable the ME, what remains of its firmware in this configuration is a bringup module that is stuck in a loop handling power management events.

The network stack, HECI stack, etc are all gone here. Effectively the only way to exploit it is to put your payload into SPI flash, which we are already doing anyways :)

It is also possible to take over the ME firmware and bring up the CPU using open source code, and have full control over the ME at runtime. This isn't implemented currently, but that's the direction this is aiming in.


> The network stack, HECI stack, etc are all gone here.

I think there is a misunderstanding. Intel ME is a hardware feature. Yes there is some flash memory which contains more code and an operating system, but what is stored in flash memory is only part of Intel ME.

Peter Stuge from Coreboot noted during his 30C3 talk that even if you completely zero out the flash, it is possible for Intel ME to send a network packet out of the ethernet interface. The cutoff point when this started happening is the 965 chipset around 2006.

https://media.ccc.de/v/30C3_-_5529_-_en_-_saal_2_-_201312271... (relevant part starts at 17:19)


It is a hardware feature, but it does basically nothing without its software in flash.

The only code inside the silicon is a 128K boot ROM that literally just sets things up for the real firmware to run.


Just wanted to say thanks for your contribution to making this stuff possible :) fist bump


This model seems to be opened up about as far as possible, though (https://libreboot.org/docs/install/t480.html):

One of the benefits of deguard for Intel MEv11 is that it sets the ME in such a state where you can run unsigned code in there. This is how the Intel Boot Guard was disabled, because it is the ME that enforces such restrictions; more information about deguard is available on a dedicated page.

The deguard utility could also be used to enable the red-unlock hack, which would permit unsigned execution of new CPU microcode, though much more research is needed. Because of these two facts, this makes the T480/T480s the most freedom-feasible of all relatively modern x86 laptops.

With deguard, you have complete control of the flash. This is unprecedented on recent Intel systems in Libreboot, so it’s certainly a very interesting port!


>It is not possible for 3rd parties to disable Intel ME. Nobody but Intel themselves can disable ME.

...Dell? I have multiple of their machines which have been configured via their B2B panel to have ME fully disabled.


HAP disables the ME's runtime interface, it doesn't prevent the ME from booting.


Depends on how you define "booting". While it's true that the microkernel always boots and there is one userspace process running, it's a bit more subtle than that imo.

The bringup module always boots; it configures the clock controller and Boot Guard parameters, and releases the CPU core from reset. In HAP mode, after that it only handles power management events and doesn't really do anything else. No other ring-3 processes are started on the ME in this mode.

Even the real read-write VFS, the firmware updater, the HECI comms handler, AMT, PAVP, the ISH server, etc. are never started in HAP mode. It effectively reduces the runtime attack surface to data in SPI flash only.


> Depends on how you define "booting".

As mentioned in one of the linked tweets, ME was possible to exploit through early-boot attacks before the HAP bit was even checked. So non-negligible things happen while it "boots".


Absolutely is; one of those exact attacks is being used here to bypass Boot Guard. However, all the pre-boot attacks I am aware of rely on writing a malicious payload to the system's SPI flash and involve physical access.

While they are genuine vulnerabilities, I wouldn't consider this a worse problem than being able to inject rootkits into other parts of the firmware, which is also the case here.


In my understanding, the concern is not what outside attackers can do. It is what capabilities exist under Intel's control before they are reduced to some hopefully benign subset.

And the understanding that we have is mostly limited to what is in flash memory, e.g. the ME's BootROM hasn't been dumped yet (as far as I am aware).


I have the ME11's boot ROM in a disassembler as I write this :)


What does the Intel Management Engine do? Does it phone home? Can that port be blocked?


The whole problem is that nobody knows for sure. If you've got a possibly-malevolent possibly-exploitable third party agent with wide access to the system, it's not really your personal computer any more, is it?


In case someone wonders, AMD has its own equivalent - https://en.m.wikipedia.org/wiki/AMD_Platform_Security_Proces...


> What does the Intel Management Engine do?

It runs Minix, as I recall...


Your Thinkpad would have a free and open source BIOS/UEFI. For some, that is an improvement.


>what will it improve?

it will make you (as of now) unable to use Thunderbolt, and therefore a dock. Maybe you see that as an improvement; I kinda like my Thunderbolt.



Love seeing stuff like this on old laptops!

I recently built https://linuxlaptopprices.com/, inspired by diskprices.com.


Hi! Big fan of linuxlaptopprices - it's exactly what I've been looking for. Have you considered adding a "ships to" filter? I live in Alaska and I'm looking to replace my T14s with bad soldered memory, but a lot of the listings don't ship to AK/HI.


Any chance it could have a region filter?


Amazing to see this. One step closer to my true dream of a P-series workstation with coreboot... albeit that's not very likely due to the dGPU.


What are some of the most significant challenges you've encountered when transitioning an existing system to Libreboot on a T480, particularly regarding hardware compatibility and performance optimization? Additionally, how do you ensure the integrity and security of the system during and after the installation process?


The post mentioned IRC. I haven't used it for ages. Any channel I should join to meet technical greybeards?


Aren't people using GNUboot nowadays?


Nice. Somewhat tempting to upgrade from my Ivy Bridge machine, but then I'm reminded that Intel's last decade has been such a dumpster fire that everything from it may be more or less the same. What does libreboot mean these days? Does the T480 do native RAM init, or does it still need the FSP? It may be easier to use coreboot directly; I don't think libreboot does anything more than coreboot these days. This is also exciting because the T480 is the same as the T25, so you may be able to use the T25's keyboard with it. That's the old-style keyboard that they don't make any more.


>T480 is the same as T25, so you may be able to use T25's keyboard with it

The mod is complicated and very, very expensive. But possible, if you can find one in your preferred layout, which is very doubtful at this point - they've all been snapped up by people doing what you describe.

I do use the T25 keyboard on my T480. Is it nice? Oh hell yes. Was it worth the time and expense? Absolutely not, unless you are a serious keyboard nerd and have more money than sense. Which I did, at the time.


Yeah, I looked up what people have done, and the EC alone makes it not worthwhile. And like you said, there's no availability. Ebay only shows the Japanese layout. I think the T480(s) keyboard is fine. I'm quite comfortable with the Tx30 chiclet keyboard, and 480 looks about the same. I was debating buying X1C 13, but this T480(s) may be a fun Christmas break project.


>the EC alone makes it not worthwhile

Now this I don't agree with; I have made no software changes to my T480 whatsoever, and the keyboard works more or less as expected. Some of the Fn-key shortcuts do not match the key labels (behaving instead like a stock T480) and the microphone mute button doesn't work, but otherwise everything's perfect out of the box. Speaker mute, volume keys, navigation keys all fine.


i believe that raminit isn't done via a blob anymore. you might as well read the official docs.

coreboot isn't even in the main tree yet! you have to use libreboot unless you want to hunt down Mate Kukri's branch and go off that. you also have to use deguard to disable Intel Boot Guard. It is more work than Ivy Bridge, which is why I use libreboot; it's all done for you, it's reliable, and updates are handled by someone else (you don't have to update, recompile, and retinker your own payload yourself).



