I’m halfway done turning a 2010 MacBook Air with a broken screen into a headless Batocera game machine for my living room (halfway because I had to order more tools; I’m missing a Torx driver). It’s actually going to be my ideal computer: I love the keyboard and trackpad of the MacBooks, and you just plug in an external display and it works.
One helpful thing that isn’t easy to google: if you need to get to Mac recovery console but your MacBook’s screen is cracked, take it apart and disconnect the LVDS (lcd screen) cable, and it basically turns your MacBook into a Mac mini. Otherwise it will boot to its own screen first until the OS is up and ready.
I use a mid-2010 MacBook Pro (with the screen developing large splotches of black) as a home server after installing Linux Mint, and it's been working flawlessly. Been up now for 192 days without a reboot. It runs a webserver for two low-volume websites, Pi-hole, Nextcloud, PhotoPrism, and Paperless-ngx.
Disconnecting the LVDS cable is a helpful tip, thank you!
How do you separate the network traffic for your website from your home network?
I would like to run a website at home, but I'm afraid that due to some programming mistake a hacker could take control of my server and access whatever devices I have at home.
I have ports 80 and 443 open on my WAN and forwarded to the server on the LAN. My websites are static (apart from an Isso commenting server), so I guess the risk of someone taking control of the server is low... But I'm not sure if that's the right way to do it.
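One sanity check that helps: from a machine outside your LAN (a phone hotspot works), confirm that only the ports you intend are actually reachable. A minimal sketch, with a hypothetical hostname and an arbitrary port list:

```python
import socket

host = "example.com"  # hypothetical: your public hostname or WAN IP

for port in (22, 80, 443, 8080):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    # connect_ex returns 0 if the port accepted the TCP connection
    status = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
    s.close()
    print(f"{port}: {status}")
```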
Yep, just finished doing that actually. A little hacky to get the wifi antenna to sit cleanly on the bottom part of the case, but otherwise worked out perfectly!
I don’t really know, I haven’t measured. Likely less than a normal MacBook Air since the display is gone. Regardless, I live in San Diego and our electricity is already so overpriced that it just doesn’t register for me anymore anyway.
This is a very useful way to retire old laptops. I have a Dell business laptop that is rocking an Ivy Bridge quad core.
It’s over 10 years old now and it’s still running a few low-power VMs: Pi-hole and Home Assistant.
But more recently I’ve been moving my workloads to 2012 quad-core i7 Mac Minis. Marketplace has them for <$100 if you look carefully. Bought three of those for just $50 + $30 SSD upgrade.
The cost does not include the power consumption. For an Intel-based Mac Mini running a network filter, I suppose it will be at least its idle draw of 20 W, or about 175 kWh/year, i.e., roughly 20 USD/year (using 10 cents per kWh as an average US price). Using any kind of VM will bring that to 30 USD/year. This is a non-trivial amount. In Europe that may reach over 60 EUR/year, depending on how a particular country is affected by the war.
It is unclear if using an old Intel-based MacBook Air without the screen will be significantly better. While its idle power is around 3 W, running a network filter in a VM will wake the CPU often enough to bring the totals closer, I suppose, to the Mac Mini's.
A Raspberry Pi 4, on the other hand, consumes only about 6 W at full load. Using it as a mostly idle network server brings that down below 1 W, or about 1 USD/year.
A MacBook Air with an M1 may be closer to Raspberry Pi numbers without using VMs, but even with a broken screen those are, I suppose, not that cheap. And running network software in a VM there may again bring the total to 10 USD/year or more.
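To make the arithmetic explicit, here's the cost model behind those numbers; the wattages and the 10 cents/kWh price are illustrative, so plug in your own idle draw and local rate:

```python
def annual_cost_usd(idle_watts: float, usd_per_kwh: float = 0.10) -> float:
    # watts x 8760 hours per year / 1000 = kWh per year
    kwh_per_year = idle_watts * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

# Illustrative idle draws, not measurements:
for device, watts in [("Mac Mini", 20), ("MacBook Air", 3), ("idle RPi", 1)]:
    print(f"{device}: ~{annual_cost_usd(watts):.0f} USD/year")
```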
The other awesome way to do this is old thin clients. You can get an HP t620 or t630 (or the Wyse equivalent) for <$75. They usually have AMD SoCs and can take M.2 SSDs.
I used to be super interested in thin clients, after some small DEC units got popular-ish for being turned into Linux wifi APs. And I'd follow every embedded core release with anticipation.
But these days I mostly have a bitter outlook on this segment. They are exceedingly slow at adopting new cores, even when, say, AMD has a drop-in replacement for them. The commonly seen AMD GX-420GI is a 2016 release on the final (4th-generation) 28nm Bulldozer core, which isn't terrible, but it isn't power efficient and is pretty underwhelming. Thin clients are no longer mass-market items, now kind of boutique gear, and costs are correspondingly high.
The one-liter business mini PC market has, in my view, completely overtaken this plodding boutique field. It refreshes designs regularly, there is huge volume and good resale, a little more expandability, and better features. The more modern cores mean way better power efficiency (albeit there is usually an ~8 W idle, which isn't fantastic).
100% agreed. At the “high end” it’s a licensing play to avoid the Microsoft license. At the low end, it’s a weird volume business.
I buy lots of 10,000 and get 50-60% discounts for the cheaper Linux devices. At the low quantity price they don’t make any sense as you can source quantity 1 shitty PCs within a few dollars of a thin client at low volume. Also Chromebooks can help you deliver a much cheaper solution by reducing server side concurrency.
I always suspected that Microsoft did something in the backend to poison the product, as PC management is such a PITA and cash cow. For task users, my company is replacing PCs solely because of Windows support. (Aka $30-50/unit to MS.)
I have a little Dell 3040, with only 8GB of storage that's a PITA to upgrade, and I do this. I use a Samsung external flash drive to store surveillance video in a friend's barn.
Model dependent, but a lot of “thin” clients are just x64 PCs with Atom or AMD GX CPUs, with M.2 disks, BIOS menus and all. Zero clients are usually more exotic in unique ways.
NUCs I can see, but are thin clients good value for anything but low-power always-on workloads? I would naively expect them to be cheap but have weak CPUs and insufficient memory for running anything that uses significant compute (either fewer large things, which is painfully common, or many smaller services).
Ah, perhaps I'm out of date then; when I say thin client I'm still thinking of the old Wyse box in my junk box with a Celeron, but that thing is ancient so maybe technology has advanced enough to blur the lines?
This does not match my experience unless you buy them day-and-date of release from a manufacturer who's slow to upstream system definitions. In my house I've got Rock64s, Orange Pi 3 LTSes, and just got an Orange Pi 5 (which stretches the definition of "SBC" in a lot of ways), along with a couple RPi3's and RPi4's. All have excellent Armbian support and, once you have Armbian installed, it's a bog-standard system.
They're also smaller, use less power, and tend to be less noisy. You trade off PCIe for most of them, granted, but more and more stuff is just "and I need a USB host over here" and for that they're excellent.
I wouldn’t go all the way down to “thin-client” like the Wyse stuff. NUCs and SFF/micro PCs are the next step up and usually have i3/i5/i7 CPUs… but depending on your needs, a thin-client isn’t out of the question.
Typically an Intel-based NUC does not have particularly good low-power management for mostly idle workloads. So running rarely accessed network software will be around 15 W, i.e., roughly 15 USD (or 20-30 EUR) per year. An RPi will be about 10 times lower.
My main advice for anyone doing something like this is to ensure that the "hot" side of the heat pipe is below the "cold" side where the cooling fan is.
Putting it upside down means the heat pipe has to work against gravity and that's bad.
Even along the same plane isn't great; then it has to wick back the condensed working fluid, which is less efficient than simple gravity transfer back down.
Huh, I always thought the heat pipes are just plain copper (or similar heat-conducting material), never crossed my mind that there could be a cooling fluid inside.
They generally have a wicking material in there to handle the pipe being in a poor orientation. They would be MUCH less effective trying to just conduct heat to the cooling fins.
Just checked: my old Dell i3 XPS laptop, running as a home server with no hardware changes and just its lid closed, has a current uptime of 149 days without a reboot. This laptop would've sold for next to nothing, and it has already saved me roughly $100 in cloud costs.
I've been doing this with an X220. You can also get light IPMI-like functionality from the Intel vPro ME: I can VNC into the laptop and control it out of band, even going into the BIOS. This is a must for something you're putting in an annoying-to-reach place.
It's great to see others doing this and writing about it. I too browse /r/homelab and look at some of the power-hungry monsters people are running. Given energy price rises in my country (UK), it makes them too expensive to run.
I've managed to get my setup [0] down to ~8 W (idle plus running my blog). Taking an old laptop with a broken screen and putting it to use instead of making it e-waste does feel good.
This is such a good idea. You can get "old" Thinkpads with NVidia GPUs and 16+ GB RAM for a very decent price, and use them for home server purposes like encoding and object detection. Much smaller and lower power and heat than a desktop, yet much more powerful than a Pi, Jetson, &c. There are even workstation class models with Xeons and ECC RAM.
> Much smaller and lower power and heat than a desktop, yet much more powerful than a Pi, Jetson
This brings me to my biggest complaint about Pis of all kinds: while running a Pi 24/7 CAN save some power compared to a 24/7 T430, the T430 can do Wake-on-LAN, which means you can just turn it off when it's not in use and turn it back on remotely on demand. Many (if not all) Pis cannot do that; you have to run them 24/7.
Of course there are Wake-on-LAN-style remote power switches, but they are hacky and require extra care to set up properly. And if you do it wrong, you risk losing data or damaging the storage on the Pi.
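For anyone who hasn't played with it: Wake-on-LAN is just a UDP broadcast of a "magic packet" (six 0xff bytes followed by the NIC's MAC address repeated 16 times), so waking the T430 from any always-on box takes a few lines. A minimal sketch, with a made-up MAC:

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255") -> None:
    # Magic packet: 6 bytes of 0xff, then the target MAC repeated 16 times.
    payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, 9))  # port 9 ("discard") is customary

wake_on_lan("aa:bb:cc:dd:ee:ff")  # hypothetical MAC of the sleeping machine
```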
No idea what you're talking about. From what I've read "Optimus" (switching between onboard graphics and GPU on demand) can be a pain, but otherwise NVidia's driver, while proprietary, is very good, and the open source driver is pretty good.
If you're using it as a home server, setting it to Nvidia-only mode seems the easiest route forward instead of futzing with Optimus graphics stuff.
It's not Nvidia + Linux that sucks, it's Nvidia + Linux on a desktop. CUDA thrives on Linux. Most AI SaaS runs on Nvidia + Linux (if it isn't on a TPU or alike)
> It's not Nvidia + Linux that sucks, it's Nvidia + Linux on a desktop.
It's not just Nvidia + Linux on a desktop, it might be an *old mobile Nvidia GPU + Linux on an old laptop*, which brings a lot of restrictions and bugs nowadays: [0]
It’s gotten a lot easier to run NVidia GPUs on Linux since Linus first gave Nvidia the finger all those years ago. It’s usually a matter of running a single package manager command.
I’m old enough to remember when it was the NVidia drivers that were great (because it shared most of its code with the windows driver so supported “nice things”), and the ATI driver (fglrx) was a total dumpster fire that never worked right.
Correct me if I’m wrong, but I don’t think the Nvidia Linux drivers were ever bad, they just weren’t open-source and Linus Torvalds didn’t like that the open-source nouveau drivers sucked.
Regarding the thermal paste pump-out, it's due to using thin paste that's not designed for direct die-to-heatsink contact (Arctic MX-5, etc). The constant thermal expansion/contraction 'pumps' the paste out the sides, because the die expands differently than the copper heatsink.
You can get a thick paste made for that purpose, like SYY 157 which is what I use, and it will stay in place.
I just re-pasted my GPU and had the same issue occur: with standard CPU paste the temperature was very high within a week; after using a thicker paste it's been fine, with much lower temps.
> Curious what the power draw is with an older Thinkpad like this while idling and under load.
It is specifically detailed in the post:
> As a result of all of this, the T430 is very quiet while using about 10-12W of power while idling. Under the maximum CPU load generated by `stress`, the total system power usage is around 34W.
> Curious what the power draw is with an older Thinkpad like this while idling and under load.
I have an X230i running Debian, and I got it to idle - screen on, backlight low, no browser open, Emacs and Syncthing running - at around 8 W. It gets up to ~10 W with light usage - i.e., a few browser tabs open - ~13 W at 40% load, and ~17 W at 75% (I tested the last two with x265 video playback, which doesn't get hardware acceleration on Ivy Bridge).
The i3-3120m in it should have a TDP of 35 W, but I don't think I ever saw it draw that much.
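If anyone wants to reproduce these numbers: while on battery, the discharge rate can be read straight out of sysfs. A minimal sketch, assuming the pack shows up as BAT0 and reports power_now in microwatts (some models expose current_now/voltage_now instead):

```python
from pathlib import Path

bat = Path("/sys/class/power_supply/BAT0")  # adjust to your battery's name

# power_now reports the instantaneous draw in microwatts on many batteries
microwatts = int((bat / "power_now").read_text())
print(f"Current draw: {microwatts / 1_000_000:.1f} W")
```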
In my research there's a lot of conflicting info about ECC RAM. It seems it's not able to guarantee data integrity over long periods of time, but for enterprise situations the relatively small premium is worth having if it prevents even one bit flip.
As for power draw, a Sandy Bridge laptop will probably draw around 25-30W idle and 50-60W under full load (excluding any GPU)
For home use you're mainly buying ECC so it rings alarm bells when a stick spontaneously fails, and to avoid the data-loss nightmares and corruption spreading to backups that follow.
I'm pretty sure this is all 100% correct, so why are the haters hating? Anyone who has googled around on ECC knows there are many forum arguments about what ECC RAM does, how valuable it is, and when. For example, colloquially you NEED ECC RAM for ZFS disks; however, even ECC RAM can only compensate for so much.
Maybe your laptops are less old or less degraded than mine, but their batteries tend to be the thing I would trust the least about them, especially if they're going to be plugged in and powered on 24/7. I'm fine with using old laptops as servers, but if anything I'd try to see if they can run without batteries, because at some point I get paranoid about fire hazards.
> One that will conveniently double as a pillow as the battery slowly balloons.
The title's "ThinkPad as a Server"
For the vast majority of ThinkPad production, the batteries were hot-swappable and constructed from the very same 18650 LiIon cells used in the original Tesla Roadster [0].
We're clearly not talking about MacBook Airs (or their copycats) with swollen non-removable LiPo pouches... that would be a total failure to consider the application context.
That doesn't really work, because keeping the batteries at 100% charge wears them out faster, and an old laptop won't have modern features to limit the maximum charge.
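For what it's worth, many of the ThinkPads in this thread can cap the charge; depending on vintage it's exposed by tp_smapi or, on recent kernels, thinkpad_acpi. A minimal sketch, assuming BAT0 and a driver that provides charge_control_end_threshold:

```python
from pathlib import Path

# Requires root and a driver (tp_smapi or thinkpad_acpi, model-dependent)
# that exposes charge thresholds for this battery.
threshold = Path("/sys/class/power_supply/BAT0/charge_control_end_threshold")

# Stop charging at 80% to reduce wear on an always-plugged-in pack.
threshold.write_text("80")
```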
Guess you would need a UPS for your internet modem, however. Recently had a power outage, and while my laptop kept chugging, my VoIP connection certainly did not.
I've never had a UPS not need battery service at some point, and they tend to use lead-acid which is especially fragile when it comes to deep discharges vs. LiIon laptop batteries.
I think in any case the battery is a wear item that will need servicing. Just make sure you pick a common laptop with plentiful third-party battery replacement options. Preferably one that can have the battery swapped while online (not a LiPo pouch buried inside a MacBook).
But laptop batteries typically last several hours, whereas most UPSes are meant to last a matter of minutes, so even a 5-year-old laptop battery will likely meet or exceed UPS performance.
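Back-of-the-envelope, with illustrative numbers rather than measurements:

```python
# Hypothetical figures: a 57 Wh pack aged to 60% health, feeding a server
# that idles around 10 W (in line with the draws reported upthread).
design_capacity_wh = 57
health = 0.60
idle_draw_w = 10

runtime_h = design_capacity_wh * health / idle_draw_w
print(f"~{runtime_h:.1f} h of runtime")  # ~3.4 h, vs. minutes for a small UPS
```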
Currently got an old ThinkPad X230T running my Valheim server. Been up for over a year straight! That's pretty much straight from the person I bought it from -- wiped the noisy old HDD, installed Debian (IIRC) and set up my server. Been running ever since, and easily supports 4+ simultaneous players. Pretty sweet! Yeah, it clicks suspiciously every so often (so I'm sure the HDD won't last), but I automatically back up the game world so the chance of any data loss is slim.
Oh yeah, I have wifi off on mine, and the display shut off via command line, to reduce power usage a bit. I wouldn't go so far as to start disassembling the system haha
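In case it's useful to anyone, the world backup doesn't need to be fancy. A minimal sketch of a timestamped copy job that can run from cron; the paths are hypothetical, so adjust them to your install:

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical locations; point these at your actual world and backup dirs.
world_dir = Path.home() / ".config/unity3d/IronGate/Valheim/worlds_local"
backup_root = Path.home() / "valheim-backups"

# Copy the whole world directory into a fresh timestamped folder.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
shutil.copytree(world_dir, backup_root / stamp)
```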
OP’s notion of stripping as much as possible triggered a related question. If I understand correctly many cloud data centers are essentially huge collections of caseless motherboards with disks hanging off them. Near big bodies of water for cooling. Wouldn’t the exposed circuitry corrode or get dusty?
I used to have identical ThinkPad T41's. When I went to a coffee shop, I'd leave one in my backpack running a web server etc, and then develop on the other one.
Running on Linode or something could have been faster, but most of the time the coffee shop internet connection was dog slow.
I'm not surprised there were unexpected issues - I've found running laptops (or other SFF devices) 24x7 tends to surface weird heat issues.
I had fun with 8th gen Intel NUCs just turning themselves off after a while, turned out to be issues cooling the VRMs. Also had thunderbolt devices just drop after extended uptimes.
Similar issues with Asrock Deskminis - although I could mitigate it here by changing the CPU cooler to force more air over the VRM heatsink.