HPE ProLiant MicroServer Gen10 Plus vs. Gen10 Hardware Overview (servethehome.com)
69 points by mchan 30 days ago | 72 comments



I don't like:

* Pentium G5420 has 54W TDP. That's a lot. It'll be noisy under load. I'd prefer something like Celeron G4932E with 27W. I don't need performance, I need quiet operation and ECC support.

* iLO enablement kit is $100 and needs physical delivery. No more $5 eBay key. That's a shame for home consumers.

Also I was not able to find a price. Hoping for the best. Buying Gen8 for $300 with cheap iLO key was a great purchase.

CPU seems to be not soldered, so maybe it'll be possible to downgrade.


That CPU is a great choice.

The specs on paper won't tell you the whole story. I've got the Pentium G5400 and even though it's rated at 58W TDP, its actual power consumption and heat output are on the low end.

I don't have my notebook with me, but I ran a small at-home experiment with a Kill A Watt sort of device while monitoring temps. Found consumption to be around 21W at idle, 26W for light loads, and between 35W and 42W under heavy loads like compiling, transcoding, etc.

Temps are 28C at idle, up to 42-ish under load.

These CPUs lack Turbo Boost which is responsible for high temps in many instances.
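For CPUs that do have Turbo Boost, it can be tamed from Linux without touching the BIOS. A sketch (the sysfs knob depends on which frequency driver your kernel is using):

```shell
# With the intel_pstate driver, writing 1 to no_turbo disables Turbo Boost.
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

# With the older acpi-cpufreq driver the knob is inverted (0 disables boost):
# echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost
```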


Thanks, that was a very informative comment.


I went down the rabbit hole of low-TDP Xeons when I was last building a home FreeNAS box. It makes sense in theory to get a CPU that doesn't get as hot, so it will need less cooling, but I think TDP is more about the total / max power used, not the average. I would suggest a CPU with more cores that it can keep asleep for most of its operation, but available when you need them.


The Xeon E3-1270s in my R210 IIs are 80W TDP, the Xeon E5-2450Ls I have loaded in my R320 and R420 are 70W - guess which makes more noise? The 2450Ls, of course!

TDP is a guideline for cooling; it says nothing about noise, or even power draw. The former is a function of the cooling system you have paired with it, the latter can be completely arbitrary as long as the chip can average itself out to the rated heat dissipation (which is what the TDP measures).

If you want a quiet home server, get a tower, my ThinkServer TD340 was hella quiet even with two Sandy Bridge-EN chips installed. Beyond that it doesn't really matter, hell, swap out the CPU cooler and case fans with Noctua ones to go the extra mile.


> Also I was not able to find a price. Hoping for the best. Buying Gen8 for $300 with cheap iLO key was a great purchase

UK Pricing direct from HPE - expect sticker shock

https://www.hpe.com/uk/en/product-catalog/servers/proliant-s...

HPE ProLiant MicroServer Gen10 Plus E-2224 S100i 4LFF-NHP 180W External PS Server P16006-421 £ 839 * inc VAT

HPE ProLiant MicroServer Gen10 Plus E-2224 S100i 4LFF-NHP 1TB 180W External PS Server P18584-421 £ 922 * inc VAT

HPE ProLiant MicroServer Gen10 Plus G5420 8GB-U S100i 4LFF-NHP 180W External PS Server P16005-421 £ 629 * inc VAT

HPE iLO Advanced 1-server License with 1yr Support on iLO Licensed Features 512485-B21 £ 394 * inc VAT

HPE iLO Advanced Flexible Quantity License with 1yr Support on iLO Licensed Features 512486-B21 £ 394 * inc VAT

HPE iLO Advanced Flexible Quantity License with 3yr Support on iLO Licensed Features BD506A £ 444 * inc VAT

HPE 16GB (1x16GB) Dual Rank x8 DDR4-2666 CAS-19-19-19 Unbuffered Standard Memory Kit 879507-B21 £ 236 * inc VAT

HPE 8GB (1x8GB) Single Rank x8 DDR4-2666 CAS-19-19-19 Unbuffered Standard Memory Kit 879505-B21 £ 174 * inc VAT

HPE Trusted Platform Module 2.0 Gen10 Option 864279-B21 £ 66 * inc VAT

HPE MicroServer Gen10 SFF NHP SATA Converter Kit 870213-B21 £ 11 * inc VAT

HPE 1TB SATA 6G Entry 7.2K LFF (3.5in) RW 1yr Wty HDD 843266-B21 £ 96 * inc VAT

HPE 4TB SATA 6G Midline 7.2K LFF (3.5in) RW 1yr Wty HDD 801888-B21 £ 419 * inc VAT

HPE Mobile USB DVD-RW Optical Drive 701498-B21 £ 119 * inc VAT


Street pricing in the US from lower-margin HPE resellers for the performance E-2224 diskless model (P16006) has been discounted to $578-$660 depending on reseller/day.

Still a lot more than some of the previous versions.


The UK price includes VAT at 20%, so you are looking at around £524 (629), £699 (839), and £768 (922) respectively, which doesn't look terribly far off the US pricing given the standard UK markup.
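A quick sketch of the ex-VAT arithmetic (divide by 1.2 to strip 20% VAT):

```python
# UK list prices include 20% VAT; dividing by 1.2 gives the ex-VAT figure.
inc_vat = {"G5420": 629, "E-2224": 839, "E-2224 1TB": 922}
for model, price in inc_vat.items():
    print(f"{model}: £{price} inc VAT ≈ £{price / 1.2:.0f} ex VAT")
```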

The normal UK server resellers and disties (including the one I bought my gen8 microserver from) don't seem to have them listed yet.


Well the G4932E runs at exactly half the clock rate, so it's not surprising that it uses half the power. Just disable one of the two cores and now you have a 3.8GHz single-core processor with a TDP of 27W:

echo 0 > /sys/devices/system/cpu/cpu1/online

Alternatively you could reduce the clock speed to 1.9GHz for both cores:

cpufreq-set -c 0 -f 1900MHz (repeat with -c 1; the userspace governor also needs to be loaded)

And of course you can schedule this to happen at a particular time of day.
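The time-of-day part can be sketched with cron entries (assuming cpufrequtils is installed; the times and frequencies are just an example):

```shell
# /etc/crontab: halve the clocks at night, restore full speed in the morning.
0 22 * * * root cpufreq-set -c 0 -f 1900MHz && cpufreq-set -c 1 -f 1900MHz
0 7  * * * root cpufreq-set -c 0 -f 3800MHz && cpufreq-set -c 1 -f 3800MHz
```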


TDP is largely irrelevant. You are interested in either energy per task or power for availability, neither of which you can figure out from a datasheet. The latter is essentially the same as idle power, but depends on what services you run.


Sounds like vbezhenar cares about being woken up at night by a noisy server, in that case maximum power draw regardless of load is what matters.


Why do you expect high load at night?


Backups come to mind. For me, I schedule them at night.


Have a look at the crontabs that ship with Ubuntu.


TDP is such a meaningless stat to criticize. People still caring about TDP is a huge pet peeve of mine. The TDP is a nearly entirely fictional number your modern CPU will never operate at.

Especially in a home server, the CPU will be maxed out 0.001% of its life, if that. The only stat that matters is idle power, which it will be at for the majority of its life.


I'm happy to show you quite a few servers which run almost at their TDP 24/7/365.

Yes, it's not very applicable to home scenarios but spike loads create a lot of noise at home when a CPU has high TDP.


Surely it's clear we're talking about home and small business servers?


For sure but what about home servers which do transcoding for example?

They will run at their TDP for some time, and create considerable noise and heat.


Great. Who reports idle power on a datasheet?


> I don't need performance, I need quiet operation and ECC support.

I'm curious: are you going to put this under your desk?


Actually I'm going to put it into my closet along with the rest of my "home equipment" (UPS, GPON modem, switch, landline phone). Noise will disturb me at night, as I sleep not far from there. I managed to configure the Gen8 to be quiet enough with some weird tricks (it was not by default). Basically I need the fan noise to be on par with the HDD noise.


I hated the noise that my N40L microserver made (even though it was super quiet already). Now I have a NUC, and plugged a USB powered fan [1] into it to blow air over the chassis (the small fans in the NUC are noisy as hell when it's under even moderate load). Now it's so quiet I can't hear it. For storage I got a USB3 based 5-drive chassis and hooked up my 4 drives running mirrored ZFS. USB3's 5Gbps is more than plenty for my usage.

[1] https://www.amazon.com/EasyAcc-Rechargeable-Personal-Regulat...

I actually got the earlier version that didn't have a battery. Just adjust the speed knob to the sweet spot of quiet + cooling factor.


Wish there was a non-insanely-priced 12-bay Thunderbolt array, but because the market mostly consists of video professionals, the math on cost and noise works out better to build a more traditional server anyway. 8+ drives won't be quiet. I've got my 8-drive array right behind me and the drives are the issue, really.


Downclock/throttle/disable hyperthreads/etc in BIOS?



> No more $5 ebay key. That's a shame for home consumer.

I guess that's why


I've been thinking about getting a microserver. I already have a Synology NAS but I'd like something a bit more powerful to host Nextclound and a couple apps.

The Gen8 Microservers seemed neat, but they are getting old and rarer on eBay. Gen10 seems like a step back performance- and feature-wise.

With iLO not available (or a $100 upgrade), what is the benefit for a home / small business / enthusiast user of a Microserver vs. a custom-built system? I could probably build a Ryzen-based PC with a better performance/price ratio, just minus the iLO, and it's probably going to be quieter. Any opinions on this?


The Microserver line is aimed at corporations who want to send out a local office server that they will maintain remotely; the local office might not have a rack or any other infrastructure besides a router-switch combo. Size is important. HP custom builds the case and the motherboard: repair is expected to take the form of swapping the whole box, possibly sliding the disks in and out.

If you don't care about size, you can do much better for cheaper, and be able to replace everything with standard parts if you have problems in the future.


Wouldn't iLO be even more important if the server is meant to be deployed at a site with no technical staff?


iLO is available with an optional kit, though honestly you'd be better off getting the ML30 if you were buying HPE gear for a remote location.


For a powerful microserver, you might want to consider SuperMicro's E300-9D-8CN8TP. It's tiny, comes with an 8-core Xeon D, has an M.2 slot, 4 x 10GbE and is even ESXi certified.


It's also a lot more expensive


4x 10GbE has to add what, a few hundred dollars if they are Intel ports?


"[...] host Nextclound [sic] and a couple apps"

So, what are these "couple apps"? You already have a decent storage solution. You might consider a RPi 4 with 4G RAM for NextCloud and some apps.

Other than that, I personally like the ASRock DeskMini A300. Slap an Athlon in it, and call it a day (keep in mind you need a "G"-Processor, with iGPU!).

At home, I've built a homeserver around a X470 and Ryzen 5 2600 - it runs everything using Unraid: Storage, Home-Assistant, PLEX, Node-RED, Grafana for the living room display, the secondary pi-hole instance, and also my personal Gaming VM. It runs at 15% load during idle, drawing 70 W (including Vega 56 GPU).

I'm pretty happy with my setup.


I'm disappointed that HPE picked Intel over AMD EPYC Embedded 3000 series for this micro server line.


Same here. After Intel's decision to not release microcode updates for the Intel processors I was using (earlier core i7 models) - rendering them useless - I'm never buying Intel again.


This absolutely needs a way to add an SSD for OS and fast storage, so the four 3.5" slots can be used for a ZFS striped mirror (RAID10-like).
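For reference, the striped mirror itself is a one-liner with ZFS (device names here are hypothetical):

```shell
# Two mirrored vdevs striped together - the ZFS equivalent of RAID10.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
```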


Maybe not as ideal as you want, but I typically just stick in a USB drive.

It's not like the OS drive needs the performance when the vast majority of your data is written to ZFS.


You should be able to stick an M.2 adapter card in the PCIe riser

https://www.amazon.co.uk/dp/B07SQZBSCN


Not if you need to use the PCIe for adding any kind of decent networking though. :(

Still, USB 3.2 probably means some kind of decently reliable USB stick would work instead.


Linux will get that, but will the firmware? If not you still need a SATA or USB DOM to put your /boot on. Annoying.


Unless you stick Clover EFI bootloader on a USB/SD card to allow booting from the SSD.


I have an older Microserver (not sure which one), it had spare SATA ports and I put an SSD in one of them, slotted it in some free space and now have RAIDZ on all four disks.


Would be even better if they had the option of 8x 2.5" slots and NBase-T or 10Gbps Ethernet. The old Gen10 was way too large; with this one I could replace my Synology, running FreeNAS.


If you use FreeNAS, you can simply load it into memory from a small USB stick and use the SATA drives for that striped mirror.


like the faster-than-SATA USB ports?


It's sad that we'll be losing the DVD bay. Using an additional SSD as a boot disk in there and all four drives for storage was ideal.


"That optical bay was often used with an extra SSD. While the Gen10 had an extra fifth internal SATA port and a Molex power connector which could be used to add an extra SSD, the Gen10 plus does not have these features. We already gave the feedback to HPE’s product team and I think we have a solution that we will show in the full review we will publish for the MicroServer Gen10+."

Hopefully there's still a way.


Sad to see no option for having both nvme drive and 10GbE at the same time.


Why does server hardware need to look cool? Do people buy hardware based on the looks!? So many metal cases with razor-sharp corners to cut yourself on, making it weigh a ton. And screws. And more compartments, like a Russian doll. Making it cost a ton. All you need is a wireframe made of wood or plastic, a bare board, and a fan.


> All you need is a wireframe made in wood or plastic

Wood burns, plastic melts. When you have a bunch of racks full of servers under heavy load, managing the environmental temperature of the datacenter is very important.


Not sure datacenter is particularly relevant for this product, but yes, a homemade wooden chassis would be a bit of a car crash waiting to happen. (Although the lackrack probably deserves a mention! :) https://wiki.eth0.nl/index.php/LackRack )


When you get to melting/burning temperatures you are already screwed, and burning metal is far more dangerous - try setting some steel wool on fire.


Speaking from experience, it is much, much, much more feasible to reach the burning point of wood or the melting point of plastic in a datacenter than the melting point of the metals used in server construction.

One of the jobs of the case is to contain a fire if it starts. A plastic case wouldn't do this, and a wood case would actively be hazardous.


Do you have a story? I've never experienced more than 120°C in electronic equipment, as that's usually the safety limit where it would shut itself off. Although in theory the silicon would still be fine at 300°C, where wood starts to burn.


Two situations come to mind:

1) Several appliance machines were in a cabinet with water-cooling doors attached. The doors restricted the airflow from the front of the cabinet to the back to the point where there wasn't sufficient airflow to carry away the heat from the CPUs. The engineer responsible for those particular systems played show-and-tell with the melted plastic pins that formerly held the motherboard in place for a month after the machines crashed.

2) An 8-year old rackmount machine had a power supply fail spectacularly and light the entire server on fire. Fire suppression was triggered in response. Several other adjacent machines were damaged but the fire stayed relatively contained to the one cabinet.


> All you need is a wireframe made in wood or plastic, a bare board, and a fan.

Yes, but no. Yes, that will work. No, that will very likely not be compliant with your local RFI regulations.

That being said, there is little excuse for cases with non-deburred edges when you can get cases that are fully painted (in and out) and have most cut-outs folded over for surprisingly little money (~50 €, which really makes you question how any of the manufacturers make a profit off of it, not to speak of the workers involved).


Well,

1) Even water-cooled equipment uses forced air cooling to cool the water heat exchanger (unless you happen to own water rights to a river, in which case we have a different discussion). A lot of the highest mark-up goes to bespoke on-prem data centers (e.g. hospitals, labs) where the air isn't necessarily cleaned to spec. I would therefore think you want some stuff between the air and the CPU heat exchanger to force the air around some turns to knock some of the debris out of the air. So a grill makes sense.

2) Most buyers are buying single units, but most units are sold to buyers buying them in bulk, with a service contract.

3) If you're making a grill, it's gotta look like something.

4) You want something that approaches a consistent theme for your accessories: monitors, keyboards, mice, printers, etc. Even without the logos, you can match the HP mouse to the HP keyboard and HP server. Same for Dell, Lenovo, etc.

5) I find the big manufacturers try to slowly evolve their design over the years. You can walk into a place (hospital, power plant, ship, whatever) and quickly identify it as a Dell shop or an HP shop.

6) Electric discharge machining makes it relatively easy to go from CAD drawing to ready-for-production molds.

7) You need some low-risk projects for your young mechanical engineers.

8) At scale, your custom EDM-mold plastic grill may be cheaper than stock filters of sufficiently similar performance.

9) Branding matters.


> 3) If you're making a grill, it's gotta look like something.

> 4) You want something that approaches a consistent theme for your accessories: monitors, keyboards, mice, printers, etc. Even without the logos, you can match the HP mouse to the HP keyboard and HP server. Same for Dell, Lenovo, etc.

> 5) I find the big manufacturers try to slowly evolve their design over the years. You can walk into a place (hospital, power plant, ship, whatever) and quickly identify it as a Dell shop or an HP shop.

This is one of the purposes of a Design Language -- it ensures users of your products can identify what does and doesn't go with your products. And it also ensures that if a user has used one of your products, the signifiers for your other products are immediately apparent to them.

https://en.wikipedia.org/wiki/Design_language


Most interesting comparison for me will be the price. The Gen10 is so inexpensive that it's competing in the same price range as a four-bay Synology unit. But I imagine this won't be.


Get the Z series workstations from HP instead. They are much better options for home.


They all seem to be much more GPU-focused? i.e. completely different product segment? And mostly (with a couple of exceptions) significantly larger than something named "MicroServer".


They are perfect for data crunching as well as GPU related purposes.

For IO: My 8-year-old one lets me saturate SATA at 500 megabytes per second. With SAS, which it also supports, it's supposedly 3,000 megabytes per second, but I have never tried it.

For memory: ECC

For processing: Dual Xeon CPU config

For storage: Nice storage pods.

For admin: Out of band ILO like functionality, although I've never tried it

For home: SILENCE! My machine measures at 32 dB


Can you stick any hard drives you want in there, or do they have to be HP drives?


HPE will probably tell you that you need to use their drives. We have been putting in non-HPE hard drives so far without issue. More on that in the full test.


At least up until Gen8 you could use any drive; the bays came in standard dimensions and took my Western Digital drives without a problem. I don't see why that should have changed in the meantime.


Wasn't there something about fans blowing at 100% with non-HPE drives due to their firmware?


Since it is now an Intel, does it come with an integrated backdoor like the other "server-grade" hardware?


To be fair, AMD also has that "integrated backdoor" for the sake of argument, called the Platform Security Processor, which shares its acronym with an extremely popular game console, just like Intel ME does with the extremely often mentioned Input Method Editor(s).


Why the downvotes? It's a legitimate question: ME/AMT can be neutralized or disabled.

The BIOS can be replaced with coreboot.

Some buyers and some vendors actually care about security.


It's x86, so if you count ME/PSP as a back door, then the answer is almost universally going to be yes. It would be like commenting on the new release of Ubuntu and asking if it was still using a Bourne-like shell by default.


I love the new form factor, and getting away from the low-performance AMD SoCs is welcome. I'd love to see some benchmarks with the Xeon ones.



