
Putting Stuff in a ProLiant DL325 - bluedino
https://flak.tedunangst.com/post/putting-stuff-in-a-proliant-dl325
======
jlgaddis
> _There’s a RAID card with 2GB cache and battery included._

I can't speak for this particular card but, in the past, there were some HP
RAID cards where you could not _disable_ the RAID functionality at all.

Assume that, for whatever reason (ZFS, perhaps?), you had a bunch of drives
and you just wanted to use those drives normally -- that is, you wanted the OS
to see the "raw" drives, individually. Sorry, too bad!

You had two options: 1) create a bunch of one-disk RAID0 arrays (one per
drive) and then let the OS use those (which isn't ideal in the ZFS case) or 2)
buy a different card.
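
For option 1, the dance looked roughly like this (a hypothetical sketch
driving HP's hpacucli tool from Python; the controller slot and the
port:box:bay drive IDs are made up and vary per machine):

    import subprocess

    # One single-drive RAID0 logical drive per physical disk, via HP's
    # hpacucli CLI. The slot number and drive IDs are illustrative only.
    drives = ["1I:1:1", "1I:1:2", "1I:1:3", "1I:1:4"]
    for d in drives:
        subprocess.run(
            ["hpacucli", "ctrl", "slot=0", "create",
             "type=ld", f"drives={d}", "raid=0"],
            check=True,
        )

The OS then sees one logical drive per disk, but the controller still sits
in the I/O path, which is exactly what ZFS would rather avoid.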

Another common thing (when it comes to ZFS) is "cross-flashing" the firmware
on RAID cards. Many of the RAID cards shipped by the various server
manufacturers are just re-branded LSI cards, with or without modifications to
the firmware beyond the re-branding itself.

For example, you could take an IBM ServeRAID M5014 card (which is basically an
LSI MegaRAID 9260-8i card with less cache) and flash it with the "original"
firmware from LSI. The last I checked (which, admittedly, has been a while),
it wasn't possible to do this with many of the most popular RAID cards that HP
shipped.

(This probably isn't an issue if you're buying servers for $work. In my case,
I buy them for $home too.)

At a previous job (at an .edu), we used HP server gear pretty extensively
and, overall, I was quite happy with it. In the last 5-10 years, though, I've
become "turned off" to pretty much anything from HP.

---

Finally, there was the recently discovered issue with HPE SAS SSDs failing
catastrophically [0]:

> _SSDs which were put into service at the same time will likely fail nearly
> simultaneously._

Nothing like having every single drive in your _ENTIRE_ array fail at
(basically) the exact same time, huh? I feel bad for the department/team that
discovered this issue the hard way!

> _... results in SSD failure at 32,768 hours of operation ..._

I know what you're thinking: "32,768? That's one more than the maximum value
of a --"

Yep.
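
For the curious, a minimal sketch of the suspected wraparound, assuming the
firmware keeps power-on hours in a signed 16-bit counter (the advisory
doesn't spell out the data type):

    import ctypes

    # Power-on hours as a signed 16-bit counter (an assumption; HPE's
    # advisory only says drives fail at 32,768 hours of operation).
    hours = ctypes.c_int16(32767)  # INT16_MAX: hour 32,767 is still fine
    hours.value += 1               # one more hour of operation...
    print(hours.value)             # -32768: the counter wraps negative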

[0]:
[https://news.ycombinator.com/item?id=21637516](https://news.ycombinator.com/item?id=21637516)

------
StillBored
Those little Dell CS24s (apparently previously Facebook machines?) are cute.
I own a couple, but a good 5 years ago I dropped another $200 and picked up a
more recent dual 6-core machine, and it's about 4x faster at everything.

I'm surprised that, given he was hosting on such an old eBay-sourced machine,
he didn't just drop a few hundred and get another. There is a lot of good
equipment out there for cheap if you know where and what to look for.

------
stu2010
$1800 for a 1U server in a 24C/48T and 64GB of RAM configuration? That's
incredibly competitive. How much cheaper are whitebox versions of the same
thing, if this is the HP premium price?

~~~
jcims
1U fans drive me crazy. Are there any vendors that plumb these for liquid
cooling?

~~~
tomnipotent
Noctua fans are your best bet - they're drastically less noisy than any other
fans I've come across.

~~~
StillBored
And they move dramatically less air than those little blowers they put in 1U
machines. Sure, at lower speeds they are quieter, but when you really need to
remove the heat, it's hard to beat spinning at 4x the RPM, which is noisy.

------
nineteen999
> For background, the current server is an old ebay sourced 8x Xeon with 8GB
> RAM Dell. It’s actually pretty adequate, but OS upgrades require I put shoes
> on and walk across town to the data center.

I wonder why? Even the Dell CS24 had a remote console feature. Not saying
it's not time to upgrade it, of course.

[https://www.youtube.com/watch?v=ZtrkhIQXIp8](https://www.youtube.com/watch?v=ZtrkhIQXIp8)

~~~
wahern
Forget console. I've been upgrading OpenBSD and Linux systems remotely for
ages with just the standard runtime upgrade procedures. For example, as best
I can tell my main OpenBSD system has been continuously upgraded since 2012
using the standard unattended[1] upgrade instructions, plus occasional
housekeeping to remove unused system files (e.g. old libc versions). Between
2000 and 2012 there were a few changes (e.g. moving among leased and colocated
hardware) requiring some reinstalls, but I otherwise followed the same
pattern. I don't think I've ever maintained a single Linux instance for that
long, but I've definitely upgraded across 2, possibly 3 Ubuntu LTS releases.

[1] Which is actually a manual process that requires attention, it just
doesn't require console access--you're not rebooting into an installer.

~~~
bayindirh
I installed Debian 4.0 beta-1 and upgraded it all the way to 8.x.

It had no problems, but I had to switch to 64-bit and there was no reliable
procedure for that at the time, so I reinstalled at 8. Currently the same
system is running Debian 10 testing.

So it's possible to just upgrade blindly with no console.
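
For reference, the standard in-place Debian release upgrade boils down to a
couple of apt commands once sources.list points at the new release. A sketch,
with Python standing in for the shell session you'd normally do this in:

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # After editing /etc/apt/sources.list to name the next release:
    run(["apt-get", "update"])
    run(["apt-get", "upgrade", "-y"])       # upgrade existing packages first
    run(["apt-get", "dist-upgrade", "-y"])  # then allow additions/removals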

However, a remote console is nice and useful for fixing network-related
stuff (e.g. a switch dies or you just mess up the network settings).

------
gruez
>The CPU is the 7402P model of Epyc[...]

>The DL325 I got for $1800 is possibly below parts cost? ATM, newegg has 3960X
for $1800. CPU alone.

Wikipedia says the 7402P is $1250? It makes sense that the 3960X is more
expensive, even though it has the same number of cores, because it has higher
clocks (3.8/4.5 base/boost vs 2.8/3.35).

------
jlgaddis
Note that ~5 years ago, HP started providing firmware updates and such "only
to customers with a valid warranty, Care Pack Service or support agreement".
They are no longer freely downloadable.

In a post ironically entitled "_Customers for life_", HP wrote [0]:

> _... we are in no way trying to force customers into purchasing extended
> coverage. That is, and always will be, a customer’s choice._

Now, this might not be an issue if all of your servers belong to $work and
$work maintains active support agreements for them even after the warranty
period has passed. Many of us buy servers for small businesses or even for
personal use at home, however, and it would be really nice if we could keep
them up-to-date with the latest firmware, security patches, and such.

At the complete opposite end of the spectrum, I can find and download any
drivers, firmware, etc., for my Dell servers quickly and easily, without
logging in to anything or providing any information whatsoever. If I wanted
to, I could even download and install their "Repository Manager" and run my
own local mirror!

WRT other vendors (unless things have changed), Cisco requires registration
but not paid support (for server downloads) and IBM requires "entitlement
validation". I can't speak for any other server vendors.

__EDIT:__ I just now checked for any updates for a Dell PowerEdge R720 (which
is EoL/EoS, by the way) I have here at home, found a BIOS update from about a
month ago [1], downloaded it with wget, and am now live migrating ("vMotion")
the VMs running on it over to another host so that I can take the host down
and install the update -- it contains microcode updates addressing several of
the CVEs for Intel CPUs.

[0]:
[https://web.archive.org/web/20140209062309/http://h30507.www...](https://web.archive.org/web/20140209062309/http://h30507.www3.hp.com/t5/Technical-Support-Services-Blog/Customers-for-life/ba-p/154423)

[1]:
[https://www.dell.com/support/home/us/en/04/drivers/driversde...](https://www.dell.com/support/home/us/en/04/drivers/driversdetails?driverid=6vf30)

------
seriesf
I wish there were a more robust market for external NVMe-in-a-box
peripherals. Having to deal with vendor brain damage is never fun, and NVMe
devices are so small and draw so little power that they're really the perfect
external peripheral.

~~~
ComputerGuru
Do you mean something like an external enclosure containing a backplane
connecting to NVMe drives, connected to the main server over SAS 12gbps, or
something else entirely?

~~~
derefr
Connected to the main server over (perhaps multiple) Thunderbolt cables, I'd
assume. Since you really do just want PCI-e passthrough.

~~~
seriesf
Yeah, the problem is you want more lanes than Thunderbolt brings. Two NVMe
devices, even cheap ones, can swamp an x4 PCIe slot. There are various
connectors designed to bring faster PCIe links out of the box, but no real
standard.
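
Back-of-envelope, using nominal PCIe 3.0 figures (the per-drive throughput
below is an assumed mid-range value):

    # ~985 MB/s of usable bandwidth per PCIe 3.0 lane.
    lane = 0.985
    uplink = 4 * lane    # one x4 link: ~3.9 GB/s

    drive = 3.0          # assumed Gen3 NVMe sequential read, GB/s
    demand = 2 * drive   # two drives reading flat out

    print(f"x4 uplink {uplink:.1f} GB/s vs demand {demand:.1f} GB/s")
    # -> x4 uplink 3.9 GB/s vs demand 6.0 GB/s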

~~~
derefr
There's no reason that Thunderbolt (especially optical Thunderbolt) can't be
that standard. In fact, there's no reason that Thunderbolt controllers need
to live in the CPU, rather than living on PCI-e HBA cards (like InfiniBand or
RAID controllers do).

It just hasn't happened yet because of a lack of enterprise demand. The
pendulum hasn't swung back to all-externalizing blade servers quite yet.

~~~
namibj
Actually they currently just put a retimer chip on the backplane, attach the
NVMe drives to the otherwise passive (as far as high-speed data goes)
backplane, and use QSFP+ twinax cabling between the backplane and a sometimes
passive "HBA" card that adapts from PCIe slot to this QSFP+ cage. Afaik they
can run PCIe3 x4 per cable. If you look for open compute hardware, you'll find
various modular blade-like designs.

------
anonu
A 110-watt max reading with many CPUs firing... I think he might have missed
a zero.

~~~
kristianp
Perhaps it's because "By default, the system is in a power efficient mode that
keeps the CPU clocked down."
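
If the box runs Linux, the cpufreq policy is easy to eyeball from sysfs (a
sketch using the standard sysfs paths; the post doesn't say which OS or
governor the DL325 was running):

    from pathlib import Path

    # Inspect the Linux cpufreq policy for CPU 0 (standard sysfs files).
    cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")
    for name in ("scaling_governor", "scaling_cur_freq", "scaling_max_freq"):
        print(name, (cpufreq / name).read_text().strip())
    # e.g. "scaling_governor powersave": clocks stay low until loaded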

------
kristianp
Why does this guy run such large SQLite DBs? Must be OK for read-heavy loads.
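
SQLite holds up fine for read-heavy loads, especially in WAL mode, where
readers don't block on the single writer. A minimal sketch (the file and
table names here are made up):

    import sqlite3

    conn = sqlite3.connect("site.db")          # hypothetical database file
    conn.execute("PRAGMA journal_mode=WAL")    # readers run during writes
    conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL

    conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY)")
    (count,) = conn.execute("SELECT count(*) FROM posts").fetchone()
    print(count)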

