
Homelab: Intel NUC with the ESXi Hypervisor - henvic
https://henvic.dev/posts/homelab/
======
moondev
The ultimate NUC for a vSphere lab is the "Hades Canyon" NUC8i7HVK. I have
three of them in a vSAN cluster.

* dual NVMe drives for an all-NVMe vSAN

* dual NICs that work out of the box with ESXi

* dual Thunderbolt 3 ports to enable 10GbE via a TB3 adapter or PCIe expansion

* Linux-friendly AMD GPU that has no problems being passed through to a *nix or Windows VM

* low profile and easily stackable for a rack shelf

* Supports 2x32GB SODIMMs

* Power efficient

* Low noise

Love these things!

~~~
mtsr
Aren’t vSphere licenses a bit expensive for the average homelab? I personally
went with a larger former workstation with more cores and RAM, because I
expect to only ever have a single node, even if I’d love to have a full
cluster.

~~~
InvaderFizz
A VMUG Advantage membership ($200/year) gets you a 6-socket, non-production
license for basically the entire VMware catalog.

~~~
op00to
For $0/year I can run Linux and ovirt.

~~~
sithadmin
That won't help you very much if the goal is to maintain familiarity with
VMware products (which is the entire point of the VMUG licensing scheme).

------
jedieaston
KVM and Proxmox also run very well on that platform, if you're more used to
those stacks (and the management tools are free).

~~~
anonymouswacker
I'm looking for a hypervisor recommendation... I am hoping to virtualize at
least two if not all three of my machines: Unraid (pass-thru HDDs); pfSense
(pass-thru NIC); and Windows 10 (pass-thru Graphics). I am hoping to have
Windows 10 running fast enough to do some light photo editing. Which
hypervisor would you think to try first?

~~~
whatsmyusername
Proxmox. The ESXi client is Windows-only, and most of KVM's clients are either
Linux-only, CLI-only, or dogshit.

Proxmox is great as long as you have the mental fortitude to ignore the
tickler that pops up on login if you're on community support only.

~~~
ex3ndr
The ESXi web UI is good. Why do you need an app for this?

~~~
whatsmyusername
I dumped it years ago when my container work kicked off and they started
paywalling everything behind vcenter.

If web works with base ESXi great! If not then I'm not interested.

I just don't work with a lot of companies in the position of needing VMware
anymore. Either they're all in on AWS or they're too small to justify the
expense.

------
jimmcslim
I've just discovered PhotoPrism [1] through this article, which looks like a
great solution for organising my photos on my homelab/home server - an HP
N54L Microserver (Gen 7), recently reformatted to run FreeNAS. Loving it so
far (previously it was some Frankenstein mix of Windows Server 2012 running an
Ubuntu VM via Hyper-V, with the storage being ZFS within the VM... terrible
performance!)

There is also PhotoStructure [2] that I like the look of, but PhotoPrism is
OSS and already integrates machine learning for identifying objects in
photos... yes I know I can't expect great performance out of my N54L on that
front!

[1] [https://photoprism.org/](https://photoprism.org/)

[2] [https://photostructure.com/](https://photostructure.com/)

~~~
henvic
off-topic: Here's a weird thing... I have to admit that one concern of mine is
that I'll end up liking PhotoPrism (this is what's keeping me a little afraid
of trying it). The issue is that I'm a Go programmer and it seems to be a good
piece of software, but it uses the GPL license, and I have a personal rule to
avoid reading GPL'ed source code, especially when it affects what I do for a
living. It'd be even worse if I thought about contributing to it...

I discovered it a few months ago when I was searching the web to see what's
out there. Before moving to iPhoto (which I regretted a lot), I used digiKam
and I enjoyed it.

* I don't want to risk reading something I like there and ending up copying the idea or writing similar code unwittingly, as this license is viral.

Thanks for showing me PhotoStructure!

~~~
yjftsjthsd-h
> but it uses the GPL license, and I have a personal rule to avoid reading
> GPL'ed source code

So, don't? Just pretend that it's a proprietary application with no source
that just drops from the sky as a binary or docker image. Can you not get what
you want as just a "normal" user?

~~~
henvic
I'd argue this is hard to do when it's so easy to explore a bit further, so
I'd rather just stay away from trouble :)

------
regnerba
I run my homelab on NUCs as well but no hypervisor. Everything I run is in
Docker containers. So the NUCs have CentOS installed and then just containers
across the Swarm cluster.

Storage is an NFS mount from my Synology NAS.

The only reason I would really care to have a hypervisor is because then I can
do more from Terraform.

I use Terraform to configure my DNS, my Ubiquiti gear, and the Swarm cluster,
but that leaves a gap where I need to do something to manage the actual CentOS
machine. There isn't much to manage (users, sshd config, SSH keys, packages
such as vim, docker, and htop, and then NFS mounts), but the less I have to
manage myself the better. I just don't think it's worth adding a whole
hypervisor just to possibly pull some of the networking into Terraform.

Anyways, NUCs + Docker Swarm = great win.
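For anyone curious what a setup like this involves, here's a rough sketch of bootstrapping such a cluster. The IP addresses, NFS export path, mount point, and stack name are hypothetical placeholders, not regnerba's actual values:

```shell
# Hypothetical sketch: CentOS NUCs + Docker Swarm + Synology NFS storage.

# On the first NUC: initialize the Swarm and print the worker join token.
docker swarm init --advertise-addr 192.168.0.10
docker swarm join-token worker

# On each remaining NUC, join with the token printed above:
#   docker swarm join --token <token> 192.168.0.10:2377

# Mount the Synology NFS export so services can share storage.
sudo mount -t nfs 192.168.0.5:/volume1/docker /mnt/docker

# Deploy services across the cluster from a compose file.
docker stack deploy -c stack.yml homelab
```

The commands are environment-specific (they require a Docker engine on each node), so treat them as a template rather than something to paste verbatim.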

~~~
luzer7
That's exactly what I want to do. Do you have any guides you used? I'm having
a little trouble with NFS and Docker permissions (UID/GID, I think) trickling
down to the NFS share on my Synology.

~~~
regnerba
With regards to NFS permissions, I set my Synology up as so:

* Client: 192.168.0.8/29 (whatever the smallest subnet is I can use that includes my NUC)

* Privilege: Read/Write

* Squash: No mapping

* Asynchronous: Yes

* Non-privileged port: Denied

* Cross-mount: Denied

This is fairly out of date (I keep my own stuff in a private monorepo; this
was a snapshot I posted for a friend), but here are the basics of a script to
set up my NUC: [https://github.com/regner/homelab-stacks/blob/master/scripts/init_node_common.sh](https://github.com/regner/homelab-stacks/blob/master/scripts/init_node_common.sh)

Of note, the "Adding NFS share to fstab" part doesn't actually do that...
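For reference, the fstab entry the script gestures at might look something like this. The server address, export path, and mount options are assumptions for illustration, not the actual values:

```
# /etc/fstab - mount the Synology NFS export at boot (IP and paths are hypothetical)
192.168.0.5:/volume1/docker  /mnt/docker  nfs  rw,hard,noatime,vers=4.1  0  0
```

With "Squash: No mapping" on the Synology side, file ownership on the share follows the UID/GID of the container user, which is usually where the permission surprises mentioned above come from.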

------
ocdtrekkie
I currently run my Sandstorm.io server on a 7th Gen i5 NUC.

I recently picked up some old budget Gigabyte BRIXs that will take on some
other fun roles. (A 6th Gen Celeron and a 4th Gen i5.)

I have wanted to run an AD domain for managing my gaming PCs, but I had some
difficulty with ESXi IIRC (probably also NIC issues, like in the article), and
Intel has decided to be a complete tool about network drivers for "consumer"
chipsets being installable on Windows Server. One of the perks of the old 4th
Gen BRIX unit: A Realtek LAN NIC.

One of the things I'm sad about on newer NUCs is the removal of the LED ring.
7th gen NUCs in particular have a neat multicolor LED ring around the front
panel that can be programmatically controlled... pretty neat for server-status-like
uses. But I suspect they found it was rarely used and left it out for cost
reasons or something.

------
dade_
VMs work OK, but I am much happier with Linux containers. NUCs get bogged
down pretty easily and go from quiet purring kittens to screeching and
whining when they get under load. A dozen containers, with different distros
and versions, run nicely.

------
NickNameNick
I just got one of those NUCs.

The network device is annoying - I couldn't get it to work on Debian stable,
but it is supported in Debian testing thanks to kernel 5.5.

I also had problems with the HDMI video - it didn't work with the HDMI-to-DVI
cable I used with either of my spare monitors, nor did it work with my TV and
the existing HDMI cable I had there. I did get it to work by borrowing the
HDMI cable from a friend's HTC Vive.

------
sigjuice
I have a couple of Raspberry Pis and Onion Omegas on my LAN and a few Linux
and BSD VMs on my MacBook (NATed). They can mostly ssh amongst themselves, but
the biggest problem I have is that I haven't been able to come up with a
satisfactory DNS/mDNS setup. I just don't like fiddling with IP addresses. Any
suggestions?

~~~
lonelappde
I thought the point of mDNS is that it just works with zero configuration.

Other alternatives include running pihole on one machine (or all machines for
backup), or giving them static IPs in your DHCP server and then copying a
hosts file to them all.

~~~
sigjuice
A couple of my systems have resolvers that can't resolve .local names
(OpenBSD, OpenWrt/musl).

~~~
yjftsjthsd-h
Does Avahi not work there? It's implied to be available:
[http://www.idoru.be/notes/how-to-configure-rendezvousbonjourzeroconf-in-openbsd-with-avahi-deamon/](http://www.idoru.be/notes/how-to-configure-rendezvousbonjourzeroconf-in-openbsd-with-avahi-deamon/)
[https://openwrt.org/docs/guide-user/network/zeroconfig/zeroconf](https://openwrt.org/docs/guide-user/network/zeroconfig/zeroconf)
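Part of the gap here is that Avahi only publishes and browses mDNS; whether ordinary programs can *resolve* .local names depends on the C library. On glibc-based hosts, that typically means installing nss-mdns and adding it to the hosts line, something like:

```
# /etc/nsswitch.conf (glibc hosts only) - let nss-mdns answer .local
# lookups before falling through to regular DNS
hosts: files mdns_minimal [NOTFOUND=return] dns
```

musl doesn't consult nsswitch at all, and OpenBSD's resolver similarly won't hand .local to mDNS, which is likely why those systems fail even with Avahi running; a common workaround is a local DNS server with static records for the LAN hosts.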

------
aljarry
On the cheap side, I got an old Lenovo M93p - a USFF with 8GB RAM and an i5
(2 cores, 4 threads) with 4 USB 3.0 ports, which is just enough for lightweight
VMs, some test containers, and a ZFS mirror for backups/NAS. The whole package
with 2x 2TB disks was around $350.

~~~
mech422
I still swear by my Ameridroid H2s - Celeron quad core, dual-channel DDR4,
2x 1Gbps NICs, 1x NVMe, 2x SATA, 4K video - less than $120. I have three that
I got at the end of last year - tricked out with 16GB DDR4 and 2x 480GB SATA -
$250 each.

 _edit_ clarify it was $250/each

~~~
SparkyMcUnicorn
A few days ago I picked up an SFF Dell (Precision T1700) with a Xeon E3, 32GB
DDR3, a small SSD + HDD, 11 USB ports, and a Thunderbolt PCIe card. All for
$200.

The CPU is roughly equivalent to a 6th or 7th gen i5, but with 8 threads
instead of 4.

Installed Proxmox with 1 VM and several containers. Couldn't be happier with
everything. My wife can't even hear it under decent load in the living room.

~~~
mech422
I got a rack full of Dell and HP blade servers off eBay years ago - nice gear -
dual quad-core Xeons with 48GB RAM, SSDs, and gigabit networking...

Thing is, running all 16+ blades pushes my electric bill over $500/month, and
it sounds like a fighter jet getting ready to take off :-P

Having silent gear is totally underrated!

------
undebuggable
Anyone with a similar homelab setup - do you use the free ESXi 5.5 or more
recent versions, which are quite pricey? If 5.5, how do you orchestrate it?
The vSphere Client on Windows is awful.

> I discovered the Ethernet NIC wasn’t working.

> More Ethernet issues

Likewise with 5.5, where enabling the Ethernet NIC is often nontrivial (e.g.
injecting the drivers for Realtek NICs).

~~~
lostlogin
I think I signed up for a trial licence and got 6.7 U3, but I can't seem to
find the link. It's been running a while, so it isn't a 60-day one.

VMware's website is utterly impenetrable, and it's like a form of hell trying
to get out of the loops you end up in with licence agreements and download
options.

~~~
undebuggable
> VMware's website is utterly impenetrable, and it's like a form of hell
> trying to get out of the loops

Oh man, yes. The only way to download the installation binary of something
specific was a direct link to VMware's servers found somewhere on the internet.

~~~
lostlogin
I torrented it the last few times. I have a licence and am unclear whether
this is a breach of the ToS.

------
dragonshed
I have an older NUC that I've been repaving as needs change, using it as a
build agent for some projects, a homelab to experiment with Hasura, etc. It's
always done well, but its performance relative to other options would never
make me consider using it as a type 1 hypervisor like this. I may need to pick
up a NUC10I7FNH.

------
youngtaff
For my setup I was running ESXi on a 2012 i7 Mac Mini with 16GB of RAM.

At the time I bought it (2013?) it was far better value for money than any of
the Intel NUCs.

I've always wanted someone to produce a NUC in a similar form factor, with
good design, but there doesn't seem to be much demand.

And alas, the current Mac Minis are completely un-upgradable.

~~~
GekkePrutser
You can upgrade the memory on the 2018 mini, but yes that's it.

------
ct520
I got a NUC8i7BEH with ESXi. Works great; I run a dev lab out of it, publicly
facing, VLAN'd off on my home fiber.

The only negative I ran into: I bought the bigger one with both NVMe and a
regular SSD, and I couldn't fit the SSD in alongside the NVMe drive with its
heatsink :-/ Besides that, this thing is snappy and uptime has been perfect.

------
asadhaider
I have an old Intel NUC D54250WYKH which I run ESXi on as my core applications
server and where all my inbound homelab traffic gets routed through. There's a
single CentOS VM with Docker installed which has all the host resources
allocated to it.

Currently running the following containers:

      proxy [1] - Nginx proxy host for all ingress traffic; has a nice web interface and works with Let's Encrypt
      pihole [2] - Primary DNS server running Pi-hole to block ads
      cf-ddns [3] - Client to update Cloudflare records with my IP address (wildcard record *.myhomelab.dev, which I run the proxy hosts through)
      unifi-controller [4] - UniFi Controller appliance
      watchtower [5] - Keeps the Docker images up to date
      portainer [6] - Web interface for managing Docker containers
      guacamole [7] - Apache Guacamole remote desktop gateway
      postfix-relay [8] - Open SMTP relay on my internal network which forwards everything to Amazon SES; makes email notifications easier

It's a great piece of kit and love the small form factor, it sits in my
network comms cabinet and I've had no issues. I pass through a monitoring
cable from a UPS in the cabinet to the CentOS VM which monitors if there's a
power cut and shuts the ESXi host down safely.
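A couple of the containers listed above could be sketched as a compose file like this. The images match the cited Docker Hub repos, but the ports, volumes, and restart policy are illustrative guesses, not the actual configuration:

```yaml
# docker-compose.yml - minimal sketch of two of the listed services
version: "3"
services:
  pihole:
    image: pihole/pihole
    ports:
      - "53:53/udp"   # DNS
      - "8053:80"     # admin web UI, remapped to avoid clashing with the proxy
    restart: unless-stopped
  watchtower:
    image: containrrr/watchtower
    volumes:
      # Watchtower talks to the Docker daemon to update running images.
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```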

I initially ran PhotonOS with Docker but had some networking issues so
switched to CentOS 7. I have larger, more powerful SuperMicro hypervisor hosts
which run the bigger application containers.

[1] [https://hub.docker.com/r/jc21/nginx-proxy-manager](https://hub.docker.com/r/jc21/nginx-proxy-manager)

[2] [https://hub.docker.com/r/pihole/pihole](https://hub.docker.com/r/pihole/pihole)

[3] [https://hub.docker.com/r/joshava/cloudflare-ddns](https://hub.docker.com/r/joshava/cloudflare-ddns)

[4] [https://hub.docker.com/r/linuxserver/unifi-controller](https://hub.docker.com/r/linuxserver/unifi-controller)

[5] [https://hub.docker.com/r/containrrr/watchtower](https://hub.docker.com/r/containrrr/watchtower)

[6] [https://hub.docker.com/r/portainer/portainer](https://hub.docker.com/r/portainer/portainer)

[7] [https://hub.docker.com/r/oznu/guacamole](https://hub.docker.com/r/oznu/guacamole)

[8] [https://hub.docker.com/r/simenduev/postfix-relay](https://hub.docker.com/r/simenduev/postfix-relay)

~~~
regnerba
Why run ESXi at all and not just install CentOS directly on the NUC?

~~~
ocdtrekkie
The portability of being able to lift a VM off the hardware and onto another
piece of hardware is a nice feature when you are using bargain consumer
hardware for your "servers". The overhead is pretty minimal.

I'm not currently doing it with my NUC servers, but it's a pretty good choice
to do so, and ESXi is free if you don't need to, you know, manage it in any
competent way.

------
GekkePrutser
Yes, the Thunderbolt 1 to Ethernet adapter works with ESXi. I use it with my
Mac Mini 2011 running ESXi.

------
snorrah
That was quite a thorough and interesting post, I was pleasantly surprised! :)

~~~
henvic
Thanks! By the way, I noticed this video
[https://www.youtube.com/embed/WkA0dMudUG0](https://www.youtube.com/embed/WkA0dMudUG0)
showing TempleOS didn't make it through when building from Markdown to HTML.

* I'm going to fix this later today, plus some misspellings.

------
datlife
After playing around with ESXi, Proxmox, and Hyper-V, I converged my
mini-homelab to:

* Any Linux-based OS - Ubuntu in this case.

* Ansible / Terraform for task automation.

* K8s / Docker for containerization.

Given I have only 1 desktop + laptop, this option is good enough for me to
learn cluster setup / networking / automation.
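For the automation piece, a minimal Ansible play might look something like this. The host group name and package choices are assumptions for illustration, not datlife's actual setup:

```yaml
# site.yml - hypothetical play to prep an Ubuntu box for containers
- hosts: homelab
  become: true
  tasks:
    - name: Install Docker from the Ubuntu repos
      apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure the Docker daemon is running and enabled at boot
      service:
        name: docker
        state: started
        enabled: true
```

Run with `ansible-playbook -i inventory site.yml`; Terraform would then handle the infrastructure-shaped pieces (DNS, cloud resources) that Ansible is less suited for.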

------
lonelappde
Why is this better than running VirtualBox in a host OS?

------
sqldba
6 cores and 64GB max memory? It’s hardly enough to run anything.

~~~
regnerba
O_o what the heck are you trying to run in a home lab where that isn't enough?
I currently run 20+ containers per NUC in my home lab. I have plenty of
capacity left and nothing else to really run...

~~~
luckman212
I think sqldba's post was sarcasm. As in, once you open Slack, Gmail and a few
Chrome tabs, poof there goes 10GB of RAM.

