Show HN: Proxmox VE Helper Scripts (community-scripts.github.io)
238 points by BramSuurdje 31 days ago | 86 comments



Along with the submitter, I am also on the team of maintainers who volunteered to help with maintenance of this project after tteck's sad news that they were entering hospice (1). The team members are all motivated individuals who are enthusiastic about carrying on tteck's legacy.

We are moving forward in a transparent manner and I am more than happy to answer any questions.

(1) https://news.ycombinator.com/item?id=42016605


Oh wow, this is truly sad news.

I only recently went down the homelab/selfhosted path and the majority of my containers were set up using tteck's scripts.


> Oh wow, this is truly sad news.

Incredibly sad. It’s a real testament to tteck that he took the time to transition the project and make his wishes known about how he wanted us to proceed. Tteck is a legend.


"Tteck is a legend."

When the shit has really hit the fan personally and yet you still worry about other people: that is the mark of a decent person.

Legend, indeed.


I have been using Proxmox VE for several years now and have most of my services running as Docker containers in one VM. This always bothered me because I wanted to be able to control the individual services and their backup jobs using the Proxmox interface. After checking out these scripts I already moved a couple of services (Caddy and WireGuard) over to LXC containers and am very impressed by how easy it was to do.

Basically I just wanted to say thanks to everyone involved in making these scripts, it has left me with a great first impression.


Careful. I too thought about this, but docker containers have the following benefits over the LXC scripts:

- Updates and automatic upgrades between major versions.

- The developer who wrote the software created the container (most of the time), which means it's a supported environment. Also, since they have insight into the application and future upgrades, the environment has been set up correctly for each version.

If you want to achieve your goal, I'd suggest an LXC with your favourite Linux distro + docker + app container(s) for each app you have. It gives you the same thing, but with the benefits above.


i too was looking at going the route of the poster above. is your suggestion essentially the 'same' in terms of resources as converting a docker container to an LXC (relatively speaking)? for some reason i had/have it in my head that the LXCs would somehow be more efficient (based on...nothing, hence the question!)


Think of an LXC as a docker container but at the OS level.

An LXC running docker with an app containerised inside will basically be the same as if the app is running a level higher in the LXC itself.

Give it a try. Then open top/htop in the host OS (pve shell) and you'll see the apps running in the docker container inside the LXC as native processes.
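One quick way to see this for yourself, as a sketch; the container ID 101 and the nginx process are placeholders, substitute whatever `pct list` shows and whatever that container actually runs:

```shell
pct list                   # enumerate LXC containers on this node
pct exec 101 -- ps aux     # processes as seen from inside the container
ps aux | grep '[n]ginx'    # the same processes show up on the host,
                           # since LXC containers share the host kernel
```

With a full VM, by contrast, the host only sees a single qemu process, which is exactly the difference being described here.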


tyty!

i'm setting up a new server soon and want to optimize/correct some of the things i've done on my first proxmox setup (like not running truenas in proxmox passing the RAID controller through lol).

i'll give this a shot!


Oh yeah, I definitely won't be doing this for all my containers; the majority will stay within my VM. However it is worth noting that in many cases the LXC scripts install most of the required packages by adding their official repositories, so this seems like a well-supported way of doing this.


A bit of a tangent. I've been trying to manage libvirt & Unraid through Terraform, but have run into issue after issue. I've about given up, and will just manage the virtual machines manually...

What's the virtualization technology on proxmox?

What's the advantage to using something like this as opposed to terraform or salt stack or Ansible?


It is also worth mentioning that Proxmox uses ZFS making snapshotting quick and Proxmox also has a very good backup system.

If you want to treat your self-hosted applications as "sheep" (1) , then terraform k8s etc. is a better bet.

But if you are happy to manually restore from a backup or snapshot when something goes wrong, or to have your LXC container automatically shifted to different hardware if you have a cluster, then Proxmox is for you. The reality is that in a home setup you will spend about as much time, or less, maintaining your "pets" as you would your "farm".

(1) I write this from New Zealand


"It is also worth mentioning that Proxmox uses ZFS"

No it does not enforce ZFS or any other filesystem. That's up to you. ZFS or BTRFS are fine when indicated - and you need to know your stuff.

Ceph for clustering (hyperconverged) is very much a first-class citizen. I generally only use ext4 as a filesystem - keep it simple. XFS is lovely too, especially for reflinks if you need them.

(1) Wal and Cooch know how to run a farm (and so do I, in the UK!)


> No it does not enforce ZFS or any other filesystem. That's up to you. ZFS or BTRFS are fine when indicated - and you need to know your stuff.

You are correct, it is optional and I should have made that clear. While optional, it does have native support for ZFS and takes advantage of ZFS features, like instant snapshotting of LXC containers.


> Proxmox uses ZFS making snapshotting quick

Proxmox only supports linear snapshots using ZFS (so no tree-like snapshots). This might be a deal-breaker for some usages.


Proxmox is more about the management of the hosts and resources on them, including the live migration of VMs between hosts, support for some types of HA and failover.

You can likely manage the configuration of the VMs themselves through terraform or similar in combination with Proxmox if that's your desire.


proxmox is using KVM for virtualization and Linux Containers (LXC) for the containers. I agree that something like terraform and/or ansible would make more sense for an IAC (infrastructure as code) deployment. Most of the people I talk to that use proxmox for a homelab prefer to do things manually and don't bother with any IAC implementation.

For work I'm a firm believer in reproducible environments and IAC. We actually use a combination of Vagrant, libvirt, and KVM to spin up local clusters for quick testing and development. It works out pretty well, but in my homelab I don't have anything complicated enough to bother setting up terraform/ansible for. Although I imagine if my server crashed I probably wouldn't think that way anymore.


Given enough appetite and love from me, I've got a pretty robust Ansible script for building a Proxmox host with $i++ LXCs.

It's not suitable for open sourcing yet (embedded secrets and the like), but if the community wants it, it's pretty solid.

Only issue I see is the Ansible script currently always expects to be building a cluster of Proxmox hosts. I'd need to make some changes to customise it so it can build out just one node though.

I've been using it for ~3 years now for my Proxmox cluster home lab which predominantly hosts LLAMA, *Arr stack, deluge, Nginx, Tailscale and a few other services.

It's not quite a one-click deployment, but it can build out an entire cluster in 30 minutes after an initial Proxmox install is completed.


Proxmox makes all that into a point-and-click appliance so you can focus on reliability and doing something with the technology.

Just because someone doesn't use Vagrant, libvirt and KVM to spin up local clusters manually doesn't mean they don't know how.

There is no shortage of Proxmox users who grew up in datacentres from bare metal servers, to virtualization first coming out, and beyond.


Why is something like Proxmox a bad target for IAC?


If you want to manage VMs, then you're probably using terraform + provider. However, SDN (Software Defined Networking) is not yet supported [1], which makes any kind of deployment with network separation not feasible (using IAC only).

[1] https://github.com/bpg/terraform-provider-proxmox/issues/817


You can split the difference with the Proxmox provider for Terraform[1]. The workflow would be:

- provision VMs with Terraform

- configure/maintain your VMs with something like Ansible

The provider also allows you to provision LXC containers if you'd like to target those instead.

[1]: https://github.com/Telmate/terraform-provider-proxmox


How good and complete are any of these providers for Proxmox? If the ratio holds (4 of the 10 I checked), I'd have to look more closely at about a third of the providers, just based on their latest release dates.


I have been looking into setting up my first Proxmox box, here is my take as a newcomer.

I wanted to do what I think is a very basic and very common setup: Modem > proxmox box > OPNsense VM > physical wifi router via onboard 10Gb NIC + internal network VMs like OMV etc. The goal is to add a full network filter via OPNsense, and allow access to a media sever and backup etc from the internal network.

I see no OPNsense script; the OMV script is basically contraindicated because it should be a VM instead of an LXC container; and I don't see any glue scripts to get VMs talking to each other, which is an important part of Proxmox configuration. So it looks like there is room here to get some basic setup scripts for a simple home server either improved or added to the collection.


No, it isn't basic and common (it is for me, but perhaps not for you, and certainly not for most people).

OK, so you want to virtualise a router and firewall. That's fine. I have deployed roughly 200 pfSense firewall/routers as VMs and physical boxes and OPNSense is similar, so I can probably help.

At a minimum you will need two physical interfaces (one will actually do but you will need to know what you are doing!). You need "WAN" and "LAN". OPNSense is still FreeBSD based, I think, so it will not run in a L[inux]XC container for obvious reasons.

Your last paragraph seems rather confused. I don't know what you mean by "glue scripts". VMs communicate via networks.

I suggest you try a few experiments to get to grips with virtualisation properly and then move on from there. If you swing by the Proxmox forums with specific issues we'll try to help out but in the end you need to dive in full on ... or not.


I run proxmox and have set up VLANs.

The router port to the proxmox machine is set up for tagged packets that isolate incoming/outgoing traffic.

After that my VMs and Containers are easily set up to "live" on one or more networks.

For me the firewall rules on the router determine what traffic can be relayed between vlans through the router.

I'm pretty sure you could set up opnsense running in a container or vm to do the same thing, selectively passing traffic from one vlan to another.
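For what it's worth, putting a guest on a VLAN is a one-liner once the bridge is VLAN-aware; the IDs and tag below are placeholders, not from my setup:

```shell
# put container 105's NIC on VLAN 20 via the vmbr0 bridge
pct set 105 -net0 name=eth0,bridge=vmbr0,tag=20,ip=dhcp

# the equivalent for a VM (ID 110 here)
qm set 110 -net0 virtio,bridge=vmbr0,tag=20
```

The router's firewall rules then govern what crosses between VLANs, exactly as described above.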


i have a similar setup with a PM box and a Ubiquiti Dream Machine Pro. i provision VMs with a Terraform provider, and have a script that processes Terraform outputs into an Ansible inventory INI file to handle configuration. i find it pretty straightforward and could take it further by scripting my VLAN setup, but it changes so infrequently i don't mind doing it manually.


There is no OPNsense script, I think, historically in part because any misconfiguration could expose the Proxmox instance to the world. It is easy enough for advanced users to spin up a VM from the ISO. A request for an OPNsense script was made recently.

I agree about OMV. It can certainly be used as is, but that's not usually how people want to use it. A note was added to the script a few days ago.

> I don't see any glue scripts to get VMs talking to each other

There is a Tailscale script which technically helps them talk to each other (over Tailscale) :)

The scripts are designed to set up self-contained LXC containers. We are trying to avoid building our own k8s.


Great, now I am down the tailscale rabbit hole and just have to use it!

I think I will stick to using Proxmox virtual ports to create my network so I can more easily stick to individual device registration in Tailscale and save on that overhead when I'm home, but then also add Tailscale/Headscale into the mix somewhere so I can tap in via VPN when I am out of the house.

Tailscale and OPNsense are more difficult to get working together due to conflicting project goals (one blocks well, the other opens up well), but it looks like it's worth it to me.


I use Proxmox with an OPNsense VM and have multiple NICs - one is dedicated to the fibre ONT. I also use an external wifi mesh. I have a couple of other VMs (Unraid hosting Docker containers with SATA card passthrough for legacy reasons, and a VM for Home Assistant OS) and lots of other LXCs. It works superbly.


> I don't see any glue scripts to get VMs talking to each other

I'm confused by what you mean here? Don't they just use the network like any other computer?

I haven't had to do any special configuration to get my VMs to talk to each other.


VMs usually have their virtual NICs connected to a bridge interface on the host (like a virtual switch) so they can communicate. Proxmox sets one up by default that is also bridged to the physical NIC you chose for management when you installed it, so it just works.

In the router case, you'd likely want this default one to be the 'internal' network and have a separate interface (either physical or VLAN) for the WAN.


I am not perfectly informed, but in my case, OPNsense would need to be the only VM with access to the incoming NIC port, and all other VMs and the router would need to use virtual network interfaces coming only from OPNsense for incoming traffic. The router would be the only device with direct access to the outgoing NIC port. None of that seemed incredibly difficult looking into it, but still, it was the type of recipe I was expecting when I saw "Proxmox scripts".

And of course this means that the Proxmox box as a whole should have similar hardening to a typical web server, with minor tweaks to allow residential traffic on various other standard ports. So that hardening would probably be another script I would like to see (I don't know what all the proxmox scripts in the first section do).


VMs already use virtual network interfaces, which are by default bridged to `vmbr0`, a bridge that proxmox creates by default which is also bridged to the hardware NIC. For your use case, you simply want to create a second bridge, e.g. `vmbr1`, which is not bridged to the hardware NIC. You would then assign two virtual NICs to opnsense, one on each bridge (WAN and LAN, essentially) and then choose `vmbr1` as the bridge each time you create an "internal" service behind opnsense.

Since selecting the bridge for a service's NIC is part of setting up each service, the only thing such a "glue script" would be doing is creating the `vmbr1` bridge. That's already a one-liner.
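A minimal sketch of what that one-liner amounts to in /etc/network/interfaces (ifupdown2 syntax; `vmbr1` as named above, no physical ports attached), applied with `ifreload -a` or a reboot:

```text
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

The same thing can be done in a few clicks in the GUI under the node's Network tab.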


I was looking at a proxmox/(pfsense/opnsense) tutorial the other day. They recommend binding the WAN interface to vmbr1 (or anything other than vmbr0) since VMs are created with their ethernet bridged to vmbr0 by default. This configuration is what most people want so it'll be a little less work setting up networking.



I'll definitely look into the docker LXC and Home Assistant VM. I'd been using docker in a VM on proxmox, successfully mind you, but perhaps there's some more efficiency to squeeze...


HAOS as a VM on proxmox works well.

I used some of tteck's helper scripts to set up mqtt and zigbee2mqtt LXC containers with a passthrough of the USB zigbee device.
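For anyone curious what that passthrough looks like, here is a sketch of the classic config-file approach; the CTID (101) and device path are assumptions, and newer Proxmox releases also offer a device passthrough option that does the same job:

```text
# /etc/pve/lxc/101.conf
# major 188 is USB-serial (ttyUSB*); ttyACM* devices use major 166
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
```

Using a stable path from `/dev/serial/by-id/` instead of `/dev/ttyUSB0` avoids the device renumbering after reboots.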


The scripts for both these projects work very well. I would recommend Home Assistant OS (HAOS) in a VM over an LXC or Docker.


I decided to run proxmox on my homelab rather than having a k8s setup, and I've come to sort of regret it. LXCs are awesome, but being bound to just them or qemu VMs doesn't fit all of my needs. With Kubernetes I could just add support for lightweight VMs (Firecracker hypervisor, or unikernels or something) with a project like Kata. Proxmox is just not extensible.

It's also just not amenable to automation or reproducible builds in the same way as an established pod manager like Kubernetes: there's no support that I can find for Terraform, and so you're stuck with regular full-disk backups and maybe some Chef/Ansible/Puppet tooling, which I don't want to invest in [re]learning.

Still, very cool resource management and passthrough model, and it's easy to set up and maintain, with a nice control panel.


It's certainly a different model of deployment. I like it, though it does have its warts.

However, there is a (community) TF provider...? https://registry.terraform.io/providers/Telmate/proxmox/late... (I have no experience with it as I typically reach for Ansible).

Also, easy-to-install ZFS makes it hard for me to cajole myself into trying something else. And if I want k8s for play time I can always spin up (a/some) VM(s).


I've been automating deployment to Proxmox for work with the bpg Terraform provider linked in a sibling comment (of mine).

Neither option is particularly complete, and they have some issues. The bpg one does most of the heavy lifting over SSH rather than the API due to missing features; it also has some annoying quirks with data structure, such as VM IPs being returned in multi-dimensional arrays, which means you have to write a bunch of logic to drop localhost and secondary IPs (such as those for Docker virtual networks) and then restructure the output if you want to use the address to set up your DNS, for example.
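The filtering logic in question is tiny but tedious. A hypothetical shell sketch of that post-processing step; the address list is invented for illustration, a real one would come from the provider's output:

```shell
#!/bin/sh
# Drop loopback and Docker-bridge (172.17.x.x) addresses from a VM's
# reported IP list, keeping only what you'd hand to DNS.
ips="127.0.0.1 172.17.0.1 192.168.10.42"

kept=""
for ip in $ips; do
  case $ip in
    127.*|172.17.*) ;;                  # discard localhost and Docker nets
    *) kept="${kept:+$kept }$ip" ;;     # keep routable addresses
  esac
done
echo "$kept"                            # prints: 192.168.10.42
```

In real use you would feed this from `terraform output -json` and likely filter with jq instead, but the shape of the problem is the same.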

It's doing what I need now, but I would not call them "gold" or "platinum" grade, probably "silver".

I'd suggest seeing if Proxmox is better-supported in some other IaC tool and fallback to Terraform as a last resort.



I've pretty thoroughly drunk the NixOS Kool-aid.

For a while I ran Docker Swarm with a bunch of SBCs, then k8s, then just a big server running Ubuntu + Cockpit, then Proxmox, until I finally settled on NixOS.

NixOS has decent container support if necessary, but I've found that its declarative nature means I almost never bother with containers. "Uninstalling" something is generally as simple as "remove it from the config file, rebuild", and it's not hard to do cgroupey stuff if you need to manage memory and the like.

Not to mention that I think NixOS's nginx DSL is wonderful. It's so nice being able to have my proxy configs (along with LetsEncrypt) managed directly (and correctly) by the config environment instead of me writing my own scripts and the like.

(I'm not sure if there are any distributed NixOS things, because I could totally see something neat being built on Flakes)

My homelab has never been simpler and I've never been happier with it.


Any reason you didn't go NixOS in a Proxmox VM? The advantage would be not having to do a full reinstall if anything went wrong and being able to spin up other OSes if needed. The downside would be a few percentage points of performance loss.


NixOS takes a snapshot on every rebuild, which happens pretty much every time you install something or change a configuration setting, meaning that if I screw something up, generally all I have to do is reboot and choose the previous generation.

Of course I could install NixOS inside Proxmox, but part of the appeal of NixOS is that everything in the system is managed by the configuration.


I've used this[1] Terraform provider together with the Talos[2] distribution for deploying a Kubernetes cluster. I agree that the APIs available with Proxmox are not fully featured, but it more than suits my needs.

I'm running a four-node cluster on salvaged SFF machines, backing up LVM snapshots to home-brewed TrueNAS storage, and it all makes me happy.

----

[1] https://github.com/Telmate/terraform-provider-proxmox

[2] https://factory.talos.dev/


You probably know this, but it's good to run a cluster with an odd number of nodes. You don't even need another full node, just a quorum device like an RPi.


Yes, of course. I'm actually in the process of replacing nodes. The original 3x Ryzen 5 4-core 32 GB hosts are being replaced by Ryzen 9 12-core 96 GB hosts; it's just taking a bit of time. As long as I only ever take one down for updates at a time, it's no bother for a home-lab environment.


Proxmox doesn’t preclude you from having k8s. You can create VM(s) in Proxmox and then install k8s on them, then run your app workloads in k8s.

You do have to treat Proxmox VMs like “pets, not cattle” since they are more difficult to automate, but that’s the same story as if you were managing your k8s host on bare metal too. The benefit with Proxmox-hosted VMs though is that you can use Proxmox for whole-VM backups and migrations, so you can have the best of both proxmox and k8s!


Something like Talos gets you pretty close to cattle. You just boot a fresh VM from a generic ISO, then run a pre-defined config against it and it will join the cluster. I haven't looked into it, but in theory you could pre-bake that config into the boot ISO, so adding a new node would literally just be adding a new VM from a template. Of course you'd want to remove the node from the cluster cleanly before just deleting it, though.


I run a couple of Talos clusters on Proxmox at home; I haven't templated them yet as they're pretty static clusters, but I suppose you could use a Proxmox Snippet with the config in and point new Talos nodes at it when they boot.

I've also been using a Terraform provider for Proxmox at work to deploy stuff, but there are only two, both community providers, and neither is gold/platinum tier; good enough for a homelab though, I'd say.


Different technologies are for different approaches and applications.

It's relatively trivial to use the pve command line utility to create or modify vms in proxmox.
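For example, via the `pct` and `qm` utilities; this is a sketch, and the IDs, storage name, and template filename are placeholders for whatever `pveam` has downloaded on your node:

```shell
# create and start a container from a downloaded template
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    -hostname demo -memory 512 -net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200

# the VM equivalent
qm create 300 -name demo-vm -memory 2048 -net0 virtio,bridge=vmbr0
```

Everything the GUI does goes through this same API, so these commands are easy to wrap in your own scripts.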

Still, this post originated with a large collection of useful scripts that help make things more manageable and maintainable, the founder of that project having to step away, and gratitude for their work.


> Still, the originating reason of this post is due to a large number of useful scripts to help make things more manageable and maintainable

Also makes it very quick to try out an application, arguably less time than even docker.


Absolutely.

Docker is a step or two away from packaging installers for the masses.


I hadn't intended to take away from that. And I've used these scripts myself for spinning up resources - they're definitely a help.


But you can just chuck Kubernetes nodes on Proxmox? I have my nodes running on XCP-ng. The beauty of running a hypervisor is maximum flexibility. I can try out different distros etc, either for k8s nodes or otherwise. I run my router on there (opnsense). I can play with stuff like nix and guix and could even install Windows if for some reason I wanted to.



I see Incus also uses/used LXC, which has been my main gripe with Proxmox; I'm intimately familiar with building Docker/Podman images but have never built an LXC.

Now that Incus has shut down the image server [0], is there a decent source for LXC images? I've often struggled to find ready-made images for a lot of things I want to deploy on Proxmox, and if I were to move away, I'd probably want something that uses Docker/Podman for when I don't want to deploy a VM.

[0]https://discuss.linuxcontainers.org/t/important-notice-for-l...


For people who might be confused, Incus is what used to be LXD. It used to be a Canonical project, but people who didn't like their direction forked it and made it much easier to install as well (it was only available as a Snap for a long time). I think the main developer uses openSUSE, so their RPM packages are pretty good.

As for LXD/Incus itself, I sincerely believe it's good software and I like its CLI a lot more, but for my own purposes I've moved to using Proxmox, or LXC directly.


I moved my Proxmox single node home-prod setup to Incus over the last couple of weeks.

Incus feels a lot less…legacy? Old school? Something.

Not a lot different when it comes down to it, though. It's easier to work at the CLI with Incus. Backups are a little less straightforward.


Proxmox isn't legacy, far from it.


I have a single VM on my proxmox server that I spin up all my docker containers in... This is the simplest thing I could think of in terms of config. I haven't had to wire containers together, though, so maybe I've found the sweet spot for my deployment needs.


What capabilities does Proxmox have that are missing from plain KVM? Just a web interface?


Clustering, migration, high availability, backups, Ceph integration, virtual networks as of recent releases, and containers as well as VMs, to name a few off-hand. The web interface is optional, too. You should check out their webpage for more.


re: backups, their other product "proxmox backup server" integrates really well with the backup system they have in place. It can be run in a container on the proxmox host itself.


KVM is just the kernel side of things; it's not a full VMM by itself, you always need some userspace application too. Firecracker, QEMU, and Cloud Hypervisor are some VMMs built on top of KVM.

While QEMU is the common way of using KVM, running QEMU directly is quite annoying. So you have stuff like libvirt and Proxmox as wrappers around QEMU.
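For contrast, a bare QEMU/KVM invocation looks something like this (the disk path is a placeholder); every device and drive is a flag you maintain by hand, which is exactly what libvirt and Proxmox abstract away:

```shell
qemu-system-x86_64 \
    -enable-kvm -m 2048 -smp 2 \
    -drive file=disk.qcow2,if=virtio \
    -netdev user,id=n0 -device virtio-net-pci,netdev=n0 \
    -nographic
```

Proxmox generates and manages command lines like this from the VM config it stores per VMID.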


Fantastic community! I've tried a couple of scripts already. I have running Pi-hole and Paperless LXC containers. I'm looking forward to Appflowy!


Unfortunately, to this day Proxmox doesn't support full-disk encryption out of the box, despite using the word "Enterprise" in the first paragraph of their website. Yes, you can go your own way and install on an encrypted Debian, but you will miss important features and are on your own. It all comes down to ZFS not treating FDE as a first-class citizen.

Very sad state of affairs.


Am I right that proxmox takes over your entire machine?

I have been using a combination of docker and lxc/lxd to manage my VMs. But, cockpit (on ubuntu) does not give me a perfect experience for managing running VMS, etc.

I wish there was a good solution for all of this. But, it feels like you need to cobble together a bunch of kibana tools to get true monitoring.


Proxmox runs on top of Debian. Debian is part of the install, but I think you could install the packages separately on an existing Debian system. You could even install Cockpit if you want to.


I wanted to delegate management of my raid array to higher level tools since it died on me seemingly for nothing (I was able to recover all the drives but none of the files).

I tried TrueNAS but it's very rigid. Proxmox seems to give you more control over what's installed on the server, but it's also quite locked down. I don't remember exactly what it was that pushed me off Proxmox. I think it was that I needed to manage some VMs over the LXD API and others over Proxmox, and I couldn't mix and match; I had to choose one without extra hacks.


Yes, installing Proxmox is akin to installing ESXi.


Usually, Proxmox is the base OS (meaning it replaces whatever Linux/Windows/FreeBSD/etc. there was before). It is possible to run Proxmox inside KVM, but that isn't the usual choice.


Specifically, running Proxmox in a VM is something I'd (at least) only recommend for testing Proxmox itself, not for any production setup (even "production" in my homelab).


Monitoring is there in Proxmox


Tangential question - what are people using their homelab for / what are some interesting or useful projects you've spun up on them? I've been thinking about setting one up but not 100% sure I'd find use out of it :)


At the risk of leaking info to any toes I step on w.r.t. my home environment:

- Home Assistant

- Network Video Recorder

- Jellyfin

- network management such as Ubiquiti or Omada etc.

- Vaultwarden/1Password/other secrets servers

- Tailscale or WireGuard server

- build server / k8s test environment

- private Artifactory or mirror (especially useful if you're using the same distro on a bunch of devices but don't want to overload the actual mirror; also improves download times)

- torrents (someone's gotta seed Wikipedia)

- onsite backups

- bastion into your home network (see also: WireGuard)

- some people even use it for their router

You could also take a look at the tteck scripts, there's a bunch of cool stuff in there


I'm just going to repeat a bunch of what hughesjj has already said, but anyway:

OPNSense (as my household's internet interface), Unifi Controller (as my household's primary wifi), Jellyfin, Wireguard, Pi-hole, LMS[0], Frigate NVR (migrating off ZoneMinder, awaiting delivery of a Coral TPU to finalise this), couchdb (as Noteself[1] back-end), nginx (serving a handful of sites for my own entertainment), Mailu[2], Calibre[3], various other in-flight experiments (which Home Assistant will soon become, Bitmagnet DHT scraper).

Most of the above are docker instances hosted on a small number of VMs hosted on two (or sometimes three) physical machines running proxmox.

[0]: https://github.com/epoupon/lms (HN lurker)

[1]: https://noteself.org/

[2]: https://mailu.io

[3]: https://fleet.linuxserver.io/image?name=linuxserver/calibre or https://fleet.linuxserver.io/image?name=linuxserver/calibre-... (I can't remember which)


I have Proxmox on my older (11th gen) Intel NUC - it runs my Unifi network controller for my WiFi APs, an Unbound DNS cache (because my router doesn't support DNS over TLS or DNS over HTTPS), and a NAS (using ZFS on some hard drives in a USB 3.2 Gen 2 6-drive disk chassis; the ZFS is managed directly by Proxmox, with a Debian container just doing Samba).

That's all for now, though I've just installed Home Assistant and haven't set it up yet. I also intend to try out Jellyfin as a media server and Frigate as a video recorder when I get some cheap cameras.


I started with a home lab a few months back. It's basically an old mini PC running Proxmox virtualization and LXC containers. Mostly it helps me learn various technologies I don't have much experience with in my work. I run 20+ services on the home lab. Some of them include:

Open WebUI, which can connect to the OpenAI API or a local Ollama LLM. You can also connect various tools to the LLM, like a calculator or web search, to augment it. The AI has helped me learn how to configure and debug stuff. For example, I got step-ca to set up a local certificate authority and issue certificates to my various internal services. I played around with configuring Caddy and Nginx along with ACME against the step-ca. The LLM was even helping me debug my config files.

I'm also using Hoarder for bookmarking and it can use AI to automatically tag your bookmarks. It can even backup the webpages.

I've been using Mealie to clip and save online recipes.

I'm running Uptime Kuma to check whether my computers and services are up; if they go down, I'll get a notification.


Also just started out a few months ago:

- Home Assistant (just a few currently, more soon)

- Paperless: absolutely awesome document management system

- Immich: image management with automatic sync of the photos taken on my mobile (ML features)

- Tailscale

- Stirling-PDF: simple tools for all things PDF


- Home Assistant, Mosquitto, Scrypted, Frigate, rtl_433

- Postgres, Maria, Influx, Grafana

- Plex, Arr suite, Transmission

- Ollama, OpenWebUI

- Web change tracker

- Teslamate


I doubt you have a whole lot of control over this, but this website is misery to use on a phone. Browsing scripts gives you six or ten tiles per page, navigation buttons are only in the header. There are no page numbers or any indication of progress. At a glance, there appears to be no way of filtering or sorting, although there is some arbitrary grouping being applied in the middle of the list. Also the script descriptions only show three lines of text, which is not nearly enough to give a clue to what the script is for.

This is probably the worst implemented list view I've ever seen. Completely useless.


Not sure if we're seeing the same site but this site is better than 90% of the web when accessed via mobile for me. No page number when navigating forward/back through scripts, true, but I doubt you want to go through more than 1 or 2 pages without searching first. Search worked for me and turned up what I wanted on the first hit. Descriptions being 3 lines is pretty much the best it can do with the limited screen space; to improve this I feel you would need dedicated summarized content for mobile, which is a price that most platforms don't want to pay. Your description made me think that this was in the bottom 1% of mobile sites but honestly it's above average.


Heartwarming to see the community response here, long live tteck.

I’ve just built my first homelab and have favored OpenMediaVault which seems better suited for my use.



