I have an M4 Mac Mini running the following in a VM:
- OpenWRT (previously OPNSense & once Mikrotik RouterOS) using 2x 2.5Gbps Ethernet NICs via USB-C
- OpenMediaVault (Exposing a 4-bay DAS via USB-C, 2x3TB Drives in Btrfs RAID-1)
- HassOS (Home Assistant OS)
On the host, I'm running Ollama and a reverse proxy through Docker.
The whole thing idles around 7 watts - I've seen peaks of 12 W when running LLM queries. The drive bay actually draws more power than the Mini itself.
Through power savings alone, it will pay for itself within 5 years compared to my previous AMD Zen 2 build.
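For reference, the host side of a setup like this might be sketched with a Compose file along these lines (illustrative only - the Caddy proxy and volume layout are my assumptions, not necessarily what's running here):

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama   # persist downloaded models
  proxy:
    image: caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  ollama:
```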
My question was rather about macOS guest(s) on a macOS host. Unlike specialised Linux distros (Home Assistant, OpenWRT...), macOS doesn't strike me as particularly minimalistic, so I wonder about the overhead and plain storage requirements of just running them idle...
I understand that for macOS or iOS development one would want template environments that are easy to spawn and destroy repeatedly.
Out of curiosity, why not containers for OMV and Hass? QoS? And I’m dying to know what you are using OpenWRT for. I’m looking at setting up a Mini as well, and have been using Colima/Lima to run containers on Rosetta/macOS vz locally, and it seems to work well enough.
I assume you mean something other than OMV run in a container? The reason I put it in a VM was that I wanted to use a Linux-compatible file system. I’m using Btrfs with RAID; I’m sure I could have run APFS in RAID instead!
For Hass, it was partly QoS, partly because I had historically run all of my things separately. I might look at bringing that down to the container level.
I’m using OpenWRT as my main router! One port to my LAN switch, one to the modem.
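For anyone curious, that two-port layout maps onto OpenWRT's UCI network config roughly like this (illustrative sketch; the device names are assumptions):

```
config interface 'lan'
	option device 'eth0'
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'

config interface 'wan'
	option device 'eth1'
	option proto 'dhcp'
```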
What is your use case where having ECC is critical? Assuming something related to complex calculations that cannot fail and take a long time to process?
If you care about your data, you want ECC. If a bitflip happens in memory, ZFS, for example, will not catch the error and will happily write out the corrupted file.
This sort of thing probably doesn't work particularly well with file auto-saving, which is almost necessary for some language servers to reprocess files in "real time" in VSCode. Aren't you also just committing half-baked work and totally menial changes?
In the future, consider hosting with the following:
* Flokinet (long-standing host that stands for free speech, very seriously presented)
* Njalla (created by a Pirate Bay founder; will actually troll any copyright trolls, but seems reliable)
* Cockbox (a silly but also likely reliable option)
Also it seems that you'd have been fine using Cloudflare without giving them any identifying details - when I signed up way long ago, they didn't ask for anything...
Whilst this is theoretically a nice idea for security/privacy, wouldn’t there be throughput/latency concerns over mobile networks if you aren’t just connecting to one server and getting all your notifications in one go? One of the biggest slowdowns on mobile data is the initial handshake, so repeating that for each of your apps doesn’t seem worth the trade-off to me.
This is the goal, in the same way Google does it for FCM: Play Services maintain a connection with Google's servers to get notifications and then dispatch them to the different apps. Here it's the same, but with a server you control and know won't read whatever is in the notification, and without draining the battery, because only the UnifiedPush app maintains the connection.
Notifications don't appear to be end-to-end encrypted. They are encrypted in flight, but they change formats while on the servers. For example, the application server contacts the push server using Web Push (like a webhook), but the communication from the push server to the distributor app on the end device uses some other protocol such as XMPP, Server-Sent Events, or WebSockets. This obviously requires decryption.
But the lack of end-to-end encryption shouldn't be a big issue, as @morith pointed out. The application server can send a notification to the end user application simply asking it to refresh. The second image on their home page (https://unifiedpush.org/) illustrates this.
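A rough sketch of that content-free approach: the application server POSTs an opaque "refresh" hint to the push endpoint the distributor handed out. The endpoint URL and the `TTL` value here are illustrative, not from any real deployment:

```python
import urllib.request

def send_refresh_ping(endpoint: str) -> urllib.request.Request:
    """Build a content-free push: the body carries no message data,
    only a hint that the app should fetch new content itself, so the
    push server never sees anything sensitive."""
    body = b"refresh"
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/octet-stream", "TTL": "86400"},
        method="POST",
    )

# Hypothetical endpoint for illustration; a real one comes from the
# distributor app at registration time.
req = send_refresh_ping("https://push.example.org/endpoint/abc123")
```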
Yeah, existing proprietary (e.g. Apple/Google) push notifications are not E2EE, so apps that need to convey sensitive information (secure messaging apps, etc.) tend to take this approach already regardless of push provider.
IIUC notification contents are application-defined. So if you're building a weather app and don't care about encryption, your content can literally be "It's going to be sunny."
Signal, on the other hand, would only send "please refresh" and then it's the app's responsibility to act on it.
Best part is that neither of those follows the "native style" either.
There's no dark uxtheme provided by Windows, so Qt falls back to its Fusion style. Even the "native" light theme doesn't follow the new Fluent style, so that's also a shitshow.
As for WinUI 3, apps are supposed to bundle it, so if you target Windows 11 and run the app on Windows 10, it'll look all rounded and drop-shadowy instead of flat like native Windows 10 apps. Great stuff.
As someone who has been programming since 1986, with experience across multiple native and Web frameworks, and who has been back doing Web development since 2019: a lot of developers are too lazy to learn how to write portable code.
Not even portable Web code, they rather ship a copy of Chrome, and then complain Google has taken over the Web.
TBH I'd take a Qt app over any Electron/web app... it still looks more consistent and matches the OS better. Heck, even Java Swing (IDEA) looks better and feels more native than any Electron crap.
Not sure... It's mostly down to the L&F used. You have FlatLaf (inspired by IdeaUI). Swing is a surprisingly solid platform (though it has its quirks, and sadly its development virtually stopped years ago).
Drivers have the right to crash the system in my book - software doesn't. Microsoft needs to take a stronger stance on antivirus and kernel-based software in general and push Defender as the de facto antivirus for Windows.
I'm wondering this as well - the article suggests it would be able to run for longer thanks to the vision/sensory systems being driven by a very low-wattage system, requiring a much smaller battery... but I'm not convinced that a slightly larger system, perhaps with wheels or the ability to jump, would be outpaced by this particular "biohybrid".