I'll second this, and add that while Frigate remains more of a project to set up, it has made significant advances over the last few years. So if you previously looked at it and were put off, it might be worth looking at again.
Also fwiw, if someone is willing to spin up a Windows VM or is running that stack anyway, then Blue Iris is probably the default contender for local security camera software, and it's well polished. I know a few people who still keep a single remaining W10 install with GPU passthrough just for it, not even for games anymore now that Linux has gotten good enough in the last few years.
All of this, though, benefits a lot from already having some sort of homelab and/or self-hosted stack. If you do, the marginal investment may be pretty minimal and the value quite high, since you use it for a lot of other stuff. If you're starting from scratch it's a lot more of a haul, which of course is precisely why a lot of people use other solutions.
I dabbled earlier but started using 1Password in earnest around 2010 with 1PW3. There are plenty of things that could be argued about when it comes to the switch from a native Mac application to Electron, degradations in the GUI, etc; some of us may be more sensitive than others. But one major objective thing you're apparently missing was the shift to a forced subscription, including deactivating previously supported sharing methods, with the typical-for-the-VC-driven-feudalism-model eye-wateringly expensive and inferior multi-user support. Pure, proud rent seeking. And then naturally the artificial segregation of simple features like custom templates began too.
I hope someday that's made illegal. In the meantime there's Vaultwarden.
As another alternative, rather than using Touch ID you can set up a YubiKey or similar hardware key for login to macOS. Then your login does indeed become a PIN with three tries before lockout. That plus a complex password is pretty convenient without being biometric. It's what I've done for a long time on my desktop devices.
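In case it helps anyone, the rough flow is sketched below, from memory: the ykman slot/algorithm choices are just common defaults, and you should check man sc_auth and your key vendor's docs for the specifics.

    # Generate a PIV key + certificate on the hardware key
    # (ykman is Yubico's CLI; 9a is the standard PIV authentication slot)
    ykman piv keys generate --algorithm ECCP256 9a pubkey.pem
    ykman piv certificates generate --subject "CN=login" 9a pubkey.pem

    # Plugging the key in normally triggers macOS's SmartCard Pairing
    # dialog; sc_auth can list identities and pair manually if not
    sc_auth identities
    sc_auth pair -u yourusername -h <identity-hash>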
Thanks for sharing, I hadn't seen it, but at almost the same time he made that post I too was struggling to get decent NAS<>NAS transfer speeds with rsync. I should have thought to play more with rclone! I ended up using iSCSI instead, which was a lot more trouble.
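For anyone else in that spot, the main rclone win over rsync here is parallelism: several files in flight at once instead of a single stream. A rough sketch (the remote name, paths, and counts are all placeholders to tune):

    # One-time: define the far NAS as an SFTP remote called "nas2"
    rclone config create nas2 sftp host 192.168.1.20 user admin

    # Copy with 8 files in flight instead of rsync's one
    rclone copy /tank/media nas2:/tank/media --transfers 8 --checkers 16 --progress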
>In fact, some compression modes would actually slow things down as my energy-efficient NAS is running on some slower Arm cores
Depending on the number/type of devices in the setup and the usage patterns, it can sometimes be effective to have a single more powerful router and use it directly as a hop for security or compression (or both) to a set of lower power devices. I know it's not E2EE in the same way to send unencrypted data to one OPNsense router, tunnel it over the internet with WireGuard (or Nebula or whatever tunnel you prefer) to another, and then go from there to a NAS. But if the NAS is in the same physically secure rack, directly attached to the router by hardline (or via an isolated switch), I don't think it's enough less secure at the private service level to matter in practice. If the router is a pretty important linchpin anyway, it can be favorable to lean more heavily on it so one can go cheaper and lower power elsewhere. Not that more efficiency, hardware acceleration etc are at all bad, and conversely it sometimes makes sense to have a powerful NAS/other servers and a low power router, but there are good degrees of freedom there. Handier than ever in the current crazy times, where hardware that was formerly cheap and easily available is sometimes now a king's ransom or gone entirely and one has to improvise.
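As a concrete sketch of that router-terminated hop (keys, addresses, and the LAN subnet are all placeholders), the WireGuard config on the NAS-side router would look roughly like:

    [Interface]
    # This router terminates the tunnel; past here, traffic to the NAS
    # travels unencrypted over the trusted hardline/isolated switch
    PrivateKey = <router-b-private-key>
    Address = 10.0.0.2/24
    ListenPort = 51820

    [Peer]
    # The far router; AllowedIPs covers its tunnel IP plus its LAN
    PublicKey = <router-a-public-key>
    AllowedIPs = 10.0.0.1/32, 192.168.10.0/24
    Endpoint = router-a.example.com:51820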
My personal favorite is iHP48 (previously I used m48+ before it died), emulating the HP 48GX with MetaKernel installed that I used through college. Still just so intuitive and fast to me.
>Is this going to be like that submarine that guy built to bring people to the Titanic?
Doubtful. It might seem counterintuitive, but in some ways space is an easier problem than underwater, at least once you get up there. The pressure differential between ~vacuum and 1 atmosphere is obviously just one atmosphere, and outward instead of inward, whereas in salt water you add 1 atm of pressure (14.7 psi) at almost exactly the 10m mark. The Titanic wreck (which is what the sub you're referring to was designed to reach) is at 3800m, where the pressure is around 380 atm (~5600 psi). Any failure there is absolutely catastrophic, with no time to react. A space station, by contrast, can handle small leaks just fine for quite a while (as the ISS has had to [0]) if it has some buffer: it's "just" a supply loss, and if it became too much it would mean people getting into a safe area or suits and, in the worst case, eventually abandoning the station, but it doesn't go pop like a soap bubble. And such leaks can definitely be patched. Assuming normal proven safety procedures are followed (most importantly having some margin, plus constant backup lifeboats or safe rooms sufficient for all humans on board until all can get to Earth), an impact or mistake or the like might put the station out of business but should be very survivable.
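The back-of-envelope behind those numbers, as a quick sketch (standard seawater density assumed):

    # Rough hydrostatic (gauge) pressure at the Titanic wreck depth
    rho = 1025       # seawater density, kg/m^3
    g = 9.81         # m/s^2
    depth = 3800     # m

    p_pa = rho * g * depth        # ~38.2 MPa
    print(p_pa / 101325)          # ~377 atm
    print(p_pa / 6895)            # ~5540 psi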
At any rate, nothing like the Titan, where IIRC the implosion's collapse front went supersonic, and thus they literally never knew what did them in: the collapse was far faster than human nerve signal propagation (120 m/s at best, usually lower).
>But since the impact point isn't guaranteed to be the periapsis or apoapsis, the above mentioned diametrically-opposing point also cannot be guaranteed to be an apsis.
You're correct on the generalized math here, no argument at all, but this also feels like it's getting a bit away from the specialized sub-case under discussion: an existing functional LEO satellite getting hit by debris. Those aren't in wildly eccentric orbits but rather station-kept, pretty circular ones (not perfectly circular of course, but +/- a fraction of a percent isn't significant here). So by definition the high and low points are ~the same, and since the collision point itself stays on each fragment's new orbit, the new low point of generated debris on eccentric orbits will be at worst no higher than the current orbit of the satellite (short of a second collision higher up, the probability of which is dramatically lower). All possible impact points on a circular orbit are ~equivalent. And in turn, if the satellite is at an altitude low enough to have significant atmospheric drag, the debris will be as well, which is the goal.
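A quick numerical check of that claim, as a toy 2D sketch with made-up impulse magnitudes: every post-collision orbit still passes through the collision point, so no fragment's perigee can exceed the original circular radius.

    import math, random

    MU = 3.986e14          # Earth's GM, m^3/s^2
    R = 6771e3             # ~400 km circular LEO radius, m
    v_circ = math.sqrt(MU / R)

    def perigee(rx, ry, vx, vy):
        """Perigee radius from a 2D state vector via vis-viva."""
        r = math.hypot(rx, ry)
        eps = (vx*vx + vy*vy)/2 - MU/r     # specific orbital energy
        a = -MU / (2*eps)                  # semi-major axis (elliptical)
        h = rx*vy - ry*vx                  # specific angular momentum
        e = math.sqrt(max(0.0, 1 + 2*eps*h*h/MU**2))
        return a * (1 - e)

    # Kick the velocity randomly at the collision point (R, 0); the
    # resulting perigee never exceeds R
    worst = 0.0
    for _ in range(100_000):
        dvx = random.uniform(-500, 500)
        dvy = random.uniform(-500, 500)
        worst = max(worst, perigee(R, 0, dvx, v_circ + dvy))
    print(worst <= R + 1)   # True (1 m of slack for float rounding)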
I'm sitting behind one of these right now, got it back at the start of last November, attached to an Ergotron monitor arm. It's worth noting that there are a number of screens coming out that seem to use roughly this panel but with different price points and feature mixes; MacRumors (and no doubt others) maintains a nice little dedicated thread [0] on 6K screens with the info in easily digestible form. On a meta-note, the most exciting thing to me is simply that we're finally seeing a big leap forward all at once in the screen fundamentals (resolution, refresh, color) after a long and frustrating (to me anyway) period of stagnation. Apple did the first 5K iMac iirc in 2014, twelve(!) years ago, and I thought at the time it wouldn't be long before we had a range of higher res options. Instead Apple eventually did a standalone, LG did a release long after, there were a couple of rando ones from Dell that got dropped... and that was it. Now we've got lots of 5K and 6K options, 8K ones are coming, and while it's currently 60Hz, CES has seen higher refresh announced, so the next few years are looking good. The LG doesn't take advantage of TB5's full bandwidth, but having 120 Gbps on tap means plenty of headroom for everything: high resolution, high refresh, and higher color bit depths without having to compromise. So that's all pretty nice.
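Back-of-envelope on that headroom, as a sketch (I'm assuming a 6144x3456 panel and ignoring blanking/encoding overhead, which adds a chunk more):

    # Raw uncompressed video bandwidth for a 6K panel
    w, h = 6144, 3456
    bpp = 30                       # 10 bits per channel, RGB

    for hz in (60, 120):
        gbps = w * h * hz * bpp / 1e9
        print(f"{hz} Hz: ~{gbps:.0f} Gbps raw")
    # ~38 Gbps at 60 Hz, ~76 at 120 -- both under TB5's 120 Gbps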
As far as this one specifically, on a physical level it's perfectly decent. I actually like that, unlike the previous LG and most screens nowadays, there is no camera at all. The only real irritation is the ginormous power brick, which is bigger and heavier than a Mac Mini and on top of that has a fixed cord (a SHORT fixed cord), which I hate. I prefer having power integrated so you just use a normal power cable, but if nothing else it's irritating that even on high end electronics OEMs still don't use GaN and shrink everything a lot.
I'm no longer doing significant graphics work so haven't invested in updating color calibration hardware, and none of my old gear works with current higher bit-depth/HDR screens. I'm mostly doing coding, CAD, light non-print graphics, etc, so my impressions are purely subjective. In no particular order, vs the older 5K and other screens I've used:
• Whether good luck or just (not) bad luck, quality control on the physical parts hasn't been an issue: no banding, dead pixels, light/dark patches, or the like that some other comments report.
• It claims to be cutting edge in terms of IPS displays, "nano IPS black" blah blah, but there isn't any significant noticeable contrast increase vs the old one. It's definitely excellent for a standard IPS display, but OLED/µLED it is not (though conversely I have no concerns about it being on hours a day displaying static GUI elements).
• Matte instead of glossy doesn't really do anything for me, since I reoriented my office space long ago because everything was glossy. There is a slight shimmer if I focus on it that bothered me a little when new, but I don't notice it after a few months. I don't think it's quite as good as Apple's matte treatment, and for myself I'd probably go back to glossy given the choice. YMMV based on lighting.
• The software situation is mediocre. I have not been able to get LG's software to perform a firmware update (it fails with odd error messages), so I haven't been able to experiment with some of the modes it was advertised with. Their software wants a lot of invasive permissions and is wonky, and LG support has not been helpful. Newer screens will presumably ship with current firmware out of the box at some point, but this was disappointing.
• Also on software, at least under macOS 15 the HDR story seems a bit odd. It's the first desktop Mac screen I've used that has an HDR toggle in System Settings, and enabling it does make HEIC photos and a few other workflows I surveyed work more like an MBP screen. However, it also causes the Mac GUI colors to get all washed out and strange; there's no compensation with just the toggle. This may be improved in macOS 26, or might be something one of the Studio modes will help with if I can ever get access to them, but it isn't plug-and-play here.
• If I do toggle it on, having HDR support with true 10-bit is noticeable when working with high bit depth photos, including everything from any recent iPhone.
• Having TB bandwidth out of the hub doesn't matter much to me, but it does work and means the TB5 input isn't totally wasted. It's sometimes convenient to have an extra port. This would probably be of more value to someone using a notebook, which is clearly the intended use-case.
Anyway, it's fine. I needed a new screen, and it gives me a noticeably improved amount of screen space for my aging eyes while still being on the right side (for me) of needing a curve, though it's right on the edge. I've run 2- and 3-screen (1920x1200) setups for primary use in the past (i.e., all for regular system use, vs having a secondary proof/video screen like I do now), and there are pluses and minuses, particularly around having one vertically oriented, but it's not bad to have so much space as a single unified thing.
I think most people would be better off waiting; this was clearly not all baked yet when I got it, and there is plenty of competition here or coming, but I'm not returning it either. I'm looking forward to hopefully finally seeing screens that are arguably "done", basically hitting the limits of human visual acuity in all respects (or at least to the many-9s level of diminishing returns), in the next few years. And longer term I'm kind of curious what effects that might have on the industry: for my entire life, progress in video, unlike audio, has been constant and there was always clearly more to do. Once resolution and refresh stop improving and monitors are "finished", I wonder if that might be interesting for media in terms of reducing the technical rat race?
>OK, they are foreigners, but does that mean they can ignore the law and avoid paying fines?
Uh, yes? Obviously? If they're willing to pay the much bigger price of losing all business in the country completely and never visiting again. The law and fines don't apply globally, only in Italy, so if someone isn't Italian and is willing to give up on a not-insignificant market (and beautiful country), then sure, why wouldn't that be the end of it? Seems like it balances out; I'm sure Cloudflare does way more than 14M of business in Italy, so they're not going to be casual about it. And remember, this is just pitting one arbitrary set of business interests (IP monopolists) against another (an internet service company and its customers). Why should Italy be immune from getting pushed on by the second party and from facing the cost tradeoffs? Why privilege the former commercial interests over the latter?
Worth mentioning that links at home can use them too. Jumbo frame support was rare at one point, but now you can get it on really cheap basic switches if you're looking for it, even incredibly cheap $30 ones (literally, that's what a 5-port UniFi Flex Mini lists for direct). It's not just an exotic data center thing anymore, and it can cut down on overhead within a LAN, particularly as you get into 10/25/40/100 Gbps links to your own NAS/SAN or whatever.
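One caveat if you flip it on: the MTU has to be consistent end-to-end on the segment or you get weird stalls. A quick Linux-side sketch (the interface name and host are placeholders):

    # Set a 9000-byte MTU on the NIC (and on the switch ports + NAS)
    ip link set dev eth0 mtu 9000

    # Verify: 8972 = 9000 minus 28 bytes of IP + ICMP headers;
    # -M do forbids fragmentation, so this fails if any hop is smaller
    ping -M do -s 8972 192.168.1.50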