What's more concerning IMO is that despite the fact that I can't find a CLA, their pricing page claims that if you buy their support contract, they'll give you the software under a "Commercial" license: https://min.io/pricing
I worked on this while on the Instagram Server Framework team to leverage LibCST and solve some of Instagram's own problems. We took a lot of inspiration from other lint frameworks (notably, Flake8 and ESLint).
I left the team before most of the work to open source it happened. It looks like a lot of work went into the documentation, command line interface, and configuration system. Congrats to the whole team! I hope this can be useful to other projects outside of Facebook.
I'm with you, and I prefer "dumb" devices, but you can still buy these blu-ray players and not connect them to the internet.
The internet connectivity is sold as an additional feature so that you can use your blu-ray player to watch Netflix. I agree that I don't want logging on a device like this, but if I was going to connect one of these to the internet, I would at least want regular security updates.
That’s what my in-laws did. “For some reason you hadn’t connected your smart TV to the internet; instead of using your Apple TV, we gave TCL your WiFi password. Aren’t you proud we figured it out on our own?”
I really doubt it. My 2015 Sony Bravia has a similar 4K VA panel that I'm largely happy with. I absolutely hate that TV for reasons besides its panel, and I regret blowing £850 on it.
You're not going to find IPS or OLED panels on those large form factor monitors for a sensible price, so do consider that.
Also keep an eye open for NEC digital signage displays on eBay; they're quite common, often coming from liquidated businesses. I bought a few of them for the office on the cheap, and they're solid as long as you avoid the really old plasma models.
Traditionally, a computer monitor would be superior in quality since it is used at a closer distance, where things like dead pixels are much more noticeable, in contrast to a TV that's mainly used for video at a longer distance.
With LCD monitors now available in sizes as large as TVs and with the same resolutions, I suspect there won't be much difference, though panels intended for TVs may still have more allowable defects.
If you're going to that sort of length, maybe just filter DNS requests from the TV to whitelist Netflix, Amazon Prime Video, etc., and block everything else. A custom router could go one step further and only allow outbound traffic to IP addresses that were previously resolved through DNS.
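The DNS-side check could be as simple as a suffix match against an allowlist. A minimal sketch of that matching logic (the domain names here are just illustrative, and a real deployment would do this inside the resolver, e.g. dnsmasq or Pi-hole, rather than in application code):

```rust
/// Return true if the queried domain is on the allowlist, either
/// exactly or as a subdomain (suffix match on ".domain").
fn allowed(domain: &str, allowlist: &[&str]) -> bool {
    allowlist
        .iter()
        .any(|d| domain == *d || domain.ends_with(&format!(".{}", d)))
}

fn main() {
    // Hypothetical allowlist; the real streaming services use many domains.
    let allow = ["netflix.com", "nflxvideo.net", "primevideo.com"];
    assert!(allowed("api.netflix.com", &allow));
    assert!(allowed("netflix.com", &allow));
    assert!(!allowed("telemetry.tv-vendor.example", &allow));
    // The leading dot in the suffix check prevents lookalike matches:
    assert!(!allowed("evilnetflix.com", &allow));
    println!("ok");
}
```

The dot prepended in the suffix check matters: without it, `evilnetflix.com` would match the `netflix.com` entry.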
> you can still buy these blu-ray players and not connect them to the internet.
Unfortunately this is only a temporary solution IMO. Within the next decade I think you'll see these smart devices shipping with built-in connectivity that's difficult or impossible to disable, especially if Starlink or other satellite-based services really take off.
I don't think these kinds of devices will use Starlink or an equivalent service any time soon. Starlink needs a "pizza box sized" satellite dish that constantly adjusts its position to stay in contact with the satellites. I assume they won't work indoors, like other satellite-based antennas.
If smart devices do get built-in connectivity in the next decade, I think 5G is a more likely candidate. But I don't see that happening either. Why would a company pay for its users' data when most people will just connect the device to their WiFi?
Sure, there's probably less exposure from nuking the profiles directory, but I don't think you're being fair to containers:
- Containers are a feature built into Firefox; these extensions just expose a UI for it. The Multi-Account Containers extension [1] is published by Mozilla, so you don't need to trust anyone but Mozilla to use that base set of functionality.
- The container functionality in Firefox is the result of some work from the Tor Browser being upstreamed into Firefox [2]. It seems reasonable to assume that it's well-implemented.
- The limitations of the extension that you linked to don't seem any worse than your profile-segmentation approach. It's just saying that it's possible for multiple websites to get opened in the same container, which is similar to how you could end up opening multiple websites in the same profile.
I reworded my comment. My point remains that a lot of possible issues disappear if I just choose to destroy the entire directory. There is a clear difference between one Chrome instance (and all its associated windows/tabs) segmented into one profile directory and in-process segmentation. The issues at [1] seem to involve unexpected behavior: you're asking for a new temporary container, but you can't know ahead of time whether that's what you will get. I write "seem" because it's not clear from the description how that interaction works; it could be better worded.
With segmentation enforced at the instance boundary (rather than within an instance), there is no unexpected behavior of this sort. All links open in the segmented instance that the browser window/tab you're using belongs to. If you want a new container, you start a new instance, and you know that's exactly what you will get. There is no possible "fail open" result. Note that I'm not saying the Firefox behavior you described is a major issue, just that it shows unexpected scenarios are possible.
Moreover, jedberg is correct that cross-profile data leaking is possible (partly what I meant by "implementing profiles" properly), except that it's very easy to see whether that's happening without auditing Chrome: use a tool that records all filesystem operations (e.g. dtrace on macOS).
At the end of the day, I choose one set of trade-offs over another.
See if your preferred distribution has a kernel backport available. Most do, because the kernel has relatively few dependencies and is backwards compatible, which makes it easy to backport.
E.g. Debian currently has a 5.5 kernel in buster-backports [1]. However, Debian's backported kernel doesn't get the same level of attention and isn't subject to the same security policies as stable releases, so it may lack some security patches that the stable kernel has.
If the data structure has a reference count of one, you can safely mutate it in place instead. Many persistent immutable data structure libraries use this as an additional optimization.
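Rust's `Rc::make_mut` is one concrete, standard-library form of this optimization: it mutates in place when the reference count is one and clones the underlying data only when it is shared. A small sketch:

```rust
use std::rc::Rc;

fn main() {
    let mut a: Rc<Vec<i32>> = Rc::new(vec![1, 2, 3]);

    // Sole owner (refcount == 1): make_mut mutates in place, no copy.
    let before = Rc::as_ptr(&a);
    Rc::make_mut(&mut a).push(4);
    assert_eq!(before, Rc::as_ptr(&a)); // same allocation

    // Shared (refcount == 2): make_mut clones first, so `b` keeps
    // observing the old version -- the persistent semantics survive.
    let b = Rc::clone(&a);
    Rc::make_mut(&mut a).push(5);
    assert_eq!(*b, vec![1, 2, 3, 4]);
    assert_eq!(*a, vec![1, 2, 3, 4, 5]);
    assert_ne!(Rc::as_ptr(&a), Rc::as_ptr(&b)); // copied on write
}
```

The key point is that the copy is skipped exactly when no one else can observe the old version, so callers can't tell the difference.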
> I'm surprised that smaller companies have not moved away from static pricing yet.
A physical server has a fixed set of resources. If you let customers pick which resources they want to buy, you'll run out of one resource before the rest, leaving the server underutilized and increasing your costs. You can't sell KVM instances with no RAM, no disk, or no CPU.
If you have enough physical servers and small enough customers, you can mostly solve this by carefully figuring out which customers to put on which machines. However, you've created a complicated jigsaw puzzle. And what happens if you need to shift customers between servers to balance things out?
I think Google is able to offer custom machine types because they're big enough, and because they're able to perform live migrations between physical hosts (most hosting providers can't do this), so the customer likely won't even notice if they need to be migrated.
Instead, most of these providers offer a few different SKUs to satisfy different use-cases, or they just pick one target market (e.g. lots of disk space for backups or lots of CPU resource for compute) and focus on that. A few offer networked block storage, which gives some more flexibility, but at the (potential) cost of reliability and performance.
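The jigsaw puzzle above is multi-dimensional bin packing. A toy first-fit sketch (illustrative numbers and struct names, not any provider's actual scheduler) shows how one dimension can strand capacity in the others:

```rust
#[derive(Clone, Copy)]
struct Res {
    cpu: u32,
    ram_gb: u32,
    disk_gb: u32,
}

/// First-fit placement: put the order on the first server with room
/// in *every* dimension. With custom shapes, one resource (say, RAM)
/// can run out while CPU and disk sit idle and unsellable.
fn place(servers: &mut [Res], want: Res) -> Option<usize> {
    for (i, free) in servers.iter_mut().enumerate() {
        if free.cpu >= want.cpu && free.ram_gb >= want.ram_gb && free.disk_gb >= want.disk_gb {
            free.cpu -= want.cpu;
            free.ram_gb -= want.ram_gb;
            free.disk_gb -= want.disk_gb;
            return Some(i);
        }
    }
    None // no single server fits, even if the fleet has capacity in aggregate
}

fn main() {
    let mut fleet = vec![Res { cpu: 8, ram_gb: 32, disk_gb: 500 }];
    // A RAM-heavy order exhausts memory on the only server...
    assert_eq!(place(&mut fleet, Res { cpu: 2, ram_gb: 32, disk_gb: 100 }), Some(0));
    // ...so even a tiny follow-up order has nowhere to go.
    assert_eq!(place(&mut fleet, Res { cpu: 1, ram_gb: 1, disk_gb: 10 }), None);
    println!("6 CPUs and 400 GB of disk left, but no RAM to sell them with");
}
```

Fixed SKUs sidestep this by only selling shapes that tile the hardware evenly.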
> None in Asia though? You need it to be in Taiwan to service the region well!