As open source models improve, OpenAI needs to keep improving its models to stay ahead of them. Over time though, if it hasn’t already happened, the advantages of OpenAI will not matter to most. Will OpenAI be forced to bleed money on training? What does it mean for them over the next few years?
I think the government is late to the game in this instance. I would have been in the “break it up” camp until this year. I see Google’s search monopoly going away in the next few years with GenAI.
I don’t think prices came down at all though. Maybe lowering prices would increase demand, but they don’t want to, or rather cannot, if they want to keep the brand premium. So it’s not news.
Unknowingly, you made the perfect argument against the Vision Pro. What you described is a VR experience, which is precisely what Apple set out not to do. They wanted an MR device and MR experiences. So if an immersive experience is what you want, a cheap device will deliver it over time as the hardware catches up.
This is Texas so I am inclined to think the judge is a political hack installed by someone to issue judgements that are paid for. As much as I don’t want to, after seeing the recent Supreme Court and other rulings, this is the first thought that came to mind.
This isn’t new, is it? Every startup in the last decade-plus has existed to slurp user data one way or another. User data accumulation is the single most popular value proposition for startups and what drove funding. In that sense AI is not new; it supercharges everything, including user data aggregation. It would be surprising only if this were a new trend.
Makes sense. Chromecast is built into most TVs. That way they can eliminate the hardware costs and focus fully on software. The TV manufacturers will be happy to work with them since they, hopefully, will get a share of the ad revenue too.
Also this is the first product they killed that I agree with.
Except that means that you'll have to hook the TV up to the internet instead of just connecting a dongle to it (which means the TV may spy on you and/or display ads), and when they inevitably stop supporting it you'd have to replace the whole TV instead of just a dongle.
At least there still is a separate device you can hook up, at least for now, though it's more expensive, clunkier, and packed with a bunch of needless stuff.
FHE is cool but I wonder how many use cases it actually fits. Don’t get me wrong, it gives better security guarantees for the end user but do they really care if the organization makes a promise about a secure execution environment in the cloud?
Also from an engineering point of view, using FHE requires a refactoring of flows and an inflexible commitment to all processing downstream. Without laws mandating it, do organizations have enough motivation to do that?
I think the main thing that throws it into question is when you get the software that sends the data to the service and the service from the same people (in this case apple). You're already trusting them with your data, and a fancy HE scheme doesn't change that. They can update their software and start sending everything in plain text and you wouldn't even realise they'd done it.
FHE is plausibly most useful when you trust the source of the client code but want to use the compute resource of an organisation you don't want to have to trust.
I assume companies like it because it lets them compute on servers they don't trust. The corollary is they don't need to secure HE servers as much because any data the servers lose isn't valuable. And the corollary to that is that companies can have much more flexible compute infra, sending HE requests to arbitrary machines instead of only those that are known to be highly secure.
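To make the trust model above concrete, here is a minimal sketch of homomorphic computation using the Paillier scheme, which is additively (not fully) homomorphic: the "server" can add two encrypted values by multiplying their ciphertexts, without ever seeing the plaintexts. This is an educational toy with tiny hardcoded primes, not a secure implementation; real deployments use vetted libraries and much larger keys.

```python
# Toy Paillier encryption: additively homomorphic, for illustration only.
# Hardcoded small primes; real keys are ~2048-bit and randomly generated.
import math
import random

def keygen(p=10007, q=10009):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 5), encrypt(pub, 7)
# The untrusted server multiplies ciphertexts without seeing 5 or 7...
c_sum = (c1 * c2) % (pub[0] ** 2)
# ...and only the client, holding the private key, recovers the sum.
print(decrypt(pub, priv, c_sum))  # 12
```

The point of the sketch is exactly the corollary above: the server operating on `c1` and `c2` learns nothing useful if it is compromised, so it does not need the same hardening as a server holding plaintext.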
Unless the operating system for iPhones is open source and one can verify which version they have installed, users can't really be sure that Apple is doing this. They could just say they are doing things to protect users' privacy, and then not, and sell their data.
> Unless the operating system for iPhones is open source and one can verify which version they have installed
There are a lot of security engineers out there reverse engineering Apple's iOS versions and payloads, especially ones installed on the phones of activists and other dissidents who may be under government surveillance. While in theory Apple could build a compromised OS and serve it only to a single IP or whatever, the reputational risk if they were to be discovered would be enormous. Compared to when the processing is happening on Apple's servers, where it's impossible to tell for sure if you're being wiretapped, there's just too much of a risk of detection and tipping off the target.