_jab's comments

> One recent idea I've had is that many online subscription services should automatically pause if you stop using it.

Cool idea, but it's probably tough to define and enforce what “using it” means. I could see companies starting to send newsletters to customers and calling that engagement.


This wouldn't survive the courts, so approximately one company would get away with it for a time.

No serious operator would choose a provider with your implementation of that feature.


I agree that no serious operator would choose a provider whose spending cap feature was "enable this feature and we kick you off the platform".


People will pay to be abused: cf. Oracle.


I wonder about OpenAI's moat. Thanks to advances in hardware and rapidly improving open-source ML frameworks, it's getting significantly easier and cheaper to replicate what they've built over time. That's not the case for most startups: it's not really any easier to build an Uber clone today than it was ten years ago.

OpenAI depends on spending vast amounts of money to stay a year or two ahead of the competition. I'm doubting whether that's a justified tradeoff.


Open source just caught up to GPT-4, which was released over a year ago. Don't you think all of those advances in hardware also play into their hands? They have GPT-5 in the pipeline and are likely hard at work planning and prepping for GPT-6. A year or two beyond the competition, at this point, is their moat.


> A year or two beyond the competition, at this point, is their moat.

That doesn't seem like much of a moat.

> They have GPT-5 in the pipeline and are likely hard at work planning and prepping for GPT-6.

I wouldn't be surprised if there's an element of diminishing returns here. The improvement from GPT-4 to GPT-5 is likely much smaller than from GPT-3 to GPT-4.


And they are going to run into source data problems. The internet is not producing that much more quality content, and now they have a problem with AI-generated clickbait.


I think your comment is emblematic of the very divide between Silicon Valley and the rest of the world that leads to frustration like what the artists here are expressing.

No one becomes an artist except out of passion for the work. It sure as hell isn’t for the money. That’s not so universally true in Silicon Valley, and I’m guessing you’re one of those people who views their job mostly as a means to fund the rest of their lifestyle.


Glad to see this development. The amount of FUD around AVs is too high, and allowing each individual municipality to set its own regulations for AVs would have created a ridiculous amount of red tape for these companies to deal with. Just to pull one particularly bad quote from this article:

> “I hope that, in the meantime, our communities do not suffer too much in terms of injuries and community damages due to the current regulatory gaps,” Cortese said in a statement.

What gaps? What injuries? What community damages? If someone can actually present statistics that these cars are more socially dangerous than an equivalent amount of Ubers and Lyfts, I would be very, very surprised.


It's been nonstop lies from opponents of this technology, especially South Bay opponents, with only the tiniest kernel of truth in that Cruise has not been very responsible. I still can't get over San Mateo County flatly lying about Waymo not talking to them (https://www.cpuc.ca.gov/-/media/cpuc-website/divisions/consu..., "in its protests, San Mateo stated...").


When you strip away some of the more pretentious language, this article isn't really saying anything that extraordinary. If you just swap the phrase "information" with "content", i.e. "addiction to useless content", it becomes clear that the article is really just talking about the dangers of doomscrolling.

At best, it's a useful reminder that doomscrolling happens not only on Instagram and YouTube, but also on "respectable" sites like the NYT, Hacker News, and Wikipedia, and that does resonate with me. But I don't see anything else that the author is really adding to the large corpus of discussion on doomscrolling and Internet addiction.


It shouldn't take a Vox article to ensure employees' basic security over their compensation. The fact that this provision existed at all is exceptionally anti-employee.


OpenAI's terrible, horrible, no good, very bad month only continues to worsen.

It's pretty well established now that they had some exceptionally anti-employee provisions in their exit policies to protect their fragile reputation. Sam Altman is bluntly a liar, and his credibility is gone.

Their stance as a pro-artist platform is a joke after the ScarJo fiasco, which clearly illustrates that creative consent was an afterthought. Litigation is widely assumed to follow, and ScarJo is directly advocating for legislation to prevent this sort of fiasco in the future. Sam Altman's involvement is again evident from his trite "her" tweet.

And then they disbanded their "superalignment" safety team for good measure, as if to dispel any lingering notion that this company is somehow more ethical than any other big tech company in its pursuit of AI.

Frankly, at this point, the board should fire Sam Altman again, this time for good. This is not the company that can, or should, usher humanity into the artificial intelligence era.


I'm increasingly of the opinion that publicly available models should be required to disclose the data they are trained on. Perhaps not necessarily the raw data itself, but at least a description of what the datasets are.

I get the sense that if people better understood how their data is being used, there would be more backlash against these models, which would drive us more quickly towards a lasting resolution.


Microsoft's can search the open internet. That'd be a long disclosure list!


That's not what training data means.


Isn't providing examples within the context window considered one-shot learning? There were lots of glowing reports of how well it can do one/few-shot learning that way.

I don't think that training on copyrighted data is necessarily wrong; I'm just pointing out that doing so within the context window rather than at weight-training time might not be so different.


I’m not sure what is meant by this. There’s no “training” happening within the context window, at least not by the commonly used definition of training; it’s all just part of the input. If you’re asking whether you can reverse-index search copyrighted text and feed it into an AI model without permission, that’s been happening for years.


For example, giving it five examples of a problem from a class it has never seen, with answers, within the context window, and then asking it to solve a sixth was cited as one of the amazing few-shot learning demonstrations.

It could potentially do something similar by searching the internet for things resembling your question and then figuring out how to derive the answer, without directly finding an exact match.
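
To make the distinction concrete, here's a minimal sketch of what that few-shot setup amounts to (hypothetical prompt-building code in Python; no particular model or API is assumed): the solved examples are simply concatenated into the model's input, and no weights are updated.

    # Hypothetical sketch: "few-shot learning" here is just prompt construction.
    # The five solved examples live in the model's input (context window);
    # nothing about the model's weights changes.
    solved_examples = [
        ("2, 4, 8, 16 -> ?", "32"),
        ("3, 9, 27, 81 -> ?", "243"),
        ("5, 25, 125, 625 -> ?", "3125"),
        ("7, 49, 343, 2401 -> ?", "16807"),
        ("10, 100, 1000, 10000 -> ?", "100000"),
    ]
    new_problem = "4, 16, 64, 256 -> ?"

    prompt = "Continue each sequence.\n\n"
    for question, answer in solved_examples:
        prompt += f"Q: {question}\nA: {answer}\n\n"
    prompt += f"Q: {new_problem}\nA:"

    # `prompt` is then sent to whatever model you use; the worked examples
    # are consumed exactly like any other input text.
    print(prompt)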


The argument may be that having very large models that everyone uses is a bad idea, and that companies and even individuals should instead be empowered to create their own smaller models, trained on data they trust. This will only become more feasible as the technology progresses.

