cedws's comments | Hacker News

I think the same is true for me: I'm pretty sure I wouldn't be in my career if I had been restricted to an hour a day on a filtered iPad.

But I also think the internet has more potential for harm now. Widespread social media makes it easy for predators. YouTube actively incentivises content creators to produce brain-numbing shit instead of the more amateur and educational content I was exposed to. Instagram creates vicious dopamine hooks that children have no mental defense against.

Also sorry to sound egotistical but I think I was an outlier that drifted into doing educational things, many or most kids will spend every moment they get just playing video games.

That being said, I’m in favour of parents doing the parenting, not the government.


> That being said, I'm in favour of parents doing the parenting, not the government.

This aspect of parenting is really hard. If your kid is 10 years old and all their classmates have Roblox, saying 'no' to your kid does isolate them socially, because all the other kids are talking about what they did in Roblox at school and play Roblox together after school. To make it worse, some primary schools even allow kids to play Roblox at school during breaks, or the teachers make TikTok videos, making kids want to have TikTok as well (TikTok teachers are a real phenomenon), etc. So even when you are trying, it gets undermined by others. Trying to fight it is kind of pointless, because most other parents don't see the issue.

Same for e.g. instant messaging, it is basically Sophie's choice: you allow them into these addiction machines or you isolate them socially. It would be much easier if social media and certain types of addictive games were just not allowed under 16. Just like we don't sell cigarettes or alcohol to kids.

I also completely agree with the counterpoint that age verification on the internet is generally bad.

Luckily, some things can be done without grave privacy violations. E.g. where high schools 10-15 years ago would gloat about being iPad or laptop schools, more and more are completely banning smart phones and laptops during school time.

At any rate, it's perfectly possible to hold both views at the same time: social media and addictive games should be forbidden under 16 and the age verification initiatives are terrible for privacy.

Maybe we should just ban Facebook, TikTok, etc.: no more addiction, no more age verification needed :).


> Also sorry to sound egotistical but I think I was an outlier that drifted into doing educational things, many or most kids will spend every moment they get just playing video games.

I am in the same predicament as both of you, having grown up with unfiltered internet access, and not wanting it to have gone any other way (I love my life, actually!).

There is a condescending tendency, when people hear what I said above, to tell me that I am an outlier or, God forbid, a "genius", and to draw other equally worrying conclusions about my character.

I agree that, today, there are millions more ways that children can fall for objectively negative things, things that have been completely and intentionally engineered to be terrible in a way that can be exploited for profit.

But also, I simply think that, with enough access to mind-numbing content, for long enough... people will simply realize that, actually, they don't want that. At least, not just that.

Adults are not a good point of comparison for less aggressive addictions like social media, because they already have lives they want to escape, with responsibilities and whatnot.

These are not scientifically sourced claims, but, in my experience, children have a lot more time, energy, curiosity, and will/intent to create, for one reason or another, and they have been doing those things since time immemorial.

This is just a consequence of having access to ~the entirety of all human knowledge at their fingertips, with no restrictions, and with an incredible amount of free time at their disposal.


I think the HN crowd is full of outliers. You folks are unrestricted internet success stories. Congrats! For every one of you there has to be 100 or 1000 gaming and social media addicts.

Someone else said it best: AI isn’t increasing productivity much for the average worker, it’s just allowing them to do their job and put in less effort.

And I think that’s entirely fair to be honest. The workers aren’t going to see any raises or bonuses from AI productivity gains. Why should they go out of their way to make their boss richer?


This argument has been decapitated countless times already on HN. Anthropic already enforce usage limits for everyone. If those limits are higher than what they want users to actually consume, that's Anthropic's problem.

This move is anti-competitive and Anthropic knows it. They're hurriedly trying to lock the gates and lay landmines behind everyone after a massive surge of new subscribers, so that they're stuck using Claude Code. They see it as vital to their survival to not just be the gas pump for tokens; they need to control the platform.


I'm baffled that people don't seem intellectually able to grasp what you described here. Claude Code users on Anthropic subscriptions aren't subsidizing those using other harnesses, because usage limits aren't counted at the harness layer. It's an anti-competitive move against commercial harnesses like Opencode (VC-backed) or Openclaw (OpenAI-affiliated).

> This argument has been decapitated countless times already on HN.

No it hasn't, because the argument is completely correct, and the people mad about it are mad they can't have unlimited usage instead of paying the token API prices.

> This move is anti-competitive and Anthropic knows it.

No it isn't, that's not what "anti-competitive" means, and no court in the world would label it as such. You can't go flailing around looking for legal jargon to attach to behavior just because you don't like it.


The API is intended for massively scaled operations (companies) and has no hard usage limits; a subscription is intended only for individual usage (a solo dev) and therefore has hard usage limits. Is it that difficult to grasp the difference between API and subscription models?

> Anthropic already enforce usage limits for everyone. If those limits are higher than what they want users to actually consume, that's Anthropic's problem.

I mean, OpenCode is the one changing their app here. So it kinda seems like it's actually everyone else's problem.


I hate these non-lawyer HN takes that call anything they don't like "anti-competitive". Let's just start with looking up "no duty to deal".

I don't mean anti-competitive in the legal sense of the word, I mean that it literally is hostile to competition.

It’s a company’s job to be hostile to competition

That’s a very ruthless American capitalist view. I would say a company’s job is to make the best product without resorting to cheap tricks.

Right because other capitalist countries have companies that make it easy for competition

Under what law can Anthropic force OpenCode to do this? Surely it's not illegal to publish code that interacts with an API that's open for everyone to see?

The API has a very clear ToS prohibiting third-party client usage with the heavily subsidized Claude.ai subscription plans. Anthropic's right to reject or block that traffic, as well as to ban users who attempt this, is well-protected by the ToS those users neglected to read.

Regarding the legal demands here, anyone can issue anyone else a cease and desist order at any time, for anything, in the USA. The demands do not need to have merit.

"Illegal" generally refers to criminal law, not civil suits, this was essentially Anthropic threatening to file a lawsuit. Opencode was under no legal obligation to comply and was not breaking any laws, they simply decided it was easier and cheaper to comply than to fight.


I thought TOS grants Anthropic the right to stop providing the service to a user, not go after them legally.

I think you may be confused here. Anthropic isn't going after users here at all, they essentially told another company that is interfacing with Anthropic's service in a way that violates Anthropic's TOS to "please stop or else we might have to take legal action in the future".

More broadly, you do not need to establish any kind of contractual right to "go after" anyone legally, that's not how civil law works. A cease and desist letter isn't even really legal action, it's a threat of legal action, but even then, Anthropic doesn't need your permission to sue you, just like you don't need Anthropic's permission to sue them.

If you think that inside the U.S., you have some kind of legal immunity to or protection from cease and desist letters or lawsuits from any company, for any reason¹, you would largely be mistaken. If this is important to you, you might want to talk to a lawyer.

¹ Some states have anti-SLAPP statutes that offer limited protections in certain contexts, but this isn't applicable in the context of this example between Anthropic and AnomalyCo.


That being said, it may be a valid claim under a "tortious interference" theory, i.e. OpenCode damaged Anthropic by interfering with the contractual ToS agreement between Anthropic and its users.

https://en.wikipedia.org/wiki/Tortious_interference


My understanding is that, if you directly assist someone in violating a ToS, you can be held liable.

Bad analogy, but the getaway driver doesn't need to enter the bank to be guilty of the robbery.


Surely there's no way that's true. The logical conclusion of that would be that every random ToS is a law that everyone must abide by, regardless of whether or not they've agreed to it.

By definition, it is exactly a law. It's known as business law. The ToS is a business contract which you must agree to if you wish to use the service. Violating terms of service is literally a breach of contract.

How can you breach a contract if you are not a party to a contract? OpenCode is not using any Anthropic services, they are just publishing some source code that seems (obligatory IANAL) to be protected speech under the First Amendment [0], if this legal argument is happening in American jurisdiction.

[0] https://en.wikipedia.org/wiki/Code_as_speech


Interesting point. I've been looking into a similar issue recently: for example, LinkedIn won a lawsuit against the analytics company hiQ because hiQ scraped their website in violation of the ToS. And I think hiQ also never technically had a direct contract to breach.

https://en.wikipedia.org/wiki/HiQ_Labs_v._LinkedIn


Yes, but I think that there is a big difference. In the case you linked, hiQ were actually doing the scraping themselves.

People (or company? not sure) don't make any requests to Anthropic themselves. They just publish code that can make such requests.

I don't think that there is a legal precedent that would make publishing code that can do scraping illegal.


Yeah, good point. I think if the scraping code is written specifically for a site/system that prohibits scraping through its ToS, the company has an edge in a lawsuit. It's a bit of a gray area, I think. It depends how much of a threat you pose to the company you're scraping, and how big the company is.

> The API has a very clear ToS prohibiting ...

What is the relevance?

If I understand correctly, OpenCode, i.e. the creator of the tool, does not use Anthropic's API. Their users do.

I am unsure where the connection can be made between the users violating some terms of service and a maker of a tool.


But they provide Claude-specific code that helps their users violate the ToS, and that can be an argument in a lawsuit.

They specifically built the tools to do it easily.

... A tool that had code which explicitly enables and advertises the ability to violate those terms of service.

But the plan is linked to the API key, which the user provides…

Presumably there's binding legal terms in the license agreement that users agree to in order to access the API.

And what makes Opencode beholden to that, and not the actual customer (you and I)?

tortious interference

They don't need any actual written law behind their actions, all they need is money. What are you gonna do, fight them in court? Good luck with that, especially against a company directly associated with the US government and Palantir.

well i guess the next github will be based out of China on Alibaba cloud

Terms of Service

Do you need to accept the ToS to vibe-code an Anthropic plugin?

Code is one thing. Using an API key is an entirely different thing.


Using the API key also has a Terms of Service :)

Yes, which is the user's responsibility!

Clearly they need to get an AI which has not accepted those terms to rewrite it. That is the easy and fast solution these days. Or, at most, find a person who has not accepted the ToS to do that.

Just re-train your own model on your own collected data. No problem there.

I'd like to think that the top minds working on AI have a higher purpose than to get the next generation hooked to a digital morphine drip. Serving soap cutting videos and giving teen girls body dysmorphia isn't a very compelling mission.

Though I'm sure many are mercenaries and will work for whoever pays the most.


>I'd like to think that the top minds working on AI have a higher purpose than to get the next generation hooked to a digital morphine drip.

That's the irony. The genius scientists are against AI being used for defense, but somehow they're all-in for AI being used to get people addicted to ads, dopamine, gambling, debt, porn, political manipulation, etc., basically everything that's guaranteed to wreck society, but thank god they aren't making weapons, I guess.


> I'd like to think that the top minds working on AI have a higher purpose than to get the next generation hooked to a digital morphine drip

The next 5 years are going to be very disappointing for you.


Spatial memory is really underutilised in computing.

When libghostty[0] releases maybe you could use that so you don't have to build everything from scratch.

[0]: https://mitchellh.com/writing/libghostty-is-coming


libghostty already powers quite a few alternate terminals:

https://github.com/Uzaaft/awesome-libghostty

This project uses alacritty-terminal, so it's also very much 'not from scratch', just using a Rust library to that effect.


libghostty looks really promising! I went with Alacritty as the terminal backend because its core is written in pure rust.

I know that Counter-Strike has the Overwatch system, where the community reviews gameplay to determine if somebody is cheating. Could this be turned into a bounty system for developers? You could give developers access to gameplay data through an API and they would write their own algorithms to detect cheaters. If they are correct then they're rewarded; if they're wrong they're penalised. Kind of like CAPTCHA.

Using a diversity of algorithms developed by the community could make detection a lot more accurate and more difficult to evade.
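The settlement step of a bounty system like this could be sketched roughly as follows. This is only an illustrative sketch: the function name, payout numbers, and the idea of scoring against a later confirmed-ban list are all my own assumptions, not anything Valve actually does.

```python
# Hypothetical settlement step for a community cheat-detection bounty:
# detectors submit (detector_id, suspect_id) reports, and once ground
# truth is known (e.g. a confirmed ban wave), each report is rewarded
# or penalised. All names and payout values are invented.

def settle_reports(reports, confirmed_cheaters, reward=10, penalty=25):
    """Return per-detector score changes after ground truth is known."""
    scores = {}
    for detector, suspect in reports:
        correct = suspect in confirmed_cheaters
        delta = reward if correct else -penalty
        scores[detector] = scores.get(detector, 0) + delta
    return scores

reports = [("algoA", "p1"), ("algoA", "p2"), ("algoB", "p1")]
scores = settle_reports(reports, confirmed_cheaters={"p1"})
# algoA gains 10 for p1 but loses 25 for p2; algoB gains 10 for p1.
```

Making the penalty larger than the reward is deliberate: it discourages detectors from spamming accusations and only profits algorithms that are right more often than a threshold, which is the same asymmetry that keeps the "if they're wrong they're penalised" part honest.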


For my job, which is mostly YAML engineering with some light Go coding (platform work), I'm finding it useful. We're DRY-ing out a bunch of YAML with CUE at the moment and it's sped that work up tremendously.
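CUE specifics aside, the DRY idea being described, one shared base that many configs selectively override, can be sketched in plain Python with a deep merge. The config keys here are invented for illustration and have nothing to do with the actual work setup:

```python
# Illustrative sketch of the DRY pattern tools like CUE serve: factor
# shared settings out of repeated YAML documents into one base, then
# unify per-service overrides onto it. Keys are invented examples.
import copy

def with_defaults(base: dict, overrides: dict) -> dict:
    """Deep-merge overrides onto a shared base, leaving base untouched."""
    merged = copy.deepcopy(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = with_defaults(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"replicas": 2, "resources": {"cpu": "100m", "memory": "128Mi"}}
service_a = with_defaults(base, {"name": "a"})
service_b = with_defaults(base, {"name": "b", "resources": {"cpu": "500m"}})
# service_b overrides only cpu; memory and replicas come from the base.
```

CUE's unification does considerably more than this (validation, constraints, conflict detection), but the payoff is the same: each service file shrinks to just its deltas.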

When it comes to personal projects I'm feeling extremely unmotivated. Things feel more in reach and I've probably built ten times the number of throwaway projects in the past year than I have in previous years. Yet I feel no inspiration to see those projects through to the end. I feel no connection to them because I didn't build them. I have a feeling of 'what's the point' publishing these projects when the same code is only a few prompts away for someone else too. And publishing them under my name only cheapens the rest of my work which I put real cognitive effort into.

I think I want to focus more on developing knowledge and skills moving forward. Whatever I can produce with an LLM in a few hours is not actually valuable unless I'm providing some special insight, and I think I'm coming to terms with that at the moment.


> Yet I feel no inspiration to see those projects through to the end. I feel no connection to them because I didn't build them

For me, this is a key differentiator between “AI-assisted” and “vibe-coded”. With the former, I may use AI in many ways: some code generation, review, bouncing ideas, or whatever. But I engage in every step, review and improve the generated code, disagree with the reviews (and still contribute a good proportion of hand-written code, at least in the core business logic). In this way I retain sufficient ownership over the output to feel it is my own.

With vibe-coding, I feel exactly as you describe it.


What in the world is YAML engineering?

DevOps? K8s config nightmare?

I think advertising it as a DJ is a stretch; last time I tried it, it was basically just Siri for music. DJing is much more than just playing random tracks.

I've been wondering if AI could be used to compose a set that rivals real DJs, but it seems like a difficult problem. First it needs to select tracks that fit well together and sequence them to ramp energy up and down over time. Then it needs to layer the tracks, which requires an intuition for what sounds good and which I'm not sure can be done algorithmically. It also needs to do engaging transitions appropriate for the moment, which is also difficult.
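At least the "tracks that fit well together" piece has well-known algorithmic heuristics: tempo proximity and harmonic compatibility on the Camelot wheel (keys like "8A", where adjacent numbers and the A/B swap at the same number mix cleanly). A hedged sketch, with thresholds that are illustrative rather than from any real system:

```python
# Two standard DJ pairing heuristics: tempos within a small percentage,
# and harmonically compatible Camelot keys. The 6% tempo tolerance is
# an illustrative value, not taken from any particular product.

def keys_compatible(a: str, b: str) -> bool:
    """Camelot keys like '8A': same slot, +/-1 on the wheel, or A<->B swap."""
    num_a, mode_a = int(a[:-1]), a[-1]
    num_b, mode_b = int(b[:-1]), b[-1]
    if mode_a == mode_b:
        # Wheel distance wraps around: 12A and 1A are neighbours.
        return min((num_a - num_b) % 12, (num_b - num_a) % 12) <= 1
    return num_a == num_b

def tracks_mixable(bpm_a, key_a, bpm_b, key_b, tempo_tolerance=0.06):
    """True if tempos are close enough to beatmatch and keys are harmonic."""
    tempo_ok = abs(bpm_a - bpm_b) / max(bpm_a, bpm_b) <= tempo_tolerance
    return tempo_ok and keys_compatible(key_a, key_b)
```

This only covers pairwise selection; sequencing the energy arc and judging what layering actually sounds good remain the genuinely hard, intuition-shaped parts the comment points at.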


I think calling it AI is also a stretch.

I suspect the xAI merge was a manoeuvre to pump SpaceX whilst actually quietly beginning to scale down xAI. It’s a money losing venture.

SpaceX is an incredible business and too important to fail. By rolling his other businesses into it, Elon protected them from failure.

Something to admire is his ability to always find the chess move. Like, you could see Twitter is a disaster of a business that should have dragged Elon down, but he manoeuvred his way out of it.

