Hacker News | Palmik's comments

Great work! What optimizations are you most excited about for 2026?


Lots of cool stuff coming up! As a Ray developer, I focus more on the orchestration layer, so I'm excited about things like Elastic Expert Parallelism, post-training enhancements like colocated trainer/engines, and deploying DSV4 (rumors are the architecture will be complex). The vLLM roadmap is here for reference: http://roadmap.vllm.ai/


APIs are usually very profitable. As for subscriptions, it would depend on how many tokens the average subscriber uses per month. Do we have some source of info on this?

Some notes:

- The number of input tokens & output tokens per request matters a lot.

- KV Cache hit rate matters a lot.

- vLLM is not necessarily the most efficient engine.

- You are looking at the API cost for DeepSeek V3.2, which is much cheaper than DeepSeek R1 / V3 / V3.1. DeepSeek V3.2 is a different architecture (sparse attention) that is much more efficient. The cheapest DeepSeek V3 option (fp8) tends to be ~$1/mil output tokens, while R1 tends to be ~$2.5/mil (note that, for example, Together AI charges a whopping $7/mil output tokens for R1!).

As for the cost: You can also get H200s for ~ $1.6/hr and H100s for ~ $1.2/hr. That somewhat simplifies the calculations :)

Ignoring the caveats and assuming H200s, with their setup you will:

- Process 403,200,000 (403.2M) input tokens.

- Generate 126,720,000 (126.72M) output tokens.

- Spend $25.6.

- On Together with DS R1 it would cost you $3 * 403.2 + $7 * 126.7 = ~$2096. Together does not even offer a discount for KV cache hits (what a joke :)).

- On NovitaAI with DS R1 it would cost you $0.7 * 403.2 + $2.5 * 126.7 = ~$600 (with a perfect cache hit rate, which gives a 50% discount on input tokens here, it would be ~$458).
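
For reference, here is a minimal Python sketch of the arithmetic above (the helper function and its cache-discount parameter are purely illustrative; the rates are the per-million-token prices quoted above, and only Novita's 50% discount on cached input tokens is modeled):

    # Rough back-of-the-envelope comparison using the numbers above.
    INPUT_M = 403.2    # million input tokens processed
    OUTPUT_M = 126.72  # million output tokens generated

    def api_cost(in_rate, out_rate, cache_hit_rate=0.0, cache_discount=0.0):
        # Cost in USD given $/million-token rates; the discount applies
        # only to the cached share of input tokens.
        effective_input = INPUT_M * (1 - cache_hit_rate * cache_discount)
        return effective_input * in_rate + OUTPUT_M * out_rate

    print(api_cost(3.0, 7.0))            # Together, DS R1: ~$2096
    print(api_cost(0.7, 2.5))            # NovitaAI, DS R1: ~$599
    print(api_cost(0.7, 2.5, 1.0, 0.5))  # NovitaAI, perfect cache hits: ~$458
    # Versus ~$25.6 for the self-hosted H200 run above.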


Nothing wrong with a GPL-like viral license for the AI era.

Training on my code / media / other data? No worries, just make sure the weights and other derived artifacts are released under a similarly permissive license.


Well, I would say it should be like that already & no new license is needed. Basically, if an LLM was ever based on GPL code, its output should also be GPL licensed. As simple as that.


Define source, compile, and library.


Licenses like GPL are built on top of an enforcement mechanism like copyright. Without an enforced legal framework preventing usage unless a license is agreed to, a license is just a polite request.


We need countries to start legally enforcing that. Nothing will change otherwise. I stopped open sourcing my code, and LLMs are one of the big reasons.


Wouldn't you want the code generated by those models to be released under those permissive licenses as well? Is that what you mean by other derived artifacts?


That's how I interpreted it, at least.


It really should be like that indeed. Where is RMS? Is he working on GPLv4?


If model training is determined to be fair use under US copyright law—either legislated by Congress or interpreted by Federal courts—then no license text can remove the right to use source code that way.


> then no license text can remove the right to use source code that way.

At least in the US.

Quite what happens if another country ordered, say, ChatGPT to be released under the AGPL since it was trained on AGPL code, who knows.


RMS is probably well behind the technical news at this point. I mean, he's surfing the web via an email summary of some websites. Even if he doesn't condone how the internet is evolving, he can't really keep up with technology if he doesn't "mingle".

He's also 72; we can't expect him to save everyone. We need new generations of FOSS tech leaders.


I am gen-z and I am part of the FOSS community (I think), and one of the issues with new generations of FOSS tech leaders is that it's hard even if one tries.

Something about Richard Stallman really is out of this world; he made people care about open source in the first place.

I genuinely don't know how people can replicate it. I had even tried and gone through such a phase once, but the comments weren't really helpful back then on Hacker News:

https://news.ycombinator.com/item?id=45558430 (Ask HN: Why are most people not interested in FOSS/OSS and can we change that)


As much as RMS meant for the world, he's also a pretty petty person. He's about freedom, but mostly about user freedom, not creators' freedom. I also went through such a phase, but using words like "evil" is just too black and white. I don't think he is a nice person to be around, judging from some podcasts and videos.


If there is one thing Stallman knows well, it is the way he uses words, and I can assure you that if he calls something "evil", that is exactly the word he meant to use.

> user freedom, not creators' freedom

In his view users are the creators and creators are the users. The only freedom he asks you to give up is the freedom to limit the freedom of others.


RMS asks you to give something up: your right to share a thing you made under your own conditions (conditions the receiving party may even agree to). Nobody is forced in this situation, and yet he calls that evil. I think that is wrong.

I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.

Again, imho the GPL is a blessing for humanity, and bless the people that choose it freely.


> RMS asks you to give something up: your right to share a thing you made under your own conditions (conditions the receiving party may even agree to). Nobody is forced in this situation, and yet he calls that evil. I think that is wrong.

This is not true, though. As a copyright holder, you are allowed to license your work however you wish, even if it's under, for example, GPL-3.0-or-later or whatever. You can license your code outside the terms of the GPL to a particular user or group of users, for example for payment.

Really, it's only when the user agrees to abide by the license that you'd have to give access to source code when asked, for example.

> I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.

And they can. Whether that wins one any friends or not is another matter.


Oh and bless the people that won't use anything but GPL software.

Don't bless the people that think you are evil for not applying the GPL to your creation.


> user freedom, not creators' freedom

Creators are not just creators; they're also users. There's a very solid chance that a better world for everyone would be achieved if freedoms for all users were bulletproof. Every user should be able to modify and repair all their hardware and software without creator involvement.


And do we just not think about all the software that then doesn't get created, because people feel it immediately becomes everyone's property and so won't even bother?

Sure, we can copy software, so it's not like they are taking your house. But "they" may be taking your livelihood.

Ok, objectively perhaps the world would be better, but we can't know. And opinions don't mean anything. What matters is individuals and being fair to them; whatever society grows from that is just what we have.

That said, if we ever go multi-planet, and there is a planet with no copyright and everything is GPL, I'd check it out and imagine I'd feel quite at home there.


> Sure, we can copy software, so it's not like they are taking your house. But "they" may be taking your livelihood.

With GenAI, that's starting to happen anyway.


Which is why we perhaps need a GPLv4? With some provisions that force open sourcing model architecture + weights when using such code as training material?


And also provisions somehow handling hyperscalers. Hyperscalers are big enough that they can build everything from scratch and stop ripping off individual FOSS contributors and small companies.


You can follow him on https://stallman.org/ What is he doing? I believe he is still giving talks and taking stances on current-day political issues. Additionally, I believe the last few years were quite turbulent, so I assume he is taking life at his own pace.


Interesting. Is there a license that acts like this already?


That is a complete fool's errand. If it ever passes, it would just mean the death of Open Source AI models. All the big companies would just continue to collect whatever data they like, license it if necessary, or pay the fine if illegal (see Anthropic paying $1.5 billion for books). Meanwhile, every Open Source model would be starved for training data within its self-enforced rules and easy to shut down if an incorrectly licensed bit ever slipped into the model.

The only way forward is the abolishment of copyright.


I don't follow. If the model was open-sourced under this GPL-like license (or a compatible license), then it would follow the GPL-like license. If the model was closed, it would violate the license. In other words, it would not affect open-source models at all.

Similarly, I could imagine carving out an exception for training on copyrighted material without a license, as long as the resulting model is open-sourced.


> If the model was closed, it would violate the license.

Training is fair use. The closed models wouldn't be impacted. Even if we assume laws get changed and lawsuits happen, they just get settled and the closed-source models would progress as usual (see Bartz v. Anthropic).

Meanwhile, if somebody wants to go all "GPL AI" and only train their models on GPL-compatible code, they'd just be restricting themselves. The amount of code they can train on shrinks drastically, the model quality ends up being garbage, and nothing is won.

Further, assuming laws got changed, those models would now be incredibly easy to attack, since any slip-up in the training means the models need to be scrapped. Unlike the big companies with their closed models, Open Source efforts do not have the money to license data nor the billions needed to settle lawsuits. It would mean the end of open models.


I am pretty sure most people get Anthropic's move. I also think "getting it" is perfectly compatible with being unhappy about it and voicing that opinion online.


OP is responding to an article that largely frames Anthropic as clueless.


I don't think it is intending to frame the move as clueless, but rather short-sighted. It could very well be a good move for them in the short term.


To me it's very easy to understand why people would be upset and post about it online.

1. The company did something the customers did not like.

2. The company's reputation has value.

3. Therefore highlighting the unpopular move online, and throwing shade at the company so to speak, is (alongside "speaking with your wallet") one of the few levers customers have to push companies to do what they want them to do.


Sure, it is perfectly valid to complain all you want. But it is also important to remember the context here.

I could write an article complaining about Taco Bell not selling burgers, and that is perfectly within my rights, but that is something they are clearly not interested in doing. So me saying I am not going to give them money until they start selling burgers is meaningless to them.

Everything I have seen about how they have marketed Claude Code makes it clear that what you are paying for is a tool that is a combination of a client-side app made by them and the server component.

Considering the need to tell the agent that the tool you are using is something it isn't, it is clear that this working was never the intention.


> So me saying I am not going to give them money until they start selling burgers is meaningless to them.

Sure, but that's because you're you. No offense, but you don't have a following that people use to decide what fast food to eat. You don't have posts about how Taco Bell should serve burgers, frequently topping one of the main internet forums for people interested in fast food.

HN front page articles do matter. They get huge numbers of eyeballs. They help shape the opinions of developers. If lots of people write articles like this one, and it front pages again and again, Anthropic will be at serious risk of losing their mindshare advantage.

Of course, that may not happen. But people are aware it could.


I have the feeling that these discussions are much more tribal rather than evidence based. :)


Tribal? Not really. Subjective? Absolutely. Objectively 5.2 scores higher on benchmarks than 5.1; subjectively it works better for me than 5.1. I don't care too much about other opinions TBH :)


I think possibly even better would be a viral, GPL-like license that explicitly mandates that any systems (models, etc.) derived from (trained on) the code need to be released under the same license.


It would not be discrimination to mandate that the weights of any model trained on the code need to be released under a similarly open license.


Hard disagree.

At Google, in most orgs, a manager can influence the chance of success significantly:

- Making sure their team works on what the org leads find "impactful"

- Facilitating cross team collaborations, which will lead to good peer reviews for your report

- Helping your report write the promo packet

- Presenting the promo case effectively during the calibration meeting and being prepared to advocate for the report and respond to criticism from other managers at the meeting

- etc.

There are many managers that do very few if any of these things, and it shows.

Yes, there are quotas, but nonetheless the manager plays a big role in whether their report makes the cut.

There is no harm in saying that you are quitting because you do not feel valued / rewarded enough. Hopefully it will effect change in the manager. Of course it's best to keep it polite and not burn any bridges in the process.


I wonder why Texas did not start by targeting NSFW / porn apps specifically, like other states.

I also wonder why smut literature (the best-selling category of books on Amazon) seems to get a free pass.


The app stores already block porn on their own initiative.

> I also wonder why smut literature (the best selling category of books on Amazon) seems to get a free pass.

It's popular with women and basically invisible to men.


There are plenty of NSFW oriented apps, especially in the AI category.

> It's popular with women and basically invisible to men.

Mostly true, and this might be a reflection of reality, but certainly not a justification.


And being long-form written text, likely invisible to minors as well.


It's extremely visible to teenagers. They're one of the main audiences for booktok.


Text has always been treated differently than images or video, partly for historical reasons and partly because regulating it runs straight into classic First Amendment landmines.


Probably because some apps aren't NSFW apps but still host NSFW content (e.g., Reddit).


Because people were so sick of their shit, and they already got their asses beaten so hard that they turned a fundamentalist city into an atheistic one. Banned in Boston used to be a thing. Boston itself got sick of that puritan bullshit.

They know that re-litigating that is a road to ruin, because "artistic merit" is such well-trodden ground in literature.


Text just fundamentally isn't nearly as graphic as images/video.

Write the most sexually disturbing sentence you can come up with and it's going to be rather meh and possibly quite comical. And any of the gravity that it does have comes from the reader's ability to generate the visuals themselves, which is mostly out of reach for children who don't have the experience to necessarily know what's even being described.

