
What’s the going rate for tokens in terms of dollars? How much are companies spending on “tokens”?

Also kind of ironic that small codebases are now in vogue, just when Google-style monolithic repos were so popular.


> What’s the going rate for tokens in terms of dollars?

It depends on the provider and model. Pricing is usually quoted in $/million tokens, with input and output tokens priced separately (output tends to be more expensive than input). Some models also charge more per token once the context size exceeds a threshold, and cached tokens may be billed at a discount.
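To make that concrete, here's a minimal sketch of the arithmetic. The rates below are made up for illustration, not any provider's real pricing:

```python
# Hypothetical per-million-token rates, for illustration only.
INPUT_PER_M = 3.00         # $ per million input tokens
OUTPUT_PER_M = 15.00       # $ per million output tokens
CACHED_INPUT_PER_M = 0.30  # cached input is often billed at a steep discount

def request_cost(input_tokens, output_tokens, cached_tokens=0):
    """Dollar cost of one request under the flat rates above."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHED_INPUT_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

print(round(request_cost(50_000, 2_000), 4))                        # 0.18
print(round(request_cost(50_000, 2_000, cached_tokens=40_000), 4))  # 0.072
```

Substitute a provider's actual rates and the same arithmetic shows why output-heavy (and cache-miss-heavy) workloads cost disproportionately more.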

OpenRouter has a good overview of providers and models: https://openrouter.ai/models

The math on what people are actually paying is hard to evaluate. IME, most companies would rather buy a subscription than give their developers API keys (it makes spending predictable).


API keys with hard limits, I assume?

Are there companies out there that add token counts to ticket “costs”, i.e. are story points being replaced/augmented by token counts?

Or even worse, an exchange rate of story points to tokens used…


> IME, most companies would rather buy a subscription than give their developers API keys (it makes spending predictable).

The downside with subscriptions is that your work with the LLM will grind to a halt for a number of hours if you hit the token limit. I was doing what I consider very trivial work, adding Javadoc comments to a few dozen files using Claude Sonnet on the $20 plan, and within 30 minutes I was told to sit out for a couple of hours. The reason was that Claude was apparently repeatedly sending the files up and down to fill in the comments. In hindsight, sure, that's obvious, but you would think that Claude would be smart enough to do some sort of summarization to make things more efficient. Looking into it, it was on the order of several million tokens in a very short amount of time.
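A rough back-of-envelope shows how quickly this adds up when files get re-sent as context on every turn. Every number here is a made-up assumption, not Claude's actual behaviour:

```python
# Sketch: why commenting "a few dozen files" can burn a million-plus tokens
# if the agent re-reads files as context on each turn. All numbers hypothetical.
files = 36                # "a few dozen" files to annotate
tokens_per_file = 3_000   # rough size of one source file in tokens
context_files = 10        # files the agent keeps in context per turn

per_turn_input = (context_files + 1) * tokens_per_file  # context + the target file
total_input = files * per_turn_input
print(total_input)  # 1188000 -- over a million input tokens before any output
```

And that's before counting the output tokens for the rewritten files, which are priced higher.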

It really made me wonder how in the hell people are using Claude to do "real" work, but I've heard of people having multiple $200/month subscriptions, so I guess that could work. Definitely seems like a glimpse into the future of what these services will truly cost once people are hooked on them.


I know of a company that has embraced Claude for documenting its codebase, in order to better use Claude for coding on that codebase.

For Claude to understand the codebase, it needs to document it first. Makes sense, and it's also great for humans, because now there is up-to-date documentation of the codebase.

I don’t know how much it cost, but the codebase, I’m told, is around 2 to 3 million lines of code.


Ok, understood: https://plugsocketmuseum.nl/Danish1.html :-)

I guess if you were to take a fork to an outlet, in Denmark you would definitely think twice.


Are taxes simple?

Why does Bash syntax have to be "simple"? For me, Bash syntax is simple.


Uh, reading a bash script shouldn't be as hard as doing your taxes. Bash syntax has to be simple because bash code is going to be read and reasoned about by humans. Reading even a simple if statement in bash requires a TON of knowledge to avoid shooting yourself in the foot. That's a massive failure of usability just to save a couple of keystrokes.

This is like saying "what's wrong with brainfuck??? makes sense to me!" Every syntax can be understood, that does not automatically make them all good ideas.


What about Jesus Tape?

It's an unpopular opinion, but I swear that cable ties are a far more effective problem solver than Jesus/duct/gaffer tape. I always bring ties, and never tape.

Why unpopular, I prefer cable ties too. But sometimes, for a quick fix, tape will also cover it.

Unless you happen to have Norris Ties, in that case, you're done before you start! /s


We'll soon have AI as our God replacement, we can then pray to it.

Remember: any unexplainable technology in which we blindly trust might as well be God.


> Why bother harvesting resources from a gravity-laden planet when you can almost certainly get them from asteroids or other places?

Why bother digging up a carbon laden energy source from the depths of a gravity laden planet instead of using solar energy or wind or any other energy source that is less harmful?

Seems really illogical … oh wait, that's just an intelligent life-form.


> Why bother digging up a carbon laden energy source from the depths of a gravity laden planet instead of using solar energy or wind or any other energy source that is less harmful?

Well at least one reason might be that you're currently unable to use those latter forms of energy as well as you can the former.

Anyway, using the way we act as a comparison for how these other civilizations might act doesn't make sense to me - we're nowhere even remotely close to being a threat to other civilizations. By the time a civilization reaches the point where they can travel between stars, I do suspect they'll be using renewables pretty dang heavily


That's why I gave the example of solar: we've been able to utilize solar for a long time, yet only now is it becoming a serious source of energy. Windmills have existed for probably 200 years but have not been taken seriously as a source of energy.

I'm not talking about mining asteroids, I'm talking about other sources of energy that have been known to us but which we don't utilise because of the self-interest of oil companies. Not money or cost: self-interest. Money and cost are regulated by us, not the other way around.

So to say these other sources of energy weren't viable from a financial PoV might be correct but it goes against our own self-interest.

> I do suspect they'll be using renewables pretty dang heavily

That's like saying "in any case, the future will be better". As humans have shown, worse comes before better in history. How about making the present better first?


We haven't been able to utilize solar to the degree we utilize oil for all that long, and since we have, our utilization has only grown.

"Commercial concentrated solar power plants were first developed in the 1980s. Since then, as the cost of solar panels has fallen, grid-connected solar PV systems' capacity and production have doubled about every three years. Three-quarters of new generation capacity is solar"

This says nothing of, say, hydro power, which we have been using for a while
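The "doubled about every three years" figure quoted above compounds dramatically. A quick arithmetic check (illustrative only, assuming the doubling period holds):

```python
# Doubling every `period` years means capacity multiplies by 2**(years/period).
def growth_factor(years, period=3):
    """Multiplicative growth after `years`, doubling every `period` years."""
    return 2 ** (years / period)

print(growth_factor(30))  # 1024.0 -- ten doublings, roughly a thousandfold
```

Which is why "only now is it becoming serious" and "growing exponentially" aren't really in tension.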

> That's like saying "in any case, the future will be better". As humans have shown, worse comes before better in history. Howabout making the present better first?

Mate I said nothing about our future or present. It's just absurd to assume our past has any bearing on how super-advanced space-faring civilizations will utilize technology.


So how is it that the Amazon is disappearing? Coincidence or human interference?

Humans have demonstrated a cycle: 1. exploitation to the point of destruction, 2. realisation of the damage they have inflicted, 3. greenwashing and band-aid fixes, 4. rinse and repeat.

Be it waste handling, colonisation, industrial revolution, slavery, oil extraction etc etc.

At least for the time being, prairie dog tunnels seem safe.


Like I said, we should probably care more, and generally speaking, we do, over time. I'm not suggesting we're perfect, that we haven't made any mistakes, or that we won't make any more - just that we're slowly learning how to do better.

> Be it waste handling, colonisation, industrial revolution, slavery, oil extraction etc etc.

Interestingly, most of these have seen lots of progress in reducing the harms - if not practically eliminating it altogether, such as with slavery.


Colonisation and industrial revolution have reduced the harm? For whom?

Looking at it from a white, western male perspective, you're right. From other perspectives this might well not be the case.

A lot of technology has short-term benefits but is, in the long term, a net negative for either us as a species or the environment around us, which is our life support system. We as a society don't have an "undo" button for much of this technology; once the damage has been done in real life, it stays in real life.

So we develop technology, see it fail, and try to fix the issues with more technology not realising that technology might be the problem. Or perhaps it's because we don't have the simplicity of an "undo" button.


> The explosion damaged a van opposite and blew out the tyre of a car as well as damaging a wall, front porch, shed and a Wendy house.

> Shrapnel also shot through a passing car into a passenger seat, while another piece of metal damaged the window frame of a child's bedroom.

Wtf. That was a “homemade” bomb to bring down one camera.


> I don't think this is true

Other cultures, other rules. I can testify that there are countries where this is definitely a requirement.

Hence I have an id card which does not bear my signature but a legible rendering of my name.


Or patch it over to python, I assume LLMs are even better at python.

Don't assume. Empirically, they are not. (True as of this post, Feb 2026; may change in future, yadda yadda.)

See: autocodebench

https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/tree/ma...


Reading that made me think how much that might be related to Elixir being very similar in syntax to Ruby. Do LLMs really differentiate between the two?

Specific studies, such as the one quoted, are a long way from original real-world problems.


Here are some thoughts on it from José Valim: https://dashbit.co/blog/why-elixir-best-language-for-ai

LLMs absolutely understand and write good Elixir. I've done complex OTP and distributed work in tandem with Sonnet/Opus and they understand it well and happily keep up. All the Elixir constructs distinct from ruby are well applied: pipes, multiple function clauses, pattern matching, etc.

I can say that anecdotally, CC/Codex are significantly more accurate and faster working with our 250K lines of Elixir than our 25K lines of JS (though not typescript).


I suspect this is partly due to the quality of documentation for Elixir, Erlang, and the BEAM. The OTP documentation has been around for a long time and is excellently written. Erlang/Elixir doc gen outputs function signatures and arity, and both Elixir and Erlang handle concepts like function overloading in very explicit, well-defined ways.

That's a large reason, for sure!

I'd layer in a few more:

* Largely stable and unchanged language throughout its whole existence

* Authorship is largely senior engineers, so the code you train on is high quality

* Relatively low number of abstractions in comparison to other languages, meaning there are fewer ways to do one thing

* Functional programming style pushes down hidden state, which lowers the complexity of understanding how a slice of a system works, and the likelihood you introduce a bug


I suspect the biggest advantage Elixir has is the relative quality of the publicly available code. Approximately no one has Elixir as their first programming language, which keeps a lot of the absolute trash-tier code that we all make when first learning to program out of the training set. If you look at languages that are often people's first (Python, JavaScript, Java), only Java has an above average score. Of those three, Java's significantly more likely to be taught in a structured learning environment, compared to kids winging it with the other two.

(And Elixir's relationship to Ruby is pretty overstated, IMO. There's definitely inspiration, but the OO-to-FP jump makes the differences pretty extreme.)


Agree on the quality level, but there are other languages where that is also the case: Erlang, for example, is probably one of them.

> Elixir's relationship to Ruby is pretty overstated

Perhaps I actually am overthinking this. Elixir has probably diverged enough from Ruby (e.g. defmodule, pipe operators, :atom syntax) for LLMs to notice the difference between the two. It does raise the question, though, of how an LLM actually recognises the difference between code blocks in its training data.

There are probably many more programming languages where similarities exist.



Having written a lot of both languages, I'd be surprised if LLMs don't get tripped up on some of Ruby's semantics and weird stuff people do with monkey patching. I also find Ruby library documentation to be on average pretty poor.

> I also find Ruby library documentation to be on average pretty poor.

That surprises me :)

From my time doing Ruby (admittedly a few years back), I found libraries were very well documented and tested. But put into the context of then (not now): documentation and testing weren't that popular in other language communities. Ruby was definitely one of the drivers of the general adoption of TDD principles, for example.


I think they're often very well tested, but the documentation piece has always been lacking compared to Elixir.

I used to frequently find myself reading the source code of popular libraries or prying into them at runtime. There's also no central place or format for documentation in ruby. Yes rubydoc.info exists, but it's sort of an afterthought. Sidekiq uses a github wiki, Nokogiri has a dedicated site, Rails has a dedicated site, Ruby itself has yet another site. Some use RDoc, some don't. Or look at Devise https://rubydoc.info/github/heartcombo/devise/main/frames, there's simply nothing documented for most of the classes, and good luck finding in the docs where `before_action :authenticate_user!` comes from.

