Hacker News | samuelknight's comments

The cost of intelligence is non-linear, with slightly dumber models costing much less. For a growing surface of problems you do not need frontier intelligence. You should use frontier intelligence for situations where you would otherwise require human intervention throughout the workflow, which is much more expensive than any model.
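The routing heuristic above can be sketched in a few lines; the model names and per-token prices below are hypothetical placeholders, not any provider's real pricing:

```python
# Cost-aware model routing sketch. Prices and model names are made up
# for illustration; substitute your provider's actual numbers.
FRONTIER = {"name": "frontier-model", "usd_per_mtok": 15.00}
CHEAP = {"name": "small-model", "usd_per_mtok": 0.50}

def pick_model(needs_human_otherwise: bool) -> dict:
    """Use the frontier model only when the alternative is a human
    stepping into the workflow, which costs far more than tokens."""
    return FRONTIER if needs_human_otherwise else CHEAP

def job_cost(model: dict, tokens: int) -> float:
    return tokens / 1_000_000 * model["usd_per_mtok"]

# A 2M-token job: $1 on the cheap model, $30 on the frontier model --
# and $30 is still far below an hour of human intervention.
print(job_cost(pick_model(False), 2_000_000))  # 1.0
print(job_cost(pick_model(True), 2_000_000))   # 30.0
```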

I don't know about Mythos but the chart understates the capability of the current frontier models. GPT and Claude models available today are capable of Web app exploits, C2, and persistence in well under 10M tokens if you build a good harness.

The benchmark might be a good apples-to-apples comparison but it is not showing capability in an absolute sense.


All code is a liability in general. Everything written before LLMs, and during these current in-between years, is vulnerable to the next frontier model. Eventually we will settle into a new paradigm that correctly addresses the new balance of effort.

Have you considered poking the cache?

When a user walks away during the business day but CC is sitting open, you can refresh that cache up to ~10x before it costs the same as a full cache miss. Realistically it would be fewer than 8 refreshes in a working day.
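The break-even figure above follows from the read-vs-miss price ratio. This is a back-of-envelope sketch assuming Anthropic-style prompt-cache pricing, where reading cached tokens costs roughly 10% of processing them uncached; the ratio is an assumption, so check current pricing:

```python
# Assumed ratio: cost of re-reading cached tokens vs. a full cache miss.
# ~0.10 for Anthropic-style prompt caching at the time of writing.
CACHE_READ_RATIO = 0.10

def refreshes_per_miss(read_ratio: float) -> float:
    """How many keep-alive cache refreshes cost as much as one full miss."""
    return 1.0 / read_ratio

print(refreshes_per_miss(CACHE_READ_RATIO))  # 10.0
```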


On the one hand, I don't understand why it needs to be half a million lines. On the other, code is becoming machine-shaped, so the maintenance cost of titanic amounts of code and state is actually shrinking.


This is a physical implementation of a tiered caching hierarchy.


This is standard in every tech RSU vest schedule I have seen.


Yes, it's standard in every legal doc I've ever seen too, but most companies have typically offered some accelerated vesting as part of severance. They don't have to, of course, but it's a relatively low-cost way of showing some good will.


I've helped a number of people with negotiations and reviews of their employment offers.

I would strongly disagree that accelerated vesting upon layoff is common. It's rare.


I think that a model designed to ignore semantic chatter like financial news and deeply inspect the raw data is a very powerful perspective.


Large companies already maintain internal mirrors of their packages. Very large ones even run their own build systems (Google's Bazel, Amazon's Brazil). If you want to update a package, you have to fetch the sources and update the internal repository, which slows the window for a supply chain attack to a crawl.
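The gatekeeping step an internal mirror implies can be sketched simply: a package is only admitted after its bytes match a hash pinned during review, so a tampered upstream release can't silently flow into builds. The function and names here are illustrative, not any real mirror's API:

```python
# Minimal sketch of hash-pinned admission to an internal package mirror.
import hashlib

def admit_to_mirror(package_bytes: bytes, pinned_sha256: str) -> bool:
    """Reject any artifact whose content drifted from the reviewed upload."""
    return hashlib.sha256(package_bytes).hexdigest() == pinned_sha256

blob = b"example package contents"
pin = hashlib.sha256(blob).hexdigest()   # recorded at review time
print(admit_to_mirror(blob, pin))        # True: matches the pin
print(admit_to_mirror(blob + b"!", pin)) # False: upstream changed
```

The same idea underlies pip's `--require-hashes` mode and lockfile hash pinning in other ecosystems.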


Absolute wave of supply chain attacks recently. Hopefully this causes everyone to tighten up their dependencies and update policies.

