
This is a problem, but FWIW libcs should fall back to older system calls. You can block clone3 today and see that your libc falls back to clone.
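As a sketch of what "block clone3" looks like in practice: a container-runtime seccomp profile (OCI/Docker JSON format) can make clone3 return ENOSYS, which glibc treats as "syscall not available" and falls back to clone. Returning EPERM instead would break the fallback, which is roughly the bug Docker's default profile originally hit when glibc started using clone3.

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["clone3"],
      "action": "SCMP_ACT_ERRNO",
      "errnoRet": 38
    }
  ]
}
```

Here `38` is ENOSYS on Linux; something like `docker run --security-opt seccomp=profile.json …` lets you observe the fallback (e.g. under strace).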

Yeah. But it still means wandering into de facto unsupported territory in a way that pledge/unveil/landlock does not.

Your example may be true, but I'm guessing it's not a guarantee. Not to mention if one wants to be portable to musl or cosmopolitan libc. The others are inherently more likely to work in a way that any libc would be "unsurprised" by.


Yeah for sure, it's a real issue. In general, seccomp feels hard to use unless you own your stack top to bottom.

People should really just use integers.

It's funny how fast it is to just implement a counter, and how much people rely on UUIDs to avoid it. If you already use Postgres somewhere, just create a "counter" table for your namespace. You can easily allocate 10K-100K values per second or faster, with room to grow if you outscale that.
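A minimal sketch of the counter-table pattern, using stdlib SQLite so it's self-contained (table and column names are illustrative; in Postgres you'd typically use `UPDATE ... RETURNING` or a SEQUENCE instead):

```python
import sqlite3

# Illustrative "counter table": one row per namespace holding the last issued ID.
conn = sqlite3.connect(":memory:")
with conn:
    conn.execute("CREATE TABLE counters (namespace TEXT PRIMARY KEY, next_id INTEGER)")
    conn.execute("INSERT INTO counters VALUES ('orders', 0)")

def allocate_id(namespace: str) -> int:
    """Atomically increment the per-namespace counter and return the new value."""
    with conn:  # one transaction: increment, then read back
        conn.execute(
            "UPDATE counters SET next_id = next_id + 1 WHERE namespace = ?",
            (namespace,),
        )
        (new_id,) = conn.execute(
            "SELECT next_id FROM counters WHERE namespace = ?", (namespace,)
        ).fetchone()
    return new_id

ids = [allocate_id("orders") for _ in range(5)]
print(ids)  # [1, 2, 3, 4, 5]
```

The IDs come out small, dense, and sequential, which is exactly what makes the downstream compression and data-structure wins possible.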

What do you get? The most efficient, compressible little integers you could ever want. You unlock data structures like roaring bitmaps/treemaps. You can cut memory to 25% depending on your cardinality (i.e. you can sometimes use a u16 or u32 in memory). You get insane compression benefits, where rows of these integers take a few bits each after compression. You get faster hashmap lookups. It's just insane how this compounds into crazy downstream wins.
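A rough illustration of the memory argument (exact numbers depend on the runtime; this compares packed unsigned ints against UUIDs held as Python strings, a common way they end up in memory):

```python
import sys
import uuid
from array import array

n = 10_000

# Sequential small-integer IDs stored as packed unsigned ints (typically 4 bytes each):
small_ids = array("I", range(n))

# The same number of IDs as random UUID strings:
uuid_ids = [str(uuid.uuid4()) for _ in range(n)]

packed_bytes = small_ids.itemsize * len(small_ids)
# Each 36-char UUID string carries per-object overhead on top of its characters:
string_bytes = sum(sys.getsizeof(s) for s in uuid_ids)

print(packed_bytes, string_bytes)
```

On CPython this usually comes out to roughly a 20x difference before any compression, and sequential integers compress far better than random 128-bit values afterwards.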

It is absolutely insane how little this costs and how many optimizations it unlocks. But people somehow think that ID generation will be their bottleneck, or maybe it's just easier to avoid a DB sometimes, or whatever, and so we see UUIDs everywhere. Although, agreed that most of the time you can just generate the unique ID for the data yourself.

In fairness, UUID is easier, but damn it wrecks performance.


I have a few skills for this that I plug into `cargo-vet`. The idea is straightforward - where possible, I rely on a few trusted reviewers (Google, Mozilla), but for new deps that don't fall into the "reviewed by humans" set and that I don't want to rewrite, I have a bunch of Claude reviewers go at them before making the dependency available to my project.

I've had mixed results. I find that agents can be great for:

1. Producing new tests to increase coverage. Migrating you to property testing. Setting up fuzzing. Setting up more static analysis tooling. All of that would normally take "time" but now it's a background task.

2. They can find some vulnerabilities. They are "okay" at this, but if you are willing to burn tokens then it's fine.

3. They are absolutely wrong sometimes about something being safe. I have had Claude very explicitly state that a security boundary existed when it didn't. That is, it appeared to exist in the same way that a chroot appears to confine, and it was intended to be a security boundary, but it was not a sufficient boundary whatsoever. Multiple models not only identified the boundary and stated it exists but referred to it as "extremely safe" or other such things. This has happened to me a number of times and it required a lot of nudging for it to see the problems.

4. They often seem to do better with "local" bugs. Often something that has the very obvious pattern of an unsafe thing. Sort of like "that's a pointer deref" or "that's an array access" or "that's `unsafe {}`" etc. They do far, far worse the less "local" a vulnerability is. Product features that interact in unsafe ways when combined - that's something I have yet to see an AI pick up on. This is unsurprising: if we trivialize agents as "pattern matchers", then spotting an unsafe pattern and validating its known properties is not so surprising, but "your product has multiple completely unrelated features, bugs, and deployment properties, which all combine into a vulnerability" is not something they'll notice easily.

It's important to remain skeptical of safety claims by models. Finding vulns is huge, but you need to be able to spot the mistakes.


[work at Mozilla]

I agree that LLMs are sometimes wrong, which is why this new method here is so valuable - it provides us with easily verifiable testcases rather than just some kind of analysis that could be right or wrong. Purely triaging through vulnerability reports that are static (i.e. no actual PoC) is very time consuming and false-positive prone (same issue with pure static analysis).

I can't really confirm the part about "local" bugs anymore, though, but that might also be a model thing. When I ran experiments a while back, this was certainly true, especially for the "one shot" approaches where you basically prompt once with source code and want some analysis back. But this changed with agentic SDKs, where more context can be pulled together automatically.


My point is that "verifiable testcases" work great for proving "this is vulnerable", but LLMs are still risky if you believe "this is safe", which you can't easily prove. You need to be very skeptical when they decide that something isn't vulnerable.

I completely agree that LLMs are great when instructed to provide provable, repeatable exploits. I have done this multiple times and uncovered some neat bugs.

> I can't really confirm the part about "local" bugs anymore though, but that might also be a model thing.

I don't think it's a model thing, it's just a sort of basic limitation of the technology. We shouldn't expect LLMs to perform novel tasks so we shouldn't expect LLMs to find novel vulnerabilities.

Agents help, human in the loop is critical for "injecting novelty" as I put it. The LLM becomes great at producing POCs to test out.


Please, implement "name window" natively in Firefox.

I have to use Chrome because of the lack of it.



Sort of. It won't be synced between machines, for example, as Chrome's implementation is. If Firefox crashes, most of the time it is lost. It is also not as clean as Chrome's native implementation. I have tried it.

This has been requested since 2022: https://connect.mozilla.org/t5/ideas/user-defined-name-for-e...


I've seen fairly poor results from people asking AI agents to fill in coverage holes. Too many tests that either don't make sense, or add coverage without meaningfully testing anything.

If you're already at a very high coverage, the remaining bits are presumably just inherently difficult.


I suppose it's mixed results, but a coverage report should give you "these exact lines are uncovered", and then it becomes pretty straightforward to see "ah yeah, that error condition isn't tested; the behavior should be X; go write that test".

That's what people tried right? It'd be great if the AI never failed at tasks, but they clearly do sometimes.

Security has had pattern matching in traditional static analysis for a while. It wasn't great.

I've personally used two AI-first static analysis security tools and found great results, including interesting business logic issues, across my employer's SaaS tech stack. We integrated one of the tools. I look forward to getting employer approval to say which, but that hasn't happened yet, sadly.


This description is also pretty accurate for a lot of real-world SWEs, too. Local bugs are just easier to spot. Imperfect security boundaries often seem sufficient at first glance.

But you're not a member of Anthropic's Red Team, with access to a specialist version of Claude.

I don't think that matters at all.

I think that Anthropic's own version of Claude will give them different results than the ones you get.

"Find zero-day exploits in this popular software." I haven't tried it, but I suspect that the guardrails will make a difference.


I don't think so. I've never had Claude reject the idea of finding a vulnerability (unlike ChatGPT). The issue is that it's limited by its training set. It'll be trained on things like UAF, it won't be trained on things like "the way your secrets are injected + the way you make HTTP requests + the way you deploy means that an SSRF can expose your private key" or whatever, and that's a technology limitation.

When it comes to novel work, LLMs become "fast typists" for me and little more. They accelerate testing phases, but that's it. The bar for novelty isn't very high either - "make this specific system scale in a way that others won't" isn't a thing an LLM can ever do on its own, though it can be an aid.

LLMs also are quite bad for security. They can find simple bugs, but they don't find the really interesting ones that leverage "gap between mental model and implementation" or "combination of features and bugs" etc, which is where most of the interesting security work is imo.


I think your analysis is a bit outdated these days, or you may be holding it wrong.

I am doing novel work with Codex, but it does need some prompting, i.e. exploring possibilities from the current codebase, adding papers to the prompt, etc.

For security, I generally start a new thread before committing, to review from a security POV.


You can do novel work with an LLM. You can. The LLM can't. It can be an aid - exploring papers, gathering information, helping to validate, etc. It can't do the actual novel part, fundamentally it is limited to what it is trained on.

If you are relying on the LLM and context, then unless your context is a secret your competitor is only ever one prompt behind you. If you're willing to pursue true novelty, you need a human and you can leap beyond your competition.


Of course you need a human, but you do not need nearly as many humans as there are currently in the labor force.

Maybe, but I'm not really convinced. LLMs make some aspects of the job faster, mainly I don't have to type anymore. But... that was always a relatively small portion of the job. Design, understanding constraints, maintaining and operating code, deciding what to do, what not to do, when to do it, gaining consensus across product, eng, support, and customers, etc. I do all of those things as an engineer. Coding faster is really awesome, it's so nice, and I can whip up POCs for the frontend etc now, and that's accelerating development... but that's it.

The reality is that a huge portion of my time is spent doing similar work and what LLMs largely do is pick up the smaller tasks or features that I may not have prioritized otherwise. Revolutionary in one sense, completely banal and a really minor part of my job in many others.


I think the core issue (evidenced by the constant stream of debates on HN) is that everyone's experience with LLMs is different. I think we can all agree that some experiences are like yours, while others are vastly different. Sometimes I hear "you just don't know how to use them etc…" as if there is some magic setup that makes them do shit, but the reality is that our actual jobs are drastically different even though we all technically have the same titles. I have been a contractor for a decade now and have been on projects that require real "engineers" doing real hardcore shit. I have also been on projects where tens of people are doing work I could train my 12-year-old daughter to be proficient at in a month. I would gauge that the percentage of the former is much smaller than the latter.

I don't think this is an issue of experience here. I don't know that anyone has claimed that LLMs can create truly novel solutions to complex problems given that the technology is token prediction.



This is basically my take as well!

TBH I don't think it's worth the context space to do this. I'm skeptical that this would have any meaningful benefits vs just investing in targeted docs, skills, etc.

I already keep a "benchmarks.md" file to track commits and benchmark results, plus what did/did not work. I think that's far more concise and helpful than the massive context that was used to get there. And it's useful for a human to read, which I think is good. I prefer things remain maximally beneficial to both humans and AI - disconnects seem to be problematic.


Might not be worth it now, but it might be in the future. Not just for future LLMs, but for future AI architectures.

I don't think the current transformer architecture is the final stop in the architectural breakthroughs we need for "AGI" that mimics the human thought process. We've gone through RNNs, LSTMs, Mamba, and transformers, with exponentially increasing amounts of data over the years. If we want to use similar "copy human sequences" approaches all the way to AGI, we need to continuously record human thoughts, so to speak (and yes, that makes me really queasy).

So persisting the session, which is already available in a convenient form for AI, is also about capturing the human reasoning process during the session, and the sometimes inherent heuristics therein. I agree that it's not really useful for humans to read.


I just don't really see the point in hedging like that tbh. I think you could justify almost anything on "it could be useful", but why pay the cost now? Eh.

Optimizing and over-engineering too soon has gone out the window.

I couldn't feel more strongly in the other direction. The fewer programs running on my computer, the better. By far my preference is that "random dev code" gets placed into the strongest possible sandbox, and that's the browser.

It is not incomplete to say that something does not require explanation, nor is it saying it's "magic". It is a cost that your model might incur, that's it.

https://arxiv.org/abs/2503.15776

In this paper a plurality of physicists stated that they felt that the initial conditions of the universe are brute facts that warrant no further explanation. This is not "our model doesn't yet account for it", it's "there is no explanation to be given".


A model is incomplete if it doesn't explain something.

That doesn't make a model wrong. All models we have are partial explanations.

But that doesn't make it rational to claim that an incomplete model is complete. Or to treat unexplained specifics as inherently "just so", without cause or reason (i.e. magic), and we must just accept them as unexplainable instead of pursuing them with further inquiry.


> A model is incomplete if it doesn't explain something.

I've just explained that this is not strictly true. I don't know what else to say. Brute contingencies, by definition, do not require explanation. I then gave you a paper where scientists largely believe in brute contingencies.

I think if you want to know more you can look into this. Just look for topics about brute facts and brute contingencies.

If you want to deny that brute contingencies are possible, by all means. That's a totally valid view. Just understand that it's probably not the majority view among scientists and that you aren't necessarily "right" (just as those who hold to brute contingencies aren't necessarily "right").


Constraint without an actual constraint? I am not denying it, it denies itself. It isn’t coherent.

Things are what they are because of constraints. Which is more general, and assumes less, than appeals to causes.

Constraints need not be prior. They can be simultaneous, i.e. co-constraints. And they can be internal to the whole, i.e. all of reality can be a co-constrained structure without external constraint.

An ultimate law of conservation is a strong candidate for a self-constrained reality. All versions of forms existing that neither locally create nor destroy. And since all possible forms within that exist, no choices are made universally, and there are no conserving forms it excludes. A coherent, infinite, unique, zero-information structure. (Uniqueness is inherent to a zero-information structure. Non-uniqueness necessitates choice.)

But claiming that some things just are, with no structural necessity, is an appeal to magic. A specific, with no actual constraint matching the specificity isn’t coherent.

You don’t get something for nothing. There is no outside of reality to provide that.

Any “outside” just means that total reality was not being included in the analysis.

I am not saying we can practically figure everything out. Or that there may not be questions, that given the limited resources/laws of our universe may not be answerable from our position, even theoretically. There may be questions we can’t answer. But nothing specific “appears” with some magical independence from the rest of reality.

That is the non-tautology of "it just is".

It would also make reality as a whole irrational. Not even a structure that obeys a conservation law. Because it would have specifics that had no reason.


Look, I really don't care to explain this. You can reject brute contingencies, as I said. If you want to do so, great. Just don't pretend that it's indefensible, your view doesn't even align with the majority of scientists.

None of your conclusions actually follow from this, which you would be welcome to explore on your own. You can learn why conservation can be held as true while also allowing for brute contingencies. As the leading cosmologists I've cited do.

No, it's not magic. You just don't know what you're talking about. Only you can fix that.


Thanks for your patience. I wrote way too much. And have re-read what you wrote and the article.

> It is not incomplete to say that something does not require explanation

> Values of physical constants of nature

> the most popular choice was that the constants are considered brute facts and thus require no exotic explanation

So yes, I deny the coherence of the concept of "brute facts".

If something is determined, something determined it. Some mechanism, constraint, context, structure, ... Perhaps we don't have the right word or connotations, but something.

A specific from a choice is a specific relation. That relation exists, as exemplified by our encountering the specific. Our experience of coming across the specific is not extricable from that specific's consistent connection to the rest of reality. That consistency has some basis, or there would be inconsistency.

A "maintaining" mechanism for an arbitrary consistency doesn't work, because that just pushes the choice of the specific that is maintained into the maintainer, which makes it more than a maintainer.

I can believe in things we will never be able to explain, as a result of observability limitations imposed on us by local physics. Eternal ignorance for any reason is always a practical possibility.

I can believe in undetermined things, which appear with each possibility, where we only experience one, because in the product of possibilities each plays out separately.

That would be the closest I could come to a "brute fact". Because it is in fact completely determined. The specific was not uniquely chosen, because the specific is not unique. Information is conserved, no explanation of each specific is needed. Even though each specific will behave as unique, across each possibility respectively, because differing specifics interact with a disjoint relation. The disjoint relation is the operating condition creating a localization of choice.

People invent ways to explain away persistent ignorance instead of accepting it, like a fractal attractor, over and over. The psychological need to resolve the dissonance, when encountering challenges to investigation that are potentially insurmountable. And then some "way" of sweeping away the lack of explanation gets translated into a proposed lack of reasons, and given a name and connotations. But never an explanation or reason for itself. It is always faith based. The existence or principle of brute facts must remain a meta-brute fact itself. All untestable.

Scientists can "believe" that is a valid viewpoint. But inherently cannot ever demonstrate any evidence for it.

The same reasoning, with different connotations and contexts, is rejected over and over by scientists. Mystical or religious connotations doom those different "versions". But stated in a sciency way, the same situation becomes palatable to some or many. But it doesn't become more coherent by virtue of being the "physics" version of "explanation" by acceptance of non-explanation.


> So yes, I deny the coherence of the concept of "brute facts".

Cool, that is fine. I deny lots of things as well. It's a position you can hold.

> If something is determined, something determined it.

That's fine but you'll likely find yourself in an infinite regress. That's a cost you'll have to take on under your theory.

> People invent ways to explain away our ignorance of the reasons behind things, instead of accepting the reality of ignorance, almost like an attractor fractal pattern, over and over.

That's not what's happening here. These concepts are pretty rigorously discussed and debated, it's certainly not a "cop out" - it's a metaphysical cost to your world view that you have to justify.

> Scientists can "believe" that is a valid viewpoint. But inherently cannot ever demonstrate any evidence for it.

You've already said that you don't believe all things can be proven via evidence, so that's fine.

But it's incorrect to say that there is no evidence for the position. There are many arguments to support the view of brute facts or brute contingencies. One example is that not accepting them seems to lead to infinite regress, which many people have reason to reject as well. These are well-evidenced positions; that is why so many scientists believe in them.

This has nothing to do with religion or mysticism. There is nothing about this that requires "magic". Many of our most advanced cosmological models support this view. You are just not aware of this, and so it sounds like magic, but it isn't. If you think it is, then I would just suggest that you learn more about it; there are many scientists and philosophers writing on the topic, and I'm sure quite a few YouTube videos as well.


[DELETED]

Edit: Sorry didn't see you had already replied.

Zero information constraints: Specifics only as fully determined, full coverage of undetermined specifics, conservation of information. These axioms, unlike most, don't just impose a lack of external information as a desirable property; they harness it as a tautological universal constraint. Unlike most axioms, which are imposed information themselves.


> Infinite regress is avoided by co-constraints, such as consistency and conservation.

You have to explain these constraints if you don't want them to be brute.

edit: > Edit: Sorry didn't see you had already replied.

It's cool. I don't understand the distinction you're trying to draw here about "zero information constraints".

edit: > but harness them as a tautological universal constraint.

This just sounds like a brute contingent fact. It's almost the definition of a brute contingency, as far as I can tell.


The "constraint" that a complete description of reality doesn't require external information isn't a brute contingency; it is a tautology. One that can be leveraged as an axiom we get for free.

It has many forms. One is, nothing can be created (no external source), or destroyed (no external dump), so any local structures can be transformed, but must be conserved in some way. Transforms must be reversible. We now have the necessity for a law of conservation as a "for-free" requirement, as a result of no external information/interaction.

Local zero information constraints:

No specific exists, except those that are completely determined. Anything else would require external information. This is a law of fully determined intersection.

Anything not completely specified, must exist in all its disjoint alternatives. This is a law of fully exhausted union.

Think of the exhaustive superpositions (unions) over all possible conserving interactions (intersections) in quantum mechanics. A real "local physics" example of this principle.

Cancellation is caused by conservation. Duals that can be generated must be reducible. And it is the cancellation of duals that creates the non-trivial distributions that superposition and entanglement produce, out of an otherwise neutral exhaustion of possibilities. Instead of noise or uniformity, we get structure.

This all comes from "no external information or interaction".

It turns out, that tautology is far from a trivial constraint. I believe there will only be one structure that will meet that requirement. And its uniqueness will be another manifestation of no external information, no external choice. Uniqueness doesn't require choice.

In fact it is a very active constraint. Try to come up with a form in which everything is either determined, or exhaustively covered, and always locally conserved (i.e. all transforms are fully and exactly reversible). It will be a challenge! Exactly what we want. But you can fit a lot of our current physics in as consistent pieces. Like quantum mechanics. And historically, we have understood the universe better every time we have generalized or unified laws of conservation.

Superposition is simply conservation of information across disjoint conserving intersections. It doesn't collapse, because that would require external or created or "just is" information. Which besides being incoherent (in my opinion), would throw away the only "free axioms" we have as an explanation for why any structure exists at all. Conservation, closure, uniqueness.


I'm confused because you seem to be using the term tautology totally incorrectly. Your post is very confusing for this reason, because you're very clearly just appealing to a brute contingent fact, if not now multiple brute contingent facts.

edit: Okay, I think I am sort of getting what you're saying about tautologies but it's wrong. Either way, I don't think it matters much. You can just deny brute facts, I have no problem with that. I'm just saying you shouldn't assert that brute facts don't exist as if that's the standard position.


Any theory or model of reality, must take into account all of reality. It cannot depend on, or interact with, or export anything to, anything external. As that would not be a model of reality.

That is a tautology, no?


Yes, but nothing else that you've said follows from that. For example,

> One is, nothing can be created (no external source), or destroyed (no external dump), so any local structures can be transformed, but must be conserved in some way.

This is not a tautology, it is a metaphysical claim.


No, a model of reality cannot import something from anywhere else. Whatever is within reality, can only be determined by reality.

Nor can anything in reality, be exported outside of reality.

Reality is the one thing, the only thing, that cannot depend on anything undetermined or unchosen by itself.

The fact that reality must account for both itself, and any of its specifics, with no other domain to draw from, is a higher level of demand than for any other theory. That demand is a hard and unique constraint. A tautological constraint that is therefore usable as an axiom.


I mean, these are all just metaphysical claims. It also doesn't seem to address brute facts, which would be within reality, so it seems sort of pointless. It also doesn't seem to address infinite regresses.

Even if I grant your "axiom", which is just that "reality exclusively contains reality", nothing interesting follows from that for this conversation.


If there is only one such structure, if it is unique, then the question of its existence goes away. What would existence mean?

We would just know there was a unique self-consistent, all-consistent, form-covering structure. And that any form within that structure, with a sufficiently sophisticated self-sensing, self-interpretive ability, would experience its own existence.

Existence would then mean, part of the unique self-consistent, zero-information, independent of any externality, structure.

A perceived existence as a result of a unique tautological structure, not a result of any external composition.

And the phrase "I think therefore I am", would be tautological in both senses. As evidence. But also, as the actual meaning of existence. Given a self-aware form within a tautology, its perception of existence is the nature of existence.

Reality was always going to be something that forms structures, that are somehow inevitable, the only possibility, not something selected or manufactured by something else.


I mean, again, these are just claims and, once again, another brute fact.

> Reality was always going to be something that includes structure, that is somehow inevitable, the only possibility, not something selected or manufactured by something else.

This is a brute fact. I mean, literally it just is.


It isn't a brute fact, because there is no alternative.

That is the definition of fully determined. X can uniquely be Y. And it can't be anything other than Y.

The extreme opposite of a brute fact.

Can we accept that any proposed model of reality potentially must, at a minimum, be self-determining without resort to any "other"?

The unique constraint of strict self-containment and determination is a tautological challenge, but therefore also a valid axiom.


No, that means it's not a brute contingent fact. It is still a brute fact. And it is a metaphysical claim that there is no alternative.

> And that it can't be the full reality if it is not self-determining, draws from anything else, any other domain, depends on any non-internal choice, any wisp of external determination?

No, brute contingent facts do not require external determination, so I reject this obviously. Or, I accept it and it's irrelevant because, again, brute contingent facts do not require external determination.


They are not determined internally.

So determined non-internally if you prefer. Non-internal to reality.

My point is that is a tautological impossibility. Reality by definition is all. That is what we want to explain (or at least, zoom in on a potential form of explanation).

Reality can't depend on anything making a choice that is not a part of itself.


I think you just aren't understanding what it means to be "brute". It does not mean "caused externally", it means "the end of the explanatory chain has been reached". If you want to say that the explanatory chain has no end, great, go for it, you now have a regress problem.

There is no "choice", there is determination, there is no explanation. If you're still framing things there, then you're just denying brute facts, but you're not about to prove that brute facts aren't possible in a HN thread and you're getting the concepts of necessity, brute, and contingent mixed up along the way.


There's disagreement on this. You seem to just be saying that brute facts or brute contingencies don't exist, but I suspect most scientists would disagree with that.
