Hacker News | akoboldfrying's comments

After bogo-sort, it's the most badness-maximising "solution" I've ever come across. Why bother asking for the creator's consent to copy and run the original bytes, when you could instead ask for their consent to have a robot that no one understands, and that could potentially do anything, read a few paragraphs of text describing what those bytes do, imagine how it might work, and try to build something resembling that from scratch, using a trillion or so times more energy?

What about my latest algorithm, VibeSort?

    // VibeSort: sort by asking an LLM to compare each pair.
    // (LLM.query is a hypothetical synchronous API.)
    let arr = [51, 46, 72, 32, 14, 27, 88, 32];

    arr.sort((a, b) => {
      let response = LLM.query(`Which number is larger, number A: ${a} or number B: ${b}? Answer using "A" or "B" only; if they are equal, say "C".`);
      if (response.includes('C')) return 0;
      if (response.includes('B')) return -1; // B larger => a sorts before b
      if (response.includes('A')) return 1;  // A larger => a sorts after b
      return 0; // unparseable answer: treat as equal
    });

    console.table(arr);

The energy thing won't sail. A backhoe or front-loader uses far more energy than the equivalent human labor, but having higher energy solutions available is what technological civilization does.

Arguably Cowen's "Great Stagnation" was driven primarily by not embracing higher energy provision in the form of fission.


AFAICT at least 2 people in this thread don't seem to think that visibility -- a function of, among other things, weather and time of day -- influences driving safety. I find this amazing.

terryf's example was pointing out that, for practical reasons, existing laws don't capture every relevant variable. I (but not everyone, it seems) think that visibility obviously influences safety. The point I want to make is that in practice the "precision gap" can't be perfectly rectified by making legality a function of more factors than just speed. There will always be some additional factor that influences the probability of a crash by some small amount -- and some of the largest factors, like individual driving ability, would be objected to on other grounds.


> when we exchange generic information across networks we parse information all the time

The goal is to do this parsing exactly once, at the system boundary, and thereafter keep the already-parsed data in a box that has "This has already been parsed and we know it's correct" written on the outside, so that nothing internal needs to worry about that again. And the absolute best kind of box is a type, because it's pretty easy to enforce that the parser function is the only piece of code in the entire system that can create a value of that type, and as soon as you do this, that entire class of problems goes away.

This idea of using types whose instances can only be created by parser functions is known as Parse, Don't Validate, and while it's possible and useful to apply the general idea in a dynamically typed language, you only get the "We know at compile time that this problem cannot exist" guarantee if you use types.
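
A minimal sketch of the idea in TypeScript (the EmailAddress type, the regex, and the function names are all invented for illustration): the brand field means that, by convention, the only way to obtain a value of the type is through the single parser at the boundary, so internal code never has to re-check.

```typescript
// A "parsed" type: the brand field marks values that went through the parser.
type EmailAddress = { readonly value: string; readonly __brand: "EmailAddress" };

// The single parser at the system boundary: returns a typed value or null.
function parseEmailAddress(raw: string): EmailAddress | null {
  return /^[^@\s]+@[^@\s]+$/.test(raw)
    ? { value: raw, __brand: "EmailAddress" }
    : null;
}

// Internal code accepts only EmailAddress, never a raw string,
// so by construction it cannot receive unvalidated input.
function sendWelcome(to: EmailAddress): string {
  return `Welcome, ${to.value}`;
}
```

Everything past the boundary takes `EmailAddress`, and the compiler rejects any attempt to pass a bare string.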


> The goal is to do this parsing exactly once, at the system boundary

You are only parsing once at the system boundary, but under the dynamic model every receiver is its own system boundary. As the earlier comment pointed out, microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively. Yes, you are only parsing once in each service, but ultimately you are still parsing many times when you look at the entire program as a whole. "Parse, don't validate" doesn't really change anything.


> but under the dynamic model every receiver is its own system boundary

I'm not claiming that it can't be done that way, I'm claiming that it's better not to do it that way.

You could achieve security by hiring a separate guard to stand outside each room in your office building, but it's cheaper and just as secure to hire a single guard to stand outside the entrance to the building.

> microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively

I think microservices emerged for a different reason: to make more efficient use of hardware at scale. (A monolith that does everything is in every way easier to work with.) One downside of microservices is the much-increased system boundary size they imply -- this hole in the type system forces a lot more parsing and makes it harder to reason about the effects of local changes.


> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.

Scaling different areas of an application is one thing. Being able to use different technology choices for different areas is another, even at low scale. And being able to have teams own individual areas of an application via a reasonably hard boundary is a third.


> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.

Same thing, no? That is exactly what Kay was talking about. That was his vision: infinite nodes, all interconnected, sending messages to each other. That is why Smalltalk was designed the way it was. While the mainstream Smalltalk implementations got stuck in a single-image model, Kay and others did try working on projects to carry the vision forward. Erlang had some success with the same essential concept.

> I'm claiming that it's better not to do it that way.

Is it fundamentally better, or is it only better because the alternative was never fully realized? For something of modern relevance, take LLMs. In your model, you have to have the hardware to run the LLM on your local machine, which for a frontier model is quite the ask. Or you can write all kinds of crazy, convoluted code to pass the work off to another machine. In Kay's world, being able to access an LLM on another machine is a feature built right into the language. Code running on another machine is the same as code running on your own machine.

I'm reminded of what you said about "Parse, don't validate" types. As you alluded to, you can write all kinds of tests to essentially validate the same properties as the type system, but when the language gives you a type system you get all that for free, which you saw as a benefit. Yet now it seems you are suggesting it is actually better for the compiler to do very little, and that it is best to write your own code to deal with all the things you need.


The following paragraph appears twice:

> Now 2 case studies are not proof. I hear you! When two projects from the same methodology show the same gap, the next step is to test whether similar effects appear in the broader population. The studies below use mixed methods to reduce our single-sample bias.


Might be worth doing the kind of "manual ECC" you're describing for a small amount of high-importance data (e.g., the top few levels of a DB's B+ tree stored in memory), but I suspect the biggest win is just to use as little memory as possible, since the probability of being affected by memory corruption is roughly proportional to the amount you use.
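
As a sketch of that kind of manual checking, in TypeScript (the class and the rolling hash are invented for illustration; this only detects corruption on read, whereas real ECC schemes such as Hamming codes can also correct single-bit errors):

```typescript
// Simple rolling hash over a byte buffer (illustrative, not cryptographic).
function checksum(bytes: Uint8Array): number {
  let sum = 0;
  for (const b of bytes) sum = (sum * 31 + b) >>> 0;
  return sum;
}

// Wraps a small amount of high-importance data and verifies its
// checksum on every read, so silent bit flips are caught at access time.
class GuardedBuffer {
  private sum: number;
  constructor(private readonly data: Uint8Array) {
    this.sum = checksum(data);
  }
  read(): Uint8Array {
    if (checksum(this.data) !== this.sum) {
      throw new Error("memory corruption detected in critical data");
    }
    return this.data;
  }
}
```

The per-read cost is why this only makes sense for a small, hot, high-importance region rather than all of memory.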

The precautionary principle is always about blast radius times probability. Condensing the state reduces the odds that a bit flip lands in your critical memory, but increases the damage when it does. Those tend to scale proportionally, so if it's not a lateral move it's at least a serpentine one.

> but increases the damage when it does.

For this to be true, I think you would have to assume an "additive" model where each time corrupt memory is accessed it does some small amount of additional "damage". But for memory holding CPU instructions, I think it's more likely that the first time a corrupt byte is read, the program crashes.


It's clearly just a hallucination. Everyone knows there was never a movie called Heat, Val Kilmer did not play Chris Shiherlis in it, and he has always been pregnant.

Maybe this is what you meant, but the dangerous part is ensuring that the final claim being proved is correct -- the actual proof of that claim is, by design, something that can be quickly and mechanically verified by applying a series of simple rules to a set of accepted axioms. Even a human could (somewhat laboriously) do this verification.

I have no experience with this, but I'd expect the danger to arise from implicit assumptions that are obvious to a mathematician in the relevant subfield but not necessarily to an LLM. E.g., whether we are dealing with real-valued or integer-valued variables can make a big difference in many problems, but might only be actually explicitly stated once in the very first chapter (or even implied by the book's title).

There are also many types of "overloaded" mathematical notation -- usually a superscripted number means "raised to the power", but if that number is -1, it might instead mean "the inverse of this function".
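
For example, the same superscript position carries two standard readings depending on its value:

```latex
\sin^{2}x = (\sin x)^{2}, \qquad \sin^{-1}x = \arcsin x \neq \frac{1}{\sin x}
```

A reader trained in the convention disambiguates instantly; a model has to infer which reading is in force from context.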


This is correct, with the proviso that the meaning of the claim/statement depends on the definitions and checking these can be a bit tedious. But still feasible, and better by orders of magnitude than "checking" the proofs. Abuses of notation and just plain sloppiness are a big issue though if you wish to auto-translate an informal development into correct definitions and statements for formalization.

Yes, but it's interesting that you can teach it to do arithmetic, don't you think? Most things can't be taught to do arithmetic, making this "transformer" thing slightly magical. And so then it seems interesting to investigate exactly how much magic is needed to achieve this.

In theory, there is an infinite number of systems with simple emergent rules that can eventually be taught arithmetic.

> Most things can't be taught to do arithmetic, making this "transformer" thing slightly magical.

Yep, for people who don't know the fundamentals (i.e. maths). To people who don't know the universal approximation theorem this may seem like "magic", but it's just as much magic as making a dark room bright by flipping a light switch.


The tokenisation needs to be general -- it needs to be able to encode any possible input. It should also be at least moderately efficient across the distribution of inputs that it will tend to see. Existing tokenisation schemes explicitly target this.
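
As a sketch of what "general" means here, in TypeScript (the vocabulary is made up for illustration): a toy tokeniser with a tiny learned vocabulary that falls back to raw UTF-8 bytes, so every possible input string has an encoding -- the same fallback idea byte-level BPE schemes use.

```typescript
// Made-up vocabulary: ids 0-255 are reserved for raw bytes,
// ids 256+ are learned multi-character entries.
const vocab = new Map<string, number>([["the", 256], ["ing", 257]]);

function tokenise(text: string): number[] {
  const out: number[] = [];
  let i = 0;
  while (i < text.length) {
    // Greedily match the longest known vocabulary entry at position i...
    const hit = [...vocab.keys()]
      .filter(w => text.startsWith(w, i))
      .sort((a, b) => b.length - a.length)[0];
    if (hit) {
      out.push(vocab.get(hit)!);
      i += hit.length;
    } else {
      // ...otherwise fall back to raw UTF-8 bytes, guaranteeing
      // that any input whatsoever is representable.
      for (const b of new TextEncoder().encode(text[i])) out.push(b);
      i += 1;
    }
  }
  return out;
}
```

The byte fallback gives generality; the learned entries give efficiency on the inputs the tokeniser tends to see.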

Being let go from a job sucks.

So does being dumped from a relationship. You might not be able to find another relationship in 6+ months. But I don't think people would seriously propose that people should therefore not be able to leave a relationship.


You do realize that having a girlfriend actually doesn't pay for your rent, food and medication?


Everybody realises this. That said, a huge number of people around the world are financially dependent on their partner. (One of the main goals of feminism has been to normalise relationships where this is not the case.)

The fundamental question is: Should employers be obligated to employ the people they hire forever?



Thanks!

A lot of people would focus on the many obvious differences, and use those to deflect attention from the important similarity I was highlighting: That they are both things that ought to exist only so long as both parties want them to.


I know they might sound superficially similar, if you're 12 years old, so let me break down a little why your analogy is a bad one.

Relationships are between two people, not between a human and a public entity like a company.

My partner doesn't have extremely disproportionate leverage over me. If I leave tomorrow, the company will chug along just fine; if I'm laid off tomorrow, I might lose my home, relationship, and well-being, never recover from the layoff (meaning I won't contribute back to the economy, will go on welfare, and might start a revolution if there are millions of me), or ultimately die.

I know it's a difficult concept to grasp for 20-year-old tech bros who sucked VC money with their mother's milk, but money does dry out. You might think you're invincible right now and that it's you and the companies against them (your lazy, stupid coworkers), but you're the same cattle to them as the rest of us, as you can see from the topic of this thread.

Back to the analogy: the main goal of a company is to produce value for society, not to make money for VCs. It's a difficult pill to swallow, I know; tech bros have been taught for decades that job security, health insurance, taxes, value creation -- all of those are commie concepts aimed at undermining our God-given right to make money, which we -- temporarily embarrassed millionaires -- need to fight with every ounce of our existence by working 60 hours a week.

Labor IS the main input that turns capital + IP into products and services. Without those people, Block would be nowhere near its current position. When business is strong, though, VCs and the C-level get obscene bonuses while employee compensation stays flat. Go figure.

I could go on and on, talk about tax reliefs [0], or how countries and companies exist for people and not the other way around, but this should be enough to understand that THIS IS NOTHING LIKE A RELATIONSHIP WITH A HUMAN BEING.

[0] https://www.irs.gov/credits-deductions/businesses?utm_source...


> My partner doesn't have extremely disproportionate leverage over me

This is the most... hinged... thing you said. I totally agree that bargaining power is usually far more skewed in the employment case. I think unequal bargaining power is the root of much unfairness in the world, and that the only way to really counteract this is through organising, i.e., unionisation. It's far from a panacea, mind you.

With all that said: Fundamentally, I don't think that an employer should be obligated to employ someone indefinitely. If you do: Think about whether regulations enforcing this would make an employer more likely, or less likely, to hire someone they are on the fence about.

