
It's only going to get better and better.


Insufficient enthusiasm. You have been fined one credit.


Please drink Mountain Dew verification can.


[smiling as hard as possible]


Too late. You have been transferred to a dormitory with more appropriate peers.


Eventually both the hype and its criticism will be automated with AI as well so that we can all go to the beach and relax.


Are you sure about this? It's well known that cannibalism in animals leads to degenerative disorders.


I think the direct action of a person taking their ideas and thoughts and going through them many times (making changes / updates / fixes) fits better than eating something. However, I do think you still need some form of validation data to ensure these are good changes.

However, I do get the spirit of the article: as more of the information generated online is produced by LLMs, the validity and usefulness of the output decreases.


What exactly is doing the validation?


Depends on what one was doing. It could be as simple as rewriting a sentence and asking someone if it looks better.


Not sure why you’re downvoted; I think a comparison with prions seems apt and interesting, and bad protein copies that can replicate are essentially an information process. Adversarial-example research in recent years, showing how you can sabotage a working dog/cat classifier with a one-pixel change, feels similar to how the tiniest parts of large systems can sometimes undermine the whole completely, albeit with low probability. And finally, since models will bootstrap models that bootstrap models, inevitably there are already subtle issues out there in the wild that may have an incubation period of many years before the downstream effects are completely clear.


The problem is systemic. People believe that the pursuit of monetary and financial profits by corporations will lead to the creation of benevolent artificial intelligence. I personally think this is essentially a religion because it is obvious that the pursuit of profits can not actually create anything benevolent, let alone intelligence.


How can you tell the difference between (r, r) and (r+ε, r+ε)? The argument in the article assumes that there is an ε such that it is impossible to tell the difference between r and r+ε. This means the entire unit square can be covered by squares of side length ε, and since the unit square is compact, only finitely many ε-squares are required to cover it. Variation within each ε-square is imperceptible, so there are only finitely many symbols that can be perceived as different.
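As a rough bound (assuming some fixed perceptual resolution ε), a grid of ⌈1/ε⌉² cells of side ε already covers the unit square:

  N(ε) ≤ ⌈1/ε⌉²,  e.g. ε = 0.01  →  N ≤ 100² = 10,000

so the number of perceptibly distinct symbols is at most of that order.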


Interesting argument. This assumes that cognition must also be happening on a compact manifold which seems like a reasonable assumption but the conclusion is somewhat counterintuitive because it means there are only finitely many personality types and ways of thinking.


Is that counterintuitive? There's a finite number of electrons in a human brain (which fits in a space of size 1m^3), and information takes time to propagate across any distance, and humans die before 150 years; this all gestures at there being not only finitely many personality types and ways of thinking, but finitely many human mind states.


But there may yet be more brain states than there are possible thoughts for those finitely many humans.


Oh, 100%, it seems extremely unlikely that multiple brain states in this sense don't code for the same thought! If nothing else, you could only avoid that with an absurdly precise definition of "brain"; any practical definition of "brain" is very likely to include matter that is physically irrelevant to at least some thoughts. I hedged with "1m^3" because the argument goes through even if you take the entire body to be the brain.


I guess that means reincarnation is real.


Assuming you're not trolling: nope, that would be an empirical question of how many brain states ever actually arise in the world during the time humans exist, and how much space and time it takes to maintain a given brain state. The universe is currently expected to stop being able to support human life at some point in the future, so the pigeonhole principle argument requires some physical parameters to be known before you can apply it; and even if the pigeonhole principle did apply, you still can't know by that argument whether any given brain-state is repeated (and if so, how many times). Things which you can deduce using only mathematics, and no physical data, are necessarily extremely weak statements about reality.


If you are a mental clone but not causally linked, then is that still reincarnation?



The universe is mathematical, all we can ever know about it is mathematical.


A very strong empirical statement about the nature of reality, and a strong statement about the nature of mathematics, neither of which everyone agrees with! (What will your reply be if we discover a halting oracle at the centre of the galaxy? Mathematics doesn't forbid that: the Church-Turing thesis is an empirical statement, and it's the empirical scientific law of induction that tells us we're extremely unlikely to find a halting oracle.)

But you've missed the point: even assuming that mathematics can perfectly model the universe, you cannot use weak physical hypotheses ("human brains are finite, there is an injection from human mind-states into human brains") and fully general mathematics (the pigeonhole principle) to derive such specific strong truths as "a particular human mind-state is repeated in a predictable way in the universe". You can at best derive general truths such as "at least one human mind-state is repeated at least once, somewhere", if the universe happens to have its physical parameters set in such a way that the pigeonhole principle holds.


Who said this had anything to do with cognition at all? I think Turing's argument goes through with humans replaced by automata and eyes by cameras. It's just that perception has finite resolution.


I made the obvious connection. It's an original thought from me and no one else.


That's how the math works out in general unless you think some kind of soul exists.


At this point I kind of do. The matter that goes into forming your body and brain is somewhat special, having accumulated the properties it has over billions of years of traveling through the universe.

Once you die it decomposes and goes on its way, and forms something new.

It's not a "soul" in the same sense, but it's still pretty trippy.


It all follows from compactness so if the cognitive manifold is not compact then the conclusion is not true.


Well, this is true only if you think of cognition and thought as a purely biological process totally dependent on the state of the brain.


I don't think that. I'm just explaining what follows logically if the cognitive manifold is compact.


What exactly do you mean by "cognitive manifold"?


It's a made up concept and it includes all possible ways people can think anything. If you want something concrete then you can consider sheaves of finitely presented algebras on the nervous system considered as a topological space with the obvious covering relations, continuous transformations, and algebra morphisms.


The electromagnetic radiation still turns to waste heat.


Not necessarily; it can also turn into motion, electricity, chemical energy, or other things. For example, if you're using the light bulbs to grow plants, some of it gets turned into sugar rather than heat.


I googled "LED light efficiency: what percentage of electricity becomes heat", and it looks like only about 20% of the input is wasted as heat?

So a 15W LED light would only need 3W of cooling? It feels even more ridiculous when we put the numbers like this... There is no excuse for LED lights not to have adequate cooling, or for them to fail because of overheating...

> This means that about 80 percent of the electrical energy is converted to light, while 20 percent is lost and converted into other forms of energy such as heat.


Light bulbs are built to break and be replaced; as far as I understand, we could have eternal light bulbs, but that’s not in the interest of light bulb makers.

That same logic might still apply to modern lighting solutions like LEDs. I have a more expensive LED bar which is built on top of aluminium to disperse the heat.

Have you heard about the Dubai Lamp? It’s basically a very efficient, long-life LED which they only sell in the UAE.


The Dubai Lamp is produced by Philips, and is about 3x more efficient than typical LED bulbs -- normally 3W instead of the usual 9W for a 60W equivalent. Their rated life is 25,000 hours, which is 2.5x longer than a typical 10,000-hour LED.

They are damn impressive, but not "forever".

https://www.mea.lighting.philips.com/consumer/dubai-lamp


In the case of computers it turns to waste heat and whatever is radiated away is not usable energy and is essentially waste heat. Computers are basically devices for converting useful energy into waste radiation.


Agreed, in computers all of the input power becomes waste heat.

One small exception may be microcontrollers that use PWM to emit a signal through an antenna, or to drive a motor. I think in that case some of the energy becomes radio waves (which may never be absorbed entirely, so they may not become heat) or motion in the motor.


'Useful' signal also comes out in the form of photons from the monitor.


Monitors have an external power source, I have never heard of a monitor being powered by signal coming from the CPU or GPU. That is, the control signal coming from the CPU/GPU doesn't become light, it merely helps decide what light a separate circuit might emit.


That's still radiation which turns to heat.


You are correct. All the electron motion in a CPU turns to waste heat or electromagnetic radiation which eventually becomes waste heat.


Fundamentally, yes. All motion does end up adding entropy to the environment. This is especially egregious for fossil fuels, which are literally combusted/oxidized to move chunks of metal on wheels carrying different kinds of cargo. Electric vehicles change the dynamics somewhat, but the motion of magnets still ends up contributing entropy to the environment, just at a somewhat lower rate than fossil fuel vehicles.


Heat and entropy are different concepts. Also, motion can reduce entropy locally, even if globally it always increases. For example, when a puddle freezes, the puddle's entropy goes way down. The entire Earth's entropy can also go down, as long as the entropy of the universe increases.


How are heat and entropy different?


Heat is energy that causes something to rise in temperature, and is measured in joules. Entropy is a concept that is harder to pin down, related to the internal state of a system, and is measured in joules per kelvin. In certain processes, the change in entropy is equal to the change in heat divided by the temperature of the system.


Sounds like it's the same thing.


They are closely related but not the same: ΔS = ΔQ/T.

A lot of it comes down to whether a process is reversible or not. I've taken multiple courses on thermodynamics and still only understand it a little.


It is not. Two bodies can have different entropy even if they are emitting the same amount of heat.


Where are the calculations?


dS = dQ / T


So T = dQ/dS. Seems like the same thing.


If they were the same thing, then T would always equal 1. Since there are many, many bodies with T != 1, it follows that dQ is rarely equal to dS.
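A concrete comparison (a reversible transfer of the same 100 J of heat to bodies at two different temperatures):

  body at 300 K:  ΔS = 100 J / 300 K ≈ 0.33 J/K
  body at 600 K:  ΔS = 100 J / 600 K ≈ 0.17 J/K

Same heat, different entropy change.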


What computable functions is it performing? Can you show me the code for these computable functions?


How would you verify semantic correctness of the optimizations?


I think I envisioned traces being extracted from a series of open source projects and their automated test suites.

Run the test suite, identify optimizations. One by one, make the optimization change to the implementation as suggested by the LLM.

Instrument the changed methods on the second test run and see if runtime performance has changed. Verify that the test still passes.
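A rough sketch of that loop (illustration only: it assumes a pytest-based project checked out in ./repo and times the whole suite, rather than instrumenting individual methods):

  import subprocess, time

  def run_tests():
      # run the project's test suite; report pass/fail and wall-clock time
      start = time.monotonic()
      ok = subprocess.run(["pytest", "-q"], cwd="repo").returncode == 0
      return ok, time.monotonic() - start

  ok_before, t_before = run_tests()
  # ... apply one LLM-suggested optimization to the implementation here ...
  ok_after, t_after = run_tests()
  print("still passing:", ok_after, " speedup:", t_before / t_after)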


I meant how do you make sure the optimization suggested by the AI is actually valid. If you're using AI to modify bytecode for faster execution then you have to make sure the optimized and unoptimized code are semantically equivalent. Neural networks can't do logic so how would you know the suggestions were not bogus?


You've asked the right question, and for those who think validation is as simple as "run it and see if it gets the right result": good start, but instruction ordering can be critical around multi-thread-aware data structures. Taking out a fence or an atomic operation might give a big performance gain. Trouble is, the structure may now go wrong 1% of the time.


A valid accompanying test would ensure this?

You’d be extracting optimization candidates by running the test suite.

You re-run the test suite after changes to ensure they still pass.


JIT optimizers operate at runtime; there are no test suites to verify before/after. It's happening live as the code is running, so if you use AI then you won't know whether the optimization is actually valid. This is why the article uses Z3 instead of neural networks: Z3 can validate semantic equivalence, neural networks can't.
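For example, a minimal sketch with Z3's Python bindings (the rewrite x*2 → x << 1 is just an illustration, not one of PyPy's actual rules):

  from z3 import BitVec, Solver, unsat

  x = BitVec('x', 64)        # a symbolic 64-bit integer
  before = x * 2             # original expression
  after = x << 1             # proposed rewrite

  s = Solver()
  s.add(before != after)     # ask Z3 for any counterexample
  assert s.check() == unsat  # unsat: none exists, so the rewrite is semantically equivalent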


Yes, but this Z3 analysis is not done at runtime. It's done offline, based on JIT traces. A neural network could, in principle, suggest optimizations in the same way, which an expert would then review for possible inclusion into the PyPy JIT.


You'd still have to write a proof for verifying semantic equivalence before implementing the optimization so I don't see what the neural network gains you here unless it is actually supplying the proof of correctness along with the optimization.


The idea is that the LLM would provide "intuition" to guide the optimizer to find better optimizations, but a formal proof would be necessary to ensure that those optimizations are actually valid.


I might be incorrect, but I don't believe that most compiler optimizations have formal proofs written out before implementation. Does PyPy do this?


PyPy doesn't do this in general. The same Z3 model that is used to find these missing optimizations is also used to verify some integer optimizations.

But the point is that as long as optimization rules are hand-written, a human has thought about them and convinced themselves (maybe incorrectly) that the rules are correct. If a machine generates them without a human in the loop, some other sort of correctness argument is needed. Hence the reasonable suggestion that they should be formally verified.


PyPy has formally verified the integer abstract domain using Z3, a quite important part of our jit optimizer (will write about that in the coming weeks).

We also run a fuzzer regularly to find optimization bugs, using Z3 as a correctness check:

https://pypy.org/posts/2022/12/jit-bug-finding-smt-fuzzing.h...

The peephole optimizations aren't themselves formally verified completely yet. We've verified the very simplest rules, and some of the newer complicated ones, but not systematically all of them. I plan to work on fully and automatically verifying all integer optimizations in the next year or so. But we'll see, I'll need to find students and/or money.


Ah, yes, I meant that the LLM could output suggestions, which a human would then think about, convince themselves of, and only then implement in PyPy.


Presumably the LLM would generate a lot of proposed rules for humans to wade through. Reviewing lots of proposed rewrites while catching all possible errors would be tedious and error-prone. We have computers to take care of this kind of work.


Perhaps not, but they’re based on heuristics and checks that are known, checked and understood by humans, and aren’t prone to hallucination like LLMs are. An LLM suggests something that looks plausible, but there’s no guarantee that its suggestions actually work as intended, hence the need for a proof.


I added a few somewhat similar optimizations to Racket. The problem is the corner cases.

For example (fixnums are small integers), is it valid to replace

  (if (fixnum? x)
    (fixnum? (abs x))
    true)
with just the constant

  true
?

Try running a few tests, common unit tests and even random tests. Did you spot the corner case?

It fails only when x is the most negative fixnum, which is also a very rare case in a real program. (IIRC, the random test suite tries to use more of these kinds of problematic values.)


Close!

Generate the Z3 too - as the need is to verify, not test. It can be a direct translation: for all inputs, is the optimization's output equivalent? (Bootstrapping a compiler prototype via LLMs is nice, though.)

One place LLMs get fun here is where the direct translation to Z3 times out, such as with bigger or more complicated programs, and so the LLM can provide intuition for pushing the solver ahead.
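For instance, a direct translation of the abs example above (a sketch with Z3's Python bindings, modelling the fixnum as a 64-bit two's-complement integer just for illustration):

  from z3 import BitVec, Solver, If

  x = BitVec('x', 64)
  absx = If(x >= 0, x, -x)   # machine-integer abs
  s = Solver()
  s.add(absx < 0)            # can abs ever fall outside the non-negative range?
  print(s.check())           # sat: the proposed rewrite is not valid in general
  print(s.model())           # the counterexample is the most negative value, where -x overflows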


Tests can't ensure the correctness of an algorithm, only that it gives the correct output on a specific input.


Depends on the comprehensiveness of the test.


Sure, for booleans you can just test all combinations of input arguments. In some cases you can do the same for all possible 32 bit float or int values that you have as input. But for 64 bit integers (let alone several of them) that's not feasible.


As long as we can agree that we are testing the application logic and not the compiler or hardware, then if (a > 4) {...} else {...} can be tested with just 3, 4, and 5; no need to test -430 or 5036.

This is known as boundary value testing: you partition all input into equivalence classes, then make sure your tests contain a sample from each class.


Making sure the test contains a sample from each class is the hard part. For example, in your `if` example above it may happen that the code computing `a` is such that `a >= 5` is impossible, so that equivalence class can never occur. As such you can't have a test for it; instead you'd have to prove that it can never happen, but this reduces to the halting problem and is not computable in general.

And even ignoring that problem, there may be an infinite number of equivalence classes once you introduce loops/recursion, as the loops can run a different number of times and thus lead to different executions.

Even just considering `if` statements, the number of equivalence classes can be exponential in the number of `if`s (for example, consider a series of `if`s where each checks a different bit of the input; ultimately you need every combination of bits to exercise every combination of `if`s, and that number is 2^(number of ifs)).


For any practical input space, no test is going to be comprehensive enough, especially for something with effectively infinite possible inputs like programs.


Is the scope a whole program or a specific algorithm?


Even most algorithms would allow too many inputs. Even a simple algorithm computing the addition of two 64-bit numbers allows 2^128 possible input combinations, which would take billions of years, at the very least, to check exhaustively.
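Roughly, assuming an optimistic 10^9 checks per second on a single machine:

  2^128 ≈ 3.4 × 10^38 input pairs
  3.4 × 10^38 / 10^9 per second ≈ 3.4 × 10^29 seconds ≈ 10^22 years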


Regehr et al. use Alive2, which uses Z3.


But.. But.. But.... This is HN. You must use AI / LLMs for everything! /s

