ronald_raygun's comments


Speedrun hacking.

Very impressive! (The method used in that article's case, that is.)


The Devonian certainly wasn't a wood age.


I can already change my race. I just check a different box on government forms...


von Neumann could do calculus when he was 8. Some people are just born that way


Sentence two does not follow from sentence one.

https://www.theintrinsicperspective.com/p/why-we-stopped-mak...


Because they are essentially bombs


What do you think a serf was?


Someone whose day to day existence was mostly under their own control, so long as they stayed tied to the land and produced. Both slavery and serfdom are terrible, but they are not the same.

From the perspective of industry though, serfdom is a purely agricultural institution.


Not a slave.

They couldn't be sold. They had rights. In general, the classic school hierarchy of emperor, vassals, valvassori (not sure of the English term), valvassini (likewise), and serfs has been debunked.


Idk, lots of civilizations had slaves (I almost wrote ancient, but slavery didn't go away until less than 200 years ago, and human trafficking is still a big issue). But compare Rome with, say, the Spartans, who had an insanely large slave class but didn't produce the same types of things as Rome.


Wasn't there an OpenAI paper where they showed that matrix multiplication with the numerical imprecision of floats was enough to get general function learning?

You might not even need anything else…


One almost certainly does not need anything else ... unless what you want is big piles of investor cash, a Scrooge McDuck swimming pool quantity of investor cash - then you need to call whatever maths you do AI.


I always found the idea of infinitely self-improving AI to be suspect. Let's say we have a super-smart AI with intelligence 1, and it uses all of that to improve itself by 0.5. Then that new 1.5 uses itself to improve by 0.25, then 0.125, etc. Obviously it's always increasing, but it's not going to have the runaway effect people think.
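To put numbers on that, here's a purely illustrative sketch, assuming the gains really do halve every step as described: the total creeps toward 2 instead of running away.

    # Toy model of the halving schedule above (illustrative only).
    intelligence = 1.0
    gain = 0.5
    for step in range(1, 21):
        intelligence += gain
        gain /= 2
    print(f"after 20 steps: {intelligence:.6f}")  # -> 1.999999, never exceeds 2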


There are many dimensions along which improvements are happening: speed increases, size reduction, precision, context length, use of external computation (function calling), use of formal systems, hybrid setups, multi-modality, etc. If you look at the short history of what's been happening, we're not seeing improvements of less than 50% over those relatively short periods. We had GPT-1 just five and a half years ago, and we now have open-weight models that are orders of magnitude better. We know we're feeding models tons of redundant, low-quality input, and we know synthetic data can improve quality and lower training cost dramatically. We know we're nowhere near anything optimal, and we'll see order-of-magnitude size reductions in the coming years. Humans don't represent any kind of intelligence ceiling: it can be surpassed, and if humans alone can produce improvements well above 50%, the models will keep getting better.

Saying that models will get stuck at some bullshit local maximum is a similar fallacy to saying, when Wikipedia was created, that it would end up full of rubbish. The forces are set up in a way that makes improvements accumulate, humans don't represent any ceiling, and unlike humans, models have near-zero replication cost, especially time-wise.


Sure, but it seems that with a fixed amount of hardware or operations there is some sort of efficient frontier across all the axes (speed, generalization, capacity, whatever), so there should logically be a point of diminishing returns and a maximum performance.

Like there is only so much you can do with a single punch card.


Yes, there are physical limits, but they are so far beyond a human point of view that they don't matter much.

For example, the rate at which humans can communicate information (reading or writing) compared to what computers can do.

Same with information storage, retrieval, precision, computation rate, etc.


Sure, computers have lots of transistors, but brains have tens of billions of neurons and use only 12 W of power.


If it's smarter than us, it's pretty irrelevant whether it takes 12 W or 5 kW or even 1 TW to run. Sure, it may stop improving once it has far surpassed von Neumann level (at some point nobody knows), due to some physics or unknown information constraint, but I don't think that has any practical bearing on much.


If it improves at a faster rate than humanity, it pulls ahead even if the absolute speed is slow. That's what people are really more worried about, not instant omniscience.


Why should the improvement rate be (1/2)^i? It could be 1/i. I don't understand this argument.


The general assumption is that some form of Moore’s Law continues, meaning that even without major algorithmic improvements AIs will blow past human intelligence and continue improving at an exponential rate.


Yeah, but there are arguments that Moore's law won't continue, because at a certain point you can't really pack transistors closer together without quantum effects messing with them.


Yes, but the assumption is that Moore's law (or something like it) continues way past the point of machines surpassing human intelligence. And maybe the AIs find completely new ways to speed up computing after that.


Why would the rate of improvement follow your imagined formula?

If people are worried about a runaway effect, why would you think you can dismiss their concerns by constructing a very specific scaling function that will not result in a runaway effect?


The more general point is that people see growth and assume a never-ending exponential, when in reality it's probably something with a carrying capacity.


Your AI will end up twice as intelligent - eventually.

Were it to improve by 0.5, then 0.333, then 0.25, then 0.2, and so on, that would be an entirely different matter!
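For contrast, a quick illustrative sketch: gains of 1/2, 1/3, 1/4, ... (the schedule described here) never settle down, unlike the halving schedule above.

    # Harmonic gains: 1/2, 1/3, 1/4, ... The running total has no finite
    # limit -- it grows (slowly, roughly like log n) without bound.
    total = 1.0
    for n in range(2, 1_000_001):
        total += 1.0 / n
    print(f"after 10^6 steps: {total:.4f}")  # ~14.39 and still climbing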


Do you have some reason to think that will happen?


Yeah, you could imagine that a fixed amount of resources implies a maximum "computational intelligence". Right now we aren't close to that limit, but even as we get better algorithms, there are going to be fewer gains as we approach that finite ceiling.


> You could probably design some degenerate probability distribution that ml-estimation behaves really badly for, but those are not common in practice.

Anything multimodal...
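For instance, the textbook degenerate case: a two-component Gaussian mixture has an unbounded likelihood when one component collapses onto a single data point, so naive maximum-likelihood estimation can chase that spike rather than anything sensible. A small illustrative sketch (the toy data and mixture_loglik helper are made up for the example):

    import math

    data = [0.0, 1.0, 2.0, 3.0, 10.0]  # toy data, purely illustrative

    def mixture_loglik(sigma_small):
        # 50/50 mixture: a broad Gaussian at 2.0 (sd 2) plus a narrow
        # "spike" component centered exactly on the data point 10.0.
        ll = 0.0
        for x in data:
            broad = math.exp(-(x - 2.0) ** 2 / 8.0) / math.sqrt(8.0 * math.pi)
            spike = (math.exp(-(x - 10.0) ** 2 / (2.0 * sigma_small ** 2))
                     / (math.sqrt(2.0 * math.pi) * sigma_small))
            ll += math.log(0.5 * broad + 0.5 * spike)
        return ll

    for s in (1.0, 0.1, 0.01, 0.001):
        print(f"sigma = {s:6.3f}  log-likelihood = {mixture_loglik(s):8.2f}")
    # The log-likelihood grows without bound as sigma -> 0: the MLE is degenerate.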

