>there is a relationship between information theory and thermodynamics, and nobody, including no superintelligence, will be able to break it.
Whether or not the LessWrong people realize it, the only way known to work for "breaking" through information-theoretic and thermodynamic constraints is the one humans have been using since the '70s: algorithm improvements leading to better hardware, leading to new algorithm improvements, leading to better hardware.
Along the way we got ASML machines, tensor hardware, and new pieces keep arriving regularly. Even a very small percentage improvement in current algorithms, given the scale of today's AI infrastructure, leads to real gains, sometimes radical ones.
https://research.google/blog/distilling-step-by-step-outperf...
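As a rough, hand-wavy illustration (the fleet size, cost, and 1% gain below are invented numbers, purely to show how scale magnifies a small improvement):

    # All figures are made-up assumptions, only to illustrate scale effects.
    fleet_gpu_hours_per_day = 10_000_000   # assumed size of a large AI fleet
    cost_per_gpu_hour = 2.0                # assumed USD per GPU-hour
    algorithmic_gain = 0.01                # a "very low" 1% improvement

    daily_saving = fleet_gpu_hours_per_day * cost_per_gpu_hour * algorithmic_gain
    print(f"~${daily_saving:,.0f}/day, ~${daily_saving * 365:,.0f}/year freed up")
    # With these assumed numbers, 1% is ~$200,000/day (~$73M/year),
    # i.e. enough to fund the next round of algorithm and hardware work.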
LLMs got thrown into the mix recently, and now we have something like: better hardware -> better LLMs -> better algorithms -> better hardware.
i.e. many recent improvements (I'd insist on "radical" here, but even very, very small improvements have a magnified effect, so they are really important too) could potentially already be the outcome of an LLM, or of collaboration between an LLM and humans. There have been attempts to register patents for "AI discoveries" (denied); maybe those were examples.
So ASI has not yet been proved to be constrained by information theory or thermodynamics, having not yet hit any "wall". An ASI could potentially exist well within those constraints.
The training datasets contain and reflect human imperfections, including implicit mistakes (which could not be fully removed during data refinement because of ambiguity) and failures of practical logic. Most probably, many hallucinations are simply evidence that LLM intelligence, or problem-solving capability (a less controversial way to put it), is actually quite aligned with the way humans think.
I think this is key for new knowledge generation. It could be the "secret sauce" behind the non-deterministic output we see when we feed the same prompt to an LLM and get different answers, and I think it is what "allows" the LLM its "generative" vein: an inherent, implicit mathematical structure hidden in the human substrate present in the datasets, one that creates a way out of fixed paths; not merely recombinatorial behavior, but truly generative behavior.
Hence, the generative AIs we already have could be the very evidence that LLM technology is capable of generating new knowledge.
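On the mechanical side, the surface-level non-determinism comes from the sampling step; a minimal sketch, with an invented vocabulary and logits (not from any real model):

    import math
    import random

    # Invented next-token logits for one fixed prompt.
    vocab = ["recombine", "extrapolate", "copy", "invent"]
    logits = [2.0, 1.4, 0.6, 1.1]
    temperature = 0.8

    def sample_next_token(logits, temperature):
        # Softmax with temperature: higher temperature flattens the
        # distribution, so repeated runs of the same prompt diverge more.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(vocab, weights=weights)[0]

    # The same "prompt" (same logits) sampled three times can give
    # three different continuations.
    print([sample_next_token(logits, temperature) for _ in range(3)])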
>For an AI, a thought experiment and a real experiment are indistinguishable.
>As a result, any world model that is learnt through the analysis of text is going to be a very poor approximation of reality.
As of recently, and regarding the newer models (the public GPT-3.5 of the last few months, Claude, maybe Gemini), this isn't true anymore; or at least, prompting improbable or unreal situations leads the LLMs to answer with warnings that they are answering within the context of the nonsense you prompted them with, but that the whole proposed thing is not possible in the real world, or is very improbable, or even (I have gotten this last one) not possible within the constraints of the known laws of physics.
I was asking Claude/GPT how someone could jump up and fall down at the same time, and got replies along the lines of "you can't simultaneously go up and go down", or, with the prompt modified, "you can't go up after having jumped off a building, gravity will take you downwards", etc.
The prompting interfaces face the entire human population, and a sizable number of users are currently feeding the models valuable, actionable experiments plus their outcomes, re-feeding the prompts with intermediate results (failures) until they get something: a valid outcome that will be tagged as successful in the training data. That is RLHF granted from the real world, possibly accounting for dozens or even hundreds of instances and variations of the same problem, across dozens of countries, cultures, age ranges, etc.
That live-captured data is almost certainly among the most valuable datasets for training (re-training) current and future models, granting the AIs data that the author implies they cannot get by themselves and that is not accessible to them.
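A sketch, with an entirely hypothetical log format, of the general shape that turning such captured interactions into labeled training data could take (this is not any vendor's actual pipeline):

    # Hypothetical log of one user re-feeding a prompt until something works.
    # Field names and content are invented for illustration.
    session = [
        {"prompt": "my parser breaks on nested quotes",   "answer": "try a regex",       "worked": False},
        {"prompt": "the regex failed, here is the trace",  "answer": "use a PEG grammar", "worked": False},
        {"prompt": "PEG almost works, here is the error",  "answer": "corrected grammar", "worked": True},
    ]

    # Real-world outcomes become labels: the successful turn is a positive
    # example, the failed ones negative; in effect, RLHF-style feedback for free.
    labeled = [
        {"prompt": turn["prompt"], "completion": turn["answer"], "label": int(turn["worked"])}
        for turn in session
    ]
    print(labeled)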
> Conflicts (such as an attempt to kill humanity) have no zero-risk moves
I think the author is onto something here and is probably correct. Hence, most AGI doomsday theories rely on AI deception and asymmetrically deployed actions.
Some recent thoughts have gone in the direction of really creative strategies, given that we now see clearly that AIs can be extremely good at creating completely new stuff, including never-before-seen strategies in games; but humans keep thinking those strategies would probably include a good degree of (even implicit) deception.