rsrsrs86's comments

Wasm.


And compute.


Yes, but the most expensive CPU doesn't cost $40K.


If ASI means artificial specific intelligence, I beg your pardon; intelligence can hardly be specific. Intelligence can reflect upon itself.


Sorry, what? I didn't understand that sentence.


They have never been thinking. This is important.

Predicting the next word is not intelligence.


Well, consider the case of a linear regression: fitting it on its own output adds no new information to the weights. Try it in any notebook.
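A minimal sketch of the point above, using plain NumPy least squares (the variable names and toy data are illustrative): refitting a linear regression on its own predictions reproduces the original weights exactly, because the predictions are already an exact linear function of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# First fit: ordinary least squares on the data.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ w  # the model's own output

# Second fit: regress on the predictions instead of the data.
# Since y_hat is exactly linear in X, the refit recovers the same weights.
w_refit, *_ = np.linalg.lstsq(X, y_hat, rcond=None)

print(np.allclose(w, w_refit))  # True
```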


Please spread the word that predicting the next one is not intelligence. It's Markov…
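For what the "it's Markov" quip refers to, here is a toy bigram predictor (the corpus and function names are made up for illustration): it predicts the next word purely from counts of what followed the current word, with no notion of meaning.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" twice; "mat"/"fish" once each)
```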


It depends on how you predict. To predict better and better you need intelligence.


Knowledge is distinct from intelligence.

You can predict better and better with simply more knowledge i.e. data.


That only gets you as far as the frontier of knowledge. To go beyond you need intelligence.


I feel abdominal pain when I see the words “thinking” or “reasoning” related to LLMs.

I feel back pain when I read the crazy, unsound speculation about how the brain is supposed to be like a computer. Serious mistake.


Unless you can show an example of human reasoning solving a problem outside the Turing-computable set, there is no rational basis for assuming the brain is anything but a computer. The very notion that we exceed Turing computability would be revolutionary, with utterly mind-bending consequences for a number of fields.


There is no rational basis for assuming the brain is a "computer" in the same way an Intel x86 chip is a "computer", or that the universe is a "computer". Using language this way without defining what a computer even is amounts to folly.


There is no rational basis for assuming it is not, as we have not a single example of a physically computable function outside the Turing-computable set.

The term "computer" has its origin outside of "electronic computer". It used to be a role, a job title. There has been no time in human history when the only computers were electronic computers.

But, sure, let's be more precise: any Turing-complete system is equivalent to any Turing-complete computer and can reasonably be called a computer, but let's also limit the term to systems that cannot compute functions outside the Turing-computable set. We know of no system that has been shown to compute such functions, including brains.

The rational basis for assuming the brain is a computer is that we have not a single shred of evidence that exceeding Turing computability is possible, nor any theory for how to even express a function that is computable for humans but not Turing computable.

If you can find one single such example, there'd be a rational basis for saying the brain isn't a computer. As it stands now, assuming it isn't is nothing more than blind faith.


> we have not a single shred of evidence that exceeding Turing computability is possible

If your basis is that anything with Turing computability or less is a computer, then everything is a computer.


In the same way that everything is physics.


The brain is a subset of computers, but LLMs are not a subset of brains.


The reason a lot of people are unhappy about this notion is that it doesn't really matter: any Turing-complete system can emulate any other Turing-complete system, and an LLM can trivially be made to execute a Turing machine if you put a loop around it. That means that unless you can find evidence humans exceed Turing computability, AGI is "just" a question of scaling and training.

It could still turn out to be intractable without a better architecture, but the notion that it might not be impossible makes a lot of people very upset, and the only way it can be impossible even for just an LLM with a loop bolted on is if human brains can compute functions outside the Turing computable set.
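The "loop around it" argument can be sketched concretely. In the toy driver below (all names are illustrative), the outer loop would work unchanged whether `delta` is this lookup table or an LLM prompted to emit the same (state, symbol) → (state, write, move) transitions; the machine shown is a unary incrementer.

```python
def run_turing_machine(delta, tape, state="start", head=0, max_steps=1000):
    """Run a one-tape Turing machine until it halts or max_steps elapse."""
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        # The transition function is the only "intelligent" part;
        # anything that produces these triples (table or LLM) will do.
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Transition table: scan right over the 1s, append one more 1, halt.
delta = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_turing_machine(delta, "111"))  # "1111"
```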


"LLM thinks" is false advertising. (Maybe useful jargon, but still.)

> Any Turing complete system can emulate any other Turing complete system, and an LLM can trivially be made to execute a Turing machine if you put a loop around it

Wouldn't it be more efficient to erase the LLM and use underlying hardware as Turing complete system?

BTW, the Turing test is just an admission that we have no way of defining human-level intelligence apart from "you'll know it when you see it".


I agree with you. "Chain of thought" is not reasoning, just like an LSD trip isn't.

I think we lack a good formal definition of what (fuzzy) reasoning is. Without it, we will always have some kind of unexplained hallucinations.

I also believe AGI could be implemented as a model that can train models for specific tasks completely autonomously. But that would kill the cash cow, so OpenAI etc. are not interested in developing it.


100% agree. I miss the days when the title would describe the method instead of being a sales pitch.


That reminds me of the punchline to this lengthy comic, which you might enjoy and/or get back pain from.

https://www.smbc-comics.com/comic/the-talk-3


Lol

What's the subnet of the security group of my user group for an AWS Lambda application in a specific environment that calls KMS to get a secret for….


I've held the staff engineer title a few times. When the company used it that way, my role was to be a sort of backup when a team couldn't solve something. When I wasn't allocated to a specific team, I collaborated with other engineers on specs and higher-level planning. We also worked on internal libraries and tools.


Typst is simply amazing. I wrote my thesis in LaTeX, and boy, would I have loved to find out about Typst sooner.

