Ask HN: What if AGI isn't that close?
15 points by chatmasta on April 2, 2023 | 48 comments
What if transformer models are the totally wrong approach? What if we're about to spend ten years iterating on a fancy parlor trick? We might invent something that almost seems like AGI but really it's just calling a bunch of APIs invented (and maintained) by humans.

What should we have spent the last ten years researching instead?




    What should we have spent the last ten years researching instead?
Literally solving any of the other critical problems ailing society.

    Climate change
    Gun violence
    Poverty
    Political corruption
    Wealth inequality
    Drug abuse
    Healthcare
    Etc
But "solving" those with the latest over hyped JavaScript framework, shitcoin, LLM, or cloud service isn't something we can pretend to do and rake in endless VC money, so instead we put all our energy towards eliminating artists and keeping children glued to tablet screens.


I don't necessarily disagree, but this kind of cynical energy isn't really helping. It's too easy to piss on personal pet peeves in the tech world while claiming it's a zero-sum game with monumentally complex social problems that almost everyone would love to "solve" if they could. It's quite possible to help ease the suffering of others while also being excited about a new JS framework, machine learning, etc.


I don't think radical technical breakthroughs will be the solution to any of those problems. Those are human problems. There's nothing wrong with solving technical problems while human problems persist. Putting programmers, data scientists, etc. on human problems doesn't solve them better or faster than putting someone outside of STEM on them.


> Putting programmers, data scientists, etc on human problems doesn't solve them better or faster than putting someone outside of STEM on them

True, but per GP there is also an unimaginable amount of money and natural resources directed toward those technical problems. And unlike programmers, money and resources are reallocatable.

If, in an alternate history without the gravitational money-pull of the software industry, even 1% of the resources devoted to that industry in our timeline had been put toward addressing one of the problems listed above, it would have created a vastly different world. A better one, I suspect, in many ways.


Agreed that reallocating money and resources for the greater good is a good thing. That said, these are all topics that still receive lots of debate politically. That's what I mean when I say it's a human problem.

You can't solve healthcare by throwing tech, money, etc. at it any more than you could solve a broken heart with those things. Sure, money could help you get therapy or pay for lobbying for whatever your solution to healthcare is. OTOH, perhaps the person cutting the checks thinks the solution to heartbreak is a raise ("it shows we care"), and the solution to healthcare issues is to lobby for criminalizing generics ("they're probably cutting corners and making people worse off"). People don't even agree on what the problems in healthcare (and the other things on that list) are, let alone how to solve them.


<3 Sincerely: What are you working on? Trying to figure out my next thing.


You mean US society?


It's trendy to dunk on the U.S., and the U.S. mostly deserves it, given how much room for improvement it has in areas where other places have already made good progress. But the reality is that most of the things on this list can be found in most places, to some degree or another.

There is always progress to be made. The goalposts can always be moved forward, the standards for what is possible raised.

There is no limit to how high we could raise the global standard of living.


> What if transformer models are the totally wrong approach?

Frankly, I say "so what?" For this to be "wrong" implies that the only useful outcome is AGI. I posit that that is clearly not the case. Current approaches are obviously useful and create value. It's like trying to invent television and getting radio instead. OK, fine, you still got something useful, so what are you complaining about? And the other will still come in time.

> What if we're about to spend ten years iterating on a fancy parlor trick?

If it keeps getting better at doing things that people find useful, then that's fine.

> We might invent something that almost seems like AGI but really it's just calling a bunch of APIs invented (and maintained) by humans.

Not really relevant, IMO. If the thing we invent behaves in ways we would classify as intelligent, then it's intelligent. If it reaches the bar where most people are willing to say "that's general intelligence," then it doesn't actually matter how it works. OK, to be fair there are senses in which it matters (getting into things like explainability, alignment, etc., yadda yadda), but in terms of saying "is this intelligent or not?" we don't really have to know or care about the inner workings. I mean, we consider other humans intelligent (well, sometimes) and we don't know all the details of how human intelligence works either.


Am I the only person who doesn't actually want AGI? I'd prefer never to have to contemplate whether turning my computer off for the night is murder, whether running the same tool over and over again in a loop is torture, etc. Is it really such a disappointment if all that working on transformers ends up producing is a "fancy parlor trick" that happens to be really, really useful?


AGI doesn't have to imply consciousness. It can just mean superhuman performance at tasks, a level GPT-4 has perhaps already reached. As AGI architectures go, LLMs seem pretty good with respect to this worry about consciousness: they already do nothing at all unless they're currently being asked a question.


I mean, I am not sure you are sentient either--even putting aside the question of whether you are a bot or a dog, I can't directly experience whether anyone but myself is sentient--but the whole point of AGI is that I would be hard-pressed to decide it is any less privileged than you are to claim it is sentient, since it is seemingly just as generally intelligent as you. If you just have something that is superhuman at a few things but clearly lacking something so fundamental to the idea of general intelligence that you wouldn't even be willing to consider whether it is at least as conscious as a dog, then you don't really have AGI; you just have AI, or maybe even merely ML.


I don't know what planet you're from, but for me the I in AGI stands for intelligence. Nothing today is even close to intelligence. We have behavior that we, humans, perceive as intelligent in a very narrow sense.

Intelligence implies an environment and some sort of self-preservation: being able to predict the environment AND using the prediction to survive or gain an advantage. The leap from this to a self and consciousness is not that big, but it's made murky by not knowing what consciousness really is to begin with.


So a single-celled protozoan is more intelligent than GPT-4, then. This definition doesn't seem to be very useful?


Yes, it is more intelligent. Also, given enough time, that protozoan can evolve; GPT-4 cannot. In fact, GPT-4 makes zero sense if humans are not involved and are not using it as a tool. It's a tool we have created.

The problem when talking about AI, AGI, and intelligence is that we are always doing so through an anthropomorphizing lens.

If we discard this lens and take these developments at face value for what they are, tools, then the conversation can be honest and we can talk about pros and cons and where these technologies can be used. Pretending that the machines are going to take over any minute now and that what we are seeing is "intelligence" will only help drive certain discourses, and in the worst case will make these tools available only to a small minority or get them banned altogether.


Of course it isn't close. ChatGPT is like a child making a paper airplane that flies a reasonable distance. AGI would be an adult that works in a group making a large rocket that can go into space.

Right now, all transformers really do is distribute data probabilistically over a certain space, and the retrieval function maps to the "pattern" that looks the most similar. It's an advance in lossy data compression. It's a black box with a single door for input and output; it can necessarily never gain consciousness.

The only respite has been that we have been free from "Open"AI's marketing for the last 24 hours; most of the "hype" has died down and normal topics have returned to the front page.


Can you contrast your description of transformers with how human general intelligence works? What exactly is present in human neural architecture that gives rise to consciousness? How can we be confident that our current AI approaches can “necessarily never gain consciousness”?

I’m genuinely asking. I think I disagree with your certainty, because my overall impression is that we don’t really understand the foundations of intelligence or consciousness. But perhaps that’s ignorance on my part.


TL;DR: Conjecture I believe in

Sure. I think LLMs map to the associative horizon in humans, usually very pronounced in young children and geniuses[1]. They can intuit details and structures (patterns) with minimal instruction, when given enough time (geniuses discover new details in old tomes, children learn how to function in the world, children can learn multiple languages at once just by hearing people talk, etc.). Mentation of this sort is highly abstract and great for solving novel problems, but it doesn't translate well to functioning in an environment where the rules are almost always arbitrary and not easily discernible from dysfunctional behavior. A lot of these issues stem from a primarily "inward" approach to new information. LLMs, being just a probabilistic distribution of data, suffer from similar issues.

LLMs don't have any structure of their "own" and little to no "guiding principle" (consciousness); instead, their "wiring" is based on the data they are fed. This data is already tuned to our human mode of thinking and our judgement. Even the hardware is made possible by our own biases ("ability"). So we tend to anthropomorphize it very easily. An LLM doesn't have an "inside" or an "outside"; it's a large pattern from which parts are "picked" and "shown" to us. So this act of "picking" and "showing" would arguably be a better candidate for being "alive" than any large pattern it's used on. This act is not sophisticated enough; it makes "hallucination" possible, and it also doesn't discriminate between the dataset and the query, so if the dataset contains a "count to five" or "ask what my name is", the result will be influenced by it as if it were part of the query. Because there is no structure built from first principles, every result is a novel result.

Also, because an LLM is effectively queried as an egregore of whatever it's trained on, there's only a technical difference between the different ways of changing the weights. I guess this is just a fancy definition of a version increment.

We don't just process facts or pick and choose alternatives; our brain and the rest of the body work in tandem. The brain has many sources of real-time information - the 5 senses, internal body "feel", "mystic" intuition, almost telepathic connection with close kin, etc. We have a certain degree of autonomy from what we consider the individual parts of our surrounding ecology, but we can't really leave it for long periods of time. The mind-body split is sidestepped by people who claim something digital can be "conscious". I think to really have a "conscious" machine, it needs to be built from first principles and not as a black box; and then it would also need to co-exist with the environment, which would necessarily mean that it would only be able to exist in a closed, artificial environment. After all, the easiest way to make intelligent life is to have a biological child. I think people have really fuzzed the lines between digital tech and real life, partly because most of the digital economy exists as an analogue of the real (irl) economy.

I also don't think letting the LLM somehow change itself will work. Without human intervention on a large scale, it would be like coaxing a liquid to act as a solid without any change in its environment. And with human intervention, it'd just be regular research/tech work. I think, given any sort of "freedom", the LLM will gradually implode upon its own data. We can try to hypothesize about what constitutes true "randomness" for an LLM + "pick"/"show" system, but in the end it's all in our own heads. It's not a structure so pervasive in our environment, or so far past our scope, that we'd discover something adjacent to it that could dwarf the current understanding and still stay in the same category.

I think genetic engineering is what will be really responsible for "Human-Designed Intelligence". LLMs and "AI" can be a great tool for uploading to actual biological brains. It's like transferring from a 1D system (LLM) to a 3D (or more) system (meat machine brain).

(puts on tinfoil hat) Some species of animals also exhibit very weird behaviors, like cheetahs, who are basically clones of each other. Or octopuses, which seem to have no reason to really exist. And honestly I have a pretty hard time believing that "Natural Selection" is responsible for all the diversity around us. I don't even believe that our sense organs could be a result of competitive speciation over a large period of time. (takes off hat)

Also, I personally believe that God created us, and because we're making something from non-biological material, it won't be conscious. Genetic Engineering would be like using nature's toolbox. Most people don't like this argument so I keep it as separate as I can.

FWIW, I'm not in the "it's not useful / a threat until it's AGI" camp. I think LLMs are an abstract data structure that can be queried with natural language, but curating the dataset is a very large part of the overall quality of the result, no matter what "prompt" you supply. Making data ingestible for LLMs will likely be what the bulk of low-paid software development work consists of.

[1]: Introduction @ https://geniusfamine.blogspot.com/


Randomness is the proverbial large-enough army of monkeys that could produce Shakespeare's works given enough time. ChatGPT is the large monkey army that got access to Shakespeare's works and is parroting parts of them while also slicing and combining them.

We are nowhere near AGI.


Climate models, protein folding, mapping the human genome, universal translation, speech transcription, chip fabrication, image classification and generation, conversational bots. These are some of the problems AI has been solving (or has solved) over the last decade or two of research. It was never "AGI or bust". There is an unlimited amount of progress to be made by going down the current path.


AI wizardry has always been gears and pulleys when you look behind the curtain. Transformers are no more or less alive than Eliza and Deep Blue.

But ChatGPT is probably more useful as a tool than either.

So although I am profoundly skeptical of AGI metaphysics, I have come to the conclusion that "How do we make an AGI?" is a sound engineering approach when the goal is building useful tools for the benefit of other people.

YMMV.


I disagree with the implication that it’s either AGI or nothing.

What we currently have is useful and that’s a big deal; the time and effort spent on transformers wasn’t wasted. I personally don’t care if/when we have AGI and whether that’s on the same research path that we’re on now.


This is my view. Why recreate human intelligence when we already have human intelligence? AGI would be cool, but we can already train AI to do specific tasks better than humans, and that's good enough.


It's an existential crisis for philosophy, which actually has zero working knowledge of AI but says a lot about it, and ends up all wrong.


No matter what happens there are certainly a lot of people facing existential crises recently... some of them are advocating for total annihilation of the material world in the next six years, so you'll excuse me if I can't take them too seriously...


> some of them are advocating for total annihilation of the material world in the next six years

"Some" is quite the weasel word. You could be referring to 3 total people in the world, but that wouldn't be worth spending energy considering. You can find people out there who believe all sorts of weird and stupid shit.

Who's saying this and what is their reasoning?


AI isn’t the only concern of philosophy. There are plenty who know about ML at least: some in formal epistemology and some in philosophy of cognitive science.


Does anyone even really know what AGI means? Academic types seem to use it more as a way to distinguish their work from the common applications of AI already in use than as any kind of consistently defined term (I guess they got tired of kicking every working AI technology out of the field). It seems to me the term was created under the assumption that we would need near-human-level intelligence to have systems that could generalize training across multiple use cases. That is obviously not the case. Just like with chess and image recognition and other supposedly hard problems, we got to generalization with much less "intelligence" than I think we thought was possible. If AGI is just AI that can generalize, then arguably we already have it. If it's supposed to mean some kind of human type of intelligence, then I think someone should have done a better job of naming and defining the term. I tend to be uncomfortable with human-centric definitions of intelligence anyway, just from the arrogance aspect.


Challenge for you: how do you define intelligence?


As a large language model enjoyer, I've been amazed at how steadily they are reducing the number of bits needed to represent the next token in the sequence on average. There is some irreducible amount of uncertainty they will reach eventually, but the trend isn't slowing down. The bonus is that as the models have been driving this number of bits down, they have been unlocking all of these amazing abilities, like theory of mind and near-human-level performance on all these standardized tests. It's amazing to watch, even if it eventually stops unlocking new powers.
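
For anyone wondering what "bits per token" means concretely, here's a rough sketch of my own (illustrative only, with made-up numbers, not tied to any particular model or API): it's just the average negative log2-probability the model assigns to each token that actually occurs, i.e. cross-entropy measured in bits.

    import math

    def bits_per_token(token_logprobs):
        # token_logprobs: natural-log probabilities the model assigned to the
        # tokens that actually occurred in some evaluation text.
        avg_nats = -sum(token_logprobs) / len(token_logprobs)  # cross-entropy in nats
        return avg_nats / math.log(2)                          # convert nats -> bits

    # Made-up numbers purely for illustration:
    example_logprobs = [-0.9, -2.3, -0.1, -1.7]
    bits = bits_per_token(example_logprobs)
    print(f"{bits:.2f} bits/token, perplexity {2 ** bits:.2f}")

The "irreducible uncertainty" is roughly the entropy of the text itself: even a perfect model can't drive this number to zero for real language.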


There's a lot of breathless hype about AI and AGI. But it's not wasted time; it's a tool with utility that will likely become transformative for some use cases.

Even if it’s just a better tool, that’s a big deal.


I think you are right. However, if LLMs can be synergistically integrated with symbolic logic systems (think ChatGPT + OpenCyc), that might be a game changer that reaches AGI levels.
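
To make that concrete, here's a toy sketch of my own of the general pattern such a hybrid might follow (every class and function below is a hypothetical stand-in I made up, not a real ChatGPT or OpenCyc interface): the language model proposes candidate facts, and the symbolic knowledge base vets them before anything is accepted.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Fact:
        subject: str
        relation: str
        obj: str

    class ToyKnowledgeBase:
        # Stand-in for a symbolic system; a real one would do inference, not set lookup.
        def __init__(self, facts):
            self.facts = set(facts)

        def entails(self, fact):
            return fact in self.facts

    def toy_llm_propose(question):
        # Stand-in for an LLM call; a real one would return free text to be parsed.
        return [Fact("water", "boils_at_celsius", "100"),
                Fact("water", "boils_at_celsius", "50")]  # one plausible, one hallucinated

    def answer(question, kb):
        proposed = toy_llm_propose(question)
        return [f for f in proposed if kb.entails(f)]  # keep only what the KB backs up

    kb = ToyKnowledgeBase([Fact("water", "boils_at_celsius", "100")])
    print(answer("At what temperature does water boil?", kb))

Whether that kind of verification loop adds real capability or just filters the LLM's output is, of course, the open question.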


Personal opinion: I saw someone in one of these threads mention playfulness. I think that's the key. Any AGI will be able to play and joke.


I'd also include suffering.


I find it interesting that playfulness and joking are against the HN Code of Conduct.


Very curious, have you looked at transformer models in depth?


I'm gonna pay sama for GPT-4 tomorrow, but for now my experience is limited to GPT-3.5.

So far, my assessment is that human intelligence is much less interesting than we thought.


Can somebody explain that nickname “sama” - how is that supposed to be pronounced? If it’s “like Sam A.”, then that’s a terrible nickname. If it’s “like sam-uh” - what gives? That’s not how Sam Altman sounds?

I realize it’s a little absurd to be upset over a nickname but it somehow like really triggers me to see “sama” (lowercase, too!) so many times in one weekend.


It is his HN username.


Thank you. Sanity restored.


I'm wondering whether your opinion comes from using the product or from delving into transformers. It seems like the former, and you should probably spend some more time on the latter.


I'm intentionally taking it slow with transformers. I'm still struggling with whether consciousness is an entirely emergent property of shared language.


I don't expect that language is required. Using input frames of video (or any other complex sensory input) instead of words might work too. Transformers are a tool for building representations from high-dimensional data sources.

Also, I don't think GPT-4 is conscious. If you ask it whether it is, it will say no. It does not appear to exhibit preferences or anything like suffering.


Wouldn't you be able to say the same thing about single cells, the biological kind?


We don't need AGI for many of these tasks. Modern capitalism already breaks many things down into tiny chunks such that you can literally bring in a high schooler and train them to do the job. We don't hire high schoolers simply because 1) it's probably against the law and 2) we have a large pool of undergraduates and beyond to hire from.


What makes you think AGI is not already here?


I'm not sure I'm convinced it's not. I will say ChatGPT is well on its way to replacing Google in terms of my everyday usage of the thing, as in "every day there's a new search for which ChatGPT gives me a better answer than Google."

Yesterday I had a totally reasonable conversation with it about the bugs I found in my flat... and the day before that I was pair programming with it on the best TypeScript interface for the library I'm building... nothing like this existed six months ago and to be honest it's pretty mind blowing. It makes me excited for the future, and even though I know the same demonstration of skill makes some people existentially worried for the future, I can't help but optimistically look forward to what we're all going to build with this thing...


Could you please define acronyms? AGI = artificial general intelligence?

It's US tax season, so I thought "adjusted gross income" first.



