Jensson's comments | Hacker News

> The profit if any would be much less than $30bn.

It doesn't cost that much to maintain and run the App Store; it is almost all profit.


You are trying to tell me that credit card processing fees are negligible, software engineers work for free, advertising doesn’t require overhead, etc…

I guess that kind of thinking, that everything is basically free, is why a lot of startups just fail so easily.


None of them are close to 30 billion!!!

Your profit is whatever your revenue is minus your costs. Plenty of app stores have operated in the red, so we know this isn’t trivial to get right. It is definitely nowhere near as negligible as the parent asserted. I’m frustrated by how dumb HN is getting lately.

> Google should sell Search to OpenAI

OpenAI can't afford to buy Search.


> I think we can agree that a modern phone is “hardware and software” for sure.

You should be able to use your phone and install software on it without signing up for the seller's services; I think there is no reason not to allow that except evil lock-in.

See Windows for what happens when you don't legally enforce that; it's ridiculous that Windows forces you to sign up for their services.


> Do you really think that clicking buttons through a GUI is the one true optimal way to use a computer?

There are some tasks you can't do without vision, but I agree it is dumb to say general intelligence requires vision; vision is just an information source, it isn't about intelligence. Blind people can be excellent software engineers etc.; they can do most white-collar work just as well as anyone else, since most tasks don't require visual processing and text processing works well enough.


> There are some tasks you can't do without vision...

I can't think of anything where you require vision for which having a tool (a sighted person) you communicate with via a protocol (speech) wouldn't suffice. So why aren't we giving AI the same "benefit" of using any tool/protocol it needs to complete something?


> I can't think of anything where you require vision for which having a tool (a sighted person) you communicate with via a protocol (speech) wouldn't suffice.

Okay, are you volunteering to be the guide passenger while I drive?


Thank you for making my point:

We have already created a tool called "full self-driving" cars. This is a tool that humans can use, just like we have MCPs, a tool for AI to use.

All I'm trying to say is that AGIs should be allowed to use tools that fit their intelligence the same way that we do. I'm not saying AIs are AGIs; I'm just saying that the requirement that they use a mouse and keyboard is a very weird requirement, like saying people who can't use a mouse and keyboard (amputees, etc.) aren't "generally" intelligent. Or people who can't see the computer screen.


There were elephants there that humans hunted to extinction, and elephants typically keep forests down and create grasslands. So it seems likely it happened, and that humans were the cause (by killing the elephants).

Edit: So it is likely that the change happened and had nothing to do with the soil change.


Depends; there are elephants in the Congo forest, they’re just not too easy to see.

The trees grow faster than the elephants can wreck them. But in areas with less rainfall, elephants keep the grasslands more open.

As did mammoths in the northern forests.


> If the thing is intelligent, then there’s nothing artificial about it… it’s almost an oxymoron.

Artificial means human-made; if we made a thing that is intelligent, then it is artificial intelligence.

It is like "artificial insemination": a human-designed way to inseminate rather than the natural way. It is still a proper insemination; artificial doesn't mean "fake", it just means unnatural/human-made.


Well, you and I agree, but there’s an entire industry and pop culture throwing the term around rather imprecisely (calling LLMs “AI”) which makes actual discussion about what AGI is, difficult.

I guess I don’t understand the technical difference between AI and AGI and consider AI to refer to the social meme of “this thing kinda seems like it did something intelligent, like magic”.


> Artificial means human-made; if we made a thing that is intelligent, then it is artificial intelligence.

Aren't humans themselves essentially human made?

Maybe a better definition would be non-human (or inorganic if we want to include intelligence like e.g. dolphins)?


> Aren't humans themselves essentially human made?

No, not in the sense in which the word "made" is being used here.

> Maybe a better definition would be non-human (or inorganic if we want to include intelligence like e.g. dolphins)?

Neither of these work. Calling intelligence in animals "artificial" is absurd, and "inorganic" arbitrarily excludes "head cheese" style approaches to building artificial intelligence.

"Artificial" strongly implies mimicry of something that occurs naturally, and is derived from the same root as "artifice", which can be defined as "to construct by means of skill or specialized art". This obviously excludes the natural biological act of reproduction that produces a newborn human brain (and support equipment) primed to learn and grow; reportedly, sometimes people don't even know they're pregnant until they go into labor (and figure out that's what's happening).


If I asked my wife if she made our son, she would say yes. It is literally called "labour". Then there is "emotional labour" that lasts for 10 years to do the post-training.

I drove my car to work today, and while I was at work I drove a meeting. Does this mean my car is a meeting? My meeting was a car?

It turns out that some (many, in fact) words mean different things in different contexts. My comment makes an explicit argument concerning the connotations and nuances of the word "made" used in this context, and you have not responded to that argument.


Judging by this response, I’m guessing you don’t have children of your own. Otherwise you might understand the context.

Your guess is wrong!

Maybe you should have written a substantive response to my comments instead of trying and failing to dunk on me. Maybe you don't understand as much as you think you do.


I honestly don’t care enough to even have even remotely thought about my reply as trying to dunk on anything. You’re awfully jacked up for a comment so far down an old thread that you and I are probably the only ones who will ever read it.

Okay!

> Aren't humans themselves essentially human made?

Humans evolved, but yeah the definition can be a bit hard to understand since it is hard to separate things. That is why I brought up the artificial insemination example since it deals with this.

> Maybe a better definition would be non-human (or inorganic if we want to include intelligence like e.g. dolphins)?

We also have artificial lakes; they are inorganic but human-made.


"ii" (inorganic intelligence) has a better ring to it than AI and can also be stylized as "||" which means OR.

General intelligence means it can do the same intellectual tasks as humans can, including learning to do different kinds of intellectual jobs. Current AI can't learn to do most jobs like a human kid can, so it's not AGI.

This is the original definition of AGI. Some data scientists try to move the goalposts to something else and call something that can't replace humans "AGI".

This is a very simple definition, and it is easy to see when it is fulfilled, because at that point companies can operate without humans.


What intellectual tasks can humans do that language models can't? Particularly agentic language model frameworks.

Weird spiky things that are hard to characterise even within one specific model, and where the ability to reliably identify such things itself causes subsequent models to not fail so much.

A few months ago, I'd have said "create images with coherent text"*, but that's now changed. At least in English — trying to get ChatGPT's new image mode to draw the 狐 symbol sometimes works, sometimes goes weird in the way Latin characters used to.

* if the ability to generate images doesn't count as "language model", then one intellectual task they can't do is "draw images"; see Simon Willison's pelican challenge: https://simonwillison.net/tags/pelican-riding-a-bicycle/


A normal software engineering job? You have access to email and can send code etc. No current model manages anything close to that. They can't even automate much simpler jobs like that.

So basically they can't currently do any form of longer-term task. Short-term tasks with constant supervision are about the only things they can do, and that is very limited; most tasks are long-term tasks.


> You have access to email and can send code etc. No current model manages anything close to that.

This is an issue of tooling, not intelligence. Language models absolutely have the power to process email and send (push?) code, should you give them the tooling to do so (also true of human intelligence).

> So basically they can't currently do any form of longer-term task. Short-term tasks with constant supervision are about the only things they can do, and that is very limited; most tasks are long-term tasks.

Are humans that have limited memory due to a condition not capable of general intelligence, xor does intelligence exist on a spectrum? Also, long term tasks can be decomposed into short term tasks. Perhaps automatically, by a language model.

Have you actually tried agentic LLM based frameworks that use tool calling for long term memory storage and retrieval, or have you decided that because these tools do not behave perfectly in a fluid environment where humans do not behave perfectly either, that it's "impossible"?
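
To be concrete, by "tool calling for long term memory storage and retrieval" I mean plumbing roughly like the sketch below (plain Python, with hypothetical save_memory/recall_memory tool names rather than any particular framework's API):

    import json
    from dataclasses import dataclass, field

    # Hypothetical long-term memory store exposed to the model as tools.
    @dataclass
    class MemoryStore:
        notes: list[str] = field(default_factory=list)

        def save(self, text: str) -> str:
            self.notes.append(text)
            return "saved"

        def recall(self, query: str) -> str:
            # Naive keyword match; real frameworks use embedding search here.
            return json.dumps([n for n in self.notes if query.lower() in n.lower()])

    TOOLS = {
        "save_memory": lambda store, args: store.save(args["text"]),
        "recall_memory": lambda store, args: store.recall(args["query"]),
    }

    def dispatch(store: MemoryStore, tool_call: dict) -> str:
        # tool_call is the {"name": ..., "arguments": ...} blob a model emits.
        return TOOLS[tool_call["name"]](store, tool_call["arguments"])

    store = MemoryStore()
    dispatch(store, {"name": "save_memory",
                     "arguments": {"text": "the flaky test comes from a race in the cache layer"}})
    print(dispatch(store, {"name": "recall_memory", "arguments": {"query": "flaky test"}}))

The harness persists the notes between sessions and feeds whatever recall_memory returns back into the next prompt, so the model isn't starting from zero every time.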


> Have you actually tried agentic LLM based frameworks that use tool calling for long term memory storage and retrieval, or have you decided that because these tools do not behave perfectly in a fluid environment where humans do not behave perfectly either, that it's "impossible"?

i.e. "Have you tried this vague, unnamed thing that I allude to that seems to be the answer that contradicts your point, but actually doesn't?"

AGI = 90% of software devs, psychotherapists, lawyers, and teachers lose their jobs; we are not there.

Once LLMs can fork themselves, reflect and accumulate domain-specific knowledge and transfer the whole context back to the model weights, once that knowledge can become more important than the pretrained information, once they can form new neurons related to a project topic, then yes, we will have AGI (probably not that far away). Once LLMs can keep trying to find a bug for days and weeks and months, go through the debugger, ask people relevant questions, deploy code with new debugging traces, deploy mitigations and so on, we will have AGI.

Otherwise, AI is stuck in this Groundhog Day type scenario, where it's forever the brightest intern that any company has ever seen, but it's forever stuck at day 0 on the job, forever not that useful, but full of potential.


Why would it be a tooling issue? AI has access to email, IDEs, and all kinds of systems. It still cannot go and build software on its own by speaking to stakeholders, taking instructions from a PM, understanding it needs to speak to DevOps to release its code, suggesting to the product team that a feature is better developed as part of the core product, objecting to the SA about the architecture, and on and on…

(If it was a tooling issue, AGI could build the missing tools)


> This is an issue of tooling, not intelligence. Language models absolutely have the power to process email and send (push?) code, should you give them the tooling to do so (also true of human intelligence).

At a certain point, a tooling issue becomes an intelligence issue. AGI would be able to build the tools it needs to succeed.

If we have millions of these things deployed, they can work 24/7, and they supposedly have human-level intelligence, then why haven't they been able to bootstrap their own tooling yet?


> Have you actually tried agentic LLM based frameworks that use tool calling for long term memory storage and retrieval,

You can work around the limitations of LLMs' intelligence with your own intelligence and an external workflow you design, but I don't see how that counts as part of the LLM's intelligence.


Humans have general intelligence. A network of humans have better general intelligence.

LLMs have general intelligence. A network of LLMs have better general intelligence.

If a single language model isn't intelligent enough for a task, but a human is, there is a good chance there exists a sufficient network of language models that is intelligent enough.


> LLMs have general intelligence.

No they don't. That's the key part you keep assuming without justification. Interestingly enough you haven't responded to my other comment [1].

You asked “What intellectual tasks can humans do that language models can't?” and now that I'm thinking about it again, I think the more apt question would be the reverse:

“What intellectual tasks can a LLM do autonomously without any human supervision (direct or indirect[2]) if there's money at stake?”

You'll see that the list is going to be very short, if not empty.

> A network of LLMs have better general intelligence.

Your argument was about tool calling for long-term memory; this isn't “a network of LLMs” but an LLM plus another tool chosen by a human to deal with the LLM's limitations on one particular problem (and if you need long-term memory for another problem, you're very likely to need to rework both your prompt and your choice of tools to address it: it's not the LLM that solves it but your own intelligence).

[1]: https://news.ycombinator.com/item?id=43755623

[2]: Indirect supervision would be the human designing an automatic verification system to check the LLM's output before using it. Any kind of verification that is planned in advance by the human and not improvised by the LLM when facing the problem counts as indirect supervision, even if it relies on another LLM.


Read a bunch of books on a specific topic that aren't present in the training data, and learn something from them.

You can cheat with tooling like RAG or agentic frameworks, but the result isn't going to be good and it's not the AI that learns.

But besides this fundamental limitation, had you tried implementing production-ready stuff with LLMs, you'd have discovered that language models are still painfully unreliable even for the tasks they are supposed to be good at: they will still hallucinate when summarizing, fail to adhere to the prompt, add paragraphs in English at random when working in French, edit unrelated parts of the code you ask them to edit, etc., etc.

You can work around many of those once you've identified them, but that still counts as a fail in response to your question.


Tell them humans need to babysit it and double-check its answers to do anything, since it isn't as reliable as a human, and no, they wouldn't have called it an AGI back then either.

The whole point about AGI is that it is general like a human; if it has such glaring weaknesses as the current AI has, it isn't AGI, and it was the same back then. That an AGI can write a poem doesn't mean being able to write a poem makes something an AGI; it's just an example of something the AI couldn't do 20 years ago.


Why do human programmers need code review then if they are intelligent?

And why can’t expert programmers deploy code without testing it? Surely they should just be able to write it perfectly first time without errors if they were actually intelligent.


> Why do human programmers need code review then if they are intelligent?

Human programmers don't need code reviews; they can test things themselves. Code review is just an optimization to scale up; it isn't a requirement for making programs.

Also, the AGI is allowed to let another AGI review its code; the point is there shouldn't be a human in the loop.

> And why can’t expert programmers deploy code without testing it?

They can test it themselves, and the AGI model is allowed to test its own work as well.


Well, AGI can write unit tests, write application code, then run the tests and iterate; agents in Cursor are doing this already.

Just not for more complex applications.

Code review does often find bugs in code…

Put another way, I’m not a strong dev but good LLMs can write lots of code with fewer bugs than me!

I also think it’s quite a “programmer mentality” that most of the tests in this forum about whether something is/isn’t AGI ultimately boil down to whether it can write bug-free code, rather than whether it can negotiate or sympathise or be humorous or write an engaging screenplay… I’m not saying AGI is good at those things yet, but it’s interesting that we talk about the test of AGI being transpiling code rather than understanding philosophy.


> Put another way, I’m not a strong dev but good LLMs can write lots of code with fewer bugs than me!

But the AI still can't replace you; it doesn't learn as it goes and therefore fails to navigate long-term tasks the way humans do. When a human writes a big program, he learns how to write it as he writes it; these current AIs cannot do that.


Strictly speaking, it can, but its ability to do so is limited by its context size.

Which keeps growing - Gemini is at 2 million tokens now, which is several books' worth of text.

Note also that context is roughly the equivalent of short-term memory in humans, while long-term memory is more like RAG.
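
A rough sketch of that "long-term memory as RAG" analogy, with naive word-overlap scoring standing in for a real embedding index (the corpus and function names are just illustrative):

    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Stand-in for a real embedding model: bag-of-words counts.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # "Long-term memory": everything stored outside the context window.
    corpus = [
        "The billing service retries failed charges three times.",
        "The context window holds the tokens the model can attend to right now.",
        "Mammoths kept northern forests open by knocking down trees.",
    ]
    index = [(doc, embed(doc)) for doc in corpus]

    def retrieve(query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    # Only the retrieved snippet is loaded into the prompt, i.e. into "short-term memory".
    question = "How many times are failed charges retried?"
    print("Context: " + " ".join(retrieve(question)) + "\nQuestion: " + question)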


Artificial means human-made; if we made an intelligent being, then it is artificial. What did you think artificial meant here?

> but you can disagree agreeably, right?

No, the concepts are linked: agreeable people don't want to be rude, and most people see disagreement as rude no matter how you frame it. You can't call a woman overweight without being rude, for example, no matter how you frame it, but maybe you want an AI that tells you that you weigh too much.


Good point, but calling a woman overweight isn't necessarily a disagreement.
