If AGI is possible, then the entire human economy stops making sense in monetary terms, and 'owning' part of OpenAI gives you power.
That's if AGI is possible and not easily replicated. If AGI can be copied and/or re-developed like other software then the value of owning OpenAI stock is more like owning stock in copper producers or other commodity sector companies. (It might even be a poorer investment. Even AGI can't create copper atoms, so owners of real physical resources could be in a better position in a post-human-labor world.)
This belief comes from confusing the singularity (every atom on Earth is converted into a giant image of Sam Altman) with AGI (a store employee navigates a confrontation with an unruly customer, then goes home and wins at Super Mario).
If I recall correctly, these terms were used more or less interchangeably for a few decades, until 2020 or so, when OpenAI started making actual progress towards AGI and it became clear that the kind of AGI imaginable at that point would not be the kind that produces a singularity.
Exactly. I continually fail to see how "the entire human economy ends" overnight with another human-like agent out there - especially if it's confined to a server in the first place - it can't even "go home" :)
But what if that AGI can fit inside a humanoid robot and that robot is capable of self replication even if it means digging the sand out of the ground to make silicon with a spade?
Yes. The goal is to emulate that with different substrates to understand how it works and to have better control over existing self-replicating systems.
The first AGI will have such an advantage. It'll be the first thing that is smart and tireless and can do anything from continuously hacking enemy networks, to trading across all investment classes, to basically taking over the news cycle on social media. It would print money and power.
Depends on how efficient it is. If it requires more processing power than we have to do all these things, competitors will have time to catch up while new hardware is created.
The GP said, "and exponential". If AGI is exponential, then the first one will have a head start advantage that compounds over time. That is going to be hard to overcome.
I believe that AGI cannot be exponential for long because any intelligent agent can only approach nature's limits asymptotically. The first company with AGI will be about as much ahead as, say, the first company with electrical generators [1]. A lot of science fiction about a technological singularity assumes that AGI will discover and apply new physics to develop currently-believed-impossible inventions, but I don't consider that plausible myself. I believe that the discovery of new physics will be intellectually satisfying but generally inapplicable to industry, much like how solving the cosmological lithium problem will be career-defining for whoever does it but won't have any application to lithium batteries.
I don't recall editing my message, but HN can be wonky sometimes. :)
Nothing is truly exponential for long, but the logistic curve could be big enough to do almost anything if you get imaginative. Without new physics, there are still some places where we can do some amazing things with the equivalent of several trillion dollars of applied R&D, which AGI gets you.
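To make the distinction concrete, here's a minimal sketch (the growth rate and cap are arbitrary illustrative parameters, not claims about actual AI capability curves): an exponential and a logistic curve are nearly indistinguishable early on, but the logistic one saturates at its carrying capacity while the exponential one runs away.

```python
import math

def exponential(t, r=0.5):
    """Unbounded exponential growth at fixed rate r."""
    return math.exp(r * t)

def logistic(t, r=0.5, k=1000.0):
    """Logistic growth: looks exponential early, then flattens near the cap k."""
    return k / (1 + (k - 1) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable...
print(exponential(5), logistic(5))
# ...but later the logistic curve levels off at k while the
# exponential keeps compounding without bound.
print(exponential(40), logistic(40))
```

A head start on the steep part of a logistic curve still compounds for a while, which is the "about as far ahead as the first company with electrical generators" scenario rather than a runaway singularity.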
This depends on what a hypothetical 'AGI' actually costs. If a real AGI is achieved, but it costs more per unit of work than a human does... it won't do anyone much good.
Sure but think of the Higgs... how long that took for just _one_ particle. You think an AGI, or even an ASI is going to make an experimental effort like that go any bit faster? Dream on!
It astounds me that people don't realize how much of this cutting-edge science stuff literally does NOT happen overnight, or even close to it; typically it takes on the order of decades!
Science takes decades, but there are many places where we could have more amazing things if we spent 10 times as much on applied R&D and manufacturing. It wouldn't happen overnight, but it will be transformative if people can get access to much more automated R&D. We've seen a proliferation of makers over the last few decades as access to information has become easier, and with better tools individuals will be able to do even more.
My point being that even if Science ends today, we still have a lot more engineering we can benefit from.