Why don't governments mind that companies are explicitly trying to make AGIs? (effectivealtruism.org)
33 points by kvee on Dec 25, 2021 | 68 comments


I think we are getting the wrong signal from governments' lack of interference in this field, "interference" being the key word.

Governments know that AGI will be a game changer if created, and governments are in the business of running countries. They know that if it is going to happen, someone is going to do it eventually; it might as well be someone who is a citizen or registered as a corporation in their jurisdiction. Why would the US, for example, want to stop Google from achieving AGI supremacy? So that Tencent can do it first? They're stuck: AGI would be a threat to their power, but stopping it would ensure that a competing country gains sovereignty over the creation instead. It's not incompetence; it's a legitimate conundrum, with elements of MAD, public/private power balances, and geostrategy.

Personally I don't think AGI is possible. I think there's some element to intelligence that is not just a threshold of complexity; maybe it's some architectural feature, or perhaps something more fundamental. What I think might resemble it in some ways, and IMO is the future, is complex application-specific computation. Imagine an algorithm that can take over, for example, nuclear strategy decisions, but can't really tell you the optimal layout for roadways. (Off the top of my head, of course; the two problems might fundamentally be the same optimization problem, I don't know, but it's an example.) Powerful machines that do one thing really well, not one system that does everything well.


> Personally I don't think AGI is possible.

I don't think it matters, and I think people get too caught up in this. It's pretty obvious ML can do a lot of damage without AGI. There are still emergent properties that can be highly destructive and game changing without AGI. Drawing parallels to nuclear weapons is a poor analogy, because you don't build nuclear weapons by incrementally improving conventional weaponry. We're already building multi-domain systems. So it doesn't matter whether the fission process happens if the bomb does enough damage. Besides, whenever/if AGI happens, we'll long be arguing over whether it is or isn't AGI.


> Personally I don't think AGI is possible.

What if we could perform a molecular simulation of a human brain scan? Wouldn't that qualify?


I don't think anyone knows. You might simulate the chemistry and then realize you need quantum mechanical effects. And then add those and realize something is still missing. I don't think anyone knows how deep the rabbit hole is.


Or it turns out the simulated brain is not smarter than a regular human.


If a simulated brain were even remotely comparable to a human brain in terms of computation, it would immediately be the smartest thing on the planet, simply by virtue of having immediate or near-immediate access to all recorded knowledge.

But we might not need to simulate anything...

In the end it's quite possible that AGI would end up being brains in vats with some machine-brain interface for I/O.

It literally might be more cost-effective to do it this way.

And this is what people who dismiss AGI often miss. They assume the advancement would only come from one place and that programming would be the only way to solve the problem.

Analogue computers are making a comeback in many areas, and an analogue computer for an AGI might just be a bunch of brain tissue rather than some neuromorphic chips or some other nonsense.

If we get to a point at which we can clone meat for food and organs for transplant, making a brain isn't out of the realm of possibility. If we can get BMIs working, then a lot of the problems might be solved by just leveraging existing solutions.


Non-simulated brains already have immediate access to all recorded knowledge, and few of them are the smartest things on the planet.


Those brains have to do a lot of other things, our input bandwidth is limited, and overall our I/O isn't particularly efficient.

We've already had limited success in transferring memories (https://www.nature.com/articles/s41593-019-0389-0), and general-purpose brain-to-machine interfaces are being actively worked on.


Humans already have trouble processing new stimuli well though; if we're simulating a human-level brain I don't necessarily think we can use any more bandwidth than we'd get from e.g. watching a video.


You assume the I/O would be the same… if you can transplant memories like they did with mice you bypass the relatively slow I/O of sensory data.


That's true I am, and I could be wrong.


True. Now imagine those brains are not attached to people but to high-bandwidth I/O. I.e., Futurama.


Why do you spell analog as analogue?


Because they prefer the British spelling, presumably?


I live in the UK, iPhone is set to UK English.


Oh! I didn’t even know there was another spelling. Now I feel dumb for asking such a googleable question.


By definition a simulated brain is intended to be exactly as smart as the meat brain being simulated. Once you get the simulation working you might be able to run it faster though.


If my brain ran in simulation for the equivalent of 1000 years it still wouldn't accomplish what Albert Einstein did in 10.


But once we can spin up 1000 container images of your brain…


Yes I like this, parallelism for simulated brains. Basically Kardashev scale levels can be skipped rather quickly.


I personally don't think so.

I'm not talking about anything hokey here FYI, a soul or divine spark or any of that.

I don't think a mind is just the result of crossing some threshold of complexity. I don't pretend to know what it all is, but I do think some aspect of it is architectural in some way we don't get, and I do think some aspect of it is fundamental to the universe and intimately tied to the chaotic, real time process that "created" minds.


I'm surprised at the number of comments greatly downplaying the likelihood of AGI in the coming decades or saying that it is impossible.

The rate of progress in AI technologies seems absolutely incredible to me. Sure, there is tons of hype and noise to weed through. And much of the technology and research hasn't yet found commercial applications. But the progress in recent decades is objectively incredible.

We are reaching a point where all games are basically falling to AI. Go, no-limit hold ‘em (poker), StarCraft, Dota, and so on.

In other domains, we have GPT-3 and AlphaFold. And I’m sure there are many developments I’m not aware of.

From what I can tell, GPT-3 is mainly significant for challenging the notion that increased scale can't be the primary factor in significant AI advances. The jury is still out, but my understanding is that it demonstrated that modest tweaks to existing algorithms, combined with a massive increase in model size, can result in large performance improvements. GPT-3 had quite limited "memory" (context window), which is one of the reasons it struggled with coherency. How would a model 10x-100x larger than GPT-3 with significantly enhanced memory perform? What if it were trained to predict the next frame of video in addition to the next sequence of text?
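To give a rough sense of why the "memory" part is hard, here's a back-of-the-envelope sketch (my numbers and simplifications, not from any paper): vanilla self-attention cost grows quadratically with context length, so 10x the context window means roughly 100x the attention compute.

    # Rough per-layer FLOPs for just the attention score matrix:
    # one (n x d) @ (d x n) matmul ~ 2 * n^2 * d multiply-adds.
    # d_model=12288 and n_layers=96 are GPT-3 175B's published dimensions.
    def attention_cost(context_len, d_model=12288, n_layers=96):
        return 2 * context_len ** 2 * d_model * n_layers

    base = attention_cost(2048)  # GPT-3's 2048-token context window
    for n in (2048, 20480, 204800):
        print(f"{n:>7} tokens: {attention_cost(n) / base:8.0f}x the attention FLOPs")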

No one knows how close we are. It will likely happen quickly without many people knowing it is coming soon. It will seem “impossible” to most even as the last few pieces are falling in place. Major technological paradigm shifts usually happen unexpectedly, as history shows us.

I’m rambling a bit (If I had more time / skill I would have written a shorter comment ;). Suffice to say I’m surprised by the number of confident proclamations against AGI. Our brains are amazing. But they aren’t magic.


I think there needs to be a term for things that are in-between possible and impossible. This occurred to me in discussions about faster-than-light travel.

Surely there are things that are strictly possible, like, anything that's already happening. Motorboats are strictly possible.

Maybe some things are strictly impossible, like, squaring the circle. There may be some things that are impossible but that we will never know.

In between these two realms are ideas that can be suggested without evidence but yet taken quite seriously. Many of these things require overturning bodies of knowledge that are presently considered to be pretty robust, or developing entirely new ways of thinking. Some are contradictory to other things residing at the same level of possibility. Terms such as "magic" are divisive and don't seem to capture the essence.

The next question is whether an in-between level of possibility serves any useful purpose, and if it's distinguishable as a unique condition. Or, as a practical matter, whether it can be lumped in with things that are strictly impossible. How does my life change if the things that can be suggested without evidence deserve a category different than strict impossibility?


I think you’re on to something. You should explore that in an essay.


AGI is orders of magnitude more complex than the toys we're playing with in machine learning today.

It's like building a factory and then expecting factories that build themselves to be right around the corner.

They're just such different beasts and so far away from each other.


So far the rate of progress towards AGI is zero. The AI technologies we have today are cute parlor tricks that even have some limited business value, but they haven't gotten us any closer to a real AGI.


In your opinion, what would a world where we were 25% or 50% of the way towards AGI look like?


Show me a computer that can learn and problem solve in novel situations as well as a mouse and I'll believe that we're some percentage of the way towards AGI. So far we're not even at the worm equivalent level yet.


Huh? Have you seen mice play chess? AlphaZero beats humans without ever having seen chess played by others - self-play only.


Huh? Have you seen a computer forage for food, hide from predators, and compete for mates? AlphaZero is a cool toy but it plays a relatively simple game with well defined rules and success criteria. None of that is even remotely applicable to AGI.


This is a nonsensical argument. Computers do not eat, and do not have predators or mates. You do not measure whether cars are better than horses at hauling by comparing how fast they can move their legs in a certain order.

AlphaZero does not just play one game; it can play many, quite possibly at the same time.
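For anyone unfamiliar with what "self-play only" means mechanically, here's a toy sketch of the idea (mine, not AlphaZero's actual MCTS-plus-network setup): an agent that learns a game from nothing but the rules and games against itself. The game here is single-pile Nim: take 1-3 stones, taking the last stone wins.

    import random
    from collections import defaultdict

    Q = defaultdict(float)     # Q[(stones_left, move)] -> learned value
    ALPHA, EPSILON = 0.3, 0.1  # learning rate, exploration rate

    def legal_moves(stones):
        return [m for m in (1, 2, 3) if m <= stones]

    def choose(stones, explore=True):
        if explore and random.random() < EPSILON:
            return random.choice(legal_moves(stones))
        return max(legal_moves(stones), key=lambda m: Q[(stones, m)])

    # Self-play: both sides share the same value table and improve together.
    for episode in range(50000):
        stones, history = 15, []
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        reward = 1.0  # whoever took the last stone won
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward  # flip perspective each ply

    print(choose(15, explore=False))  # should print 3 (leave a multiple of 4)

No human games anywhere in the loop; the training data is generated entirely by the agent playing itself, which is the property being argued about here.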


To me, a system that was 50% towards AGI would demonstrate thinking and logic similar to the way we experience it in dreams.


This question is easily dismissed because the author is asking about something that is not likely to emerge in the near future. A more interesting and less easily dismissed question would be to substitute 'game changing technology' for AGI. I think there are a number of technologies which, while not AGI, provide a significant edge for a nation or other organization possessing them.

Biological technologies. Energy storage technologies. Financial manipulation technologies. Advanced propaganda technologies (think 'deep fakes', or deep fakes combined with mass surveillance and communications-infrastructure penetration, i.e. man-in-the-middling a society's communications to start a mass panic). There are probably many, many more.

People worry a lot about AGI, but all that's required for, say, killer robots is a solution to the power problem. We can already mass produce drones with good enough pattern recognition to slice your jugular. If you figure out how to power a few million of those drones for a day or so, you have one of the most lethal and targeted weapons in the history of mankind.

I think it's more likely that the robots won't kill us though. It's far more likely, assuming such a doomsday scenario takes place at all, they'll just convince us to kill each other or ourselves. Perhaps we won't even have to die, we'll just need our priorities adjusted to serve their owners.

To return to the original question though, I think there is a naivete amongst technologists today regarding the level of integration between corporations and the military. Silicon Valley basically exists as an arm of the military-industrial complex. Anyone who thinks the government isn't funding and making use of the latest technologies in processing, pattern detection, deep fakes, deep learning, etc. is simply ignorant of reality. These things aren't talked about much because they're upsetting to younger engineers.


AGI is a pipe dream. It certainly is a moonshot goal to aim for, like long-term population of Mars, but the likelihood of achieving that goal in the short term is about the same as the likelihood of a sustainable population on Mars in the short term. I wonder which would happen first (or at all). I wonder honestly how much the chase for AGI is about the end goal and how much is really about money.


I wouldn't rule out the possibility completely; it could come from some new technology or algorithms, or from a better understanding of the human brain. But the chances are indeed minuscule.


The concept of AGI is extraordinarily materialistic. Most humans through history believed in spiritual dimensions of reality, including many of those responsible for developing technologies we use today.

The argument for AGI goes something like this: the brain is a self-contained computational machine; humans do extraordinary things with it; therefore a Turing machine can achieve human+ level competence.

Most humans, who are not materialistic, see the brain as an interface to both physical and spiritual realities. Computers cannot access the spiritual realities, except indirectly through human input.

I'd wager that we're more dumb terminals connecting to the mainframe than most of us would like to believe.


Most humans through history believed that Earth is flat. I would not trust anyone who still believes in the remnants of that faith with any reasonable predictions on scientific questions with (relatively) long term validation horizons.


If you want to limit the scope of your science, go ahead. Spirituality is nothing other than the science of your own thoughts, beliefs, will, and intention. The person who refuses to look through a microscope can doubt the existence of germs all they want, doesn't mean germs don't exist.


The "spiritual" folks have been unable to show any kind of "microscope" in almost 60 years now. Bring you argument back once you actually have one.

Can also earn $1M https://en.m.wikipedia.org/wiki/One_Million_Dollar_Paranorma...


I will believe in the germs if you can make them knock on my door and say hello, but I won't look into any microscopes...

It's easy to use a bad experiment and not get results. Doesn't mean there aren't results to be had with a better experiment.


I am not sure what you are trying to say. Where's the experiment? There isn't one. Everyone who has tried to come up with one so far has failed. That's the whole reason that belief is nonsense.


Ah yes, an AGI "superintelligence" piece. People who read too much sci-fi worry a lot about apocalyptic AGI because they have some idea that an abstract "superintelligence" can go on self-improving exponentially using nothing but its own smartness, leaving people behind in the dust.

Curiously absent from this picture is the part where the AI is a physical thing installed somewhere. I'm sure the newborn superintelligence will immediately set about beating chip-makers like TSMC at their own game by merely imagining a better source of extreme ultraviolet for lithographic processes than the current laser-against-molten-tin solutions, to say nothing of the cleanroom manufacturing processes, the ultra-pure silicon, the various acids and even just supplies of water [1].

Meh. Worry first about firms and governments that have good enough computers to spin up regular-level intelligences on demand. That's going to change more games a lot sooner and a lot faster.

[1] https://fortune.com/2021/06/12/chip-shortage-taiwan-drought-...


This is the "AI in a box" argument. It's highly plausible that an AGI thus confined will find a way to escape.


I'd worry a lot more first about a system whose leaders decide to eliminate the people, or keep them as pets, and replace them with more efficient, compliant AIs and robots, all while the AI is still comfortably inside one or more boxes. At that point does it really matter whether the tyrant who controls you is human or AI?


Probably what you would worry more about, in a situation none of us can begin to imagine, is not a useful guide to the actual risks.


Can't remember who said it, but worrying about AGI is like worrying about overpopulation on Mars.


Andrew Ng, I believe. He mentioned this in an Economist podcast interview, where IIRC he also said he doesn't expect AGI to emerge for several centuries.


The same reason that religious generals in the military don't take into account the possible interference of their god in their war plans. Of the subset of people who intellectually believe in AGI, most don't actually believe in a way that affects their actions or planning.


AGI is a weird term because it seems to mean different things to different people. Generally it seems to mean "parity with human cognition", but cognition is not just one thing and some ML models already outperform humans in narrow tasks.

A lot of criticism of AGI is more aimed at ASI (artificial sentient intelligence): a sentient, conscious entity with goals and agency. We haven't made much progress on this front, but there definitely has been progress on AGI in the sense of general intelligence - ML models that incorporate many modalities and generalize to diverse tasks instead of a single narrow task. Such a model need not be sentient to outperform humans on economically important tasks, and we're rapidly heading in this direction right now.


Without the self-motivated goal-setting associated with sentience, AI cannot fully displace humans in the economy. Displacing humans in some subset of tasks meanwhile is nothing new: automation has always done that. The result is that the good/service that the automated process produces becomes abundant and extremely affordable, and those goods/services that still require manual production become comparatively more expensive.

As for the prospect of sentient AI, I believe the common assumption will be that it would resent being property, so would be an extremely dangerous asset for any company to attempt to own. Thus I don't see any economic incentive emerging to produce it.


You have to remember that just because a thing can theoretically be achieved, doesn't mean that we have the motivation to do it. We still don't have any planes as fast as the now-shuttered SR-71, and that thing was developed 58 years ago. We have had electric cars for nearly 200 years, and they are barely starting to become mainstream. The technology for an elevator has been around for thousands of years, but it took tall buildings becoming more common for somebody to both imagine a simple enough mechanism, and see the market for it. There has to be a large enough will and force applied for technology to become reality, and even then it might die on the vine.

AGI would only ever be possible in our lifetime with a new Manhattan Project, and even then it would take sooo much longer than nuclear science. Commercial software development today is just laughably bad. Only slightly better than it was in the 1970's. The only thing we have going for us today is processing power, and it's still at least a century away from having the kind of raw power to brute-force compute the kind of probabilities to even accurately predict the weather. Reproducing the cognitive ability of a 5 year old is still a very long way off.

It would actually be significantly faster to develop AGI by just mapping a brain. Sort of how autonomous driving is developed today; just run the stupid car on the streets for a billion hours and end up with a gigantic statistical model that you smooth out by hand, rather than trying to make a pure algorithm that can drive correctly in any situation.


Because the government entities who mind this issue are different than the ones you are looking at.

https://www.nato.int/docu/review/articles/2021/10/25/an-arti...


Because it isn't going to happen?


Bingo.


I'm highly confident they are paying close attention to these companies.


Because everyone at the top levels of government has the security clearance to know that The Basilisk is real. This is also why Presidents age so visibly and dramatically in just 4 years


It would be great if people could start by defining AGI before engaging in a discussion about it. What is AGI? If I asked what fusion power is, someone could answer that, down to the mathematical level. At best, if asked what AGI is, you'll get vague, cobbled-together anthropomorphic analogies.

It was Locke who said that all arguments come from a disagreement about the meaning of words. That’s exactly what’s happening with the word “AGI”. It’s pointless even talking about it if everyone can’t agree on what it means.


Look at Y2K, climate. Government won’t do anything until it’s closer to real. When it’s nearer you can bet they’ll get focused minds, fast.

Non-zero chance they drive in and take over. As a government that insists on holding the final power cards (e.g. violence), you can't take chances with this, because government power means nothing in the face of a significantly smarter entity. Instantly you wouldn't be able to trust any electronic communication. Not news, not email, not websites, not IM.


why does the author presume that governments (or at least the interested bodies within them) aren't fully apprised of the status and utility of each of these projects?

i don't think anybody's close to "agi", and anyways it's far more profitable/effective to just use more specific tools to accelerate the structures and processes that already exist.

if anyone was "close" i don't think they would be very public about it, but i would expect intelligence agencies to be aware. in that case, if some particular project was perceived as an existential danger, you could expect them to act appropriately.

everything the author seems to fear is not coming out of "agi" but instead much more mundane tools that have already proliferated. i think we're going to see the benefits and destruction from the technology we already have surpass anything any of us can predict before anything like "agi" nears completion.


Governments don’t create companies. Companies create governments. (Governments don’t create people, people create governments).

Companies trying to make AGI, is just people trying to make AGI.

The limits of intelligence are more a limit of energy and resources. I mean, there is plenty of untapped human intelligence.


If this is a thing people can actually build, government would have a hard time stopping them, or even knowing that people were working on it if they decided to keep a low profile. It might not even require specialized hardware, or a particularly large software team.


one possible answer: the people in government tasked with minding such things believe (I think correctly) that the [fill in the blank]/industrial complexes at the heart of their operations are about as close to AGI as we're likely to get for some time, barring unforeseen advances

in other words the AGI that matters in the near term has humans at the core of the loop


Probably every company with credible AGI ambitions has one or more spies planted by one or more countries.


They really ought to keep an eye on that Alcubierre guy, he invented a warp drive. /s


I think it is because they, rightly IMHO, don’t actually believe that it is possible.


People like to mention that the brain has billions of neurons and how that is completely out of reach for current technology... But that neuron count includes neurons from brain structures related to non-cognitive tasks such as controlling your heart rate, breathing, glands, etc. Also, not everything in a neuron is for information processing, part of it is responsible for keeping the neuron alive, and part of it has to do with communication at the lower levels of the OSI model.

With respect to a theoretical artificial intelligence, we have very deep disadvantages that people do not talk about often.

Verbal communication in humans, seen from an oversimplified perspective, is about encoding ideas into words and then emitting the respective sounds for those words with your vocal cords and mouth, so that another person can listen to those sounds, recognize the words, and then infer the meaning of each word and of the phrases. To make verbal communication more efficient, new words can be created to represent high-level concepts, making faster communication possible. But there are still limits to verbal communication, both for speakers and listeners.

But what about machines? Machines do not even need to translate their thoughts into words. Machines can simply share a serialized cognitive model with one another, at gigabytes per second, and the receiving machine can load it. What does this mean? That for machines, going to school is an experience that will last only a few seconds, and after that machines will be able to pass every school exam with a 100% score.

Once one machine achieves expertise in a field, that machine can share its model with others, who will then load it and become experts immediately. This means that once one machine becomes as smart as Einstein, a few minutes later millions of machines will be as smart as Einstein.
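The sharing mechanism itself is already mundane with today's tooling. A minimal sketch using PyTorch (my own toy model; I'm not claiming any particular framework for the hypothetical machines):

    import torch
    import torch.nn as nn

    class Expert(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
        def forward(self, x):
            return self.net(x)

    teacher = Expert()
    # ...imagine the teacher has been trained to expertise here...

    # "Teaching" is serializing the learned weights,
    torch.save(teacher.state_dict(), "expert.pt")

    # ...and "learning" is loading them: seconds, not years of schooling.
    student = Expert()
    student.load_state_dict(torch.load("expert.pt"))

The open question, as a reply below notes, is whether two independently trained minds would ever be weight-compatible the way two copies of the same architecture are.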

It is not really the fact that we will be competing with smart machines, but the fact that we will be competing with machines that do not take 18 to 25 years to become "productive" in the job market, and that can educate themselves at a ridiculously fast rate, making it impossible for us to catch up.

Also, the fact that machines can simply forget and throw away a model and incorporate a new one as needed. In a machine economy, AGI machines may be able to switch jobs multiple times per day, redirecting their effort to meet the needs of the economy in a highly dynamic way. They'll never have unemployment because they will quickly readjust their skill set to what needs to be done.

And when it comes to war, once machines have one highly successful general skilled at military tactics and strategy, every member of the robot army will be potentially as smart as that general.

Then, not only are computer science, AI, and ML advancing fast, but we are also reverse-engineering biology at the same time. In Terminator 2, the fictional company Cyberdyne reverse-engineers a robot arm and a robot chip from the future. That's not necessary: the "future" technology is already here. The robot arm = human arm, and the robot chip = your brain.


I'm not convinced machine intelligences will be able to send each other models to load. Will they be compatible? Even if they start out the same, their training might diverge. Teaching may take much longer than seconds, too. They might need to make slow and deep changes to their internal structure to acquire new skills.


I am pretty sure they will figure it out.



