Hacker News new | past | comments | ask | show | jobs | submit login

Two people in a basement might make significant advances in AI research. So from the start, AI appears to be impossible to regulate. If an AGI is possible, then it is inevitable.

Not necessarily. AGI might be possible but it's not necessarily possible for two people in a basement. AGI might require some exotic computer architecture which hasn't been invented yet, for example. This would put it a lot closer to nuclear weapons in terms of barriers to existence.




Computers are far more general purpose. Any computer can, in theory, run any program; at worst it would just run slower.

Developing specialized hardware isn't out of reach either, thanks to FPGAs.


We're talking about AGI, an area where speed can represent a difference in kind, not merely degree. The same sort of distinction goes for cryptography too.

One sort of exotic computer architecture I had in mind was a massively parallel (billions of "cores"), NUMA type machine. You can't really do that with an FPGA, can you?


If all we needed was "billions of cores", we could have done it by now by simply putting together a million GPU cards in a large cluster. No exotic architectures needed.


That's not the same thing though. All those GPU cards have to talk to their memory through a memory bus. I'm talking about a system where memory is divided up among the cores and they all communicate with one another by message passing. This is analogous to the architecture of the human brain.
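The distinction between shared-bus and message-passing designs can be made concrete with a toy, synchronous simulation: each "core" owns only its local state and learns about the others exclusively through explicit messages, never by reading shared memory. Everything here (the node count, the random weights, the averaging update) is invented purely for illustration.

```python
# Toy sketch of a message-passing architecture: N "cores", each with
# private local state, exchanging weighted messages each round.
# No core ever reads another core's state directly.
import random

random.seed(0)

N = 8  # number of cores/"neurons" (tiny, for illustration)

# Each core's private state, plus directed connection weights between cores.
state = [random.random() for _ in range(N)]
weights = {(i, j): random.random() for i in range(N) for j in range(N) if i != j}

def step(state):
    """One round: every core sends a message along each outgoing
    connection, then updates its local state from its inbox alone."""
    inbox = [[] for _ in range(N)]
    for (i, j), w in weights.items():
        inbox[j].append(w * state[i])           # message from core i to core j
    return [sum(msgs) / len(msgs) for msgs in inbox]  # purely local update

for _ in range(3):
    state = step(state)

print(len(state))  # still N independent local states; no shared bus involved
```

A real machine of this kind would run each core's update concurrently; the synchronous loop above just makes the communication pattern visible.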


We still have no clue about the architecture of the human brain. Even if we did, it's not clear we need to replicate it.

My point is: even if we had, say, a million times more flops and a million times more memory than the largest supercomputer today, we would still have no clue what to do with it. The problem is a lack of algorithms and a lack of theory, not a lack of hardware.


> We still have no clue about the architecture of the human brain. Even if we did, it's not clear we need to replicate it.

We do have a clue about the architecture of the human brain. Billions and billions of neurons with orders of magnitude more connections between them.

> even if we had, say, a million times more flops, and a million times more memory

The point is that we could have those things but we don't have a million times lower memory latency and we don't have a million times more memory bandwidth. Those things haven't been improving at all for a very long time.

There are tons of algorithms we can think of that are completely infeasible on our current architectures due to the penalty we pay every time we have a cache miss. Simulating something like a human brain would be pretty well nothing but cache misses due to its massively parallel nature. It's not at all inconceivable to me that we already have the algorithm for general intelligence, we just don't have a big enough machine to run it fast enough.
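The "nothing but cache misses" access pattern described above is essentially pointer chasing: every memory access depends on the result of the previous one, landing somewhere unpredictable that no hardware prefetcher can anticipate. A minimal sketch (Python hides the actual cache behaviour, so this only illustrates the data-dependent access pattern itself; the names are made up for the example):

```python
# Pointer chasing through a random permutation: each hop's address is
# data-dependent and unpredictable -- the worst case for CPU caches.
import random

random.seed(1)

n = 100_000
perm = list(range(n))
random.shuffle(perm)

# next_idx[i] says where to jump next; following perm in order yields one
# big cycle through all n slots in a random order.
next_idx = [0] * n
for a, b in zip(perm, perm[1:] + perm[:1]):
    next_idx[a] = b

i, hops = perm[0], 0
while True:
    i = next_idx[i]   # each load depends on the previous load's result
    hops += 1
    if i == perm[0]:
        break

print(hops)  # one full traversal: n data-dependent hops
```

In a compiled language with a working set larger than the last-level cache, nearly every one of those hops would stall on main memory, which is the latency wall the comment is pointing at.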


> We do have a clue about the architecture of the human brain. Billions and billions of neurons with orders of magnitude more connections between them.

You call this a "clue"? It's like saying that computer architecture is "Billions and billions of transistors with orders of magnitude more connections between them". Not gonna get very far with this knowledge.

> ...we don't have a million times lower memory latency and...

Ok, let's pretend we have an infinitely fast computer in every way, with infinite memory. No bottleneck of any kind. What are you going to do with it, if your goal is to build AGI? What algorithms are you going to run? What are you going to simulate, if we don't know how a brain works? Not only do we not have "the algorithm for general intelligence", we don't even know if such an algorithm exists. It's far more likely that a brain is a collection of various specialized algorithms, or maybe something even more exotic/complex. Again, we have no clue. Ask any neuroscientist if you don't believe me.


> Ok, let's pretend we have an infinitely fast computer in every way, with infinite memory. No bottleneck of any kind. What are you going to do with it, if your goal is to build AGI?

You would obviously run AIXI: https://wiki.lesswrong.com/wiki/AIXI

We know how to make AI given infinite computing power. That's not really hard. You can solve tons of problems with infinite computing power. All of the real work is optimizing it to work within resource constraints.
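The flavour of "AI given unbounded compute" can be shown on a toy scale: enumerate candidate programs from shortest to longest and keep the first one consistent with the data, a crude, finite stand-in for the Solomonoff-induction core of AIXI. The three-instruction DSL below is invented purely for this sketch and has nothing to do with AIXI's actual formalism.

```python
# Brute-force "shortest program that explains the data", shortest first.
# With unbounded compute this kind of search scales; in reality it
# explodes exponentially, which is exactly the optimization problem.
from itertools import count, product

OPS = ["+1", "*2", "-3"]  # our toy instruction set

def run(program, x):
    for op in program:
        if op == "+1": x += 1
        elif op == "*2": x *= 2
        elif op == "-3": x -= 3
    return x

def shortest_program(pairs):
    """Return the shortest instruction sequence consistent with all
    (input, output) pairs, trying length 0, then 1, then 2, ..."""
    for length in count(0):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == y for x, y in pairs):
                return program

# Which short program maps 3 -> 5 and 10 -> 19?
print(shortest_program([(3, 5), (10, 19)]))  # → ('+1', '*2', '-3')
```

An infinitely fast machine could run this over a Turing-complete instruction set and arbitrary data; the entire difficulty on real hardware is that the search space grows as `len(OPS) ** length`.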


Yes, I've just been thinking about it, and even without looking at your link, it's easy to see how one could build (find) AGI given an infinitely fast computer.

Ok, then, back to the very fast computer.


> Ok, let's pretend we have an infinitely fast computer in every way, with infinite memory. No bottleneck of any kind.

Simulate the set of all possible states and find the ones which resemble AGI.


How are you going to test each state for AGI-ness?


Suppose it's impossible to build one that runs in a reasonable timeframe without access to optimization via quantum tunneling.



