> "upload" our minds ... to explore the universe

I don't think it'll ever happen.

Uploading something just means making a copy, and if that becomes possible, AI will potentially be millions of generations smarter than a carbon copy of a human brain. It's sort of like digitizing a record or a wax cylinder: we don't do it for the quality, we do it for historical or nostalgic purposes.

If AI becomes possible, then the only reason to upload a human algorithm would be the same reason we digitize the wax cylinder. It wouldn't be to rely on the created construct for decision-making or exploring.

More likely, AI programs would do all the exploring, and the results would be brought back holodeck/matrix style for people to experience safely in the confines of Earth.




Never? I agree our minds are incredibly complex, but they're not magic. Once humankind can measure what happens inside a brain, it surely should only be a matter of time before it can replicate the brain's machinery. That may be quite far in the future, but consider how far medicine has come in the past 25, or even 10, years with regard to replacing other human organs.

My personal opinion is that AGI will look a lot more like a convergence of man with machine than like software suddenly gaining consciousness. I think it will be viewed as performance enhancement much more than as the merging of two foreign entities. I think it will begin with an interface that allows us to communicate with software neurally, or to store our memories externally (even if they're just copies). Early research to this effect is already happening in rats[1][2]; I have a hard time believing that our brains are so complex that they will never be deciphered.

[1] http://www.technologyreview.com/news/424452/a-first-step-tow...

[2] http://www.technologyreview.com/news/511721/rats-communicate...


'columbo' didn't say uploading would never happen, just that uploading in order to explore the universe doesn't make sense.


> Uploading something just means making a copy, and if that becomes possible, AI will potentially be millions of generations smarter than a carbon copy of a human brain.

I see where you're going with this, but I would like to compare human minds more to a work of art than a tool.

Even today we can easily generate a familiar gradient of all colours on a canvas, or numerous gorgeously fascinating fractals. Yet we continue to value original hand-crafted art far beyond procedurally generated art.

I believe you're right that we will be able to build AIs which far exceed our capabilities in many fields, probably shortly before we figure out how to mirror a human mind within a machine. Even still, there will continue to be value in an organically grown human mind, shaped by decades of experience, weathered by relationships and subtle chemical reactions.

Perhaps someday we will be able to simulate that as well, but it will take much longer and at that point we would have created a true artificial life form.


why would the ai be millions of generations smarter?

what if it only means we can run existing human algorithms on faster and faster hardware?

maybe the future belongs to recognisably human cognitive processes, which experience the universe unfolding at a snail's pace because they are running millions of times faster than we are


> why would the ai be millions of generations smarter?

Pure uneducated speculation on my part.

1) In order to emulate a Super Nintendo you need hardware at least 20x faster than the hardware that runs the Super Nintendo natively. In order to emulate a human brain you'd need something with far more potential... and if that potential exists, why waste cycles emulating? If it isn't emulated, then it is something else entirely.
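For a sense of scale on point 1, here is a hedged back-of-envelope sketch. The 20x overhead factor is the figure from the comment above; the ~3.58 MHz SNES CPU clock is an assumption I'm adding, not something stated in the thread:

```python
# Back-of-envelope for the emulation-overhead argument above.
# Assumptions: the SNES's main CPU runs at roughly 3.58 MHz
# (not from the thread); "20x" is the overhead factor claimed above.
SNES_CLOCK_MHZ = 3.58
OVERHEAD_FACTOR = 20

# Rough clock speed a general-purpose host would need to emulate it.
host_clock_mhz = SNES_CLOCK_MHZ * OVERHEAD_FACTOR
print(f"General-purpose host needed: ~{host_clock_mhz:.1f} MHz")  # ~71.6 MHz
```

The same square-peg-in-round-hole overhead, applied to a brain simulation, is what makes the "why waste cycles emulating?" question bite.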

2) We already choose to rely on robots to explore the ocean floor, with the occasional human showing up just to make a documentary. We'll have cars driving us around, we use video for everything, and we'll rely more and more on things being brought to us instead of going to various locations ourselves.

3) Even if we start with human intelligence, the ability to remove unnecessary components will create something that is no longer human. I don't believe an AI program or modified human construct will behave remotely like a human in its needs and wants. Take a really basic element of human/animal behavior: survival. I don't think AI would find it necessary to 'defend itself', because it is a logical next step for any advancing species: if it is destroyed today, it'll come back in ten, a hundred, or a million years. AI (if it is possible) is as inevitable as the printing press, the lightbulb, and the stone mill.

Of course, YMMV. We may as well debate how many noodles the Flying Spaghetti Monster has; there's only 1% science and 99% science fiction at this point.


> you need hardware at least 20x faster than the hardware needed to run the Super Nintendo natively

Only if the emulating hardware is general purpose. Yes, simulating a human brain on a cluster of Intel i7s or AMD GPUs, or their future equivalents, will probably require an enormous hardware investment.

But once you have one working emulated brain on general-purpose hardware, you can produce specialized hardware. E.g., if we knew that the Super Nintendo held the secret to immortality, we would soon make updated versions of all of its chips with today's technology, and with decades of improvement in manufacturing processes you could probably fit dozens or hundreds of SNES cores on a single modern chip.
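To gesture at why "dozens or hundreds of cores" is plausible, here is a hedged sketch of ideal die-shrink arithmetic. Both process figures (~1.2 µm for the original SNES chips, ~7 nm for a modern node) are my assumptions, not from the comment, and real-world shrinks fall well short of the ideal square law:

```python
# Ideal-scaling sketch for the die-shrink argument above. Assumed
# numbers (not from the comment): the original SNES chips were made
# on roughly a 1.2 um (1200 nm) process; a modern node is ~7 nm.
OLD_NODE_NM = 1200
NEW_NODE_NM = 7

# Ideal area scaling goes with the square of the feature-size ratio.
area_shrink = (OLD_NODE_NM / NEW_NODE_NM) ** 2
print(f"Ideal footprint reduction per core: ~{area_shrink:,.0f}x")
```

Even if the real reduction is only a small fraction of that ideal number, packing dozens or hundreds of such cores onto one chip is easy to believe.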

Likewise, once we make an initial discovery of how to emulate human brains on general-purpose computers, the things we find out from that implementation will start to pave the way for special-purpose computers that can more cost-effectively execute the brain emulation application.

Also, it's entirely possible that we could emulate human brains without knowing how to create more potential. E.g., we might be able to simulate the physics and chemistry of human brains well enough to replicate cognitive processes, while our best experts remain stumped at figuring out exactly which essential features of the physical/chemical processes translate into intelligence. Early experiments at tweaking model parameters might then show that the human brain has evolved into some local maximum, where any tweak produces a less powerful intelligence or one that's defective in some way. After this is observed over many experiments, it might become taboo to tweak the model for ethical reasons, making the situation stable over the long term.

In other words, we might be "forced" to "waste" cycles if we can't figure out which cycles we're wasting, and the experiments that might tell us would entail sacrificing too many artificial humans to be ethically acceptable.

This state of affairs isn't necessarily likely, but it's certainly possible.



