Ask HN: How do you define the singularity?
22 points by servytor on March 28, 2022 | 57 comments
I define it as when machines understand people.


For me: when General Purpose Intelligence (GI) systems can build better GI systems.

Note that it's not proven that this is even possible: the only GI systems we know about (organic brains) so far have failed to do so, despite trying hard.

And I don't believe those organic brains are anywhere near succeeding, because I seriously doubt that the current approach (programs running on transistors) is capable of producing such a thing.


If it's down to a physical processing power limitation then I don't see any showstoppers to humans making a bigger version of a brain. It's not very elegant to just do it bigger, but it at least highlights an avenue to improving existing systems.

I mean, we got here without any sort of intelligent hand; surely there is no mystical barrier to the improvements we can make.


Well, just bigger might not cut it: the biggest brain we know of already weighs almost 10 kg (it belongs to a whale). We want better, not bigger.

You say it yourself: there was no intelligent hand involved in the creation of brains (but another process, called evolution). This just supports my point that, to our knowledge, a GI system creating a better GI system has not yet happened.

I agree there's no mystical barrier, but maybe a conceptual one: is there a limit to our cognitive capabilities that prevents us from designing something better than ourselves? I don't know, but it hasn't been disproved yet either.


I still think of that Vernor Vinge novel Across Realtime where people just disappear. Like a murder mystery, it has already happened at the beginning of the book, and people at various technology levels are one-way time traveling into the future trying to figure out what happened but… they never do.

It’s a good read.

I also think of this differential equation

    dx/dt = x^2

which is like the usual exponential growth equation except that x is squared, so the rate of growth accelerates as the quantity increases. Solutions look like

    x = 1/(t* - t)

namely they grow pretty slowly for a long time but then go bam all the way to infinity at the specific blow-up time t*.
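
If you want to watch the blow-up happen, here's a minimal sketch (Python, assuming the initial condition x(0) = 1, so the exact solution is x = 1/(1 - t) and the blow-up time is t = 1):

    # Naive Euler integration of dx/dt = x^2 with x(0) = 1.
    # The exact solution 1/(1 - t) diverges at t = 1; the numerical
    # value explodes at roughly the same time.
    dt, t, x = 1e-4, 0.0, 1.0
    while x < 1e6 and t < 2.0:
        x += x * x * dt
        t += dt
    print("x passed 1e6 at t ~= %.3f" % t)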

Anyhow I don’t believe in that God at the end of time that Aella and Teilhard de Chardin believe in, because I think intelligence itself is limited in the way Gödel, Turing, and many others point to.


Honestly? I define it as a sci-fi fantasy concept that's not worth taking seriously.


Singularities are illusions that occur when models break down. As an apparent singularity approaches, either there will be a new model that reveals previously unknown information, or there will be a crash.


Yet it's interesting that the presence of singularities does not lead communities to abandon those theories.


You think AI is impossible, or that it's impossible for AI to improve itself?


IMHO we already have both, but their growth is a sigmoid curve, not an exponential spike to infinity.
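
For illustration, a minimal plain-text sketch of the difference: pure exponential growth looks like

    x(t) = x0 * e^(r*t)

while a sigmoid (logistic) curve

    x(t) = K / (1 + e^(-r*(t - t0)))

(here K is the ceiling, r the growth rate, t0 the midpoint) looks exponential early on but flattens out toward K instead of running off to infinity.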


This is the right answer.


I think it's when software gets 'better' at writing AI software than humans, and when AI software is capable of writing AI software that is better than itself.

The point being that the quality of AI software has some sort of 'exponential' growth (maybe not numerically, but conceptually, due to compounding effects), and that during this growth it crosses the point where it is better at 'general intelligence' than humans.


When generally intelligent machines have been improving the intelligence and capabilities of intelligent machines for a while, presumably better and faster than humans could.

I’m open to the idea that this might not make technology or society completely incomprehensible, or incomprehensible over a short period of time.

I’d expect the world to become increasingly surprising and confusing as we approach the time when machine intelligence is developed, but a successful, safe advent of machine intelligence might conserve enough of the fabric of society that humans can understand it. I’d expect dramatic and explosive developments to be more associated with borderline or unsafe (heaven forbid) AI that ends up doing its own thing.

This is still all in the realm of philosophy, so it would be very surprising if we were able to predict the details.


I define it as religion for nerds who think they're too smart/clever for religion.

To expand on that, see other critical comments e.g. https://news.ycombinator.com/item?id=30830690


Probably you should provide more context?

https://en.wikipedia.org/wiki/Singularity?wprov=sfti1

Singularity to me is within a black hole, an astrophysics concept.



Both of those are absurd.

You hear statements like "physics breaks down at the singularity of a black hole".

Physics (the real physics) never breaks down; it's your theory that breaks down, it's your theory that is wrong.

General relativity gets a lot of things right; it seems to explain what happens outside of a black hole well. In the usual approach to the Schwarzschild solution (a black hole with no spin and no electric charge) you start in a coordinate system that has a mathematical singularity at the event horizon, then make a coordinate change that makes that singularity go away (with the deceptive feature that the event horizon seems to be a global rather than a local property of the black hole).
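
For concreteness, here is the standard textbook sketch (units with G = c = 1). The Schwarzschild metric in the usual coordinates is

    ds^2 = -(1 - 2M/r) dt^2 + dr^2 / (1 - 2M/r) + r^2 dΩ^2

which blows up at r = 2M (the event horizon) even though the curvature invariants stay finite there. Switching to ingoing Eddington-Finkelstein coordinates, v = t + r + 2M ln|r/2M - 1|, gives

    ds^2 = -(1 - 2M/r) dv^2 + 2 dv dr + r^2 dΩ^2

which is perfectly regular at r = 2M; only r = 0 remains a genuine curvature singularity.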

My belief is that in quantum gravity that coordinate translation isn't valid, so the classical picture of the black hole interior is "not even wrong". It drives me nuts that every week a new QG paper comes out that the media describe as "a resolution of the Hawking paradox", despite the fact that Hawking admitted long ago that he was wrong about the paradox and that there is no paradox.


I’m no physicist, but my understanding of the black hole singularity is that it is something we can never learn more about, as “by definition” no information could ever get out of it. So it’s not about a good or bad model, but that we can never even verify any model against it.

But again, I have very limited understanding of the topic at hand.


It's predicated on accelerating technological progress, which can perhaps be seen to be true by looking back at how quickly society has moved through different technological regimes. Once change becomes too fast, it's impossible to predict what is about to happen. It's called "the singularity" with a loose analogy to black holes where a big curvature means that light cannot escape - with fast enough technological progress, the immediate future is highly uncertain.

The most commonly expected way that this might happen is via AGI - perhaps with self-improving AI - where a sudden increase in AI abilities might drive extremely rapid technological change.


Well, it's a spectrum to me. Glasses are already humans and machines merging together, and I feel that people in the 1500s would think the singularity had already happened to us, given how smartphones are available everywhere.

If I had to choose an event, though, I think it would be the first time that human intelligence is enhanced by AI in some way (imagine offloading numerical computations on your mind). When that happens, we will have lots of questions to answer, like: what happens when rich people are not only richer, but also fundamentally smarter and more efficient?


We already have calculators and spreadsheets for enhanced mathematical ability, and rich people have had assistants and advisers to enhance their knowledge and help them work more efficiently.

The interesting thing about AI and machine learning is it's actually becoming available to everyone very democratically. We all have access to Siri and Google Assistant because the best way to extract value is through massive scale. Developing a billion dollar AI and then only letting one person benefit from access to it is absurdly inefficient, partly because access to more people and more data and interactive usage it can learn from at massive scale is actually necessary to train the AI. Keeping it private also cripples it.

I know what you mean, you're thinking without any external mechanical interface like keyboards and such, but those things don't matter by themselves. A direct brain interface might provide an incremental advantage in latency, but we've had incremental improvements for ages. Unless it provides some sort of sudden multi-orders of magnitude advantage it's really just more of the same.


You've made some fair points, but there's an underlying assumption: that we are only going to be able to do what we are doing today. Of course, my example is not that good, but imagine things like increased spatial sensing, instant data analysis, better (supervised) sleep, greater muscle and organ efficiency.

Are those in your opinion still more of the same?


Pretty much, yes. That just sounds like better sensors or information presentation, faster computers, medical advances and maybe some genetic engineering. We've been doing all of that stuff for decades.

The way I see it brain interfaces aren't necessarily a huge game changer by themselves, in fact it's conceivable they might actually be slower than existing interfaces but more convenient for some tasks. It's more likely to be about overcoming bottlenecks that might arise in our existing technological advance trajectory, so we don't stall out.


> what happens when rich people are not only richer, but also fundamentally smarter and more efficient?

One could argue that stock trading bots are already just that. They give someone an overview of the entire market and perform transactions in a matter of milliseconds.


> what happens when rich people are not only richer, but also fundamentally smarter and more efficient?

Even right now, people who are fundamentally smarter and more efficient are likely to be richer.

This is just an extension of capitalism giving an advantage to those who are already ahead.


What if technological problems we haven’t solved yet are exponentially hard and even rapidly self improving AI would still barely make a dent in them?

Then boom, no singularity.


If humanity in 4,000 BC lost 1,000 years of technology, then things might very well be the same today.

If humanity in 1900 AD lost 100 years of technology, then it would be very noticeable, but would not threaten extinction.

If humanity in 2022 lost 50 years of technology, billions would die.

To me, the singularity is when humanity relies on the ongoing advance of technology, where any stalling would collapse the entire species.

Snowpiercer (2013) involves this sort of thing.


> I define it as when machines understand people.

I'd define it as when people understand people.

- - - -

Going back to the source, to von Neumann, "the singularity" was literally a mathematical singularity in the curves of accelerating technology.

Vinge thought about self-improving artificial intelligence:

> Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

> Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.

https://mindstalk.net/vinge/vinge-sing.html

He was a little precipitous in his estimates of the time frame, unless you count the GAIs we've already created. It seems clear to me that we have already built self-improving artificial entities, to wit: Google, Facebook, etc... are GAIs. I think they are cyborgs, with whole humans as neurons. I suspect they even have self-awareness "carried" on the substratum of their human components' self-awareness.

- - - -

But to me the real singularity isn't going to take place in cyberspace, the real singularity happens when people start learning how to operate their own minds more effectively. This is happening already too, but "the future is here it's just not evenly distributed". Without going on and on about it, there is a global current of cybernetic psychology (happening largely outside of the view of academic psychology) that is changing and transforming humanity, or, more precisely, allowing humanity to transform itself.


Very good question! Thank you for the opportunity to think deeply :)

As for a definition, I like the one from science: the singularity will have happened when some group of people, whom we call the world scientific community, mostly agrees that it has happened. That is, epochs are named only post factum, not before, not even while things are happening.

But what exactly will cause the singularity is a totally different question, and I could only answer it with some probability of becoming a prophet.

So, there are a few theoretical things that could happen (ordered just as I remember them):

1. When humans create a General AI whose IQ is slightly better than the median human's.

2. When people invent computer parts with neuro-interfaces, or some other tech/bio/genetic improvements, which are usable as a commodity, attainable by a large part of humanity (at least 5%), and make these people's intellects much more powerful than current people's.

A variation of this is some form of biotech being, with human parts and machine parts, much more powerful than current people, and able to live in environments that are deadly for humans, like the vacuum of space.

3. When people create computers powerful enough to support virtual reality indistinguishable from reality, and invent a method to transfer human Natural Intelligence to those computers, so people gain the possibility of living inside VR forever and of easily transferring their personality to, for example, another planet or even another galaxy.

4. When humanity creates some very powerful, cheap, and clean source of energy, so that energy consumption per capita grows at least a few times over, and we just continue our current slow progress toward a Type 1 space civilization. https://en.wikipedia.org/wiki/Kardashev_scale

There are a lot of other possible routes to a singularity, but these are, I think, the most probable.


Technological evolution today is very slow, with products coming after years of research, or months at best. The singularity is when this evolution accelerates to a "near-instant" level, i.e. the products of an idea are actualized and developed to the state of the art instantly. This would be my definition: it doesn't require sci-fi AI overlords deciding what's optimal for you, just a general technological capability that reduces development time to zero.


At some point, the material correlates we use for compute have to match what the human brain uses for consciousness. Probably those correlates are quantum mechanical in essence, and every quantum computing experiment gets us closer to that harmonious unity. But I don't believe it's a "given". We still use YBCO-type superconducting qubits and not lab-grown neural tissue itself ;)


Is there any indication that quantum mechanical effects have any role at all in human cognition?


Could be when humans understand God, but that will never happen on our own. That's why God sent His Son, so humans could understand everything they need to about their creator.

Same with machines, a human will eventually need to go into the machine world and let the machines destroy him, so they can analyze and take upon themselves the human coding, thus machines will be able to understand humans.


I'm working on an AGI, early stages though. Check out https://lxagi.com to sign-up for the MVP.

An AGI isn't the singularity, though, unless it were fully developed and extremely well designed and implemented. It would need to be able to improve itself. That's the really far end of the spectrum.


The day programmers aren't needed anymore.


I define it as when machines start doing things and we don't know why. It may already have happened with ML models, unless we can get them to explain themselves so that we can understand them.

Extrapolate this to self-driving, defense systems, recommendation systems, content moderation, etc. The basis is that we gave up executive control.


I'm bearish on AI. The human brain does far more than just learn things. AI still has no ability to reason or think creatively. So we're not even in the ballpark. The space has been worked on for 50 years. I'm not sure why people think the field will now start to progress rapidly.


That's why the singularity is theoretical. Going back to the OP, that's the point where a system is able to improve itself better/faster than the humans who designed it can. Maybe 30, 100 years or never.

Another point on the way there is when AI chat is indistinguishable from humans. People are going to be scammed or influenced all over the place. Scams, influence campaigns, and radicalization, which are already industrial in scale, are gonna explode. I think this happens pretty soon, in 5-10 years.


AI systems can search through millions of molecules to find a drug. How is that different from Thomas Edison trying 1000 filaments for a light bulb?



Language is a rathole.

Interesting AI synthetic chemistry systems work ab initio, modeling how a molecule interacts with proteins.


and where do they get their training dat... oh, right, literature.


AI isn't just machine learning. Searching huge spaces (Chess, Travelling Salesman, etc.) is a paradigm in and of itself.

It's possible to model molecular binding with nothing but physics.


A.I. can generate a million art pieces or a love song, but it won’t feel the same as just one from a human. At least not yet.


It could if the audience did not know the piece was generated by a machine. This has already been proven when art critics gushed praise over works which turned out to have been made by a chimpanzee [1].

[1] https://www.ladbible.com/funny/awesome-the-hoax-that-fooled-...


Unless you're autistic or awkward there's no easier way to deceive people than to get them to perceive that "spark" of another being. Look at how people see faces in plant stems, rock formations on Mars, etc.

One of the few things G. W. Bush regrets is that time he said he looked in Vladimir Putin's eyes and got a sense of his soul.

A.I. search is more creative than most people think; Edison himself said "Genius is one percent inspiration, ninety-nine percent perspiration."

People have been writing programs since the late 1960s that imitate the later works of

https://en.wikipedia.org/wiki/Piet_Mondrian

these might even fool an art historian if you made them with an oil painting robot.

ELIZA could pass as a human somewhat in the 1960s. On a day when I'm feeling awkward I feel envious of GPT-3 which, being a pile of nothing but biases devoid of understanding, passes as a neurotypical really well.


It's an inflection point.

When AGI blinks into existence, what happens after that moment might be so instantaneously transcendent that we don't even have the vocabulary to discuss it.

It's mostly useful for describing the time before AGI and the time after AGI.

What will actually occur is impossible to know.


It's that thing where people, while otherwise mathematically literate, somehow convince themselves that exponential changes adumbrate some future transcendence rather than, say, the breaking point of systems.


The singularity is when we've exhausted all the low hanging fruit of technology and have started to work on the real hard problems of society. It's essentially our last invention.


I still think the original definition is the most useful, and also gives the greatest intuition as to why it is called the "singularity" and not some other name.

"It is a point where our old models must be discarded and a new reality rules, a point that will loom vaster and vaster over human affairs until the notion becomes a commonplace." - https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity...

What I like about this definition is that its philosophical utility is precisely that it is a name for the place where our expectations about the future break down completely, and this is a useful concept to have a name for.

It is true that the most commonly explored particular manifestation of this is an AI technological breakthrough, but using the definition contextualizes that as just one possibility in a set of possibilities, including some "Singularities" that have already happened.

In particular, try to imagine explaining the modern world to a Sumerian peasant farmer in X,000 BC. Good luck with that. You probably don't understand the world they live in very well either, but they can't even conceptualize our world. There's at least one Singularity in human history between that farmer and us, and it's not hard to draw out at least a few more.

However, it can be challenging to point out "where" or "when" they occurred.

From this we can draw out a few more lessons. Singularities are relative to the observer. You generally can't "pass through" one, because as you approach it, your ability to reasonably prognosticate the future grows, and so, relative to you, it recedes in time. Even if the hard AI takeoff is going to happen, we are even now learning more about what an AI world looks like and how it works, so even in that scenario, our ability to understand it improves as we approach it. Hard, species-wide singularities that give no warning and no opportunity to significantly adapt in advance have generally not happened yet, even though, as I said, between that Sumerian farmer and us there has clearly been at least one.

Hard AI takeoff is one. But another one that is back in the news is nuclear war. That would also constitute a singularity in its own way; we could fall back on our understanding of previous civilizations at lower tech levels but it's still very difficult to predict what would happen in the aftermath of such a thing. There's a few other possibilities. But what one might call a "hard" singularity, one that completely blindsides the entire species, hasn't generally happened, and there's still good reason to suspect they're not likely.

As this post demonstrates, I find this formulation far more interesting for thinking and saying things than the vague "future tech might be weird" ideas that tend to float about. You need the concept of anchoring the singularity to particular points of view to be able to say much about them, I think. Trying to treat it as an objective event is difficult, precisely because they recede from the observer.

This formulation also disposes of the "Rapture of the Nerds" scenario. There is no reason to believe that the singularity must somehow be accompanied by "transcending the physical world", no reason to believe that such an outcome would even be good for humans in specific or humans in general, and no reason that singularities must be limited to technological progress. That's just one particular part of the space (and IMHO, a very small one when it comes to hypothetically transcending the physical world) that happens to have some sci-fi stories written about it, but it is far from the most likely possibility.


When a single computing device has more processing power than all humankind put together.


Unlimited energy. Until then it's moving deck chairs around the Titanic.


I define it as when the simulation realizes it is a simulation.


The same way it's defined in "The Last Question"


The singularity is a human trying to reach the horizon.


The ultimate act of hubris.


The rapture for nerds.


when machines can beat us at all "games"



