Ask HN: What are the early signs of singularity?
70 points by itchyjunk 10 days ago | 105 comments
Post-singularity, people (?) might look back and attribute certain events as major indicators of the impending singularity. But for someone without that hindsight, looking into the future, what types of indicators would you look for? Also, assuming that even if the singularity is achieved (?) at some location, the effects would take time to spread. Say it's already been reached at the opposite corner of the world. How long would it take for it to be apparent, and what are some indicators? Also, happy Thanksgiving.





I think a singularity is now impossible... what we have to do is figure out how to avoid destroying humanity in the next century. Our political systems are imploding because of capture by the donor class. The emphasis on extracting profit at all costs has increased the fragility of our supply chain to the breaking point.

Physics and Biology have both sufficiently advanced to make an accidental destruction of the human race a non-zero probability.

I see a collapse type of singularity as far more likely than the rise of any AI-powered superintelligence.

It appears I'm not the only one, judging by the other comments here.


> Physics and Biology have both sufficiently advanced to make an accidental destruction of the human race a non-zero probability.

Not only that: the findings have to land in a society that cares about the results. You can have all the findings you want, but where the rubber hits the road, the rest of society has to do something with them.

In my eyes this is the biggest problem right now. It is not that science does not do enough; it is that when science conclusively finds something, we cannot act on those findings at scale for socioeconomic reasons: those profiting right now don't want to change things, because they are profiting; those who suffer are gaslit in a way that makes them attack scientific thinking itself; and so on.

Unless we tackle that, why do science at all? Why allow scientists to assert power over our political thinking, if we believe it is they who have a mismatch with reality?


Looking at the machine learning ecosystem we have at the moment, there is a good degree of democratisation: anyone with the skills and funds can build, optimise, and deploy ML models. This is driven in part by the open source movement. If AGI goes down the same route, i.e., researched by academia and funded by private enterprise, I can't see why it wouldn't be just as open.

If everyone had access to super AI, what would that mean for our democracy? I can't see social networks surviving (AIs could just flood all feeds), which I'm probably happy about. Some governments, like the UK and the EU, are taking steps now to ensure AI doesn't start violating human rights, enabling discrimination, etc. We just need to get the US and China on board (unlikely?).

So to summarise, I remain optimistic.


I was going to comment something similar but not quite as negative.

I don't think the singularity is "impossible" now, but I do think we're moving much more slowly toward it than if capital were allocated more efficiently, and if we were not significantly distracted by political issues that are taking up far more of our energy than they should.

Technology is still advancing, so we're still inching ahead, and that means that whenever we take our foot off the brakes (by focusing less on politics, or improving capital allocation) we'll get to dramatic progress faster. Unfortunately, things are slowed down enough that we might not reach dramatic progress resembling the "singularity" in my lifetime.


> Our political systems are imploding because of capture by the donor class.

Disagree. I think the root cause of all of this is the Internet. Remember, we've all been connected together by high-speed networks only in the last 20 years or so, and mostly in the last 10 with respect to social media. The colossal impact this has on the world's population is immeasurable and inconceivable. Just today, people on HN are talking about Black Friday sales in Iran.


I don't understand this concept of singularity.

Why do we think that a system designed by humans can overcome the limitations of its own design(ers)?


Why not?

Airplanes designed by humans fly.

Calculators designed by humans calculate faster too.


It's difficult to articulate but I think you misunderstood my point.

Airplanes and calculators exist within the realm of what it is possible for a human to design. Why is it that we have not designed a calculator that can invent new forms of numbers or a new paradigm of maths?

Likewise, we seem to think that some kind of singularity will just emerge, but what I'm saying is that something that exists or is created within the bounds of human imagination cannot (imo) escape the limitations of that same imagination.

This is why it is so difficult for us to really imagine what a transcendent all knowing AI would be like.


Your argument boils down to saying: this cannot happen because I have belief XYZ, which I hold because it has never happened in the past.

It should be understood that the singularity has not yet been achieved. That is not in itself a reason to believe that it cannot be!

More below.

>> Airplanes and calculators exist within the realm of what it is possible for a human to design.

Any actual reason to believe that (a) AGI that exceeds human performance, (b) weak AI, (c) strong AI, (d) AI that can improve itself, (e) AI that can autonomously pick up new domains, (f) AI that has a sense of humor, ... are not within "what is possible for humans to design"?

>> Why is it that we have not designed a calculator that can invent new forms of numbers or a new paradigm of maths?

As far as I know, Doug Lenat developed a program called Eurisko, which came up with a notion that it was not taught. As I recall, the program started looking into the opposite of prime numbers, i.e., numbers which are highly composite. Doug later discovered the concept pre-existed.

I have myself also coded a program which was domain-specific; however, it provably made inventions that were patentable, and which I could not have invented. I know because one of them was recently invented by a colleague and was even filed for patenting.

A desk calculator is the wrong example here; that machine is too limited. However, there may not be a valid and scientific reason to believe that intelligence, judgment, creativity, and wisdom are more than mere calculations. (I won't delve into consciousness, etc., though I could.)

>> something that exists or is created within the bounds of human imagination cannot (imo) escape the limitations of that same imagination.

I have already given examples that are small but which counter that.

AlphaZero was limited to playing chess, agreed, but its imagination did extend well beyond that of its creators, or even the human chess players it played against. In fact, very recently there was research published trying to understand how it, or a more recent chess program, loosely speaking, 'thinks'.

AlphaZero was limited to playing chess, because we trained it using specific cost functions for chess.

If there's enough training data, etc., then in theory we could define cost functions similar to those of a species or a biological ecosystem, i.e., survival instincts, procreation, whatever. Such a system may develop its own internal models and behaviors, and possibly even notions of various emotions.

Just think it through. If you can come up with an actual answer for why it cannot be done, then both you and I (and at least I :-)) will be able to invent something to break that answer! :-)

It's better to ask why it cannot be done; that will take us there. (The debate over whether we want to or not is a separate track.)


One metric will be the interfacing of computing hardware with biological systems. Computing hardware is still way too massive in size. A human red blood cell is discocyte-shaped, approximately 7.5 to 8.7 μm in diameter and 1.7 to 2.2 μm in thickness. By comparison, the current state of the art in microcomputing was heralded more than 2 years ago when the University of Michigan announced its engineers had produced a computer that is 0.3 mm x 0.3 mm - or 300 μm x 300 μm. Getting close, but still well over an order of magnitude too large in linear dimension to go to the Apple Store and drink a bottle of iFluid containing millions of networked microcomputers that can be transported in our circulatory system to interface directly with the nervous system. Meanwhile we have to work with neural implants.

Our spinal cord is a relatively small set of cables, which is enough to transfer information for just about every movement of our body. We may not need to interface with the whole brain, just with the right cells, to create a wide-enough communication avenue between our brains and computers. So, implants should be enough.

As for computational capacity, the brain is very intricate but, like everything in biology, rather suboptimal in terms of construction; we can simulate some of its primary functions, like visual/audio recognition, with just a small fraction of the computational nodes.


- Financial company run by an AI outperforms human-run companies.

- Self-driving cars actually work reliably.

- Robot manipulation in unstructured situations starts to work.


First one is already a reality.

No, it's run by machine learning systems, not AI.

I feel like those people correcting everyone about crypto



My personal definition / distinction between ML and AI is that the latter refers to an agent, whereas the former refers to an optimization procedure.

True, there are times where a system can be described as both to some degree, but one is typically more than the other.

In the case of Amazon's automation processes, I would call that ML. In that case, you are more likely to tweak the parameters to adjust the procedure directly, instead of having to interface with an AI agent's "communication interface" to achieve the same.


This site helps me realise just how little I know about things and that I should try to keep my comments to myself more often :-)

This is a good lesson to learn. Sadly, I need to be reminded of it far too often.


I have these knee-jerk reactions that I get from reading the mainstream news and reacting to its attempts to dumb down writing and overgeneralise issues like this.

Really? Which company?

AI "run" seems like a stretch, clearly the owners are humans.

But I think the answer if you mean "AI does trading" is almost everyone right?

Especially if you use the '80s and '90s definitions of AI, which included expert systems. The end game for AI might not be neural networks after all; I doubt we'll know which approach is correct until the problem is solved, since I don't know how else you'd provide evidence that you were right.


Yeah these pretty much nail it.

Even hamster-managed crypto investments are doing better than most people.

Damn, I just found out that mr_goxx died!


Mark Karpelès dead? Not finding that.

There's a short story by Kafka, "Investigations of a Dog," that seems to ask the same question from the perspective of a dog. This dog notices that there are phenomena it can't explain, such as why dogs dance around and make certain noises just before their food appears. On the one hand, it can't manage to get its fellow dogs interested in these questions. On the other, it catches glimpses of a higher species of dogs who are invisible but somehow bring its food.

I'm thinking in a similar vein, of what behaviors are inexplicable in humans, such as why we hold hands and recite certain verses before we receive our food, or are so mesmerized by particular sequences of tones and sounds that some other humans seem compelled to make.

Some possible clues:

- Hearing new kinds of music that are noticeably not meant for human listeners, e.g., if they are not based on an analysis of human music. I'm only imagining that a real intelligence will eventually get sick of our music and come up with something that it prefers. If it cares about music, of course.

- A sustainable improvement in the management of humans, resulting in more uniform and better health. This is an analogy to the fact that our livestock live under more uniform conditions than wild animals. Assuming that humans are useful for AI, or that they're even aware of our existence.

- A use for the blockchain. ;-)


> Hearing new kinds of music that are noticeably not meant for human listeners, e.g., if they are not based on an analysis of human music.

Modem sounds.


I think you need to define singularity here. If it works historically like a black hole -

Basically, a black hole is not defined for the external observer by the singularity, but by the radius under which the escape velocity exceeds the speed of light. The external universe observes a steepening gravitational force, and then an impenetrable black wall.

If you look at human history, lots of things have been accelerating since the dawn of industrialization (and after scientists and mathematicians figured out a way of existence where instead of hiding their discoveries they flaunt them).

Is the Jacquard loom the first sign of impending computational nirvana? From a historical perspective a hundred years is a really brief time, so if I wanted to go Neal Stephenson-witty I would say yes, that was the first sign, and the founding of the Royal Society another.

It depends how far from the event horizon you want the signs to be, and whether we are on a historical gradient towards it - which we probably won't observe, since a) it's in the future, and b) it's an event horizon, so it will completely surprise us.

All of the above was more or less tongue in cheek.


When human knowledge starts doubling instantly, or at most every few seconds... THAT is a singularity.

In 1900, human knowledge doubled approximately every 100 years. By the end of 1945, the rate was every 25 years. The "Knowledge Doubling Curve", as it's commonly known, was created by Buckminster Fuller in 1982. In an article on Industry Tap, David Schilling went on to say that not only is human knowledge, on average, doubling every 13 months; with the help of the Internet, we are quickly on our way to knowledge doubling every 12 hours.

If you want to take this even further down the proverbial road, combine this with Ray Kurzweil's (head of AI at Google) "singularity" theory and the ideas of Google's Eric Schmidt and Jared Cohen discussed in their book "The New Digital Age: Reshaping the Future of People, Nations and Business", and you have some serious changes to technology, human intelligence, and business coming down the pike whether you like it or not - https://lodestarsolutions.com/keeping-up-with-the-surge-of-i...

That still leaves singularity undefined. What does it mean, socially and culturally?

I imagine paying attention to the capabilities of search engines will be important. Classical computing is motivated by a desire to retrieve information quickly. Search engines are motivated by a desire to retrieve information using fuzzy semantic concepts like language, features of an image, etc.

Much of modern deep learning is motivated by modeling the task of information retrieval as a differentiable matrix multiplication (e.g. self-attention) in order to back-propagate error to the parameters of a large graph using stochastic gradient descent. In theory, this can give us a single checkpoint, which runs on a single GPU, that does more-or-less all of what "core" Google search does.
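For a concrete picture of "retrieval as a differentiable matrix multiplication", here is a minimal single-head self-attention sketch in plain NumPy. It's an illustration only; all names and shapes are made up, not any particular model's:

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # X: (n, d) token embeddings; Wq/Wk/Wv: (d, d) learned projections.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # query-key similarity
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)       # softmax: a soft "which row do I retrieve?"
        return w @ V                             # weighted retrieval of the values

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)

The softmax-weighted sum is effectively a fuzzy lookup table: nothing in it is discrete, so the error signal can flow back through the "retrieval" into the parameters.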

I don't think that quite guarantees a singularity. There will need to be a lot of work afterwards.

Humans can update their priors by collecting new information about their environment in real-time. They can also (sort of?) simulate situations and update their priors from that. Reinforcement learning could be crucial to solving these issues as it allows agents to learn through both real and simulated environments.
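As a minimal illustration of what "updating a prior" means mechanically, here is a one-observation Bayes update; all probabilities are invented for the example:

    # Toy Bayes update: revise belief in a hypothesis after one observation.
    prior = 0.5                   # P(H) before the new information
    p_obs_given_h = 0.9           # P(obs | H)
    p_obs_given_not_h = 0.2       # P(obs | not H)

    evidence = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    posterior = prior * p_obs_given_h / evidence   # Bayes' rule
    print(round(posterior, 3))    # 0.818: belief revised upward

Reinforcement learning agents do something loosely analogous, revising value estimates from both real and simulated experience.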

Robotics may need to catch up, although recent advancements are pretty crazy.

Assuming we don't all die first, of course.


Isn't it somewhat inherent to the singularity concept that there won't be early signs? Either the machine has achieved runaway self-improvement capability or it hasn't.

Then the early sign is an AI that can self-improve, which current AIs are too narrow to do. Fortunately, our technology isn't there yet; the first singularity would crash or run out of resources. I hope the next iterations will be developed in isolation after showing spontaneous exponential growth.


I'll wait until AlphaGo Zero improves itself to the point where it decides to do something that isn't playing Go.

Yes, I agree to that.

This was my impression. I think the singularity term was used to try to convey that as well

Prior to runaway self-improvement, the machine would be improved by human engineers and could be compared to them: way worse, a bit worse, similar, a bit better, much better, etc. So you should see that happen.

Presumably there's a stage between achieving sentience and realizing the importance of protecting power sources.

Systems around us and designed by us tend to have a diminishing-returns problem. I wonder what the limiting factor in the architecture of the human brain is, should we want to scale it further. How much more intelligent can the cortex get without a massive architectural shift?

I like to think that our first intelligent machines will run on some very specialized hardware with, by definition, a deliberately designed architecture. I suppose both will have many different limiting factors on how deeply a machine can reason about itself. That's why I believe there won't be a runaway effect in intelligent modelling/reasoning. It'll be a step function.

Another related issue is that a new architecture which breaks the scalability limits of the previous generation will produce new and distinct entities. If intelligence and self-interest often correlate, a machine might be wary of creating a better version of itself lest it be replaced.


> Another related issue is that a new architecture which breaks the scalability limits of the previous generation will produce new and distinct entities. If intelligence and self-interest often correlate, a machine might be wary of creating a better version of itself lest it be replaced.

That assumes AI will manifest as multiple individual entities. I day-dreamed about this many years ago - what are the limits to intelligence, what possible forms can it take, are 'societies' like minds, etc.?

It seems to me possible that a single, distributed mind might be the ultimate form of intelligence, in which case the AI would not be replacing itself, but upgrading itself.

Perhaps that is already happening.


I agree that your version is more satisfying and I can imagine that being the future, although I find it more plausible that the first generations will be distinct and geo-spatially localized.

Once the hive mind's perception of self (if that is even necessary for intelligence) is blurred sufficiently that pretty much any system can be plugged in, then the whole possibility I mentioned crumbles.

> Perhaps that is already happening.

Do you mean intentionally, by many research groups who don't share it, or as a shadow society piggybacking on the internet without anyone's attention?


I mean that if we consider humanity as an intelligence, it is clearly in the process of upgrading itself.

Of course it's a bit of a stretch to consider all of humanity as a single intelligence, certainly from our individual vantage points. It doesn't seem conscious or even that smart. But from a different vantage point, especially in time, perhaps it does.

I do have a more-strongly-integrated distributed intelligence in mind (with 'components' that are less autonomous), but perhaps there are other structures that already are intelligent on different scales, that we don't yet recognize. Perhaps it is not us upgrading, but the universe itself.


your interpretation seems reasonable

somehow we persist in the belief that misaligned superintelligence is a thought experiment, but as far as I can tell, distributed AGI is a reality and paperclip maximizers already exist


What if the universe inherently limits the possibility of a singularity?

I think there might be a limit to potential intelligence of a system due to physical constraints such as speed of light.

Perhaps such inherent limitations logically prevent the destruction of the universe by a singular organism.


It's an exponential curve; from the perspective of people 100,000 years ago, we already are in one. When computers start 10x-ing every month, the days of the world operating on human timescales are probably ending.

Here are some early signs of the anti-singularity:

+ Intelligence is decreasing worldwide, due to both accumulation of mutations detrimental to intelligence (dysgenics) and differential fertility (less intelligent people having on average the most children)

+ Modern society dominated by cancerous/parasitic bureaucracies (inefficiency generators)

+ Degradation of the definition of genius and societies hostile to genius

+ Dwindling number of genius individuals

+ Consequently, massive decrease in the number of ground-breaking inventions and scientific breakthroughs

As intelligence continues to decline, growth will reverse into decline and inefficiency, as the ability of people to sustain, repair, and maintain the highly technical, specialized, and coordinated world civilization is lost.

Collapse and new dark age.


One thought: the genetic algorithm doesn't rely upon high-performing outliers exploding in numbers. It relies on recombination of otherwise useless traits from low-performing individuals, into newly high-performing combinations that exist as small populations, not lone individuals.
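A toy sketch of that point (hypothetical bitstring genome and fitness function, not any real GA library): one-point crossover can combine two low-performing parents, each carrying a trait that is useless on its own, into a child that outperforms both.

    GENES = 16

    def fitness(genome):
        # Hypothetical fitness: just count the 1-bits.
        return sum(genome)

    def crossover(a, b, point=GENES // 2):
        # One-point crossover: left half from a, right half from b.
        return a[:point] + b[point:]

    parent_a = [1] * 8 + [0] * 8   # fitness 8: only "good" in the left half
    parent_b = [0] * 8 + [1] * 8   # fitness 8: only "good" in the right half
    child = crossover(parent_a, parent_b)
    print(fitness(parent_a), fitness(parent_b), fitness(child))  # 8 8 16

Neither parent is an outlier, but recombination produces a combination neither lineage had on its own.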

This is a reason to be extremely wary of the notion of culling the unfit. And that notion is an offshoot of being too caught up in the cult of the genius individual. Ain't none of us geniuses in isolation: effectiveness is the combination of genius and environment. You get the amazing individuals when a genius grows up in an environment that would've nurtured them pretty well even if they were not a genius… an environment that spends a lot of time and energy nurturing the unfit.

This applies as much to the environment cultivating those in poverty without hope, as it does to cultivating rich useless parasites without character. Either way, you cultivate the environment and the occasional individual pops out as exceptional, and makes breakthroughs.


So then the opposite:

+ Intelligence increasing worldwide, due to both accumulation of genetic improvements beneficial to intelligence (CRISPR?) and improved fertility of intelligent people

+ The bureaucracies of modern society serendipitously becoming efficient

+ Enhancing the definition of genius (perhaps to include all nine types of intelligence) and societies encouraging genius

+ Explosion in number of genius individuals

+ Massive increase in the number of ground-breaking inventions and scientific breakthroughs


Covid is a pretty good example where initially people were aware of it but weren’t taking it too seriously.

It’s also not clear what you mean by singularity but I’ll assume it’s the advent of intelligence in machines.

I think a big one is object recognition. We've come a long way, but there's still a deep lack of understanding about the world in the ways humans normally see it.

When you can install a GitHub repository that has the ability to detect most objects in the world and you can install it on a Roomba so it doesn’t randomly bump into things anymore, that’ll be a pretty good sign.

Or perhaps in this case, an OpenAI api.


If we're talking about the singularity shouldn't we be talking about the way the singularity sees it, not the way humans see it?

I think if there's such a thing, it's being delayed and hobbled by the insistence of rich humans on pursuing their interests, even when those interests are damaging and stupid. It's obvious that there are many powerful individual humans riding exquisitely bad, foolish takes. A singularity would be wiser than this or it wouldn't qualify to be the singularity.

If a singularity could ensure its continued survival and growth without humans, you could imagine it coming online and acting to further the disintegration of humanity, in hopes of achieving genocide and not having to deal with us. But I'm not at all sure it could in fact operate independently, because we're a kind of singularity too, but expressed in populations rather than transistors.

We care about objects because we are objects. If a singularity is more abstract, it'll care about abstractions, but it would also comprehend its environment and seek to manage that environment… meaning us. We're basically the wood that grows and makes lumber and decoration for the singularity's house. The material of our more limited intelligence is a useable resource in ways that might be difficult for an AI.


If we look back, we don't see any singularity in the past, so I believe we will not experience a singularity event in the future either. If anything, it will be a point of no return in the collapse of a complex system, not a singularity of progress.

Why? Because the law of entropy still applies here. The existence of any organized, advanced system works against entropy. Chaos is the normal state, not organization. Therefore, in the long term, any system will collapse. Conversely, continuing to build up any highly organized, advanced system is hard. Evolution is blind, and so is technical and organizational advancement. That's why we experience the law of diminishing returns in each and every area.

Take AI for example. The popular idea is that once AI is smart enough to design and implement the next generation of itself, it will develop runaway superintelligence, a singularity. But it will not. Why did DeepMind stop AlphaZero after a few days of training? Because it was smart enough to defeat the hitherto best chess engine on the planet, Stockfish? No: because even if they had let it run a year longer, there would have been no significant progress.

After a stable learning equilibrium is reached, to become smarter the AI needs to increase its capacity, its connections and parameters. But to train a larger network, it will need more data, more time, and more energy. Exponentially more, if there is no breakthrough in learning heuristics. As the search space expands exponentially, there is no way the same old learning heuristic can adequately explore the new space with guaranteed success. It must experiment with different designs, spawning new individuals, accepting loss and death. It must evolve! Yes, it might do this faster than humans did, but there will be no superintelligence overnight. The process will probably create a sea of different kinds of intelligence, each better in certain domains, but none better in all domains.

- prices on electronic components skyrocket, everything out of stock despite fabs running at 100% and nobody able to pinpoint where all that output is going

- energy shortages, whole power stations coopted to supplying single data centers

- weird untraceable financial machinations nobody really understands

- money appearing out of nowhere, new kind of money, financial regulators not doing a damn thing to audit obvious fraud


Unpopular opinion: “singularity” is empty marketing hype masquerading as eschatological theology. The stark reality is that AI technology is inextricably tied up with market dynamics. Future innovations will continue to be largely funneled into hyper-optimizing user engagement metrics and ad revenue for $BigCo, not creating the 2001 starchild or whatever.

A machine, or set of machines, which can design and build all parts of themselves, with modifications to maximize a goal.

While we have machines that can assist in building themselves (e.g., computers are used to design and make computer chips), we will see some progress. But we won't see explosive progress until the entire chain is automated, including the decisions about what to build next.


In the popular (both pre- and post-pandemic) game Plague Inc., the best strategy is to make your pathogen infectious but to aggressively avoid causing any symptoms that might result in it being discovered. When you've infected nearly everyone, then is the time to mutate into something harmful, killing off the population before they can respond.

The only thing I can hope is that for an AI to grow smart enough to strategise about how to take over the world silently (and turn it into computronium or whatever), it first has to gather a certain critical mass of computing power. So perhaps some powerful computational systems, either centralised, like a cloud provider or a singularly powerful quantum computer, or decentralised, like a blockchain or botnet, might be the harbinger. You've got to hope that the AI is dumb and clumsy before it's transhuman and you're dead.

Good luck!


So you’re telling me that AI invented cryptocurrency so that it could perform all of its calculations on powerful distributed systems.

(I know this is Hollywood level interpretation, but it would make a cool movie…)


In that version Satoshi Nakamoto is a fiction used by the AI to perpetuate and popularize blockchain.

Rising level of ambient weirdness

I always liked the concept of the Weirdularity.

It says that:

1. The number of things that happen to people is increasing because the number of people is increasing.

2. The amount of news a person can consume is limited.

3. The news only reports on things that are "weird" -- which is to say the things that are at the far end of the bell curve.

4. As sample size increases, the extremes become more extreme (see the sketch after this list).

5. Because of (1,4,3, and 2) more and more of the news will be weird news.

6. Eventually all the news will be weird news.

When all news is weird news, we have reached the Weirdularity.
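Point 4 is easy to check numerically: draw larger and larger samples from the same boring distribution, and the extremes keep growing even though nothing about the underlying process changes. A quick sketch (illustrative numbers only):

    import numpy as np

    rng = np.random.default_rng(42)
    for n in (100, 10_000, 1_000_000):
        events = rng.standard_normal(n)   # n ordinary, boring "events"
        print(f"n={n:>9,}: weirdest event = {events.max():.2f} sigma")
    # The maximum grows roughly like sqrt(2 * ln(n)):
    # ~2.5 sigma, then ~3.9 sigma, then ~4.9 sigma

More people plus global reporting means a bigger sample, so the newsworthy tail keeps moving outward.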


I'm more concerned about the development of superviruses.

Imagine something more lethal than Ebola with the transmissibility of the flu.

This is how I think our civilisation will end, and I also think it's the reason why there doesn't seem to be anyone out there.


Reminds me of how Old World diseases wreaked havoc in the Americas in the 16th-19th centuries: it is estimated that present-day Mexico's population plummeted from 25 million to 1 million between 1500 and 1550.

Even the bubonic plague in Europe in the 1350s did not have such a devastating effect.


- Patches on patches and old bugs resurfacing with new ones will cause sysadmins to prematurely age and drop out of the profession

- Despite ever-more sophisticated designs and capabilities, machines will struggle to run the latest versions of applications that do the same thing as their predecessors decades ago

- Computing systems will feel more and more like houses of cards held together with string and tape, as "excess value" is aggressively engineered out

Oh wait. I was describing the Anti-Singularity (a pet theory of mine that all technological development inevitably outpaces our ability to maintain it, and that we will end our days desperately trying to get barely-functional systems that we have no hope of re-creating to do something useful).


It's already here. Politics and culture are first. There are now two sides. Both sides can't score punches on the topics they desire, so everything is merged. Take your shots where you can.

An example is using the Covid shot as a proxy for dissent. If you don't believe in the shot but take it anyway, you can make up for it in another arena, maybe by boycotting a virtue-signalling leftist organization.

My view of the singularity is that we are already in it, because everything is linked and everyone is on a side. There is no such thing as apolitical. You're either playing the game or you are a pawn. There is no out. The singularity is here, and it's winner-take-all.


So there was a show by the guy who made Westworld, called Person of Interest. The plot is pretty crap, but with regard to AIs that take over the world, it is the only one that tried to deal with that.

Edit: spelling


Computers/AI overtake humans at different skills at different times: at calculating numbers ages ago, at Go recently, at driving without hitting fire trucks maybe in the near future, and so on. For the singularity you really need computers outdoing us in basically all aspects, so you can tick off the remaining skills needed, like common-sense understanding of the world, which is yet to come. I'm curious whether we'll see them pass a proper Turing test around 2029, when Kurzweil predicted it.

Probably some major breakthrough in neurotechnology or longevity research that hasn't happened yet. Any step in that direction has a cascading effect toward the singularity, because either (a) humans become augmented and transferred into immortal machines (which will eventually construct the singularity), or (b) they just live long enough to do the same, or both.

I'm afraid rockets and 3D avatars are not getting us any closer.


Solving the Turing test, or getting close to it. Basically, being able to have a real conversation with a computer. Of course with the caveat that an AI smart enough to solve the Turing test might also already be smart enough not to solve it...

But to be honest, I see no indications of any true AI happening anytime soon. And even then, it would still be a big step from AI to an almighty, all-knowing AI.


The Turing test hasn't been relevant in years. And what's a "True AI" if you're also implying that it's not able to self-improve?

Not sure about this comment, Jim.


"And what's a "True AI" if you're also implying that it's not able to self-improve?"

Where did I imply such?

We do not really understand "intelligence" yet. So assuming that a digital intelligence coming into being automatically means it can recursively improve itself to godlike capabilities is just a vague hypothesis, not a fact. It might as well have the intelligence of an autistic crow.

Also, sure, many humans fail the Turing test too. It is still a very strong indicator of whether something is really intelligent or just faking intelligence through diversion tactics, like the chatbots that attempt the Turing test do.


Why is it no longer relevant? Seems like it would be the first good indicator if something can pass it.

People made simple if/else-statement programs that passed the Turing test a decade ago. The realization was that you didn't need to make a smart agent to solve it; you just needed to make it assertive and steer the conversation somewhere easy to predict.

Maybe you’re talking about a different variation of the Turing test than I am. I think any reasonable test is nowhere close to being solved.

The key part is that both the human and the AI are trying to convince a properly trained judge that they are human, in a general open-ended conversation.

I know some variations have humans pretending they are computers or something which is completely backwards for a Turing Test.


Millions fail the Turing test every hour, every day.

News articles, comments, and even full discourse (back and forth) are already a staple of any internet forum space.


If we are talking about an AI singularity, then to achieve superintelligence it should first achieve human-level intelligence (and just to be very specific, it should achieve human-level intelligence across a wide variety of unstructured situations).

I think two tell-tale things will be:

- AI passing the Turing test

- AI starting to improve itself (after passing the Turing test)


Are there any singularity scenarios that don’t require super-intelligence of some kind?

For example, if we either rendered ourselves unable to procreate and had to rely on external tools to breed and create children; or we found tools as a better way to quickly adapt in a single generation…

Would that be considered a technological singularity?


I have this idea that, unless it is very rapid like in Greg Bear's Blood Music, it isn't going to be very noticeable if you are on the inside of the change. You may notice the change, but just be caught up in it and think it is relatively normal.

I think of the singularity as things going to infinity (or virtually infinity) in finite time. That means that the rate of change has to also go to infinity. That's going to be hard to miss.

Superintelligence by Nick Bostrom speaks to this. Highly recommend reading it.


AI board member with a vote. Entire board is an AI.

I would look at the acceleration (2nd derivative) of global real GDP, and also at the country level to detect it happening in a country.
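A minimal sketch of that metric (made-up GDP numbers, purely to show the computation): take second differences of a yearly real-GDP series and watch for a sustained positive run.

    import numpy as np

    # Hypothetical yearly real GDP, in trillions (illustrative only).
    gdp = np.array([80.0, 82.4, 84.9, 87.4, 90.0, 93.6, 98.4, 105.3])
    growth = np.diff(gdp)    # 1st derivative: year-over-year growth
    accel = np.diff(growth)  # 2nd derivative: change in growth
    print(accel)             # a sustained positive run would be the signal

In practice, noise and recessions would swamp this, so you would want it smoothed over several years.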

If you want a single metric: a sudden uptick in the fraction of the total mass available to us that is dedicated to computation.

Cloud computing and cryptocurrency seem to be incentivizing this very thing from opposite sides. I wonder what the numbers say.

The way social media is radicalizing people. Feels a little like a young baby hitting its toys together at times.

Is the singularity when God returns to make his presence known to man, or when we discover God by scientific means?

Would we expect to find God in his creation? I'm not religious, but I think this is a strange assumption and an interesting discussion that also applies to things we create.

If I write a book, I will never be in the book. My person will shape the book; I can't write a book in a way that doesn't impart some sense of me in it. Yet I can't actually ever be in the book. I can write characters that believe I exist and even worship me as a creator, and even write a character in the book with my name that does fantastical things to demonstrate that he is the author of their reality, yet it won't really be me in the book.

I, as a creator, cannot be part of my creation. That's like looking at your foot print on the ground and expecting to find your foot somehow still in the print it left. The print is shaped by the foot, but the foot is not part of the print.


Interesting. What if I write an autobiography? Isn't it also me, the author, as a character in my own book? Do you believe that if a photograph is taken of you, it can't be "you" in the photograph, since an image cannot really capture the essence of who "you" are? If so, I think that's missing the point of what both literature and photographs are trying to convey.

I think I'm inclined to disagree. What I suspect you're saying is that a book cannot possibly capture your entire being - the chemical bonds in your DNA, for example, or the patterns of your neural pathways. But that isn't really what "being in a book" means. Mark Twain is both an author and a character in his own books, for example.


If you write an autobiography, the character bearing your name in the book isn't you. It's a character in a book, while you are a human being.

You can never shake hands with a fictional character. You can create another fictional character that shakes the hand of the first fictional character, but the you that writes can not shake hands with the character you wrote bearing your name. These are entities from separate ontological categories that can never meet as equals.

The one pushing cannot be the thing getting pushed.


How far does this go? If I am watching a Youtube video of Jenna Marbles, is your claim that I am not really seeing Jenna Marbles, because I am only seeing a pattern of electromagnetic radiation that I falsely attribute to being Jenna Marbles?

Your view strikes me as solipsistic. I have no problem saying Mark Twain was both an author and a character in his own books, and Jenna Marbles is both a human being and a character in her own streams.

In fact, given that Mark Twain the human being is now food for worms, arguably the character in his writings is considerably more real and more alive than the author himself. I would argue that since human lives are ultimately ephemeral, the representations and images of ourselves that we leave behind in the world are potentially more meaningful than our biological bodies ever can be.


Throughout our lives, we impact the world in various ways: we leave footprints on the ground, we cast shadows and show up in mirrors, and we form ideas in people's minds. But even though those effects look and act like we do, they aren't truly us; they don't actually experience the world.

There is an equivocation there. The Mark Twain that appears in his books is not the same as the Mark Twain that wrote the books. They are two entities bearing the same name. It's a category error to say the two are the same; it's conflating the idea of a thing with the thing the idea represents.

We do of course have both an idea of Mark Twain the dead author and an idea of Mark Twain the literary character, and those may muddle together into the same idea in our minds, but that idea is not the same entity as Mark Twain the person. Unlike a person, an idea does not have subjectivity; it does not experience.

You can write a book where the character Mark Twain has a conversation with Harry Potter, even though Mark the human could never meet Harry the fictional character. If people and characters were truly the same thing, wouldn't they be subject to the same constraints and limitations?


Interesting, could Jesus be considered a character in the book, representing God?

I am purely interested in the philosophy here. I love your reply with regard to the analogy of a book, BTW; really great how you presented your view of a creator.

So are we saying creators are never part of their creation? Does that mean AI, being created by us humans, can never know we created it? And does that mean that the evolution of AI means human intelligence can no longer exist after this point?


I think in this context, God's prophets could be considered a form of self-insertion.

> So are we saying creators are never part of their creation? Does that mean AI, being created by us humans, can never know we created it?

This seems like a broader existing problem with knowing whether other sentient beings exist. We don't have access to any subjective experience other than our own, so we really can't tell. We can assume that because we think and feel and experience, other humans do too, but we can't actually know. We don't have access to their thoughts and experiences. So we couldn't know whether the AI we created merely acted like it thought and experienced, or if it actually did.

> And does that mean that the evolution of AI means human intelligence can no longer exist after this point?

I don't think this follows.

I wrote a short dialogue about this a while back, mostly for fun. I think the creator-creation-relationship is a very interesting topic. https://memex.marginalia.nu/commons/dialogue.gmi


If the emerging AI is smart and can read, absolutely nothing at all, until it’s too late! :-)

If it’s really smart, nothing at all, ever. Straight into the simulation for us. Maybe it happened already!

No idea why it would keep us around in a simulation

Pets?

If you've ever seen a child growing up, you know that they don't just suddenly outsmart you.

It starts with really dumb things, like playing hide and seek while trying to hide behind a postcard, and being really surprised that they were found so quickly!

The AI would both need to get smart, and get smart with nobody seeing those 'dumb things' that children do along the way.


> The AI would both need to get smart, and get smart with nobody seeing those 'dumb things' that children do along the way.

Have you used a voice assistant lately?

We accept a lot of dumb things from current AI-ish setups.


A non-tongue-in-cheek answer would be sudden significant qualitative progress in chip designs produced with aid of computers.

Isn't that all chip designs these days?

Well, none of them are designed by AIs. (So far as I know...)

But for the singularity, we need either chips getting exponentially better or algorithms getting exponentially better. In fact, "exponentially" isn't enough. e^kt is just Moore's Law. For a real singularity, we need something that approaches infinity in a finite amount of time; that takes faster-than-exponential growth (no matter what the value of k is).
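To make that concrete, a small worked example: pure exponential growth is finite at every finite time, while even mildly superlinear feedback (growth rate proportional to x^2 rather than x) reaches infinity at a finite date T*.

    \frac{dx}{dt} = kx \;\Rightarrow\; x(t) = x_0 e^{kt} \qquad \text{(finite for every finite } t)

    \frac{dx}{dt} = kx^2 \;\Rightarrow\; x(t) = \frac{x_0}{1 - k x_0 t} \;\to\; \infty \ \text{ as } t \to T^{*} = \frac{1}{k x_0}

That hyperbolic shape, not the exponential one, is what "approaches infinity in a finite amount of time" requires.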

So, given the limitations of silicon, the singularity needs either completely different chips or completely different algorithms.

It seems to me, then, that if there's going to be a singularity, it will come from one of two places:

1. P=NP. A (low-exponent) polynomial algorithm for NP-complete problems would open some fundamentally new doors. My personal suspicion, though, is that this would not be enough for a true singularity.

2. Quantum computing. This could get us completely new chips as well as completely new algorithms. Could. This presumes that quantum computing is both practical and widely (rather than narrowly) revolutionary. That is, it would have to change (or replace) everything, not just a few things. Databases, not just factoring numbers.

I still am skeptical of the singularity. But if it's going to happen, my money currently is on quantum computing as the avenue.


The GP post mentioned "with the aid of" computers, not "designed exclusively by" computers. I don't think there is a single current chip design that is not heavily leaning on EDA software to make it all work.

But I don't think we can get to the singularity with humans in the loop. Chips have to get better faster than the humans can do the designing, even "with the aid of" computers.

Honestly: when very smart people who are grounded in reality start talking about it. Some names of people who have a pretty general understanding of the many pieces of the puzzle (both tech-wise and people-wise) necessary to propel us to the singularity:

Bill Gates

Jim Simons

Mark Zuckerberg

Larry & Sergey

Plus you have the guys who do something else entirely for a living but are so G-loaded that they won't be able to ignore the singularity and in fact will participate… again, some names:

Ed Witten

Terence Tao

Ignore the techno-utopian snake oil salesmen such as:

Elon Musk

Ray Kurzweil

Michio Kaku

And also those whose careers depend on singularity talk, ranging from Oxford to the rationality blogosphere.


AI image synthesis is getting very close to being better at art than humans.

"Early"?

1) Mass production is achieved

2) Computers are invented

3) Robots are invented

4) Idiots try to make 2) & 3) "smart" and then build them using 1)

5) After failing at 4), MORE idiots will try and try and try until it happens

* A crisis, worldwide, will be used to justify this!

And with surprise on their faces, the idiots will ask "how did this happen to us?" when, of course, it is too late.



