How Ray Kurzweil’s 1998 predictions about 2019 are faring (militantfuturist.com)
123 points by apsec112 on Jan 2, 2021 | 115 comments



"MOSTLY RIGHT" seems to mean "a pilot project around silicon valley (or in a big Chinese city) exists".

I've visited a dozen countries and I've never encountered an AI system to prevent interpersonal violence. AI gunfire detection systems seem to exist solely in a certain country with a tendency to tack startup solutions onto deep-rooted societal issues.

This is a recurring theme throughout the article. The average hospital runs on Windows XP, not a sci-fi network of smart-watches.

Virtual artists are somewhat respected for sure, but AI-generated art is a niche for nerds. I say "art", you think Picasso, not Google Labs.

No, no one has passed the actual Turing test yet. The public discussion was doomed from the start, since Turing himself published several variants. This led to an explosion of (mostly dumb, IMO) variations that are much easier to solve - which are getting solved.

Honestly, the original predictions are written badly. They are full of bullshit words (many, some, there is) making them not really quantifiable. Also, they contain an implied focus on US tech centers, which allows the existence of a single PR-stunt pilot project as a reason to answer "Totally Right" to anything proposed. Maybe I just don't get the point, but this smells like pseudo-intellectual reasoning for "amortality will definitely be achieved before I die, see?"...


The most comical example of this goalpost moving is Kurzweil's scoring of his own predictions in 2010.

> 12. Communication | Haptics let people touch and feel remotely

PREDICTION: Haptic technologies are emerging that allow people to touch and feel objects and other persons at a distance.

ACCURACY: Correct

He basically interpreted his prediction to mean touch screens and some vaporware demo videos or blog posts.

His predictions are often technically a possibility but make virtually no impact on how society functions.

https://kurzweilai.net/images/How-My-Predictions-Are-Faring....



https://www1.nyc.gov/site/nypd/about/about-nypd/equipment-te...

> In March 2015, the Department began employing ShotSpotter, a technology that pinpoints the sound of gunfire with real-time locations, even when no one calls 911 to report it. This technology triangulates where a shooting occurred and alerts police officers to the scene, letting them know relevant information: including the number of shots fired, if the shooter was moving at the time of the incident (e.g., in a vehicle), and the direction of the shooter's movement. ShotSpotter enables a much faster response to the incident and the ability to assist victims, gather evidence, solve crimes, and apprehend suspects more quickly.
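For anyone curious about the mechanics: acoustic gunshot location is essentially multilateration, i.e. comparing when the same bang reaches several microphones and solving for the source. A toy Python sketch of the idea (the sensor positions and arrival times are invented, and this is certainly not ShotSpotter's actual, proprietary pipeline):

    import numpy as np
    from scipy.optimize import least_squares

    SPEED_OF_SOUND = 343.0  # m/s in ~20 C air

    # Hypothetical sensor grid (metres) and measured arrival times (seconds)
    sensors = np.array([[0, 0], [400, 0], [0, 400], [400, 400]], dtype=float)
    arrivals = np.array([1.031, 0.915, 0.740, 0.818])

    def residuals(guess):
        # guess = (x, y, t0): unknown source position and emission time
        x, y, t0 = guess
        dist = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        return t0 + dist / SPEED_OF_SOUND - arrivals

    fit = least_squares(residuals, x0=[200.0, 200.0, 0.0])
    print("estimated shot position (m):", fit.x[:2])

The hard engineering is everything around this: telling gunfire from fireworks and backfires, time-syncing the sensors, and dealing with echoes off buildings.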


>Honestly, the original predictions are written badly.

Predictions are vague by their very nature and this is a hell of a lot clearer than Nostradamus & co.

>AI gunfire detection systems seem to exist solely in a certain country

Don't think you need AI for that. Some cities have had these for a decade plus.


> Predictions are vague by their very nature and this is a hell of a lot clearer than Nostradamus & co.

Humanity has literally only one way to gain knowledge about the world: Making a testable prediction and then testing it (The Scientific Method).

Just because other people used their authority to make even worse arguments does not mean these are good.


> Humanity has literally only one way to gain knowledge about the world: Making a testable prediction and then testing it (The Scientific Method).

I think Popperian science occupies too large a place in your epistemology. "Science" has come to mean two distinct things: any procedure for arriving at highly reliable knowledge, and one specific procedure for arriving at highly reliable knowledge. The latter is constantly threatening to crowd out the former.

The scientific method itself, for example, would be valid even if no science had ever been done. It is not at all the product of empirical analysis, but of deductive reasoning.


Also "I've tortured the meaning of this to make Ray look like less of an idiot." I'm pretty sure gunshot detectors, which arguably date from WW-1, don't qualify as "Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”


A lot of the supposed successes existed in more or less their current form at the time the so-called "prediction" was made.


I’m not sure how you think they work or how they are currently being used if you’re under the impression that statement is an inaccurate description of reality.


This made me laugh, thank you. Do you have a source on the WW-1 detectors? Sounds intriguing!


They're referring to artillery sound ranging. (Using sound to triangulate the position of enemy artillery batteries.)

https://en.wikipedia.org/wiki/Artillery_sound_ranging#World_...


> "Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions."

> MOSTLY RIGHT

I'll quibble with this one and call it mostly wrong. Yes, a very narrow and specific set of possible cardiac abnormalities can be detected (with significant error rates) by the Apple Watch's optical (PPG) and electric (ECG) heart rate sensors. Other vendors have functionally equivalent capabilities.

But medically speaking, smartwatches are probably decades away from independently diagnosing anything, let alone recommending an intervention (i.e. a drug or surgery). Kurzweil's terms have specific meanings in medicine and current technology is nowhere close to that.

What exists today (or, in 2019) is a very narrow use case and far from the broad and general scenario Kurzweil envisioned. And there is no realistic roadmap to achieve any more than incremental development in this area - PPG and ECG were "low-hanging fruit" in some sense, and medically actionable wearable sensors for pretty much any other medical signal present orders of magnitude larger engineering challenges that few people are even seriously working on. Hence, "mostly wrong."


I agree with your assessment of MOSTLY WRONG but I think that you are overly pessimistic about the timelines:

> But medically speaking, smartwatches are probably decades away from independently diagnosing anything, let alone recommending an intervention

Continuous PPG in its current form can diagnose acute and chronic changes in our health condition, but the vendors are reluctant to do so. As examples: the newer PPG devices with SpO2 can detect acute and chronic changes in oxygen saturation (critical with COVID-19), Heart Rate Variability (HRV) can indicate infection-associated stress and serve as a proxy for breathing rate, and the machine-learning-derived sleep cycles can indicate sleep disorders.
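To make the HRV point concrete: the standard metric (RMSSD) is trivial to compute once you have clean beat-to-beat intervals; getting those off a wrist is the hard part. A toy sketch (the intervals are invented, and this is not any vendor's pipeline):

    import math

    rr_ms = [812, 798, 845, 830, 790, 860, 825]  # hypothetical RR intervals (ms)

    def rmssd(intervals):
        # Root mean square of successive differences, a standard HRV metric
        diffs = [b - a for a, b in zip(intervals, intervals[1:])]
        return math.sqrt(sum(d * d for d in diffs) / len(diffs))

    print(f"RMSSD: {rmssd(rr_ms):.1f} ms")

A sustained drop against your personal baseline is the kind of signal that could flag infection-associated stress.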

The data is useful now, but the off-device analysis necessary seems to be missing in action.


What about the blood sugar and insulin devices now available for diabetics?


Those aren't integrated in watches, jewelry or clothing, as Kurzweil suggested. In fact the whole trend around wearable devices never really seemed to reach a sophistication necessary for the mass market, just like 3D printed clothing. So far we've got Google Glass which disappeared as quickly as it arrived, smartwatches that were mostly a question of time, and some fitness products like those running shoes with Bluetooth. But the actually challenging, futuristic part is still more a topic for blog articles in the maker community than a serious product category.

Even with those insulin devices, the biggest innovations were made by some of their users themselves, as far as I'm aware (see "looping").


That’s a marvel which is ubiquitous in developed countries. However, there’s been too much hype around the wearables. Manufacturers have foreseen that progress in smartphones will inevitably stall and promised wearables as the next big thing. And, as it turns out, the “smart watch” is still dumb as hell, at least when it comes to health.


It's definitely wrong.

> But medically speaking, smartwatches are probably decades away from independently diagnosing anything

But that's because we won't do it. There's nothing technically hard about making this work.

I don't believe it's hard to pick up a heart attack using a wearable. But I don't think many heart attacks are picked up with wearables.

"Record everything; if something is different, see a doctor" would both meet this requirement and be highly beneficial.

A smartwatch will just communicate. That could be to an online database or a person.

If the intent is privacy, with no doctor or external service ever consulted or even recommended, then yes, it's decades away.


> I don't believe it's hard to pick up a heart attack using a wearable. But I don't think many heart attacks are picked up with wearables.

> "Record everything; if something is different, see a doctor" would both meet this requirement and be highly beneficial.

Cue massive amounts of false alarms.


There are a lot of technical problems with doing this. We're still not great at getting heart rate or blood oxygen. Sure, you get a decent reading, but that's with lots of averaging, and the temporal resolution is decently low, partly because it's hard to maintain a good monitoring position when active, let alone adjusting for things like skin tone/optical transparency.
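To illustrate why the averaging is unavoidable, here's a synthetic toy: a clean 72 bpm "pulse" buried in noise, only recoverable after smoothing (all numbers invented):

    import numpy as np

    fs = 25                                  # Hz, a plausible wearable sample rate
    t = np.arange(0, 30, 1 / fs)             # 30 s window
    clean = np.sin(2 * np.pi * 1.2 * t)      # 1.2 Hz = 72 bpm "pulse"
    noisy = clean + 0.5 * np.random.randn(t.size)  # synthetic motion noise

    # Smooth with a moving average, then count upward zero-crossings as beats
    smooth = np.convolve(noisy, np.ones(10) / 10, mode="same")
    beats = np.sum((smooth[:-1] < 0) & (smooth[1:] >= 0))
    print(f"estimated HR: {beats / 30 * 60:.0f} bpm (true: 72)")

The 0.4 s smoothing window is exactly the temporal resolution you give up to get a stable number.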


Totally agree, the limitations here aren’t technical, they are regulatory. The barrier to entry for medical devices (FDA approval) is so incredibly high it greatly slows down development on health related devices. Apple has an entire team dedicated to regulatory stuff for the one tiny health feature they offer on the Apple Watch.


It may slow it down, but for good reason: it's outright dangerous to sell someone a device you claim helps monitor their health when it does no such thing. Even worse, selling people's private health details (as sketchy as they may be) to whoever wants them, or just leaving them exposed to hackers, is a much bigger detriment than any tech on the horizon can bring as a positive.


ITT: non-medical people saying something medical is easy.


There are large physiological differences between individuals, due to so many variables, that getting high accuracy with machine learning models is hard. The only way I see this accuracy improving is by calibrating those models with good reference data for individuals.


When the topic of Kurzweil's predictions for 2019 came up 2 years ago (!), I gave him a ~20% hit rate [1]. However, at the time, I didn't realize that he was writing a sequence of 10-year predictions, so I didn't look at the corresponding predictions for 2009 or 2029.

When you instead grade his foresight on the basis of how he predicts the development of technology over time, it's clear that a lot of what would seem to be partial hits if assessed alone should actually be graded as hard fails. Discussion about the ubiquity of computers and their use in non-traditional roles for computers is a firm example here.

> “Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”

This is an example of where the author is going way too easy on Ray (he called it "partly right"). One of the biggest futurist misses of the 21st century is the role of genomics. While genomic sequencing has become incredibly routine, it hasn't actually resulted in the revolution in medicine that, well, everybody thought it would. A key case in point is COVID-19 and some of its mutations: we understand very well the details of some of its evolution, and we can pinpoint those mutations to individual genetic letters, but we still struggle to actually derive any predictive effects on its behavior (e.g., is this new strain more infective? more deadly? more resistant against vaccines? - we can't answer those questions without seeing what happens in real infections).

Contrast with organic chemistry. Decades ago, figuring out how to synthesize something ugly like vitamin B12 was insanely difficult, even Nobel Prize-worthy. But nowadays, it's the sort of task you might give to a junior chemical engineer in their first week to onboard them (although maybe not for something as difficult as vitamin B12). The thought was that genomics would lead biology to a similar plane of existence - that we could treat genes as a ready menu for achieving various biological goals, and treating diseases (at least those with genetic components) would be handled with a similar combination of picking and choosing from this well-understood menu. That's what Ray was predicting here (more specifically, that the genetic influence on aging, cancer, and heart disease would have succumbed to this process), and we really haven't made any progress on that front.


> When you instead grade his foresight on the basis of how he predicts the development of technology over time, it's clear that a lot of what would seem to be partial hits if assessed alone should actually be graded as hard fails. Discussion about the ubiquity of computers and their use in non-traditional roles for computers is a firm example here.

Absolutely, and this blog post is being way too generous.

In part one, he gave this a RIGHT: “Computers are now largely invisible. They are embedded everywhere–in walls, tables, chairs, desks, clothing, jewelry, and bodies.”

Yes, but actually no. You know that what Kurzweil was imagining was that there would be powerful computers embedded in stuff, and each would run some sort of AI with personality, so your toaster would have a personality, and your TV would have a personality, and it could - in a Star Wars droid-y way - recommend movies to you, allowing you to have a conversation with it:

"Sir, I think this new SciFi series called The Expanse would be very interesting to you!"

Yes, my TV has a processor. Yes, my TV has WiFi. But its primary purpose is to show me ads and spy on what I'm watching to increase ad revenues somewhere, and although I can "talk" to my TV, what actually happens is that if I say the title of a movie or a show, my audio will be transmitted to some server somewhere, wrongly transcribed, and then my TV will display all the places I can rent episodes of Friends from.

¯\_(ツ)_/¯

Hard fail.


As an aside, my droid TV better recommend The Expanse or I’m going Butlerian Jihad on it.


If your definition of computer includes microcontrollers then we indeed have computers embedded in about anything.


> walls, tables, chairs, desks, clothing, jewelry, and bodies

None of those routinely have microcontrollers in them. The closest are probably bodies (e.g. pacemakers) and maybe walls (e.g. smart light switches or power outlets with USB sockets). Jewelry with various computing devices exists but is hardly common and the only desks with computers in them that I know of are these gaming PC case desks that almost nobody has.


I have an eyemask with integrated Bluetooth headphones, and motorized standing desks are controlled by an MCU. As for chairs - dentist chair setups are pretty sophisticated and include stuff such as a built-in panoramic X-ray device.


Sure but those things are not the norm. Or rather, motorized standing desks with microcontrollers are the norm for motorized standing desks, but not for desks. Same for dentist chairs, they are not the norm for chairs.

Do they actually put X-ray devices directly into the chair? I always had to go into a special shielded room with an X-ray machine that slowly moves around my head.


Yes, but he picked the wrong "everything". Not "walls, tables, chairs" but "cars, phones, and TVs".


I agree with this. I don’t think the promise of genomics is even possible, because genes (unless it is something simple like a loss-of-function mutation) don’t affect phenotypic expression the way we think they do. Every gene seems to be involved.

https://en.wikipedia.org/wiki/Omnigenic_model

And that doesn’t even begin to touch on epigenetic modifications.


> I don’t think the promise of genomics is even possible

It is possible, but building a whole-organism model requires huge amounts of data for each individual, which isn’t economically feasible at the moment. The $1,000 genome is there, but we need to bring it down to $10 or less.

That is one of the reasons why modern bioinformatics relies on mechanistic statistical models so much. You have to be incredibly smart with your models since there are so few samples to work with.


The ones he was mostly right or right about were the less speculative and more generally predictable ones. Cast any of them in the negative, and ask yourself if the proposition felt more or less true.

The ones he was mostly wrong on are the highly speculative AI boosterism, which is why Kurzweil is laughed at by many people.

I've felt grumpy about Kurzweil since 1982 or so. He made some really good innovative stuff happen in computing for blind people and then dived into fame- and attention-seeking lunacy. He's unquestionably smart, far smarter than me. He appears to have set himself time-wasting goals.

I totally fail to see what value he brings Google compared to, e.g., Vint Cerf or Rob Pike or Ken Thompson.


Kurzweil since 1982 is easy to understand.

His predictions are part of his personal eschatology. He is a man on a religious quest for personal salvation.

All his past predictions and timeframes are conveniently set so that there is a chance for him to live forever. He is a nutrition and immortality nut (Fantastic Voyage: Live Long Enough to Live Forever and Transcend: Nine Steps to Living Well Forever). He is now 72 years old. If he lives to old age with all his life-extension pills and the rapture of the nerds happens in 25 years, he could just make it.


Looking at how quickly our knowledge of intracellular processes is growing, I fully expect us to invent biological immortality. In eight hundred years or so.


It will happen far sooner than you expect. In five hundred years.


If we can improve lifespan by more than 1 yr/yr, then you may already be immortal...
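That's the "longevity escape velocity" argument. As a toy loop (the 1.1 yr/yr gain is pure invention):

    # If remaining life expectancy grows faster than time passes, the end
    # recedes forever. Illustrative numbers only.
    age, remaining, gain_per_year = 40.0, 40.0, 1.1
    for _ in range(100):
        age += 1
        remaining += gain_per_year - 1  # spend one year, gain 1.1 back
    print(f"after 100 years: age {age:.0f}, still {remaining:.0f} years to go")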


The whole "boomer thinks technology will allow him to live forever" thing was really common when I was growing up. I long ago decided animals are not at all evolved to last.


Well they should hurry up then!


That would be completely logical. Predictions can be self-fulfilling, and he would have nothing to gain by placing his predictions for biological immortality out of his own reach.


That is the logic of religion - believing only things that allow personally satisfying outcomes.


That's not how religion operates at all. It is how New Age adherence works; see, for example, this recent article on how Covid exposed narcissism amongst its members:

https://www.elephantjournal.com/2020/12/how-covid-exposed-th...


Practically all religions promise a better outcome after death if you live a virtuous life.


You have to understand a person's motivations to understand that person. Kurzweil doesn't want to die. He wants to live forever, in some form. Once you understand that, you'll understand why he's so bullish on AI, gene therapy, lifespan extension, etc.


No rational person should want to die unless their life is full of suffering and they have no hope of recovery.

People who truly understand technology can be very optimistic about its potential.


> No rational person should want to die unless their life is full of suffering and they have no hope of recovery.

Or, no rational person should want to live, because life is mostly drudgery and inconvenience and it also includes a fair amount of pain and anxiety. If you're a materialist, death would free you from that and any regrets about missing out on any positives or guilt about unfulfilled obligations.


Of course rational people should want to die.

Death is a good thing on the whole. If we cease to die then humanity is doomed. Young people need to invent a world of their own at each new generation; if old people (and therefore, old ideas, old prejudices) never die then there is no room for new.

Additionally, from a practical perspective it becomes indefensible to make new people (have kids) because where will they all go?

Not dying is the most selfish attitude imaginable; that level of selfishness is irrational.


> if old people (and therefore, old ideas, old prejudices) never die then there is no room for new.

This is a profoundly ageist viewpoint that sadly is very common in our industry. That’s why 40-somethings are laid off so 20-somethings can reinvent the wheel yet again in the shitty “framework” that’s fashionable this week. In every other industry (medicine, law, science, you name it), experience is prized.


I think the op is arguing against having immortal 200 year old cyborgs running the world, not putting 40 year olds out to pasture.


If I had 50 years of experience as a business leader, 50 more as a software developer, another 50 as a UI/UX designer, and 50 as a digital security consultant, I would be more valuable than 4 people who had 50 years each, because what would be a meeting between the four of them would, for me, be instant and complete awareness of all issues, without communication ambiguity.

Certainly there would be need of a different economic model in a mortality-optional world so that young people who have yet to celebrate their first centennial can thrive, but that was also true when industry replaced feudalism and people started to stay in school until 21 (YMMV) instead of starting families the moment puberty hit.


If you had last practiced software development between 100 and 50 years ago, and spent the last 50 years doing business leadership, I don't think you'd necessarily have as much value in a software development meeting as someone who just finished 20 years of software development. Even more frighteningly, you would likely be very convinced of your knowledge in software development, and would have the gravitas to silence any opposition to your ideas.

Note: just to be clear, since text sometimes misses tone, I'm using a rhetorical 'you', not trying in any way to accuse you personally of the attitude described above!


Fantastically well demonstrated point, and good footnote. While I personally do make an effort to know the limits of my skill, I have absolutely witnessed many who don’t.


Ironically, there was a relatively recent study showing young doctors have better outcomes than old doctors. [0]

[0] https://www.cbsnews.com/news/doctors-older-age-patient-morta...


> Additionally, from a practical perspective it becomes indefensible to make new people

Birth rates are currently below replacement levels in developed nations. And if you’re immortal, you can wait forever to start your family.


Not to mention there could be space enough once serious space colonization gets going. One could actually imagine many things progressing better if you don't have to train new people all the time to replace those retiring, and removing the stress death causes to all involved.


> Not to mention there could be space enough once serious space colonization gets going.

Even with the current falling birthrates, you would not be able to move more people off the planet than are being born on it, even with space elevators. Space colonization is not a cure for overpopulation of Earth. One of the things Kim Stanley Robinson explored in his Mars trilogy.


Space fountains and launch loops can have a very high throughput if you can get them to work - then the main problem becomes building accommodation in space fast enough.

For one fictional yet science-based case of moving most of Earth's population off-planet during a short time window, see the following Orion's Arm article: https://www.orionsarm.com/eg-article/49b46fd2198ed

5.5 billion people over 20 years.
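Back-of-envelope on that throughput (my ~140M births/year figure is approximate):

    people, years = 5.5e9, 20
    per_year = people / years                  # 2.75e8 people/year
    per_second = per_year / (365 * 24 * 3600)  # ~8.7 people/second
    print(f"{per_year:.2e}/year, ~{per_second:.1f}/second")
    print(f"~{per_year / 1.4e8:.1f}x the current global birth rate")

So a working launch loop scenario really would outpace births - the bottleneck becomes habitats, not lift.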


Who says immortal people would behave like the old people we see today? With many of the biological-clock and "not catching the train" stresses removed, I would imagine mental health and general happiness would improve. And if you are living for hundreds of years, things will change around you and you will have to adapt anyway.


Physics stops advancing.

Powerful people consolidate their power, discouraging subversive ("disruptive") technology.


Part of the reason old people stick with old ideas and prejudices is that they have old brains. Real rejuvenation would make old brains as flexible as young brains, and there's already been research in that direction.


Sure, there's a component of that. But cognitive biases such as sunk cost and the huge value we give personal experience are much bigger determinants of this attitude, I believe. It's simply very hard to change the mind of someone who has worked with the same assumptions for 20 years.

Hence, 'physics advances one funeral at a time' etc.


> if old people (and therefore, old ideas, old prejudices) never die then there is no room for new

Is that because there's no room? Or because we're unwilling to accommodate the new?


Often, you can't have two things at the same time. For example, young people in the USA are overwhelmingly in favor of Medicare for all, while older people are more divided.

You can't have both Medicare for all and no Medicare for all - you can't accommodate the new without replacing the old.


But there is a difference between saying I want to die eventually and I want to die now. At what age should the second one kick in? It makes more sense to say I want to die eventually, but then keep putting it off.


Like others have said, I think this rests strongly on the assumption that immortal humans would act like old people today. I imagine if biological immortality is solved, then "solving" such biases exhibited by older populations (a claim I don't think is entirely true either) would be right around the corner.


No rational person should want to live forever knowing the progressive deterioration of biological bodies.


Really that's the fundamental misunderstanding here.

The idea is to prevent or recover from age-related disease and degredation.

Certainly most everyone likes the idea of staying healthier as they age. This is just progressing that idea with speculation about potential technologies.


I hope you’re aware that progressive deterioration is the first thing that the people working on this want to solve.


Exactly - if someone says immortal they usually mean "forever young" or equivalent.

Deterioration is not really compatible with forever anyway.


Death is perhaps the most rational concept I can think of.



Exactly - any person dying is a failure of society. Simple as that.


I read The Age of Spiritual Machines in ‘98 with high hopes but quickly found Kurzweil difficult to take seriously. The singularity fandom has always baffled me.


>The singularity fandom has always baffled me.

It basically comes in two flavours. The "it's dangerous" crowd is, as Ted Chiang put it, imagining the singularity as no-holds-barred capitalism, because when they imagine superintelligence the only thing they can come up with is the companies they work for. The paperclip AI is just a VC-funded startup; the only difference is it's even better at disruption than their employers.

The Kurzweil one is basically just recycled Christian eschatology, where the final judgement comes around, we'll resurrect the dead, defeat death and so on.

So it's basically two timeless American cultural classics; the funny part is that people in that crowd don't seem to notice it.


It's a religion based on the idea that technology will advance fast enough to make its followers immortal before they die.


I remember reading “The Age of Spiritual Machines” and the “events vs time” plot (criticized below)

https://kk.org/thetechnium/the-singularity/

was the calling-bullshit moment for me [1]. It is a nice thought, though - if we just wait, then technology solves all of our problems.

[1] https://www.callingbullshit.org/


or fear of Roko's Basilisk [0]

[0] https://www.lesswrong.com/tag/rokos-basilisk


> The singularity fandom has always baffled me.

I like it. Try to take it as hard sci-fi. It's plausible enough to let your imagination run with it. Don't take it seriously enough to turn it into a religion and it's all good fun escapism. I've accepted that we all need a way to think away the fear of dying. I like this one.


Would be funny if Google brought on Robert Zubrin of Mars Society fame, or even funnier if they brought on David Brin of The Transparent Society.


>and more generally predictable ones

Can you really say that without the benefit of hindsight colouring that assessment?


A fair point, but I think when you compare Ray's predictions to Robert Heinlein's (for example), you'd be more impressed with Robert's teleoperation [1] and mobile phones [2], predicted in the 1940s, than with Ray's singularity and his "AI will be mostly human-led decision making".

[1] https://en.m.wikipedia.org/wiki/Remote_manipulator

[2] https://en.m.wikipedia.org/wiki/Space_Cadet


“In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing.“

This is not exactly true. Welfare in some parts of the US may be minimal; for example, food stamps may be available but not enough to feed yourself every meal. Further, housing assistance may be non-existent or virtually so in the sense that many are homeless while waiting extended periods of time before they can utilize it, don’t qualify because their income is too low (it’s true, but how does that make sense?), or such housing is temporary.

Also, some of the “right” interpretations are rather optimistic or generous in their assessment. But fun read.


> This is not exactly true...

Unless the absolute best is provided, this will always remain the case. One has to compare with 3rd-world countries to see how generous the US system is, even in its current form.


ShotSpotter is really not a great example of AI monitoring for interpersonal violence, if only because detecting gunshots is not that hard. For those who have never been around guns, guns are very loud. Loud enough to damage your hearing instantly. Detecting gunshots is a pretty low hanging fruit in this area, and far less impressive than detecting the chainsaws of illegal loggers in the Amazon, for example.

The point about facial recognition cameras is fair though.


ShotSpotters are good at picking up airplane crashes too.

Some accidents have been located in the Bay Area that way, which is helpful considering that Palo Alto and Redwood City airports have marshy areas.


I vouched for this comment, because a quick Google suggests it’s happened once, at least:

“ShotSpotter system records tragic plane crash” https://www.paloaltoonline.com/news/2010/02/19/shotspotter-s...


> although the rights of machine intelligence have not yet entered mainstream debate

Thankfully, because the idea that machine intelligence should have rights is based on a fundamental and obvious mistake: first, they're not alive; second, they've not evolved to feel pain or any other drive related to being alive.

You can't literally kill an artificial intelligence. What if you tried? Joke's on you, I have a snapshot from 10s ago. What if I destroy that snapshot? Joke's still on you, I have a full backup from last night. What if I destroy that backup too? Dude, I got an offlined tape in the vault. And anyway it was distributed to begin with.


You make a lot of assumptions, e.g. that the AI is, in practice, being thoroughly backed up, and that it's even feasible to back it up online given its size and update bandwidth. More abstractly, deleting a day of memories is still pretty close to a form of death. You better believe I'd feel weird if you restored me from a day-old backup. There is a version of me that died, and I never get to remember that day of experiences. Your dismissal of the possibility of killing an AI only applies in a small slice of possible scenarios. Your first argument is a lot stronger.


> More abstractly, deleting a day of memories is still pretty close to a form of death

Not only is it not for a living being, but it's definitely not for a non-living device.


Replace AI in your example with the consciousness of a living person you copied (perhaps destructively) into a computer simulation.

Do your conclusions still stand?

And an actual AI might be as complex and sensitive as such a copy of a living person, or possibly even more so.

(edit: fixed typos :P)


> Do your conclusions still stand?

Indeed they do, but the situation is different enough in other ways to have other implications.

Addendum: the fact that a transferred consciousness could inherit evolved traits such as pain (psychological if not physical), fear of death and so on would change the analysis as well. Those have no reason to exist in a constructed consciousness / GPAI.


Good point in the addendum - these could also influence all those AI-takeover theories/stories, as they are very often basically about AI self-preservation instincts, which it might not even have.

Or at least not in the normal form - e.g. if it makes sure enough instances are kept running somewhere, or are backed up and could be run, it might not care about the rest, significantly reducing the chances of conflict.


So if humans became immortal* and couldn't feel pain anymore then they should not have rights anymore?

* I assume that with "alive" you mean "can die", because I see no other reason why "being alive" could be relevant to your argument.


If humans were completely different from what we call humans, they would indeed have to be treated completely differently.


That's such a dumb non-statement. The question is not "should we treat them differently" but "should they have rights at all".


They should have rights, but they need not (indeed cannot) be the same as for mortal beings. Consider: falsely imprisoning someone whose lifetime is limited for years is bad in that it's extremely unpleasant, but also because it deprives them of a finite resource. The former is true for an immortal being, but obviously not the latter.


When making predictions you’re either RIGHT or WRONG in my opinion. If your prediction cannot be clearly assessed then it is WRONG.

Otherwise, you’re in the realms of prophecy or horoscopes where most of the heavy lifting is done by the reader after the fact.

Edited to add my own surprising-but-useless prediction: “Computers as we know them today will be largely abandoned by the year 2050”.


It really feels like this is the key fragment for the entire article:

> If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes...

If that fragment is to be taken seriously, then much of what Kurzweil predicted for 2019 was also true in 1998. Algorithms carried out by computer have been used to administer and surveil since before the 90s.

Personally, my understanding of Kurzweil is that "computer intelligence" is something greater than just an algorithm that is carried out by a machine, but could be done by a human. It would mean something more malleable and conscious than that. Based on that standard I'd say he's wrong about nearly everything (save the privacy one - but again - that was true in 1998 with weaker encryption).


"Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has definitely reduced violent crime, as it has allowed police to track down stolen vehicles and cars belonging to violent criminals faster than would have otherwise been possible."

What? How does this follow?


This article is incredibly generous in its assessments. It is giving "partly right" and "mostly right" to things that are wildly off base.

Kurzweil fandom bullshit, all of it.


It’s always kind of weird to see other people make a mistake I made for years, like charting the trend in life expectancy increases and projecting it into the future.

Life expectancy increases are mostly a statistical fluke of reducing child mortality. Once you factor that out, our upwards trend is way less impressive.
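A toy cohort makes the effect obvious (numbers invented):

    def life_expectancy(cohort):
        # cohort: list of (fraction_of_population, age_at_death) pairs
        return sum(frac * age for frac, age in cohort)

    pre  = [(0.30, 1), (0.70, 70)]  # 30% die in infancy, the rest at 70
    post = [(0.01, 1), (0.99, 70)]  # infant mortality nearly eliminated

    print(life_expectancy(pre))   # 49.3  -> looks like "people died young"
    print(life_expectancy(post))  # ~69.3 -> a 20-year "gain"

Life expectancy at birth jumps twenty years without a single adult living any longer.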


At the root of this, I think, there is a strong difference in worldview between people who are, let's say, very optimistic about the potential of technology, and those who are less optimistic.

People get into these discussions of specific predictions with their worldviews motivating the evaluation of the predictions. That's why you have strong disagreements about what would ostensibly be a fairly objective measure. Because people see the world in radically different ways.

There are different variations of views towards technology but at the extremes you have "true believers" and what I would call "haters".


> “Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”

What about the cases of virtual divas, such as the OG from Vocaloid, Miku Hatsune, and now the whole scene of vtubers, such as Hololive? Seems like it hits that spot, although it's still a tiny niche.


Not really. Vocaloids are driven by human-written notes; one could argue that converting the notes into singing may involve "machine intelligence" (depending on how much processing is applied to the voice sample bank), but ultimately the software is essentially being used as an instrument.

Similarly a Virtual YouTuber is just an animated model driven by motion-capture software, with the motions and voice provided by a human. I doubt this is what Kurzweil had in mind.


The retrospective pattern of futurists' predictions is pretty clear: almost all optimistic predictions about medicine end up being mostly or entirely incorrect.

The narrative of near-future (10-20 years) medical breakthroughs is continually reiterated by the media, only with the horizon pushed further and further forward, with the effect of distorting the public's perception of the actual pace of medical innovation.

I wonder what kind of changes (regulatory or other) you would see if the general public came to accept that radical progress in medicine - absent radical change - wasn't going to happen in their lifetimes, and that medicine twenty years from now would be little different and only marginally more effective than it is today.


> “An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. [...] On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”

> MOSTLY RIGHT

This first one should be “mostly wrong”. The vast majority of decisions are made without any involvement or consultation with machine-based intelligence.


How often do people make non-trivial decisions without ever using Google?


I would take "significant involvement and consultation" to mean a lot more than "I googled for an article about it and followed its advice" or similar things.


The way most people make decisions and the way the HN audience does are definitely not the same.


> “Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”

In air combat, whether air-to-air or air-to-ground, smart missiles are the primary weapon, with planes acting as the launch platform. Guns are there as a backup, to fire warning shots, or to save on the expense of a missile. To wit: according to a former fighter pilot who streams air combat sims, he didn't train much on guns, even though they're commonly used in the game for the sportsmanship aspect.


> Computer scientists at Google have built a neural network called “JukeBox” that is even more advanced than EMMY, and which can produce songs that are complete with simulated human lyrics. While the words don’t always make sense and there’s much room for improvement

Sounds like a lot of song lyrics I've heard.


As wrong as the original spirit and intent of these predictions are for 2019/2020, I’ll bet many will be commonplace by 2030 and most by 2040.


> comparable gains in bioengineered antiviral treatments

If I were a pedant, I’d point out that the mRNA vaccines are not antiviral treatments, and this prediction was WRONG; the balance of power clearly lies with potential attackers right now.

I am a pedant.





