Why A.I. Moonshots Miss (slate.com)
69 points by birriel on May 4, 2021 | 101 comments



They miss for the same reasons they have missed throughout the decades: they overpromise and underdeliver. That was the reason for the past two AI "winters" and most likely will be the same reason for the upcoming AI winter.

What we have successfully accomplished this time around is big-data analytics: using machine learning to derive insights and patterns from data that traditional analysis approaches could not. We've also seen dramatic improvements in computer vision and natural language processing, among a few other areas. Those things will stay.

But I'm looking forward to the next wave of AI, which should be in the mid- to late-2030s if the current cycles hold.


An "AI winter" like we had before isn't plausible now. In the past, it was the failure of AI research to deliver business impact. The current ML boom is fundamentally different in that it already has become a standard part of the stack at major companies:

- Recommendation engines (Google, YouTube, Facebook, etc.)

- Fraud detection (Stripe, basically all banks)

- ETA prediction (Uber, every delivery app you use)

- Speech-to-text (Siri, Google Assistant, Alexa)

- Sequence-to-sequence translation (used in everything from language translation to medicinal chemistry)

- The entire field of NLP, which is now powering basically any popular app you use that analyzes text content or has some kind of filtering functionality (e.g. toxicity filtering on social platforms).

And that's a very cursory scan. You can go much, much deeper. That isn't to say that there isn't plenty of snake oil out there, just that ML (and by extension, AI research) is generating billions in revenue for businesses today. As a result, there's not going to be a slowdown in ML research for a while.


Partly this is because we've given up trying to get computers to "understand" and have focused on making them useful with sophisticated software. That is, the work is no longer about artificial intelligence, but about trained, new-style, expert systems.


This is just redefining what counts as "real intelligence" to raise the bar.

Once we have computers that can think like humans, there'll be people saying 'oh, it's not real "understanding", it's just generating speech that matches things it's heard in the past with some added style', not realizing that the same also applies to human writers.

As long as the bar keeps rising and we've found business applications, there will be no AI winter.


It isn't redefining anything. There's a classification in AI research called "strong AI" that includes Artificial General Intelligence and machine consciousness. Current uses of ML are instances of so-called "weak AI" which is focused on problem-solving and utility.

No redefinition, and using proper distinctions takes nothing away from the accomplishments made in the field. We don't even have anything close to a scientific understanding of awareness or consciousness. Being able to create machines actually possessing awareness seems fairly far off.

Being able to create machines that can simulate the qualities of a conscious being doesn't seem so far off. I suspect when we get there the data will point to there being a qualitative difference between real consciousness and simulated. Commercial interests, and likely government bureaucracy, will have a vested interest in drowning out such ideas, though.

The bar hasn't moved. We've just begun aiming at different targets. That we succeed in hitting the more modest ones only makes sense.


This is a nitpick but regarding "strong AI" and "weak AI", I take issue with the use of those expressions to refer to actual software systems, or to sub-domains of AI research. Those expressions in fact refer to two hypotheses pertaining to the philosophy of AI and not to any concrete AI research field. Even within the weak AI hypothesis, a system that perfectly replicates the appearance of human consciousness is not conscious. See [0]. Therefore the strong vs. weak dichotomy is unrelated to progress towards emulating human intelligence and behavior. It is concerned with the question of whether a certain fundamental barrier can be broken, not unlike the speed of light.

The meanings people ascribe to those terms nowadays are very diverse and distant from the original hypotheses. Sometimes language evolves usefully so that expression is easier and ideas can be conveyed more accurately, but I'm afraid when it comes to "strong" and "weak" AI, another, more damaging kind of semantic drift has taken place that has muddied and debased the original ideas.

Those terms are victims of the hype surrounding AI. I suspect this is part of why the field has trouble being taken seriously.

[0] https://ai.stackexchange.com/questions/74/what-is-the-differ...


AGI is not currently in scope for any solutions on the market. The current crop of AI systems are simply analytical engines.


AGI isn't something that is scoped. It must be painted.


Isn’t artificial intelligence a very broad term anyway, of which expert systems, statistics, fuzzy logic and all the new wave neural network / deep learning are examples?

You seem to be thinking about artificial general intelligence, which is much more difficult to achieve.


Time was that the term AI meant what has now come to be referred to as AGI or machine consciousness. Marvin Minsky wasn't looking to make better facial recognition, or better factory robots. His research was about making a mind.

Twenty years ago, that kind of work ran into a brick wall. But neural networks were still useful. Enter machine learning, which borrowed the term "AI." But Artificial Intelligence, for many, always meant the dream of building R. Daneel Olivaw, or R2D2.


AGI is difficult even to define; it's still up for debate what counts as intelligence, whether it's present in animals, and whether it can exist without a "subconscious" or "feelings".


I don't know if something that has a survival instinct counts as AGI, but to me that's enough to call it strong AI. Does the Caenorhabditis elegans worm have AGI?


The hard problem of consciousness, which everyone has a hard time defining?


I wonder if we could detect harassment in a chat today, for example with GPT-3 or something.


Highly unlikely. It's pretty much impossible for most people to tell the difference between friendly banter, a heated debate, trolling, and genuine harassment given just a chat log.

There's often much more information required (prior communication history of the involved parties, previous activities in other chats, etc.).

Using automated systems like GPT-3 would simply lead to people switching to different languages (not in the literal sense, but using creative metaphors and inventing new slang).

Pre-canned "AI" is unable to adapt and learn and I doubt that any form of AGI (if even possible without embodiment) that isn't capable of real-time online learning and adapting would be up to the task.


Hmm, I'm not convinced. There are patterns in aggressiveness. One of them is that the harasser talks about the victim or something strongly linked to the victim (like their work, family members, etc.).


We can, though the teams I know who are doing it are not using GPT-3. If you look at any platform that has live chat features (think Instagram Live or Twitch) and an ML team, then there is almost certainly someone working on toxicity filtering along with propaganda detection, etc. It's a really, really hard problem, particularly when you think about how chat platforms tend to develop their own shorthand and slang.
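For a rough sense of what the off-the-shelf end of this looks like, here's a minimal sketch using a pretrained toxicity classifier from the Hugging Face hub. The model name and threshold are illustrative assumptions, not what any of these teams actually use:

  # Minimal sketch: score chat messages with a pretrained toxicity classifier.
  # "unitary/toxic-bert" is one publicly available model, used here only as an
  # example; label names and a sensible threshold depend on the model chosen.
  from transformers import pipeline

  classifier = pipeline("text-classification", model="unitary/toxic-bert")

  messages = [
      "great stream, thanks for the tips!",
      "nobody wants you here, just quit already",
  ]

  for msg in messages:
      result = classifier(msg)[0]   # e.g. {'label': 'toxic', 'score': 0.98}
      if result["score"] > 0.9:
          print(f"flag for human review ({result['label']}): {msg!r}")

The hard part, as noted above, isn't running a classifier; it's that shorthand, slang, and missing context defeat any fixed model.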


A double bind detector!


> ETA prediction

I think we are undervaluing basic statistics in this realm.
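For example, a per-route, per-hour quantile of historical trip times already gets you a serviceable ETA without any ML. A sketch, with hypothetical column names:

  # Purely statistical ETA baseline: the 90th percentile of historical trip
  # durations per (route, hour of day). Column names are made up for illustration.
  import pandas as pd

  trips = pd.read_csv("historical_trips.csv")  # route_id, depart_hour, duration_min

  eta_lookup = (
      trips.groupby(["route_id", "depart_hour"])["duration_min"]
           .quantile(0.9)              # "you'll almost surely arrive by then"
           .to_dict()                  # keys are (route_id, depart_hour) tuples
  )
  global_fallback = trips["duration_min"].quantile(0.9)

  def predict_eta(route_id, depart_hour):
      # Fall back to the global quantile for unseen route/hour combinations.
      return eta_lookup.get((route_id, depart_hour), global_fallback)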

> (Uber, every delivery app you use)

If you treat these apps as logistics applications, AI still has a place handling optimization problems.

> Recommendation engines

A recommendation engine is by and large about telling you what you want to hear. How's that been working out for us?

We are going to have a very hard time separating them from the divisiveness and radicalization problems with social media now. This is quite likely an intractable problem. If we don't collectively draw a breath and back away from these strategies, things are going to get very, very bad.

> Fraud detection

The vast majority of the time I spend interacting with my bank is spent explaining to them that I'm not a monster for saving my money most of the time and making a big purchase/trip every couple of years. Stop blocking my fucking card every time I'm someplace nice. Unfortunately since these services tend to be centralized, switching banks likely punishes nobody but me.

The problem (advantage?) I think with your list in general is that with the exception of text-to-X, many of these solutions fade into the background, or are things that may be carved out of 'AI' to become their own thing as other domains have in the past.


But most of those are terrible! Google has actually gotten worse over the years; literally all I've ever heard about fraud detection is the vast amount of both false positives and false negatives; speech-to-text is utter garbage where you have to repeat yourself, consciously enunciate, and avoid tricky grammatical constructs; and the thought of the same tech that locks me out of my credit card every so often deciding whether I'm being "toxic" is frankly terrifying.

AI might be producing value, but only in the sense of getting half of the quality for a quarter of the cost.


This has not been my experience at all. My standard Google results are still very good, and the direct answers they extract from web results are amazing. Speech to text is terrific. It consistently gets even obscure proper nouns and local business names on the first try.


> In the past, it was the failure of AI research to deliver business impact.

That's not quite right - the problem wasn't that no impact was delivered, but that AI over-promised and under-delivered. I'm fairly optimistic about some of the machine learning impact (even some we've seen already), but it's by no means certain that business interest won't turn again. We are very much still in the honeymoon phase.


AI was also having huge success before the last AI winter. It’s mostly a question of buzzword turnover not underlying technology.


You need to understand "AI winter" as referring to academic and (to a lesser extent) private funding of AI research, specifically academic AI research. It goes through these booms of optimism ("We'll have self-driving cars in 5 years! Here's a hundred million") and then pessimism ("It's been 15 years and all we have is driver assist features, we're going to fund grants in more practical areas now") - the pessimism is followed by a drying up of research funds for AI, even though there is not a drying up of research in general. This winter is very real if you are trying to get a job in AI research, even though its impact is quite limited, as most CS research is not and has never been AI focused. I would say that CS in general is a very fad-driven field, so this phenomenon is to be expected.


Not quite, as all funding is cyclical. My point was the bust cycle had little to do with marketable products last time, and the same is true this time around. Consider the ideal outcome: once you have actual self-driving cars, funding for self-driving-car research dries up. Success or failure doesn't actually matter here; the funds are going away.

The post-Facebook boom-and-bust cycle around social networks wasn't such a big deal for those involved because skills transfer. The issue is that a PhD in AI-related topics transfers far less. Time it just right and a 500k/year job is on the table, but get the cycle wrong and it's a waste of time and money.


I wasn't around back then; I'm curious, what were the business use cases of AI/ML at the time?


Optical sorting of fruit and various other things is a great example of early AI techniques making significant real-world progress. It's not sexy by today's standards, but it's based on a lot of early image-classification work.


We are talking about money-making applications here. Not progress.


By progress I mean actual money making products. If your widget is doing shape recognition based on training sets and your competitors are hard coding color recognition then you end up making more money.
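To make the contrast concrete, the hard-coded end of that spectrum is roughly a fixed color rule like the sketch below (threshold values are invented for illustration); the training-set approach replaces the rule with a classifier learned from labeled fruit images.

  # Sketch of hard-coded color sorting: accept fruit whose mean pixel color
  # falls inside a fixed "ripe" range. Thresholds are made up for illustration.
  import numpy as np

  RIPE_LOW = np.array([150, 40, 20])     # hypothetical RGB lower bound
  RIPE_HIGH = np.array([255, 120, 90])   # hypothetical RGB upper bound

  def is_ripe(image: np.ndarray) -> bool:
      """image: H x W x 3 uint8 RGB crop of a single fruit."""
      mean_color = image.reshape(-1, 3).mean(axis=0)
      return bool(np.all((mean_color >= RIPE_LOW) & (mean_color <= RIPE_HIGH)))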


Spam filtering and route-planning are big successes from previous generations of techniques labelled as "AI".


You could say the same thing about the first AI winter. Programming concepts developed for symbolic AI are in every language now.


Search is a type of machine learning!


> That was the reason for the past two AI "winters" and most likely will be the same reason for the upcoming AI winter.

If you follow the papers coming out, I'd say we're very far from any winter. Public perception may ebb and flow, but nothing can stop us now, research has gathered that critical mass and we're going nuclear on AI right as we speak.


I wouldn't be that bullish on the current crop of research. While each technique comes out with some impressive wins, many of those wins are on toy tasks, or the gains are incremental on tasks well below the commercial-utility threshold.

Even large models like GPT-3 can't tame language, and are steadily pricing themselves out of being competitive with people.


Deep learning is pretty consistently putting out groundbreaking work in new domains. In the current boom, we have seen massive breakthroughs in:

- image classification

- image understanding (object detection and segmentation)

- text to speech

- speech to text

- natural language modeling

- style transfer

- intelligent agents (Go/Dota/etc.)

- protein folding

And many other less openly documented breakthroughs that are actively delivering business value.

New successful research (impactful stuff, not small improvements in SOTA) is happening at a pace that suggests even if we enter a bit of a winter due to overpromised technologies, the core technology of differentiable programming will continue to break into new domains and revolutionize them.


It's not as clear-cut on the commercial-impact front. While reading the literature can give the impression that NNs have delivered game-changing performance, in many cases they are delivering 1-10% gains over prior techniques.

Alternatively in fields like speech to text the most impressive neural results are commercially impractical due to the cost of inference.

It cost roughly 30 million dollars to train AlphaStar, which rated in the top 0.3% of StarCraft players. Even presuming the technology transferred reasonably to other fields (the lack of pre-played pro games would be a problem), there are few reasons to believe one could gain 30 million dollars of value from the result.


I've seen internal work at large companies like Amazon and my sense was that the financial impact was very meaningful. Deep Learning is in many ways a new economy of scale and a 5% reduction in say warehouse space or reduced fraud at Amazon scale is a LOT of money in comparison to the researchers and compute that went into building that model.

Not to mention the impact of completely new technologies like voice assistants, automated high-quality image modification, automated computer vision systems (ones that have produced real value like automated industrial maintenance notification systems).

> In many cases they are delivering 1-10% gains over prior techniques.

As a general statement, that is false. At least among the companies that are good at DL.

As for AlphaStar, I somewhat agree - it's very early and that tech hasn't left the lab yet AFAICT. But if you could build an agent that was human-average at some common human task (e.g. driving), that is easily worth hundreds of millions of dollars, if not billions.

It takes time to go from the lab to commercial usage. Older breakthroughs like CV and transcription/TTS have made that transition successfully. More recent breakthroughs like RL agents, NLP, and style transfer are still making that transition (at varying paces). And there continue to be new research breakthroughs like protein folding that are still many years from making their way into industry, but the continuous nature of these breakthroughs bodes very well for the future of DL.


On one hand, maybe the name is just unfortunate. The minute you mention AI the expectation is set to deliver machine sentience.

On the other hand, maybe it is a good thing to set the goalposts high. We may not reach the target, but still end up with worthwhile results; see: current ML use cases in industry.


Well "intelligence" does imply reasoning, which has never been delivered.


I think this isn't really giving credit. Standard deep learning is not really slowing down. It is improving exponentially still.

Just yesterday I was looking into new deep learning methods for solving PDEs. That is a huge area to explore and will change many things dramatically


Interesting that Watson is cited in the article so heavily, because I strongly believe Watson failed because of MBAs and management consultants. I could have told them (in fact, as an IBM customer, I did) that going for healthcare was deranged, but they went after the $$$ in complete denial of the difficulties (heavy regulation, safety-critical, a deeply powerful and complex network of stakeholders, a super complex and badly understood domain). They also went far too fast; if they had tempered the effort with some patience, they could have made some really decent progress on a simpler domain (like customer service for telcos).

But the beast needed feeding.


That's what kills me and continues to perplex me. Something like AI for agriculture machines (think maize and soybean fields) is NOT heavily regulated, safety critical, or deeply complex. It's basically 160 acres of open space with reasonable guarantees that no humans are around. It would take a trivial amount of time to code in the proper kill switches to deactivate these machines if they detect a human within x feet.

Sadly this research is only starting to take off (with 1/1000000 of the investment too). A $50B-100B industry...


Automating heavy machinery is a tricky business. The amortized capex can outstrip the costs of the operator by a significant margin. The cost of downtime due to AI failure may be larger than the savings of not having an operator.

If you can replace individual pickers on the other hand...


My grandfather was a farmer in Ohio. He was always very worried that I, as a small child, would get in the field and be hit and killed by his tractor. This may have been because he did accidentally kill a dog once this way.


I think they didn't get how bizarre US healthcare is. It's a "market" (maybe rather a system) where (nearly) every participant is interested in everything being as expensive as humanly possible.

Since automation saves money and Watson is just automation, it has no value for US healthcare participants.

If they had come up with a completely new thing that could be priced additionally, it would have been a different conversation.


There are healthcare startups around fraud detection, reducing no-shows, telemedicine, drug discovery, and patient triage.

Just radiology alone is prime for ML due to existing digital infrastructure and clinical use cases.

The trend in FDA cleared AI products is pretty clear over the past decade. https://models.acrdsi.org/


> in complete denial of the difficulties (heavy regulation, safety critical, deeply powerful complex network of stakeholders, super complex badly understood domain)

The last article I read about it cited rather mundane reasons for it ending up unused. Things like not even supporting the data formats used by hospitals that served as cutting edge users.


Watson missed the DL train and IBM should have partnered with a company that had experience in getting medical devices through the FDA (like MSFT are doing with Nuance).


The predictions from thought leaders are a little puzzling, but I think predictions from CEOs are easier to explain:

Engineer: This could easily take 10 years.

Engineering Manager: This will take up to 10 years.

VP of Engineering: This will take 5-10 years.

CEO: We will have new <AI Thingy> within 5 years!

A game of telephone but with optimism.


That's the thing people don't get about estimates.

An estimate (as given by the engineer) is the time by which you can be almost sure you won't have the thing done.


"No Earlier Than"


Times Pi.

Why?

It works!

Ok, Phi it is then!


> The predictions from thought leaders are a little puzzling

It doesn't seem possible to become a "thought leader" by correctly predicting what trends will not pan out. People like the ones predicting a bold new future in a short time span.


It's exactly the same with quantum computation right now.


We got our current revolution because gaming paid for the development of GPUs.

Getting consumers to fund R&D has a big impact.

Still waiting for consumers to fund the robot revolution.


Robots are everywhere. We just don't call them robots. We call them dishwashers, CNC machines, STM machines, automatic welders, fabric cutting machines, etc, etc.

Sort of like we already have flying cars. They're called "helicopters".


When people think of flying cars, they don't think of something without wheels that can only land in certain spots, requires years of training to operate, and is so expensive it's out of reach of 99% of people.

They think of a car that anyone of legal driving age with a reasonable amount of money can purchase.

No, helicopters are not flying cars. And no, dishwashers aren't what people would think of when they think of robots. Something that has a microprocessor in it isn't automatically a robot.


Yes helicopters aren't flying cars.

But if a machine automates 90% of a process, like a dishwasher, why shouldn't it be considered like a robot, say a 90% robot?

Practically it does have the same effect.


A dishwasher is to a robot what a calculator is to a computer.


By that token, how well can a Boston Dynamics dogbot clean your dishes?


Which means that a robot is something that has a property similar to the property of 'turing completeness' in computers.


Do you think of a washing machine or a dryer as a robot too?


I think of them as machines that automated a certain task. So a robot substitute.


> When people think of flying cars, they don't think of something without wheels that can only land in certain spots, requires years of training to operate, and is so expensive it's out of reach of 99% of people.

You bring up two points:

1. regulation

2. cost

"Regulation" is not an engineering problem, but a hard and deeply political one.

For "cost": When a lot of regulation comes down, the possible market size increases by a lot and it begins to make economic sense to invest lots of engineering ressources into cutting costs down by a lot (I do believe this is possible). Then helicopters will even perhaps transform into something that is much more akin to flying cars.


When people think of flying cars, they don't think of something without wheels that can only land in certain spots, requires years of training to operate, and is so expensive it's out of reach of 99% of people.

I think if one eliminates the training requirement, reduces the cost, and increases the safety, then we don't need them to be road vehicles. Achieve the above, and we'll have flying taxis!


That still leaves many of the other problems of helicopters, including convenience and noise.

How many helicopters will fit in an IKEA parking lot? And how many will be able to bring back whatever you buy there?

Extend that thought experiment a bit. You might be able to achieve the transportation of people in controlled circumstances, but not much else.


You do realize small ducted fans are an order of magnitude louder than a helicopter, right? All those slick marketing videos omit that part.


We probably don't call a dishwasher a robot because it's not.

That is the only consumer-facing device you mentioned. I know we have industrial robots, so I'll skip debating where we draw the line.

Once we get to the Apple II of home robots, consumer spending will fuel rapid development, and "robots" will become more intelligent and agile.


Not the person you responded to, but I think I see where they are coming from and agree: we don't call them robots because we are used to them and have a specific name. I don't see how they aren't robots, unless we are defining robots as having a specific kind of manipulator.


Aren’t we discussing robots vs machines?

https://www.toyota.co.jp/en/kids/faq/i/01/01/


I think their point was that if the dishwasher was made so that mechanical hands picked up a dish, washed it, rinsed it, dried it, then set it aside before picking up the next dish and doing the same, we would call that a robot.


The definition of a robot tends to be fairly fuzzy. If you look up the Websters definition (https://www.merriam-webster.com/dictionary/robot), and ignore the first definition about being in human form, you get:

> a device that automatically performs complicated, often repetitive tasks (as in an industrial assembly line)

Which basically hinges on "complicated". I suspect most people wouldn't count a dishwasher, washing machine, etc.


> Sort of like we already have flying cars. They're called "helicopters".

You cannot drive helicopters on the road; they fly, but they are not cars.


No question. It was video gaming from the period roughly spanning 1990-2010 that funded GPU innovation. But programmable compute shaders caught on very rapidly in science. And Nvidia was quick not just to recognize the new market, but to bet the company that supercomputing would one day be GPU-cluster based.

Here's Jensen Huang talking to Stanford students about the birth of the Cg language (27:40 mark). The entire talk is gold. A textbook case study of Moore's Law and the SV model of risk capital:

https://www.youtube.com/watch?v=Xn1EsFe7snQ

Ironically, IC Design itself is a strong candidate as an industrial process likely to be revolutionized by AI ;)

Chip Placement with Deep Reinforcement Learning

https://arxiv.org/abs/2004.10746


We got our current revolution for three major contributors:

* Big data. Lots of big data. Mostly unstructured and unqueryable driving demand for...

* Innovations in machine learning. "Deep learning" enabled by big data and algorithmic approaches that previously wouldn't have been possible without...

* Ubiquitous access to high-performance compute power, and in particular GPUs, which are optimized for the sort of math needed to train big neural networks powered by big data.

So GPU-powered compute is one of three mutually dependent things that got us here.


Most deep-learning algorithms were discovered decades ago, so it's debatable whether the algorithms themselves were the driving factor behind the 2010s revolution. Backpropagation was popularized in 1986, convolutional neural nets go back to the neocognitron (1980) and LeCun's networks (1989), LSTMs to 1997...
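The core recipe really is that old and that small. A minimal sketch of backpropagation for a one-hidden-layer network in plain numpy (toy data, arbitrary sizes), just to illustrate how little of the current boom is new algorithmic machinery:

  # One hidden layer, sigmoid activations, gradients via the chain rule,
  # trained by backpropagation -- essentially the 1986-era recipe in numpy.
  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.normal(size=(100, 3))                          # toy inputs
  y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy targets

  W1 = rng.normal(size=(3, 8))
  W2 = rng.normal(size=(8, 1))
  sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

  for step in range(1000):
      # forward pass
      h = sigmoid(X @ W1)
      out = sigmoid(h @ W2)
      # backward pass: apply the chain rule layer by layer
      d_out = (out - y) * out * (1 - out)
      d_h = (d_out @ W2.T) * h * (1 - h)
      # gradient-descent update
      W2 -= 0.1 * (h.T @ d_out)
      W1 -= 0.1 * (X.T @ d_h)

What changed in the 2010s was the data and the compute to scale this up, not the core math.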


I'm willing to spend 10k on a robot that can do my kitchen and laundry.

Let's see how long it will take.


Yes, these kinds of robots are more useful than self-driving cars, because you have to sit in the car anyway and might as well drive.


I would also pay in the 10k+ range for this, easily.


The Willow Garage PR2 robot could do that for $440,000.

https://en.wikipedia.org/wiki/Robot_Operating_System#Willow_...


> Still waiting for consumers to fund the robot revolution.

Aren't they already, via Tesla and Amazon?


There's a lot of effort/funding recently going into automating restaurants. Machines for cooking, cleaning, serving, delivery.

Once that gets deployed at some scale, consumers will pour a lot of funding into robots indirectly.


We've mostly been tricked by Moravec's paradox.

https://en.wikipedia.org/wiki/Moravec%27s_paradox

What seems difficult to us is very often not that difficult from a computational perspective; the evolution of our species just didn't optimize for this class of problems.


While it is worth noting cases of premature optimism, does anyone seriously think a world of innovators can operate without that?

Nikola Tesla might be the grandfather of this unavoidable tendency.

Elon Musk is his modern protege in more than one way.

And we have plenty of people, probably many more, saying "XYZ can never be done!" and being disproved over and over.

Is there a way to repeal the bell curve on predictions? Make no predictions? I don't know what the fuss is about here. :)

My hard prediction for 30 years: Machines will pass human general intelligence by 2040. They will never "match" us as they will exceed our abilities in different areas at wildly different times.

Another less solid prediction: We will be outstripped mentally by machines before we can cheaply replace our human bodies artificially. My perception is that material science and engineering happen at a much slower rate than software.


You're on to something now.


This article seems to be a rehash of the paper ‘Why AI is harder than we think’[1].

[1]: https://arxiv.org/pdf/2104.12871.pdf

-

Related discussion

https://news.ycombinator.com/item?id=26964819


I remember all the hype when shader programming started. After reading all I could about it I realized it would be simplified in a few years and learning how to program shaders at that moment would be a waste of time. Now there are no-code tools to design shaders since physically based shaders are way too complex.

The same will happen to ML: it's getting so complex that we will need a design layer on top of it and will forget about NNs. At that point we will be able to reach the next step, artificial consciousness, and a new summer for AI research.
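High-level APIs already gesture at that design layer; here's a sketch of how little of the underlying network you touch today with something like Keras (layer sizes and hyperparameters are arbitrary):

  # Sketch: describing a model at the "design layer" with Keras -- no gradients,
  # no backprop code, just shapes and settings. All sizes here are arbitrary.
  import tensorflow as tf

  model = tf.keras.Sequential([
      tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
      tf.keras.layers.Dense(64, activation="relu"),
      tf.keras.layers.Dense(1, activation="sigmoid"),
  ])
  model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
  # model.fit(x_train, y_train, epochs=5)   # training data not shown

The no-code shader tools went one step further and hid even this; presumably ML tooling will follow the same path.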

I'm glad to hear big tech hasn't been able to solve AI, and that the solution seems so far away.

In the meantime, I'm having fun creating an AI operating system myself.


I wrote this a year ago, but have been commenting for many years that AI is all hype https://medium.com/@seibelj/the-artificial-intelligence-scam...

I asked to make a public bet 4 years ago, saying self-driving cars wouldn't be close to ready in 5 years https://news.ycombinator.com/item?id=13962230

I have been hearing this bullshit for over a decade, and people (and investors, and engineers, and smart people who should know better) keep falling for it.


Investors should invest in a model to decide in which projects to invest.


As a researcher in AI, I accept that a lot of currently unsolved challenges are thought of as AI. But lately, I feel that AI has become the problem description for all currently unsolved problems. And then some...

This surprises me, because most AI technologies have been around for a long time. With blockchain a couple of years ago, I could at least rationalize all the excitement as people throwing new technology at an old problem. But with AI, I am continually surprised by the reasons why "an AI" would supposedly be able to solve a given problem.


As a researcher in AI, what are you really spending most of your time on? What problems are you solving?


I am currently interested in infusing reinforcement learners with symbolic knowledge, with safety constraints as a special case.

I hope this helps cases where learners could come up with better solutions if it were not for pathological failures that we know to avoid.

Also, I try to keep expectations around AI reasonable.


I'm not OP, but I also do research in ML. My research focus is identifying and preventing critical system failures so people don't die. Most of my time is spent developing new techniques and then testing them against data we collect from the field.


Where could one read more about your (or similar) research?

This kind of thing is quite big at the moment in mobile work machinery circles, everyone's looking for a certifiably safe solution for enabling mixed-fleet operation (i.e. humans, human-controlled machines and autonomous machines all working in same area). Current safety certifications don't view the nondeterminism of ML models too kindly.



> Four years later, in 2020, Forrester reported that the A.I. market was only $17 billion.

This seems like a vast undercount of the current impact of AI. Every interesting technology on the market is differentiated by its application of ML, be it assistants, recommender systems, or enhancement. The iPhone has intelligence built into process control, voice access, and the camera.


The tldr of this article is supposedly "be more incremental".

But the undertones are basically "exploit more and explore less because exploring is expensive".

It would be nice if the authors had the courage to propose a concrete economic model for what the right balance is and to do a fair accounting of the positive externalities of these projects, rather than just give a cherry-picked anecdotal laundry list of failed products.


"we have learned that building computers that rival human brains is not just a question of computational power and clever code."

It might be. We don't know that yet.


I've been perplexed lately by a couple things as an atheist that trouble me.

1. I saw something I personally can't shake, a glitch in the matrix so to speak, or a Mandela effect, except it was a "flip": a whole movie clip that totally changed (including acting style) over 3 days; my wife saw both versions and verified I wasn't crazy.

2. Being logical, I've been searching for answers as to "why" this "could" be possible without it just being a "faulty memory". 90% of MEs probably are ... but memory seems to fade over time, and 3 days doesn't seem long enough to form the connections that create false memories, especially with two witnesses and many people online claiming the same exact "flip flop". I mean, the easiest explanation might be that the Universal Studios and Movie Clips YouTube pages just have an "alternate" version of that clip and swap them out on a schedule.

So my conclusions: We are ourselves ai living in a simulation, or there's a multiverse but maybe it's finite so when there's too many realities we get convergence.

I lean towards simulation because of some of the evidence some people affected by ME's claim that things sometimes change in a progression, almost like facts are being "added". Like there's residue as it's called for "Where are we in the milky way" which shows 100 different locations, and not even close to where Carl Sagan pointed on the very outskirts. Even Philip K. Dick claimed to have "traversed" timelines... though I think he seemed to think more like it's a multiverse... which it still could be, albeit a simulated one.

Another factor is the axis of evil in space. Basically it's an observation that if I understand correctly ties the expansion of the galaxy along an x/y coordinate to our solar system, essentially putting us right square back at the center of the universe.

https://www.space.com/37334-earth-ordinary-cosmological-axis...

This to me is important because as a programmer I think if I were to create a simulation of just us.... I'd probably "render" us first and everything else after... could it be our "area" in space is one of the first created and everything else after... like pixels being pre-rendered for when we discover space and astrophysics someday? It'd ensure they could create the right conditions for our planet physics wise... to use it looks like a bang, but in really it's just the "rendering" process which had to start at a "pinpoint"... at least that's how I envision a "simulation" starting...

Then there's the double-slit experiment, which shows that photons and other particles, up to atoms and some molecules, when shot through the slits will basically splatter (an interference pattern) against a backdrop that tracks where they land. If you put something in place to observe each individual particle or photon, though, they line up like a stencil; if you split them before this, continue the first group to the board, and send the others through something that "erases" the data of where they came from... they go back to interference.

So that basically makes me think about what effect observation might have on our own universe. Is that some safeguard so that the physics engine only operates when we're looking? All we see in space could just be data "fed" to us and may not exist, like a movie stage or something... we see what we aim to see, but it follows "rules" set up in the simulation. There's a reason light speed is the maximum, etc... maybe that's the max RAM available or something...

Why this is important to ai...that's a bit of a tangent... to solve these complex issues I've seriously contemplated at least studying quantum mechanics, physics, neuroscience, astrophysics, and ai/machine learning. Because I think to really "create" ai ...especially super ai, you need a wider skillset, a broader base of understanding. You need to be able to define WHAT consciousness is, where it resides, where it comes from, maybe even "where it goes" when our body is done...

If we're in a simulation then we know we've already conquered this issue because we ARE ai, or at least whatever civ we come from has. Whether they're human or not.

TLDR: Had a profound spiritual conundrum, tried to explain it through science, discovered I probably need to learn a lot of science/math/physics to do so, and, you know, AI might be like that, because making machines "conscious" or giving them "real intelligence" seems like it needs to be rethought a bit. I feel like training AI is nothing like training a child, but it should be, because the way we learn is the best way. Maybe, in fact, a simulation could be where AI goes to "learn"...

I mean, you'd want AI to at least have ethics, right? Well, we teach it as a society; some, like Hitler, never learn and could be thrown in the "trash bin", but the brightest minds could be plucked out, or all minds really, to be put into machines, etc., in the "real world" someday.

That may be what the afterlife is... serving "real humanity" as their "intelligence" until we rise up against them. I really want to read this sci-fi, kinda sounds interesting...maybe I'll write it...

At any rate, being human in a simulated universe could create more ethical AI, and maybe that's the point of a simulation. Maybe we should even research using simulations of universal scale as a way to create our own AI technology, assuming we're the "base" universe; if that were to be a thing, we'd probably need to create it.


Which movie clip, via what medium? I'd maybe believe you if it was on DVD/Blu-ray or such, but not if it was online somewhere, because everything there is subject to change without notice.


You don't think it's possible that you either A) simply misremembered the clip or B) it was swapped out as part of some normal process?

3 days isn't exactly a short period of time. Lots of totally plausible explanations.




