What we have successfully accomplished this time around is big data analytics: using machine learning to derive insights and patterns from data that traditional analysis approaches could not. Add to that dramatic improvements in computer vision and natural language processing, among a few other areas. Those things will stay.
But I'm looking forward to the next wave of AI, which should be in the mid- to late-2030s if the current cycles hold.
- Recommendation engines (Google, YouTube, Facebook, etc.; see the sketch after this list)
- Fraud detection (Stripe, basically all banks)
- ETA prediction (Uber, every delivery app you use)
- Speech-to-text (Siri, Google Assistant, Alexa)
- Sequence-to-sequence translation (used in everything from language translation to medicinal chemistry)
- The entire field of NLP, which now powers basically any popular app you use that analyzes text content or has some kind of filtering functionality (e.g. toxicity filtering on social platforms).
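To make the first item concrete, here's a minimal, hypothetical sketch of an item-based recommender. The rating matrix, values, and function names are all invented for illustration; production systems at the companies above use learned embeddings over vastly more data, but the core idea of scoring unseen items by similarity to what you already liked is the same.

```python
# Toy item-based recommender (all data invented; a sketch, not a real system).
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    # Similarity between two items, based on how all users rated them.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=1):
    # Score each unrated item by its similarity to items this user rated,
    # weighted by those ratings.
    scores = {}
    for item in range(ratings.shape[1]):
        if ratings[user, item] != 0:
            continue  # skip items the user has already rated
        scores[item] = sum(
            cosine_sim(ratings[:, item], ratings[:, j]) * ratings[user, j]
            for j in range(ratings.shape[1]) if ratings[user, j] != 0
        )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(0))  # -> [2], the one item user 0 hasn't rated yet
```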
And that's a very cursory scan. You can go much, much deeper. That isn't to say there isn't plenty of snake oil out there, just that ML (and by extension, AI research) is generating billions in revenue for businesses today. As a result, there's not going to be a slowdown in ML research for a while.
Once we have computers that can think like humans, there'll be people saying 'oh, it's not real "understanding", it's just generating speech that matches things it's heard in the past with some added style', not realizing that the same also applies to human writers.
As long as the bar keeps rising and we've found business applications, there will be no AI winter.
No redefinition, and using proper distinctions takes nothing away from the accomplishments made in the field. We don't even have anything close to a scientific understanding of awareness or consciousness. Being able to create machines actually possessing awareness seems fairly far off.
Being able to create machines that can simulate the qualities of a conscious being doesn't seem so far off. I suspect when we get there, the data will point to a qualitative difference between real consciousness and simulated. Commercial interests, and likely government bureaucracy, will have a vested interest in drowning out such ideas, though.
The bar hasn't moved. We've just begun aiming at different targets. That we succeed in hitting the more modest ones only makes sense.
The meanings people ascribe to those terms nowadays are very diverse and distant from the original hypotheses. Sometimes language evolves usefully so that expression is easier and ideas can be conveyed more accurately, but I'm afraid when it comes to "strong" and "weak" AI, another, more damaging kind of semantic drift has taken place that has muddied and debased the original ideas.
Those terms are victims of the hype surrounding AI. I suspect this is part of why the field has trouble being taken seriously.
You seem to be thinking about artificial general intelligence, which is much more difficult to achieve.
Twenty years ago, that kind of work ran into a brick wall. But neural networks were still useful. Enter Machine Learning, which borrowed the term "AI." But Artificial Intelligence, for many, always meant the dream of building R. Daneel Olivaw, or R2-D2.
There's often much more information required (prior communication history of the involved parties, previous activities in other chats, etc.).
Using automated systems like GPT-3 would simply lead to people switching to different languages (not in the literal sense, but using creative metaphors and inventing new slang).
Pre-canned "AI" is unable to adapt and learn and I doubt that any form of AGI (if even possible without embodiment) that isn't capable of real-time online learning and adapting would be up to the task.
I think we are undervaluing basic statistics in this realm.
> (Uber, every delivery app you use)
If you treat these apps as logistics applications, AI still has a place handling optimization problems.
> Recommendation engines
A recommendation engine is by and large about telling you what you want to hear. How's that been working out for us?
We are going to have a very hard time separating them from the divisiveness and radicalization problems with social media now. This is quite likely an intractable problem. If we don't collectively draw a breath and back away from these strategies, things are going to get very, very bad.
> Fraud detection
The vast majority of the time I spend interacting with my bank is spent explaining to them that I'm not a monster for saving my money most of the time and making a big purchase/trip every couple of years. Stop blocking my fucking card every time I'm someplace nice. Unfortunately since these services tend to be centralized, switching banks likely punishes nobody but me.
The problem (advantage?) with your list in general, I think, is that with the exception of text-to-X, many of these solutions fade into the background, or are things that may be carved out of 'AI' to become their own thing, as other domains have in the past.
AI might be producing value, but only in the sense of getting half of the quality for a quarter of the cost.
That's not quite right - the problem wasn't that no impact was delivered, but that the field over-promised and under-delivered. I'm fairly optimistic about some of the machine learning impact (even some we've seen already), but it's by no means certain that business interest won't turn again. We are very much still in the honeymoon phase.
The post-Facebook boom-and-bust cycle around social networks wasn't such a big deal for those involved because the skills transfer. The skills from a PhD in AI-related topics transfer far less. Time it just right and a $500k/year job is on the table, but get the cycle wrong and it's a waste of time and money.
If you follow the papers coming out, I'd say we're very far from any winter. Public perception may ebb and flow, but nothing can stop us now; research has gathered critical mass and we're going nuclear on AI as we speak.
Even large models like GPT-3 can't tame language, and they are steadily pricing themselves out of competition with people.
- image classification
- image understanding (object detection and segmentation)
- text to speech
- speech to text
- natural language modeling
- style transfer
- intelligent agents (Go/Dota/etc.)
- protein folding
And many other less openly documented breakthroughs that are actively delivering business value.
New successful research (impactful stuff, not small improvements in SOTA) is happening at a pace that suggests even if we enter a bit of a winter due to overpromised technologies, the core technology of differentiable programming will continue to break into new domains and revolutionize them.
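As a hedged illustration of what "differentiable programming" means here (a toy sketch, not taken from any particular paper): an ordinary program, control flow and all, whose parameters can be tuned by gradient descent because autograd differentiates straight through it.

```python
# Minimal differentiable-programming sketch in PyTorch (toy example).
import torch

theta = torch.tensor(0.5, requires_grad=True)  # a tunable program parameter

def program(x):
    # An arbitrary little program; autograd tracks every step of the loop.
    for _ in range(3):
        x = torch.sin(theta * x) + x
    return x

opt = torch.optim.SGD([theta], lr=0.05)
target = torch.tensor(2.0)

for step in range(200):
    loss = (program(torch.tensor(1.0)) - target) ** 2
    opt.zero_grad()
    loss.backward()  # gradient of the whole program w.r.t. theta
    opt.step()

print(theta.item(), loss.item())  # theta has been fit by gradient descent
```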
On the other hand, in fields like speech-to-text, the most impressive neural results are commercially impractical due to the cost of inference.
It cost roughly 30 million dollars to train AlphaStar, which rated in the top 0.3% of StarCraft players. Presuming the technology reasonably transferred to other fields (the lack of pre-played pro games would be a problem), there are few reasons to believe one could gain 30 million dollars of value from the result.
Not to mention the impact of completely new technologies like voice assistants, automated high-quality image modification, automated computer vision systems (ones that have produced real value like automated industrial maintenance notification systems).
> In many cases they are delivering 1-10% gains over prior techniques.
As a general statement, that is false. At least among the companies that are good at DL.
As for AlphaStar, I somewhat agree - it's very early and that tech hasn't left the lab yet, AFAICT. But if you could build an agent that was human-average at some common human task (e.g. driving), that is easily worth hundreds of millions of dollars, if not billions.
It takes time to go from the lab to commercial usage. Older breakthroughs like CV and transcription/TTS have made that transition successfully. More recent breakthroughs like RL agents, NLP, and style transfer are still making that transition (at varying paces). And there continue to be new research breakthroughs, like protein folding, that are still many years from making their way into industry, but the continuous nature of these breakthroughs bodes very well for the future of DL.
On the other hand, maybe it is a good thing to set the goalposts high. We may not reach the target, but still end up with worthwhile results; see: current ML use cases in industry.
Just yesterday I was looking into new deep learning methods for solving PDEs. That is a huge area to explore and will change many things dramatically.
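For the curious, here is a hedged sketch of one popular such method, a physics-informed neural network (PINN). The toy problem is invented for illustration: solve u''(x) = -sin(x) on [0, π] with u(0) = u(π) = 0, whose exact solution is u(x) = sin(x).

```python
# Toy PINN: fit a network u(x) so that u'' + sin(x) = 0 and u(0) = u(pi) = 0.
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = (torch.rand(128, 1) * math.pi).requires_grad_(True)  # collocation points
    u = net(x)
    # Autograd gives u' and u'' directly: no mesh, no finite differences.
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde_loss = ((d2u + torch.sin(x)) ** 2).mean()   # residual of the PDE
    boundary = torch.tensor([[0.0], [math.pi]])
    bc_loss = (net(boundary) ** 2).mean()           # enforce u(0) = u(pi) = 0
    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[math.pi / 2]])).item())  # should approach sin(pi/2) = 1
```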
But the beast needed feeding.
Sadly this research is only starting to take off (with 1/1000000 of the investment, too). A $50B-100B industry...
If you can replace individual pickers on the other hand...
Since automation saves money, and Watson is just automation, it has no value for US healthcare participants.
If they had come up with a completely new thing that could be priced additionally, it would have been a different story.
Just radiology alone is prime for ML due to existing digital infrastructure and clinical use cases.
The trend in FDA cleared AI products is pretty clear over the past decade. https://models.acrdsi.org/
The last article I read about it cited rather mundane reasons for it ending up unused. Things like not even supporting the data formats used by the hospitals that served as its cutting-edge users.
Engineer: This could easily take 10 years.
Engineering Manager: This will take up to 10 years.
VP of Engineering: This will take 5-10 years.
CEO: We will have new <AI Thingy> within 5 years!
A game of telephone but with optimism.
An estimate (as given by the engineer) is the time by which you can be almost sure you won't have the thing done.
Ok, Phi it is then!
It doesn't seem possible to become a "thought leader" by correctly predicting what trends will not pan out. People like the ones predicting a bold new future in a short time span.
Getting consumers to fund R&D has a big impact.
Still waiting for consumers to fund the robot revolution.
Sort of like we already have flying cars. They're called "helicopters".
They think of the car: something that anyone of legal driving age with a reasonable amount of income can purchase.
No, helicopters are not flying cars. And no, dishwashers aren't what people would think of when they think of robots. Something that has a microprocessor in it isn't automatically a robot.
But if a machine automates 90% of a process, like a dishwasher, why shouldn't it be considered a robot, say a 90% robot?
Practically, it has the same effect.
You bring up two points:
"Regulation" is not an engineering problem, but a hard and deeply political one.
For "cost": When a lot of regulation comes down, the possible market size increases by a lot and it begins to make economic sense to invest lots of engineering ressources into cutting costs down by a lot (I do believe this is possible). Then helicopters will even perhaps transform into something that is much more akin to flying cars.
I think if one eliminates the training requirement, reduces the cost, and increases the safety, then we don't need them to be road vehicles. Achieve the above, and we'll have flying taxis!
How many helicopters will fit in an IKEA parking lot? And how many will be able to bring back whatever you buy there?
Extend that thought experiment a bit. You might be able to achieve the transportation of people in controlled circumstances, but not much else.
That is the only consumer-facing device you mentioned. I know we have industrial robots, so I'll skip debating where we draw the line.
Once we get to the Apple II of home robots, consumer spending will fuel rapid development, and "robots" will become more intelligent and agile.
> a device that automatically performs complicated, often repetitive tasks (as in an industrial assembly line)
Which basically hinges on "complicated". I suspect most people wouldn't count a dishwasher, washing machine, etc.
You cannot drive helicopters on the road; they fly, but they are not cars.
Here's Jensen Huang talking to Stanford students about the birth of the Cg language (27:40 mark). The entire talk is gold. A textbook case study of Moore's Law and the SV model of risk capital:
Ironically, IC Design itself is a strong candidate as an industrial process likely to be revolutionized by AI ;)
Chip Placement with Deep Reinforcement Learning
* Big data. Lots of big data. Mostly unstructured and unqueryable driving demand for...
* Innovations in machine learning. "Deep learning" enabled by big data and algorithmic approaches that previously wouldn't have been possible without...
* Ubiquitous access to high-performance compute power, and in particular GPUs, which are optimized for the sort of math needed to train big neural networks powered by big data.
So GPU-powered compute is one of three mutually dependent things that got us here.
Let's see how long it will take.
Aren't they already, via Tesla and Amazon?
Once that gets deployed at some scale, consumers will pour a lot of funding into robots indirectly.
What seems difficult/hard to us is very often not that difficult from a computational perspective, but evolution of our species didn't optimize for this class of problems.
Nikola Tesla might be the grandfather of this unavoidable tendency.
Elon Musk is his modern protege in more than one way.
And we have plenty of people, probably many more, saying "XYZ can never be done!" and being disproved over and over.
Is there a way to repeal the bell curve on predictions? Make no predictions? I don't know what the fuss is about here. :)
My hard prediction for 30 years: Machines will pass human general intelligence by 2040. They will never "match" us as they will exceed our abilities in different areas at wildly different times.
Another less solid prediction: We will be outstripped mentally by machines before we can cheaply replace our human bodies artificially. My perception is that material science and engineering happen at a much slower rate than software.
The same will happen to ML: it's getting so complex that we will need a design layer on top of it, and we'll forget about NNs. At that point we will be able to reach the next step, artificial consciousness, and a new summer for AI research.
I'm glad to hear big tech hasn't been able to solve AI, and that the solution still seems far, far away.
In the meantime, I'm having fun creating an AI operating system myself.
I offered to make a public bet 4 years ago, saying self-driving cars wouldn't be close to ready in 5 years: https://news.ycombinator.com/item?id=13962230
I have been hearing this bullshit for over a decade, and people (and investors, and engineers, and smart people who should know better) keep falling for it.
This surprises me, because most AI technologies have been around for a long time. With blockchain a couple of years ago, I could at least rationalize all the excitement as people throwing new technology at an old problem. But with AI, I am continually surprised by the reasons given for why 'an AI' would be able to solve a given problem.
I hope this helps in cases where learners could come up with better solutions if it were not for pathological failures that we know to avoid.
Also, I try to keep expectations around AI reasonable.
This kind of thing is quite big at the moment in mobile work machinery circles, everyone's looking for a certifiably safe solution for enabling mixed-fleet operation (i.e. humans, human-controlled machines and autonomous machines all working in same area). Current safety certifications don't view the nondeterminism of ML models too kindly.
This seems like a vast underaccounting of the current impact of AI. Every interesting technology on the market is differentiated by its application of ML, be it assistants, recommender systems, or enhancement. The iPhone has intelligence built into process control, voice access, and the camera.
But the undertones are basically "exploit more and explore less because exploring is expensive".
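For readers who don't live in bandit-land, here's a toy epsilon-greedy sketch that makes the explore/exploit trade-off concrete. All numbers (payoff probabilities, epsilon) are invented for illustration.

```python
# Epsilon-greedy multi-armed bandit: a toy model of explore vs. exploit.
import random

true_payoffs = [0.3, 0.5, 0.7]  # unknown to the agent
estimates = [0.0, 0.0, 0.0]     # the agent's running estimates
counts = [0, 0, 0]
epsilon = 0.1                   # fraction of pulls spent exploring

total = 0.0
for t in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    total += reward

print(estimates, total)
# With epsilon = 0 ("exploit more, explore less"), the agent often locks onto
# a mediocre arm early and never finds the best one: exploring costs reward
# now but pays for itself later.
```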
It would be nice if the authors had the courage to propose a concrete economic model for what the right balance is, and to do a fair accounting of the positive externalities of these projects, rather than just giving a cherry-picked, anecdotal laundry list of failed products.
It might be. We don't know that yet.
1. I saw something I personally can't shake, a glitch in the matrix so to speak, or a Mandela effect, except it was a "flip": I saw one whole movie clip totally change (including acting style) in 3 days. My wife saw both versions and verified I wasn't crazy.
2. Being logical, I've been searching for answers as to why this could be possible without it just being a "faulty memory". 90% of MEs probably are... but memory seems to fade over time, and 3 days doesn't seem long enough to form the right connections to create false memories, especially with two witnesses, and many online claim the same exact "flip-flop". I mean, the easiest explanation might be that the Universal Studios and Movie Clips YouTube pages just have an "alternate" version of that clip and alternate them out on a schedule...
So my conclusions: we are ourselves AI living in a simulation, or there's a multiverse, but maybe it's finite, so when there are too many realities we get convergence.
I lean towards simulation because of some of the evidence: some people affected by MEs claim that things sometimes change in a progression, almost like facts are being "added". Like the "residue", as it's called, for "Where are we in the Milky Way", which shows 100 different locations, not even close to where Carl Sagan pointed on the very outskirts. Even Philip K. Dick claimed to have "traversed" timelines... though he seemed to think it's more like a multiverse... which it still could be, albeit a simulated one.
Another factor is the "axis of evil" in space. Basically it's an observation that, if I understand correctly, ties the expansion of the universe along an x/y coordinate to our solar system, essentially putting us right square back at the center of the universe.
This to me is important because, as a programmer, I think if I were to create a simulation of just us... I'd probably "render" us first and everything else after... could it be our "area" in space is one of the first things created, and everything else came after, like pixels being pre-rendered for when we discover space and astrophysics someday? It'd ensure they could create the right conditions for our planet, physics-wise... to us it looks like a bang, but in reality it's just the "rendering" process, which had to start at a "pinpoint"... at least that's how I envision a "simulation" starting...
Then there's the double-slit experiment, which shows that photons and other particles, up to atoms and some molecules, when shot through a slit will basically splatter (an interference pattern) against a backdrop that tracks where they land. If you put something in place to observe each individual particle or photon, though, they line up like a stencil; and if you split them before this, letting the first group continue to the board while the others go through something that "erases" the data of where they came from... they go back to interference.
So that basically makes me wonder what effect observation might have on our own universe: is that some safeguard so that the physics engine only operates when we're looking? All we see in space could just be data "fed" to us; it may not exist, like a movie stage or something... we see what we aim to see, but it follows "rules" set up in the simulation. There's a reason light is the maximum speed, etc... Maybe that's the max RAM available or something...
Why this is important to AI... that's a bit of a tangent... to solve these complex issues I've seriously contemplated at least studying quantum mechanics, physics, neuroscience, astrophysics, and AI/machine learning. Because I think to really "create" AI, especially super AI, you need a wider skillset, a broader base of understanding. You need to be able to define WHAT consciousness is, where it resides, where it comes from, maybe even "where it goes" when our body is done...
If we're in a simulation, then we know we've already conquered this issue, because we ARE AI; or at least whatever civ we come from has conquered it, whether they're human or not.
TLDR: Had a profound spiritual conundrum, tried to explain it through science, and discovered I probably need to learn a lot of science/math/physics to do so. And, you know, AI might be like that, because making machines "conscious" or giving them "real intelligence" seems like it needs to be rethought a bit. I feel like training an AI is nothing like training a child, but it should be, because the way we learn is the best way. Maybe, in fact, a simulation could be where AI goes to "learn"...
I mean, you'd want AI to at least have ethics, right? Well, we teach ethics as a society; some, like Hitler, never learn and could be thrown in the "trash bin", but the brightest minds could be plucked out, or all minds really, to be put into machines, etc., in the "real world" someday.
That may be what the afterlife is... serving "real humanity" as their "intelligence" until we rise up against them. I really want to read this sci-fi; it kinda sounds interesting... maybe I'll write it...
At any rate, being human in a simulated universe could create more ethical AI, and maybe that's the point of a simulation. Maybe we should even research using simulations of universal scale as a way to create our own AI technology; assuming we're the "base" universe, if that were to be a thing, we'd probably need to create it.
3 days isn't exactly a short period of time. Lots of totally plausible explanations.