An "AI winter" like we had before isn't plausible now. In the past, it was the failure of AI research to deliver business impact. The current ML boom is fundamentally different in that it already has become a standard part of the stack at major companies:

- Recommendation engines (Google, YouTube, Facebook, etc.)

- Fraud detection (Stripe, basically all banks)

- ETA prediction (Uber, every delivery app you use)

- Speech-to-text (Siri, Google Assistant, Alexa)

- Sequence-to-sequence translation (used in everything from language translation to medicinal chemistry; a toy sketch follows at the end of this comment)

- The entire field of NLP, which now powers basically any popular app you use that analyzes text content or has some kind of filtering functionality (e.g. toxicity filtering on social platforms).

And that's a very cursory scan. You can go much, much deeper. That isn't to say there isn't plenty of snake oil out there, just that ML (and by extension, AI research) is generating billions in revenue for businesses today. As a result, there's not going to be a slowdown in ML research for a while.
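
To make the seq2seq item above concrete, here's a minimal translation sketch using the Hugging Face transformers library (assumes transformers plus a backend like torch and sentencepiece are installed; t5-small is just an illustrative model choice, not what any of these companies ship):

    # Toy sketch: English-to-German translation with a small pretrained
    # seq2seq model. Model and setup are illustrative only.
    from transformers import pipeline

    translator = pipeline("translation_en_to_de", model="t5-small")
    result = translator("Machine learning is now a standard part of the stack.")
    print(result[0]["translation_text"])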




Partly this is because we've given up trying to get computers to "understand" and have focused on making them useful with sophisticated software. That is, the work is no longer about artificial intelligence, but about trained, new-style expert systems.


This is just redefining what counts as "real intelligence" to raise the bar.

Once we have computers that can think like humans, there'll be people saying 'oh, it's not real "understanding", it's just generating speech that matches things it's heard in the past with some added style', not realizing that the same also applies to human writers.

As long as the bar keeps rising and we've found business applications, there will be no AI winter.


It isn't redefining anything. There's a classification in AI research called "strong AI" that includes Artificial General Intelligence and machine consciousness. Current uses of ML are instances of so-called "weak AI", which is focused on problem-solving and utility.

No redefinition, and using proper distinctions takes nothing away from the accomplishments made in the field. We don't even have anything close to a scientific understanding of awareness or consciousness. Being able to create machines actually possessing awareness seems fairly far off.

Being able to create machines that can simulate the qualities of a conscious being doesn't seem so far off. I suspect that when we get there, the data will point to a qualitative difference between real and simulated consciousness. Commercial interests, and likely government bureaucracies, will have a vested interest in drowning out such ideas, though.

The bar hasn't moved. We've just begun aiming at different targets. That we succeed in hitting the more modest ones only makes sense.


This is a nitpick, but regarding "strong AI" and "weak AI": I take issue with using those expressions to refer to actual software systems, or to sub-domains of AI research. They in fact refer to two hypotheses in the philosophy of AI, not to any concrete AI research field. Even under the weak AI hypothesis, a system that perfectly replicates the appearance of human consciousness is not conscious. See [0]. The strong vs. weak dichotomy is therefore unrelated to progress toward emulating human intelligence and behavior; it is concerned with whether a certain fundamental barrier can be broken, not unlike the speed of light.

The meanings people ascribe to those terms nowadays are very diverse and distant from the original hypotheses. Sometimes language evolves usefully so that expression is easier and ideas can be conveyed more accurately, but I'm afraid when it comes to "strong" and "weak" AI, another, more damaging kind of semantic drift has taken place that has muddied and debased the original ideas.

Those terms are victims of the hype surrounding AI. I suspect this is part of why the field has trouble being taken seriously.

[0] https://ai.stackexchange.com/questions/74/what-is-the-differ...


AGI is not currently in scope for any solutions on the market. The current crop of AI systems are simply analytical engines.


AGI isn't something that is scoped. It must be painted.


Isn’t artificial intelligence a very broad term anyway, of which expert systems, statistics, fuzzy logic, and all the new-wave neural network / deep learning approaches are examples?

You seem to be thinking about artificial general intelligence, which is much more difficult to achieve.


Time was that the term AI meant what has now come to be referred to as AGI or machine consciousness. Marvin Minsky wasn't looking to make better facial recognition or better factory robots. His research was about making a mind.

Twenty years ago, that kind of work ran into a brick wall. But neural networks were still useful. Enter machine learning, which borrowed the term "AI." But artificial intelligence, for many, always meant the dream of building R. Daneel Olivaw, or R2D2.


AGI is difficult even to define; it's still up for debate what counts as intelligence, whether it's present in animals, and whether it can exist without a "subconscious" or "feelings".


I don't know if something that has a survival instinct counts as AGI, but to me that's enough to call it strong AI. Does the Caenorhabditis elegans worm have AGI?


The hard problem of consciousness, which everyone has a hard time defining?


I wonder if we could detect harassment in a chat today, for example with GPT-3 or something like it.


Highly unlikely. Given just a chat log, it's pretty much impossible even for most people to tell the difference between friendly banter, a heated debate, trolling, and genuine harassment.

There's often much more information required (prior communication history of the involved parties, previous activities in other chats, etc.).

Using automated systems like GPT-3 would simply lead to people switching to different languages (not in the literal sense, but using creative metaphors and inventing new slang).

Pre-canned "AI" is unable to adapt and learn and I doubt that any form of AGI (if even possible without embodiment) that isn't capable of real-time online learning and adapting would be up to the task.


Mmh, I'm not convinced. There are patterns in aggressiveness. One of them is that the harasser talks about the victim or something strongly linked to the victim (like their work, a family member, etc.).


We can, though the teams I know who are doing it are not using GPT-3. If you look at any platform that has live chat features (think Instagram Live or Twitch) and an ML team, then there is almost certainly someone working on toxicity filtering, along with propaganda detection and the like. It's a really, really hard problem, particularly when you consider how chat platforms tend to develop their own shorthand and slang.
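
For a sense of the general shape of such a filter, here's a toy sketch: bag-of-words features plus logistic regression in scikit-learn. The training examples are made up, and real systems use transformer models, far more data, and conversational context:

    # Toy toxicity classifier: TF-IDF features + logistic regression.
    # Hypothetical training data; production systems are far more sophisticated.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great stream today, well played",
             "nobody wants you here, just quit",
             "thanks for the raid!",
             "you are worthless and everyone knows it"]
    labels = [0, 1, 0, 1]  # 0 = benign, 1 = toxic (made-up labels)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["please quit, nobody wants this"]))  # likely [1]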


A double bind detector!


> ETA prediction

I think we are undervaluing basic statistics in this realm (see the sketch at the end of this comment).

> (Uber, every delivery app you use)

If you treat these apps as logistics applications, AI still has a place handling optimization problems.

> Recommendation engines

A recommendation engine is by and large about telling you what you want to hear. How's that been working out for us?

We are going to have a very hard time separating them from the divisiveness and radicalization problems with social media now. This is quite likely an intractable problem. If we don't collectively draw a breath and back away from these strategies, things are going to get very, very bad.

> Fraud detection

The vast majority of the time I spend interacting with my bank is spent explaining to them that I'm not a monster for saving my money most of the time and making a big purchase/trip every couple of years. Stop blocking my fucking card every time I'm someplace nice. Unfortunately since these services tend to be centralized, switching banks likely punishes nobody but me.

The problem (advantage?), I think, with your list in general is that, with the exception of text-to-X, many of these solutions fade into the background, or may be carved out of "AI" to become their own thing, as other domains have in the past.
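
On the ETA point above, here's roughly what a "basic statistics" baseline looks like: bucket historical trips and predict the median duration. Data and bucketing scheme are hypothetical:

    # Toy ETA baseline: median historical trip duration per
    # (origin zone, destination zone, hour-of-day) bucket.
    from collections import defaultdict
    from statistics import median

    history = defaultdict(list)  # bucket -> list of past durations (minutes)
    history[("A", "B", 17)] += [22, 25, 31, 24]   # rush hour
    history[("A", "B", 3)]  += [12, 14, 13]       # middle of the night

    def eta(origin, dest, hour):
        samples = history[(origin, dest, hour)]
        return median(samples) if samples else None

    print(eta("A", "B", 17))  # -> 24.5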


But most of those are terrible! Google has actually gotten worse over the years; literally all I've ever heard about fraud detection is about its vast numbers of both false positives and false negatives; speech-to-text is utter garbage where you have to repeat yourself, consciously enunciate, and avoid tricky grammatical constructs; and the thought of the same tech that keeps locking me out of my credit card deciding whether I'm being "toxic" is frankly terrifying.

AI might be producing value, but only in the sense of getting half of the quality for a quarter of the cost.


This has not been my experience at all. My standard Google results are still very good, and the direct answers they extract from web results are amazing. Speech to text is terrific. It consistently gets even obscure proper nouns and local business names on the first try.


> In the past, it was the failure of AI research to deliver business impact.

That's not quite right - the problem wasn't that no impact was delivered, but that the field over-promised and under-delivered. I'm fairly optimistic about some of the machine learning impact (even some we've seen already), but it's by no means certain that business interest won't turn again. We are very much still in the honeymoon phase.


AI was also having huge success before the last AI winter. It's mostly a question of buzzword turnover, not underlying technology.


You need to understand "AI winter" as referring to the funding of AI research, academic funding above all and private funding to a lesser extent. It goes through booms of optimism ("We'll have self-driving cars in 5 years! Here's a hundred million") followed by pessimism ("It's been 15 years and all we have is driver-assist features; we're going to fund grants in more practical areas now"). The pessimism is followed by a drying up of research funds for AI, even though there is no drying up of research in general. This winter is very real if you are trying to get a job in AI research, even though its impact is quite limited, as most CS research is not and never has been AI-focused. I would say that CS in general is a very fad-driven field, so this phenomenon is to be expected.


Not quite, as all funding is cyclical. My point was that the bust cycle had little to do with marketable products last time, and the same is true this time around. Consider the ideal outcome: once you have actual self-driving cars, funding for self-driving-car research dries up. Success or failure doesn't actually matter here; the funds are going away either way.

The post-Facebook boom-and-bust cycle around social networks wasn't such a big deal for those involved, because skills transfer. The issue is that a PhD in AI-related topics transfers far less. Time it just right and a $500k/year job is on the table, but get the cycle wrong and it's a waste of time and money.


I wasn't around back then. I'm curious: what were the business use cases of AI/ML at the time?


Optical sorting of fruit (and various other things) is a great example of early AI techniques making significant real-world progress. It's not sexy by today's standards, but it's based on a lot of early image-classification work.


We are talking about money-making applications here. Not progress.


By progress I mean actual money-making products. If your widget does shape recognition based on training sets while your competitors hard-code color recognition, you end up making more money.


Spam filtering and route-planning are big successes from previous generations of techniques labelled as "AI".
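
As an illustration of how classic those techniques are, route-planning boils down to graph search. Here's a minimal Dijkstra sketch over a made-up road graph (edge weights in minutes):

    # Toy route planner: Dijkstra's algorithm on a tiny hypothetical graph.
    import heapq

    graph = {"A": [("B", 4), ("C", 2)],
             "B": [("D", 5)],
             "C": [("B", 1), ("D", 8)],
             "D": []}

    def shortest_time(start, goal):
        pq, seen = [(0, start)], set()
        while pq:
            cost, node = heapq.heappop(pq)
            if node == goal:
                return cost
            if node in seen:
                continue
            seen.add(node)
            for nbr, weight in graph[node]:
                heapq.heappush(pq, (cost + weight, nbr))
        return None  # unreachable

    print(shortest_time("A", "D"))  # -> 8, via A -> C -> B -> D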


You could say the same thing about the first AI winter. Programming concepts developed for symbolic AI are in every language now.


Search is a type of machine learning!



