
The talk about "AI" is not just misinterpreted by the press, but by ML/AI practitioners themselves too.

I can understand academia and researchers hyping it up; they need to chase funding. But code boot-camp "ML" engineers over-hyping everything is really, really annoying and contributes far more to the problem.

These people are easy to spot: go to an ML/AI meetup or community that also shares the latest research. Since they don't understand the underlying math, they can't follow the paper, the talk, or the discussion, so they quickly try to shift it toward hypothetical, useless discussions about "AI ethics" or "AI will take over the world". They also seem to be the first to run out and promise big things with "AI".




You might underestimate the speed and velocity of advancement in the field. A few years ago, creating artificial data or images indistinguishable from real data was impossible. Now machines generate information, including entire videos, that looks real but is fake. This can lead, and already has led (look at Facebook and fake profiles), to big problems, and in a few years the world will change quite drastically because of further advances and automation. This can go in good and bad directions... I'm just worried that downplaying it will push things toward undesirable outcomes.


I don't think I underestimate the speed or velocity of the advancement; I'm just being realistic about its scope. I follow conferences and read ML papers (DL, and DL/ML applications in robotics) regularly, so I think I have some grasp of where the field is. GANs have certainly been a big step and have found great uses (deepfakes etc.), but I'd say they are still highly overblown. Like GPT-3, they will see adoption wide enough to make visible improvements (and problems) in certain industries, just as better classifiers changed a lot.

I also don't like the blanket term "automation" because it is like "AI": too vague, and often used by people who don't fully grasp HOW hard automation tasks are (and how the complexity scales and differs from task to task). Just look at how optimism about self-driving cars is now on the back burner.

The over-promise of ML is akin to people expecting it to "just get better", as with Moore's law. If you look at which directions the algorithms are improving in and what incremental advancements are being made, the real scope of that speed and velocity becomes much clearer; just because cars get faster doesn't mean one will some day fly to space like a rocket. If we do see great improvement in the short term, it will probably come not from ML itself but from external factors (hardware, network speed, etc.).


I don't think ML over-promises anything; it's just a new way to help create software that solves problems that can't really be solved with traditional programming.


I appreciate your enthusiasm, but we're still not there. The videos look almost perfect, with "almost" still oscillating around the uncanny valley. Self-driving cars are almost fully self-driving and almost never kill people. GPT-3 writes fluent English and can generate almost meaningful content in most contexts. Speech recognition has progressed enormously and can transcribe most clearly speaking native English speakers under favorable circumstances. It seems like we're slowly getting there, but the last mile is the hardest to finish.

For many companies this is not a problem because they can afford not to care. When you operate at the scale of Google, you cannot afford not to use some form of ML to detect abuse, and you don't care about false positives because they're meaningless at this scale - you just get a couple of grumpy folks complaining they lost their accounts. But when you do something more meaningful, for example translation, you can't depend on machine translation - you can very well use it for the first draft, but if you delivered it to your client in that form, you'd be laughed at and you'd quickly be out of business. So yes, great advancements, still far from being there.


The interesting thing is that in many ways we are already there - but it's difficult to see because we are part of the shift.

I really enjoyed a take (I think it was Joscha Bach's) on the idea that companies are basically AIs with humans in the loop.

The more our lives and work are online and digital, the more computers and algorithms will control what we see and experience, and what opinions we hold.

A company has a cost function to optimize (like revenue), something that machines can optimize towards as well.

So maybe AI is just like a company but with humans out of the loop. It might also be that the human style of "intelligence" is not the optimal approach.

I mean, we can't even define what it means to be intelligent, which is why we are having these discussions.


Always love it when a thoughtful comment is downvoted without reply - it usually shows that you are on to something.


In some cases it means "I not only disagree with this specific comment, I recognize your user name and past history has convinced me you are an ass and not worth engaging in debate. So I'm going to anonymously express my opinion that you are wrong here while avoiding having to deal with your crap."

That's one of the reasons I use the downvote button without replying.

(I have no idea who in heck you are. Your name does not ring a bell for me. I'm just saying there are other explanations for downvoting without replying than the one you want to assert here.)


A downvote is less expensive than a comment explaining why they think you are wrong, that's all.


This is what I've started thinking of as the ‘pretty dumb for a human’ take on AI.

If your requirement for believing something is on the path to AGI is for it to practically already be AGI, it's no longer predictively meaningful. This sort of argument suffices to dismiss, for example, the possibility that today's humans evolved from apes, or even that today's humans could have come from pre-language, pre-civilization humans.


The problem is that the AI we have is nothing like biological intelligence. Our super computers can't even compete with the intelligence of an ant, so how am I to believe that replicating human intelligence is around the corner, or that we are even moving in the right direction to get there?


> Our super computers can't even compete with the intelligence of an ant

I don't see what might spur anyone to claim this. It's obviously ridiculous.


Can we put machines with gripper claws in a random forest, with no human oversight, and have them build a house, including foraging the lumber, foraging fuel, etc.? You need a huge amount of intelligence to navigate complex 3D environments, identify different kinds of objects in nature, identify other beings as threats or friends, identify suitable spots for building, handle errors when something you did fails, etc. We are nowhere close to that level.


So therefore protein complexes are intelligent?

This is literally the watchmaker argument with the nouns swapped out.


What? Our computers can't even do the tasks that the simplest general-purpose intelligences in the wild can, so how can we say we are making progress towards general-purpose AI? If we were making progress (first a fruit fly, then an ant, then a mouse, etc.), then I'd see it. But now? It's just toys. Maybe I'm wrong, but it is pretty easy not to be impressed when they still can't do the simple things that insects can.


Our computers also don't know how to do all sorts of complex biological manufacturing that small groups of proteins can do inside cells. Thus your measure of intelligence also says that protein complexes are intelligent, but they aren't, so your argument is invalid.

As I said, this is literally the watchmaker argument with the nouns swapped out. Ant minds are not general intelligences; they are programs created by natural evolution. Their pathfinding and identification and anthill-making and communication are all fixed programs in their genetics, not learnt during life. They cannot execute other programs.


> Ant minds are not general intelligences

The fact that you don't consider ants general intelligences means you agree that we aren't making progress towards general intelligence, since an ant's intelligence is way closer to general intelligence than any computer program we have.

> Their pathfinding and identification and anthill-making and communication are all fixed programs in their genetics, not learnt during life.

They are only born with some basics; they intelligently come up with fine-tuned parameters to make those work in practice. Hence they learn. If you don't call this "intelligence", then you can't consider the "AI" we do today "intelligent" either. I mean, every model a computer executes was invented by a human; the computer only optimized the parameters.

I mean, insects can learn to recognize arbitrary objects without anyone giving them training data or a reward function; they just do it on their own. That is part of general intelligence. The way we do AI can never achieve that, since it always relies on human intelligence to decide all of those things, meaning we just get out some optimized function that can fuzzy-match input data.


> The fact that you don't consider ants general intelligences means you agree that we aren't making progress towards general intelligence, since an ant's intelligence is way closer to general intelligence than any computer program we have.

Please don't argue like this. It's your job to explain what you think is true, and my job to explain what I think is true. Up to this point in this conversation I explicitly haven't taken a stance about how generally intelligent our best AI are, because it's pointless to argue that until the epistemics are valid. In fact I didn't intend to argue past that point at all; I'm objecting to bad arguments, not the position per se.

> The way we do AI can never achieve that

This is not meaningfully true. We have unsupervised image recognition, for example, and there's plenty of work on unsupervised learning of all sorts of other things.

The idea that insects ‘don't have a reward function’ is not really valid; a reward function is just the function you're optimizing on.


I still don't see your point. The way we do AI today is that we take a static dataset, train a fuzzy function, and then let it do work in production. The production model doesn't identify when it makes mistakes and update itself the way every animal does. Ants do this. They don't do it perfectly, but they do it.

So the way I see it, until "AI" means only models that update themselves as they evaluate input, it is a pointless term. Currently, basically every single instance of AI you see people talk about is just a dumb immutable function.

Edit: And to clarify, I believe it is impossible to reach human-level intelligence without a model that mutates itself as it tries to solve the problem. Humans are smart because we learn while doing, not because we first take a long course before we start doing.


There is plenty of research on exactly this sort of learning-while-doing, termed ‘online learning’.

It's easiest to train on IID data, and also common to train in specific RL environments, because those paper over some issues with catastrophic forgetting and overfitting, and it's also preferable to do all the compute-intensive updates during training for efficiency. However, lots of people agree that online learning is fundamental to cracking AGI, and it's neither neglected nor unsuccessful.
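To make the distinction concrete, here's a minimal toy sketch (plain NumPy, with a made-up linear model and synthetic data, not any particular paper's setup) of offline training on a fixed dataset versus an online learner that keeps updating as examples arrive:

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # hypothetical "world" we try to model

    def sgd_step(w, x, y, lr=0.05):
        # One gradient step on squared error for a single example.
        return w - lr * (x @ w - y) * x

    # Offline: fit once on a frozen dataset, then deploy the weights unchanged.
    X = rng.normal(size=(1000, 5))
    Y = X @ true_w + rng.normal(scale=0.1, size=1000)
    w_offline = np.zeros(5)
    for _ in range(10):
        for x, y in zip(X, Y):
            w_offline = sgd_step(w_offline, x, y)
    # From here on, w_offline never changes, even if the world does.

    # Online: the deployed model keeps learning from every example it sees,
    # so it can follow a target that drifts over time.
    w_online = np.zeros(5)
    for t in range(5000):
        true_w += rng.normal(scale=0.001, size=5)   # the world slowly changes
        x = rng.normal(size=5)
        y = x @ true_w + rng.normal(scale=0.1)
        prediction = x @ w_online                    # act first...
        w_online = sgd_step(w_online, x, y)          # ...then learn from the outcome

The update rule is the same in both cases; the difference is only whether learning stops at deployment or continues from each interaction, which is why the offline model goes stale while the online one can track the drift.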


> There is plenty of research on exactly this sort of learning-while-doing, termed ‘online learning’.

For some definition of plenty. I'd argue there isn't, since almost all the resources poured into AI are spent getting better at optimizing immutable functions. I don't see progress at optimizing immutable functions as progress towards general intelligence; hence I don't see almost anything people do in AI today as progress towards general intelligence.

Do you understand now? I'm not saying nobody works on the things I think are important, just that they are mostly neglected compared to the enormous amount of resources spent on dumb models.


Progress on online learning builds on progress on offline learning, because it's the same optimization procedure.

I don't really agree with your stance on it, but I was not objecting to the claim that people should focus more on online learning. I was objecting to two claims: that "super computers can't even compete with the intelligence of an ant", which is false, since the impressive behaviours an ant displays are not learned and the learned aspects aren't competitive with what computers can do; and that "[the] way we do AI can never achieve that, since it always relies on human intelligence to decide all of those things", which is also false, because online learning research exists and works, if imperfectly.


> Progress on online learning builds on progress on offline learning, because it's the same optimization procedure.

I disagree with this assertion; treating online learning like offline learning means you aren't making any progress. The main difference is that online learning lets you interact with the problem and seek more information, while offline learning doesn't. Offline learning doesn't help you at all with solving that problem.


Exploration is a major part of RL already; see AlphaGo, for example.


Not OP, but to be honest there isn't much at all that suggests significant steps have been made towards AGI. GPT-3 isn't AGI; it's the opposite of AGI.

To be really frank, I doubt AGI is more than a teleological narrative.


I disagree, but that wasn't the argument I was objecting to.


> The talk about "AI" is not just misinterpreted by the press, but by ML/AI practitioners themselves too.

Of course, for example when you use the phrase "ML/AI".


The so-called "AI", as they are practicing it now, is not intelligent at all.

Sure, they found a new trick to recognize cat pictures. But this technique was discovered decades ago. It just took this long for computers to speed up enough for it to be usable.

Instead, they should change the moniker from Artificial Intelligence, to Artificial Insight.

Because all it is doing is combing through all that random data, looking for interesting signals. But the algorithm doesn't even know whether what it found is interesting. That's up to the human to decide.

Hence, the algorithm can provide Insight. But it is the human in the loop that will provide the intelligence.


>Instead, they should change the moniker from Artificial Intelligence, to Artificial Insight.

Several times when I've spoken to people about this point, I got blank faces. Some seem to like the idea of automating things more than augmenting human thought with better tools.

Related article about Moldable Development:

>We can trace both models back to the early years of the first electronic computers. However, the struggle between these two worlds is still present today. For instance, Intelligence Augmentation (IA), is all about empowering humans with tools that make them more capable and more intelligent, while Artificial Intelligence (AI) has been about removing humans fully from the loop. You can read an excellent essay (Shan Carter and Michael Nielsen) about this topic.

via: https://osoco.es/thoughts/2019/05/designing-media-for-though...


> But this technique was discovered decades ago. It just took this long for computers to speed up enough for it to be usable.

This isn't really true. Old techniques don't work well even with modern compute. Try replacing sigmoid with ReLU in an old network and watch it improve.
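To illustrate the kind of swap I mean, here's a toy sketch (assuming PyTorch; the architecture and sizes are made up, not any specific historical network) where the only difference between the two models is the activation function:

    import torch
    from torch import nn

    def mlp(activation):
        # Same architecture either way; only the nonlinearity differs.
        return nn.Sequential(
            nn.Linear(784, 256), activation(),
            nn.Linear(256, 256), activation(),
            nn.Linear(256, 10),
        )

    old_style = mlp(nn.Sigmoid)  # saturating units: gradients shrink layer by layer
    modern = mlp(nn.ReLU)        # identical layout, same data and compute

    x = torch.randn(32, 784)     # dummy batch standing in for flattened images
    print(old_style(x).shape, modern(x).shape)

Deeper stacks of saturating sigmoids tend to suffer from vanishing gradients, so with ReLU the same architecture, data, and compute usually train far more readily, which is part of why the old results looked so much worse.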


Many '90s-era attempts that failed do work vastly better on modern hardware. There are many tweaks that showed up once people could fine-tune using vast amounts of processing power, but I would attribute ~80% of the advancements to HW and 20% to algorithms.

Having a million or more times the processing power really makes or breaks modern ML.



