Talk about “AI” in the press these days tends to conflate two things (nostalgebraist.tumblr.com)
69 points by luu on Dec 12, 2020 | 77 comments



The talk about "AI" is misinterpreted not just by the press, but by ML/AI practitioners themselves too.

I can understand academia & researchers hyping it up; they need to chase funding. But code boot-camp "ML" engineers over-hyping everything is really, really annoying and contributes far more to the problem.

These people are easy to spot: go to an ML/AI meetup or community that also shares the latest tech. Since they don't understand the underlying math, they can't follow the paper, the talk, or the discussion, so they quickly try to shift the discussion to "AI ethics" or "AI will take over the world" hypotheticals that go nowhere. They also seem to be the first to run out and promise big things with "AI".


You might underestimate the speed and velocity of advancement in the field. A few years ago, creating artificial data or images indistinguishable from real data was impossible. Now machines generate content, like entire videos, that looks real but is fake. This can, and already has (look at Facebook and fake profiles), led to big problems, and in a few years the world will change quite drastically because of further advances and automation. This can go in good and bad directions... I'm just worried that downplaying it will cause a pivot toward undesirable outcomes.


I don't think I underestimate the speed or velocity of the advancement; I'm just being realistic about its scope. I follow conferences and read ML papers (DL & DL/ML applications in robotics) regularly, so I think I have some grasp of the advancement in the field. GANs have certainly been a big step and have found great uses (deepfakes etc.), but I'd say they are still highly overblown. Like GPT-3, they will see enough adoption to make visible improvements (and problems) in certain industries, just as better classifiers changed a lot.

I also don't like the blanket term "automation" because it is like AI: too vague, and often used by people who don't fully grasp HOW hard automation tasks are (and how the complexity scales and differs from task to task), just as optimism about self-driving cars is now on a lower flame.

The over-promising of ML is akin to people expecting it to "just get better", similar to Moore's law. If you take a look at which directions the algorithms are improving in and what incremental advancements are being made, the scope of that speed and velocity becomes much clearer; and just because cars get faster doesn't mean one will one day fly to space like a rocket. If we see great improvement in the near future, it will probably not come from ML itself, but from external factors (hardware, network speed, etc.).


I don't think ML over-promises anything; it's just a new way to help create software that solves problems that can't really be solved with traditional programming.


I appreciate your enthusiasm, but we're still not there. The videos look almost perfect, with "almost" still oscillating around the uncanny valley. Self-driving cars are almost totally self-driving and are almost not killing people. GPT-3 uses fluent English and can be used to generate almost meaningful content in most contexts. Speech recognition has progressed enormously and can be used to transcribe most clearly enunciating native English speakers in favorable circumstances. It seems like we're slowly getting there, but the last mile is the hardest to finish.

For many companies this is not a problem because they can afford not to care. When you operate at the scale of Google, you can not afford not to use some form of ML to detect abuse, and you don't care about false positives because they're meaningless at this scale - you just get a couple of grumpy folks complaining they lost their accounts. But when you do something more meaningful, for example translation, you can't depend on machine translation - you can very well use it for the first draft, but if you delivered it to your client in that form, you'd be laughed at and you'd quickly be out of business. So yes, great advancements, still far from being there.


The interesting thing is that in many ways we are already there - but it's difficult to see because we are part of the shift.

I really enjoyed (I think it was Joscha Bach's) take on the idea that companies are basically AIs with humans in the loop.

The more our life and work is online and digital the more computers and algorithms will control what we see and experience, and what opinions we have.

A company has a cost function to optimize (like revenue), something that machines can optimize towards as well.

So maybe AI is just like a company but with humans out of the loop. It might also be that the human way of "intelligence" is not the most optimal approach.

I mean, we can't even define what it means to be intelligent, which is why we are having these discussions.


Always love it when a thoughtful comment is downvoted without reply - it usually shows that you are on to something


In some cases it means "I not only disagree with this specific comment, I recognize your user name and past history has convinced me you are an ass and not worth engaging in debate. So I'm going to anonymously express my opinion that you are wrong here while avoiding having to deal with your crap."

That's one of the reasons I use the downvote button without replying.

(I have no idea who in heck you are. Your name does not ring a bell for me. I'm just saying there are other explanations for downvoting without replying than the one you want to assert here.)


A downvote is less expensive than a comment explaining why they think you are wrong, that's all.


This is what I've started thinking of as the ‘pretty dumb for a human’ take on AI.

If your requirement for believing something is on the path to AGI is for it to practically already be AGI, it's no longer predictively meaningful. This sort of argument suffices to dismiss, for example, the possibility that today's humans evolved from apes, or even that today's humans could have come from pre-language, pre-civilization humans.


Problem is that the AI we have is nothing like biological intelligence. Our super computers can't even compete with the intelligence of an ant, so how am I to believe that replicating human intelligence is around the corner, or that we are even moving in the right direction to get there?


> Our super computers can't even compete with the intelligence of an ant

I don't see what might spur anyone to claim this. It's obviously ridiculous.


Can we put machines with grip claws in a random forest, with no human oversight, and make them build a house? Including foraging the lumber, foraging fuel, etc.? You need a huge amount of intelligence to navigate complex 3D environments, identify different kinds of objects in nature, identify different beings as threats or friends, identify suitable spots for building, handle errors when stuff you did fails, etc. We are nowhere close to that level.


So therefore protein complexes are intelligent?

This is literally the watchmaker argument with the nouns swapped out.


What? Our computers can't even do the tasks that even the simplest of general-purpose intelligences in the wild can, so how can we say that we are making progress towards general-purpose AI? If we were making progress, like first a fruit fly, then an ant, then a mouse, etc., then I'd see it. But now? It's just toys. Maybe I'm wrong, but it is pretty easy not to be impressed when they still can't do simple things that insects can do.


Our computers also don't know how to do all sorts of complex biological manufacturing that small groups of proteins can do inside cells. Thus your measure of intelligence also says that protein complexes are intelligent, but they aren't, so your argument is invalid.

As I said, this is literally the watchmaker argument with the nouns swapped out. Ant minds are not general intelligences, they are programs created by natural evolution. Their pathfinding and identification and anthill-making and communication are all fixed programs in their genetics, not learnt during life. They cannot execute other programs.


> Ant minds are not general intelligences

The fact that you don't consider ants general intelligences means you agree that we aren't making progress towards general intelligences, since ants' intelligence is way closer to general intelligence than any computer program we have.

> Their pathfinding and identification and anthill-making and communication are all fixed programs in their genetics, not learnt during life.

They are only born with some basics; they intelligently come up with the fine-tuned parameters that make them work in practice. Hence they learn. If you don't call this "intelligence", then you can't consider the "AI" we do today "intelligent". I mean, every model a computer executes was invented by a human; the computer only optimized the parameters.

I mean, insects can learn to recognize arbitrary objects without anyone giving them training data or a reward function; they just do it on their own. That is a part of general intelligence. The way we do AI can never achieve that, since it always relies on human intelligence to decide all of those things, meaning we just get out some optimized function that can fuzzy-match input data.


> The fact that you don't consider ants general intelligences means you agree that we aren't making progress towards general intelligences, since ants' intelligence is way closer to general intelligence than any computer program we have.

Please don't argue like this. It's your job to explain what you think is true, and my job to explain what I think is true. Up to this point in this conversation I explicitly haven't taken a stance about how generally intelligent our best AI are, because it's pointless to argue that until the epistemics are valid. In fact I didn't intend to argue past that point at all; I'm objecting to bad arguments, not the position per se.

> The way we do AI can never achieve that

This is not meaningfully true. We have unsupervised image recognition, for example, and there's plenty of work on unsupervised learning of all sorts of other things.

The idea that insects ‘don't have a reward function’ is not really valid; a reward function is just the function you're optimizing on.


I still don't see your point. The way we do AI today is that we have a static dataset, train a fuzzy function, and then let it do work in production. The production model doesn't identify when it makes mistakes and update itself the way every animal does. Ants do this. They don't do it perfectly, but they do it.

So the way I see it, until AI starts meaning only models that update themselves as they evaluate input, it is a pointless term. Currently, basically every single instance of AI you see people talk about is just a dumb immutable function.

Edit: And to clarify, I believe it is impossible to reach human-level intelligence without a model that mutates itself as it tries to solve the problem. Humans are smart because we learn while doing, not because we first take a long course before we start doing.


There is plenty of research on exactly this sort of learning-while-doing, termed ‘online learning’.

It's easiest to train on IID data, and also common to train in specific RL environments, because those paper over some issues with catastrophic forgetting and overfitting, and it's also preferable to do all the compute-intensive updates during training for efficiency. However, lots of people agree that online learning is fundamental to cracking AGI, and it's neither neglected nor unsuccessful.
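To make that concrete, here is a bare-bones sketch of the online-learning loop using scikit-learn's SGDClassifier.partial_fit; the data stream and the "hidden rule" are invented purely for illustration, not taken from any particular paper:

    # Minimal sketch of online ("learning-while-doing") training with
    # scikit-learn's SGDClassifier.partial_fit. The data stream and the
    # hidden rule are invented for illustration only.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier()              # linear model, updated one example at a time
    classes = np.array([0, 1])
    correct = 0

    for step in range(1, 1001):
        x = rng.normal(size=(1, 2))                  # a new observation "in production"
        y = np.array([int(x[0, 0] + x[0, 1] > 0)])   # the world's hidden rule
        if step > 1:
            correct += int(model.predict(x)[0] == y[0])  # act on current knowledge...
        model.partial_fit(x, y, classes=classes)         # ...then update from the outcome

    print(correct / 999)   # cumulative accuracy; it rises as the model learns on the fly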


> There is plenty of research on exactly this sort of learning-while-doing, termed ‘online learning’.

For some definition of plenty. I'd argue there isn't since almost all resources poured into AI are spent getting better at optimizing immutable functions. I don't see progress in getting better at optimizing immutable functions as progress towards general intelligence, hence I don't see almost anything people do in AI today as progress towards general intelligence.

Do you understand now? I'm not saying nobody works on the things I think are important, just that they are mostly neglected compared to the enormous amount of resources spent on dumb models.


Progress on online learning builds on progress on offline learning, because it's the same optimization procedure.

I don't really agree with your stance on it, but I was not objecting to the claim that people should focus more on online learning. I was objecting to the claims “super computers can't even compete with the intelligence of an ant”, which is false, since the impressive behaviours an ant displays are not learned, and the learned aspects aren't competitive with what computers can do, and that “[the] way we do AI can never achieve that, since it always relies on human intelligence to decide all of those things”, which is also false, because online learning research exists and works, if imperfectly.


> Progress on online learning builds on progress on offline learning, because it's the same optimization procedure.

I disagree with this assertion; treating online learning like offline learning means you aren't making any progress. The main difference is that online learning lets you interact with the problem and seek more information; offline learning doesn't, and it doesn't help you at all in solving that problem.


Exploration is a major part of RL already, see AlphaGo for example.


Not OP, but to be honest there isn't much at all that suggests significant steps have been made towards AGI. GPT-3 isn't AGI, it's the opposite of AGI.

To be really frank, I doubt AGI is more than a teleological narrative.


I disagree, but that wasn't the argument I was objecting to.


> The talk about "AI" is not just misinterpreted in the press, but the ML/AI practitioner themselves too.

Of course, for example when you use the phrase "ML/AI".


The so-called "AI", as they are practicing it now, is not intelligent at all.

Sure, they found a new trick to recognize cat pictures. But this technique was discovered decades ago. It just took this long for computers to speed up, for it to be usable.

Instead, they should change the moniker from Artificial Intelligence, to Artificial Insight.

Because all it is doing is combing through all that random data to look for interesting signals. But the algorithm doesn't even know whether what it found is interesting. That's up to the human to decide.

Hence, the algorithm can provide Insight. But it is the human in the loop that will provide the intelligence.


>Instead, they should change the moniker from Artificial Intelligence, to Artificial Insight.

Several times when I spoke to people about this point I got blank faces. Some seem to like the idea of automating things more than augmenting human thought with better tools.

Related article about Moldable Development:

>We can trace both models back to the early years of the first electronic computers. However, the struggle between these two worlds is still present today. For instance, Intelligence Augmentation (IA), is all about empowering humans with tools that make them more capable and more intelligent, while Artificial Intelligence (AI) has been about removing humans fully from the loop. You can read an excellent essay (Shan Carter and Michael Nielsen) about this topic.

via: https://osoco.es/thoughts/2019/05/designing-media-for-though...


> But this technique was discovered decades ago. It just took this long for computers to speed up, for it to be usable.

This isn't really true. Old techniques don't work well even with modern compute. Try replacing sigmoid with ReLU in an old network and watch it improve.
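To make the point concrete, here's a toy numpy sketch (my own illustration, not from any particular paper) of why swapping sigmoid for ReLU matters: look at how much gradient survives a deep stack of saturating activations versus ReLUs.

    # Toy illustration: the gradient factor that survives 20 layers of
    # sigmoid derivatives vs. ReLU derivatives. Numbers are made up.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = 2.0                              # a pre-activation that keeps ReLU "on"
    grad_sigmoid, grad_relu = 1.0, 1.0
    for _ in range(20):                  # 20 layers' worth of activation derivatives
        s = sigmoid(z)
        grad_sigmoid *= s * (1.0 - s)    # sigmoid'(z) is at most 0.25, so this shrinks
        grad_relu *= 1.0 if z > 0 else 0.0   # ReLU'(z) is 1 wherever z > 0

    print(grad_sigmoid)   # ~3e-20: the vanishing gradient that hobbled old deep nets
    print(grad_relu)      # 1.0: ReLU passes the signal straight through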


Many '90s-era attempts that failed do work vastly better on modern hardware. There are many tweaks that showed up once people could fine-tune using vast amounts of processing power, but I would put ~80% of the advancements down to HW and 20% to algorithms.

Having a million or more times the processing power really makes or breaks modern ML.


From the highly recommended classic "AI Meets Natural Stupidity"[1], 1976:

    As a field, artificial intelligence has always been on the border of respectability, and therefore on the border of crackpottery

    ...

    Most AI workers are responsible people who are aware of the pitfalls of a difficult field and produce good work in spite of them. However, to say anything good about anyone is beyond the scope of this paper.
[1] - https://www.csee.umbc.edu/courses/graduate/691/fall19/07/pap...


I have certainly noticed this but am largely unsurprised by it. I've been much more shocked by the number of people I've worked with in the tech industry who believe that #2 is far more advanced than current evidence suggests and that it will lead inevitably to #1.

I'm not opposed to the hypothesis, but the number of people I've encountered who really believe current hierarchical neural nets are the end-all tech is truly head-scratching. Especially when so many technologies, such as the Internet, have not produced certain expected results and have completely surprised us with unexpected ones.


I find a lot of the media (here in Japan anyway) calls literally every automation by computers or machines AI. For example, there's an automated gate that only opens if a scanner finds your body temperature is not too high: AI Gate. The other day it was something related to automatic checkout at shops, which all boiled down to a computer reading QR codes.


Yeah, in Japan the AI hype (or fad) arrived with a 3-4 year delay compared to the West. The thing is, it works so well for startups to get funding; it's not just the media but investors too. I worked for a startup which sold "AI" that interacted with customers by detecting a human face (and age, gender, etc.) and having a targeted conversation. It turned out to be just facial recognition plus replay of prerecorded voices. I felt really dumb when customers, investors & the CTO somehow were impressed by this basic functionality.


The other day I read an article about "uncovering tax fraud with algorithms and AI". The innovation: examining tax records not just from an individual perspective, but from a family perspective. Essentially a split-apply-combine. AI!
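For anyone who hasn't met the term, "split-apply-combine" is a few lines of pandas. A made-up sketch of the "family perspective" idea (the column names and the flagging rule are invented, not from the article):

    # Made-up sketch of "tax analysis from a family perspective" as plain
    # split-apply-combine; column names and the threshold are invented.
    import pandas as pd

    records = pd.DataFrame({
        "family_id":       [1, 1, 2, 2, 3],
        "declared_income": [30_000, 28_000, 90_000, 85_000, 20_000],
        "property_value":  [400_000, 0, 150_000, 120_000, 900_000],
    })

    # Split by family, apply aggregations, combine back into one table.
    by_family = records.groupby("family_id").agg(
        income=("declared_income", "sum"),
        property=("property_value", "sum"),
    )

    # The "AI": flag families whose property looks large relative to income.
    suspicious = by_family[by_family["property"] > 5 * by_family["income"]]
    print(suspicious)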


I read an article about a large firm of lawyers removing gendered language from their pro-forma documents. The article claimed they were going to use AI to achieve this. It's not even worth pointing out what the obvious method for this task is.


That reminds me how in South Africa the traffic lights are called robots.


A quite good story riffing off that: http://clarkesworldmagazine.com/okorafor-kahiu_10_16/.


That's actually a special case of a general phenomenon, which I've seen called, in a link here on HN, the Conway or Neumann problem (in any case, that blog post purposefully misattributed it for a joke, so I am not too concerned about getting the name right). The thing is, you read a newspaper article on anything where you can evaluate the quality with any kind of confidence, and it is invariably garbage: much too simplified, glossing over the interesting bits, and obviously written by a journalist who did not understand the topic. And then you turn the page and expect the article on anything else to be much better.


You're referring to the "Gell-Mann Amnesia" effect.


I agree with the main thrust of the article. However, I don't think this is a particularly new idea. AI has always been this vague, amorphous idea in people's minds. Whenever concrete progress is made, it generally stops being called AI and gets given a more specific and useful name.


I wish more people would use ML if it's ML instead of AI.


And I wish more people would call it what it really is: optimization (regression) & statistics. The sooner the buzzword "AI/ML" dies, the sooner we can shift the wrongly allocated investment to more promising things (just as blockchain caused all the weird money to flow in one direction).

Unlike blockchain, the "AI" brigade seems to have too many people who benefit from keeping it over-hyped, so I guess that won't happen soon enough.


Isn’t Machine Learning even a bit of an exaggeration, though?


Why? The machine learns via essentially trial-and-error to produce a model that best fits the data provided to it. The hope is that the training data is representative of "unseen" data the model later encounters in order to be useful.

Seems pretty accurate given the broadness of the topic and variety of use cases.
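For concreteness, the whole "trial-and-error until the model best fits the data" loop can be a few lines. A toy sketch (the data and the learning rate are made up):

    # Toy sketch of learning as trial-and-error fitting: gradient descent
    # on a one-parameter line. Data and learning rate are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, size=100)
    y = 3.0 * x + rng.normal(0, 0.1, size=100)   # hidden rule the machine must fit

    w = 0.0                              # initial guess
    for _ in range(200):
        error = w * x - y                # how wrong the current guess is
        grad = 2 * np.mean(error * x)    # direction that reduces the error
        w -= 0.5 * grad                  # adjust the guess and try again

    print(w)   # close to 3.0: the model that best fits the data it was given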


Because it’s regression and statistical analysis.

Nothing special about ML except applying it to large(r) datasets, particularly in areas where it works well (various recommendation based models for example).


A lot of current ML/AI algorithms come from the computer science departments in universities. They use different words for the same ideas that exist in the statistics community. There is more of a focus on prediction versus inference. The interpretability of these algorithms comes second. Here's a little translation mapping: https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/...

Nothing really to get one’s panties in a knot about though.


I don't see a problem with machine learning as a term. Especially since it incorporates more than regression methods - it's useful to have a general term without having to specify the type of algorithm each time.


Fair enough. Although I would prefer a less sensational term, I liked “Big Data” a lot in that regard.


The terms big data and machine learning describe very different things though. You can't use them interchangeably. I would also argue that big data is the more sensational one of the two. Machine learning can be easily defined as a group of algorithms. Big data on the other hand is a vague notion that doesn't make it clear what is happening at all and this ambiguity allows sensationalizing.


Your neurons also learn by processes similar to regression and statistical analysis.

Why would it be OK to call it learning in a human or an animal, but not in software?


A human can learn maths and do pure analysis (if they were smart enough).

A machine would struggle to answer pure math questions with machine learning alone.

Although the M1 chip from Apple.... ;)


How does this differ from you "learning" words of a foreign language?


It’s not “learning” anything though. Difference is causation vs correlation:

If I learn how to boil water, for example, there is close to 100% causation that heating water causes it to boil (notwithstanding that we do not know everything about physics; e.g. we used to say the Earth is flat because we had learnt it so, but that was incorrect, i.e. not everything we learn is true, it's just a super high level of correlation).

However, if a machine learning model says the probability of water boiling is 99% after heating water, that is actually correlation. So I would distinguish it from learning.

In this example there's not much difference. But when you apply it to more complex examples, the difference between causation and correlation becomes clearer.


You think people's ability to learn is really distinguished by an ability to discern causation and correlation? That does not comport with modern behavioral science.


To some degree, yes. For example, can the fields of mathematics and advanced physics or programming be studied to the degree they are without an understanding of causation?

For some things, correlation can be sufficient, so I can see where you are coming from.

As a better counterpoint: if we could not distinguish between causation and correlation, why is general AI so hard?


> can the fields of mathematics and advanced physics or programming be studied to the degree they are without an understanding of causation?

Yes, pretty much. Newton's laws are a perfect example of something that establishes a correlation but is useful absent accurate causation.

They did just fine until we needed something better. Einstein provided this, but can we be so sure that gravity as a result of curved space is "true" and not just a really good model? Perhaps the hunt for quantum gravity will one day tell.

> As a better counterpoint: if we could not distinguish between causation and correlation, why is general AI so hard?

I think general AI is hard because, for one, it might be a fiction. For general AI to be real, we'd have to assume people possess "general intelligence". Or at the very least, were it beyond us, we'd have to assume we could tell if we had created it.

What we are really trying to do with "general intelligence" is reproduce something that took 3.5 billion years of natural selection to design. We have only recently become capable of even creating machines with the kinds of part counts that biology comes with. Were we to produce "general intelligence" in the next thousand years, I'd be surprised it came so easily.


I see where you are coming from and don't disagree on many points. From a layman's perspective, I think the causation vs correlation distinction still works well, and I also think "machine learning" is a misrepresentation given its current use and limitations. But this is now getting into subjective opinion, so it's neither right nor wrong. Nice talking to you on this topic :)


According to Hume, neither the model nor you would know/see the causation. It would be correlation in both (and all other) cases.


Try to prove that electronic computers/TPUs work like an organic human.


Why does it have to? If you define “learning” as “being like an organic human”, then sure perhaps it isn’t “learning” under that definition, but that seems like a poor definition, and I don’t think it is how people use the word.

What does “learning” mean?


Good. Ponder on that question.


The question is: is the machine learning, or is the machine intelligent?

The answer is it's neither. But we've taken them to be useful analogies and loosely designated them to mean different things. Neither one really applies, because in truth the machine isn't learning; we've just deemed some types of algorithms optimizers, others dynamic programming, and others learning. It's all arbitrary.


I don't believe it is entirely arbitrary. Just as there are categorizations of machines, Turing machines vs. push-down automata vs. finite state machines, which determine their ability to solve certain problems, there are differences between categories of algorithm and their ability to solve certain problems.

There are many "learning" algorithms beyond what is popularly called ML today but they have in common that they use information about their own success or failure to inform future results.
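As a toy illustration of that common thread, here is a bare-bones epsilon-greedy bandit: it "learns" only in the sense that its own successes and failures inform its future choices (the payout rates and epsilon are made up):

    # Tiny sketch of a "learning" algorithm in that broad sense: an epsilon-greedy
    # bandit that uses its own successes/failures to inform future choices.
    # Payout probabilities and epsilon are made up.
    import random

    payout = [0.3, 0.7]        # hidden success rate of two options
    counts = [0, 0]
    values = [0.0, 0.0]        # running estimate of each option's success

    for _ in range(1000):
        if random.random() < 0.1:                    # sometimes explore...
            arm = random.randrange(2)
        else:                                        # ...otherwise exploit what has worked
            arm = 0 if values[0] >= values[1] else 1
        reward = 1.0 if random.random() < payout[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # learn from the outcome

    print(values)   # estimates drift toward the true rates, and arm 1 dominates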


I'm sorry, this is a pointless discussion.

First off, AI and MI are used almost synonymously[0], and MI very rarely:

  - "Artificial Intelligence" 192M hits
  - "Machine Intelligence" 4.8M hits
  - "Machine Learning" 153M hits
> The word “AI” in these ads has a recognizable meaning, and it is not the same meaning used in the sentence “Elon Musk founded OpenAI because he was worried AI might cause human extinction.”

These are not that different; they are the same thing on different time scales. The first usage names the current (moving-definition) state, which moves toward the second, which is a singular (theoretical) point in development.

Finally, is this just a PSA? We already know this. If it doesn't say ML, deep learning, CNN, or some other specific term, it's a fluff piece, or just trying to get eyeballs on research (which is rational).

[0] https://www.dataversity.net/ai-vs-machine-learning-vs-deep-l...


On the other hand, today we daily use tools that 20 years ago were certainly studied as AI, such as speech recognition and image classification. Maybe they are not using general artificial intelligence to do it, but I think that is beside the point. They are accomplishing something that used to be thought to require some kind of intelligence. But as soon as we succeed at one of these tasks, the magic disappears among practitioners, and the goalpost for what is allowed to be called AI is moved. Our understanding of what intelligence really is is very vague to begin with, so what's wrong with calling progress in machine learning AI even after we have actually succeeded at something?


There's nothing wrong with it; it can simply lead to confusion. When computers started, people called them 'thinking machines', and, say, multiplication was something that required some intelligent behavior on the part of the person doing it. There's nothing wrong with calling a calculator a thinking machine that implements intelligent behavior, but it helps if everyone is aware that at other times, when people talk about 'thinking' and 'intelligent', they mean something quite different. This is not uncommon; when people use the word 'bank' they refer to very different things: river ones, park ones, central ones, rotations of flying objects, etc. But everyone should be aware that getting better at building comfortable banks to sit on in the park is not the same as getting better at figuring out the optimal rate of interest or turning planes. It certainly might be related, but that is a separate argument, and terminology can obscure that.


I agree there is a moving goalpost for what counts as good enough to call "AI". But with the recent revolution in machine learning the goalpost kind of went backwards. The successes in machine learning problems have tended to come from "adjacent" fields like pattern recognition, information retrieval, plain old statistics, etc. Meanwhile the AI researchers were pursuing more ambitious directions that didn't pan out. Then AI researchers basically "annexed" these lower level methods that worked.


The press is parroting the tech industry where AI, ML, and Deep Learning are all conflated and claimed as the magical future of everything, in an effort to boost valuations.


The two things that are conflated, according to this author: machine intelligence in general, and a specific bundle of technologies.

But the goal is specifically to conflate these two, isn't it? If I create a machine that beats a chess grandmaster, does it matter whether or not the machine "knows" how to play chess? If I create an app that takes you to a great restaurant for the mood you're in, does it matter whether or not it understands "mood" or "restaurant"? At some point, the stringing together of specific bundles of technologies becomes indistinguishable to the average person from real intelligence. (This probably happens far, far sooner for the average person than it does tech folks and researchers)

At that point, we've created something that for a specific domain _is_ intelligent: it reasons about things in an internal way that is opaque to us and provides what we perceive to be interactive value. Then, that thing just becomes another bundle to be assembled into an even larger chain. True AGI would be able to create and assemble the bundles.

But even then, the same end-state occurs: at some point whatever we're calling AGI will be able to do things with bundles of technologies that we perceive to provide interactive value. Will it matter at that point whether or not the machine is "truly" intelligent? (Apologies for the scare quotes, but many of these terms are quite suspect in this conversation and are used in all sorts of ways by authors)

To put it bluntly, the goal of AI work is to do cool stuff for reasons we're not exactly sure about. Otherwise it'd just be programming. We are using a lot of programming tools, like NNs, to do this. At some point, various groups will fake themselves out and stop working in that area. Whether or not it's intelligence, and whether or not anything new will ever happen in that field, is beside the point and not in the (commercial) scope of work. Aside from all the formal work and really cool stuff happening, in the end, when it gets used somewhere, this is a "looks good enough to me" situation. We're not looking to create intelligence; we're looking to create an uber-duber supreme version of Eliza for some given problem. Then we find other, more complex and interesting problems. Then, if we need to, we'll join them together. (There is a lot of detail that I have ignored, including how GANs play into my argument.)


My own distinction between ML and AI (which I agree with the article that marketing buzzheads do not care about) is that one is a set of optimization techniques not necessarily involving an agent, and the other involves an agent, who needs to make (intelligent) decisions. The decisions themselves may rely on said ML-based optimization techniques, or they might not. Similarly, optimization techniques may employ an agent-based approach, but equally they may not.


There’s a great Wikipedia article which is apt here.

https://en.wikipedia.org/wiki/AI_effect?wprov=sfti1

It’s perhaps best encapsulated by Douglas Hofstadter’s quip: “AI is whatever hasn't been done yet”.

The point being, AI is a moving target, and the threshold for reaching it migrates further with each new accomplishment in the field. If you showed the ML of today to someone in the sixties, you'd have a hard time convincing them it wasn't AI.

But I don’t believe this is an issue reserved only for the press. At least anecdotally, practitioners don’t think of it this way either.



Difference between machine learning and AI:

If it is written in Python, it's probably machine learning

If it is written in PowerPoint, it's probably AI


#2 is like how Tesla sells "Auto-pilot".


"AI" will be there is almost everything is few years time.



