In my field it's accepted (by some) that you write "AI" for your grant proposal and say "ML" when you talk to colleagues and want to be taken seriously.

It feels a bit wrong to me because, as you say, it's arguably a grift, in this case on the taxpayer who funds science grants. More charitably, it might just be the applicant admitting that they have no idea what they are doing, and the funding agency seeing this as a good chance to explore the unknown. Still, unless the field is AI research (mine isn't), it seems like funding agencies should be giving money to people who understand their tools.


Most people outside of academia understand AI to include way more than just ML. People refer to the bots in video games as AI and they are probably a few hundred lines of straightforward code.

I don't think there is anything wrong with using the colloquial definition of the term when communicating with funding agencies/the public.


Would those topics that people "outside of academia understand AI to include" be covered in http://aima.cs.berkeley.edu ?

When you say "bots in video games as AI", that's covered in the book titled Artificial Intelligence: A Modern Approach, 4th US ed.:

    II Problem-solving 
        3 Solving Problems by Searching    ...  63 
        4 Search in Complex Environments   ... 110 
        5 Adversarial Search and Games     ... 146 
        6 Constraint Satisfaction Problems ... 180 
Those topics would be in chapter 5.

Sure, it may be a few hundred lines of code, but it's still something that a Berkeley-written AI textbook covers.

Spelled out more for that section:

    Chapter 5   Adversarial Search and Games ... 146

    5.1   Game Theory ... 146 
        5.1.1   Two-player zero-sum games ... 147 
    5.2   Optimal Decisions in Games ... 148 
        5.2.1   The minimax search algorithm ... 149 
        5.2.2   Optimal decisions in multiplayer games ... 151 
        5.2.3   Alpha-Beta Pruning ... 152 
        5.2.4   Move ordering ... 153 
    5.3   Heuristic Alpha-Beta Tree Search ... 156 
        5.3.1   Evaluation functions ... 156 
        5.3.2   Cutting off search ... 158 
        5.3.3   Forward pruning ... 159 
        5.3.4   Search versus lookup ... 160 
    5.4   Monte Carlo Tree Search ... 161 
    5.5   Stochastic Games ... 164 
        5.5.1   Evaluation functions for games of chance ... 166 
    5.6   Partially Observable Games ... 168 
        5.6.1   Kriegspiel: Partially observable chess ... 168 
        5.6.2   Card games ... 171 
    5.7   Limitations of Game Search Algorithms ... 173


I think I have an original edition of that book somewhere. Good Old Fashioned AI.


My assignments (different book) for Intro to AI class were:

Boolean algebra simplifier. Given a LISP expression, for example (AND A (OR C D)), write a function to return the variables needed to make the entire expression TRUE. Return NIL if the expression is a contradiction such as (AND A (NOT A)). The expressions that we were to resolve had on the order of 100-200 operators and were deeply nested. I recall that I wrote a function as part of it that I called HAMLET-P, which identified terms of the form (OR 2B (NOT 2B)) and rapidly simplified them to TRUE.
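
Something like this, sketched in Python rather than LISP. All names here (evaluate, find_assignment, hamlet_p) are my own reconstructions from memory, not the course's code, and hamlet_p is shown as a standalone tautology check rather than wired into the search:

    # Expressions are a variable name (str) or a tuple:
    # ("AND", e1, e2, ...), ("OR", e1, e2, ...), ("NOT", e).
    # Sketch with invented names, not the original LISP assignment.

    def evaluate(expr, env):
        """Evaluate under a partial assignment; None means still unknown."""
        if isinstance(expr, str):
            return env.get(expr)
        op, *args = expr
        vals = [evaluate(a, env) for a in args]
        if op == "NOT":
            return None if vals[0] is None else not vals[0]
        if op == "AND":
            return False if False in vals else (None if None in vals else True)
        if op == "OR":
            return True if True in vals else (None if None in vals else False)

    def hamlet_p(expr):
        """'To be or not to be': spot (OR X (NOT X)) tautologies cheaply."""
        return (isinstance(expr, tuple) and expr[0] == "OR"
                and any(isinstance(b, tuple) and b[0] == "NOT" and b[1] == a
                        for a in expr[1:] for b in expr[1:]))

    def variables(expr):
        if isinstance(expr, str):
            return {expr}
        return set().union(*(variables(a) for a in expr[1:]))

    def find_assignment(expr):
        """Return a satisfying assignment, or None for a contradiction."""
        def search(vars_left, env):
            val = evaluate(expr, env)
            if val is True:
                return env
            if val is False or not vars_left:
                return None
            v, rest = vars_left[0], vars_left[1:]
            return search(rest, {**env, v: True}) or search(rest, {**env, v: False})
        return search(sorted(variables(expr)), {})

    print(find_assignment(("AND", "A", ("OR", "C", "D"))))  # {'A': True, 'C': True}
    print(find_assignment(("AND", "A", ("NOT", "A"))))      # None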

Not-brute-force job scheduler. The job-shop scheduling problem ( https://en.wikipedia.org/wiki/Job-shop_scheduling ) with in-order processing of multiple tasks that had dependencies. Any worker could do any task but could only do one task at a time.
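
A greedy list scheduler captures the flavor of it. A Python sketch, where the {name: (duration, [deps])} encoding is my assumption and the heuristic is not the assignment's actual solution:

    import heapq

    def schedule(tasks, workers):
        """Return {task: (start, end)} for identical workers, one task each.
        tasks: {name: (duration, [deps])}. Greedy, not optimal."""
        indeg = {t: len(deps) for t, (_, deps) in tasks.items()}
        children = {t: [] for t in tasks}
        for t, (_, deps) in tasks.items():
            for d in deps:
                children[d].append(t)
        free = [0.0] * workers                 # when each worker is next free
        heapq.heapify(free)
        done_at, out = {}, {}
        ready = [t for t, n in indeg.items() if n == 0]
        while ready:
            # pick the ready task whose dependencies finished earliest
            t = min(ready, key=lambda x: max((done_at[d] for d in tasks[x][1]), default=0.0))
            ready.remove(t)
            dur, deps = tasks[t]
            earliest = max((done_at[d] for d in deps), default=0.0)
            start = max(heapq.heappop(free), earliest)
            done_at[t] = start + dur
            out[t] = (start, done_at[t])
            heapq.heappush(free, done_at[t])
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
        return out

    print(schedule({"a": (3, []), "b": (2, ["a"]), "c": (1, [])}, workers=2))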

The third one I don't remember what it was. I know it was there since the class had four assignments... (digging... must have been something with Prolog)

The last assignment could be written in any language (I did it in C++, having had enough of LISP, and I had a good model in my head for how to do it in C++). A 19,19,5 game ( https://en.wikipedia.org/wiki/M,n,k-game ), similar to go-moku or pente, though without the constraints that go-moku has or the captures that pente has. It was to use a two-ply minimax tree with alpha-beta pruning. It would beat me 7 out of 10 times; I could get a draw 2 out of 10 and win 1 out of 10. For fun I also learned ncurses and made it so that I could play the game with the arrow keys rather than as '10,9... oh crap, I meant 9,10'.
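
The core of that one is small enough to sketch in Python. The game interface (moves/apply/evaluate/terminal) is a placeholder of my own, not the original C++:

    import math

    def alphabeta(board, depth, alpha, beta, maximizing, game):
        """Fixed-depth minimax with alpha-beta pruning; returns (score, move).
        `game` is any object with moves(board, player), apply(board, move, player),
        evaluate(board), and terminal(board) -- invented names, not the assignment's."""
        if depth == 0 or game.terminal(board):
            return game.evaluate(board), None
        player = 1 if maximizing else 2
        best = (-math.inf if maximizing else math.inf), None
        for move in game.moves(board, player):
            child = game.apply(board, move, player)
            score, _ = alphabeta(child, depth - 1, alpha, beta, not maximizing, game)
            if maximizing:
                if score > best[0]:
                    best = score, move
                alpha = max(alpha, score)
            else:
                if score < best[0]:
                    best = score, move
                beta = min(beta, score)
            if alpha >= beta:   # cutoff: the opponent will never allow this line
                break
        return best

    # Two ply, as in the assignment:
    # score, move = alphabeta(board, 2, -math.inf, math.inf, True, game)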

And I still consider all of those problems and homework assignments as "AI".

From the digging, I found a later year of the class that I took. They added a bit of neural nets, but the other topics were still there.

By way of https://web.archive.org/web/19970214064228/http://www.cs.wis... to the professor's home page and classes taught - https://web.archive.org/web/19970224221107/http://www.cs.wis...

Professor Dryer taught a different section https://web.archive.org/web/19970508190550/http://www.cs.wis...

The domain of the AI research group at that time: https://web.archive.org/web/19970508113626/http://www.cs.wis...


I agree that using a colloquial definition is fine. And I don't mean to be too harsh on people who use buzzwords in their grant proposal: it's just sort of the sea you swim in.

But I only wish we could say that a few hundred lines of code were "AI": that would mean funding for a lot of desperately needed software infrastructure. Instead, AI is for the most part taken as synonymous with ML, and more specifically with deep neural networks.


I think there’s nuance to be had here. Terms have been overloaded, and individuals aren’t necessarily acting in bad faith. ML can be considered to be a subset of AI.

That being said, ML is extremely boring to me, and I really do think a lot of the research is an enormous grift. Hop on the bandwagon, read a stats book, flagrantly plagiarize it, submit to CS journal that no statisticians read, publish and don’t perish, rinse, repeat.

It feels like society has spent billions of dollars on bad academics continuously reinventing applied statistics over and over again, but now with Big Data and a brand refresh! It’s like a whole generation of academics watched one too many terrible Hollywood remakes. It broke their brains, and now they’re only doing remakes too.

They ran out of statistics content to steal, so now the latest and greatest thing is plagiarizing classical AI works from the late 20th century and calling it “reinforcement learning.”

It’s all very frustrating. We could’ve funded a Manhattan Project for fusion power, but instead thousands of our most brilliant people are wasting their time and humanity’s carbon budget to create the most powerful spambot ever.


I think you're entirely wrong about this. Using the term AI or artificial intelligence directly invokes several centuries of cultural baggage about golems, robots, Terminators, androids and cyborgs and Matrix-squid.

Saying "large language models" does not. Saying "giant correlation networks" does not. Not to be too Sapir-Whorfian, but the terminology we use influences our conversations: terrorists, guerillas, rebels, revolutionaries, freedom-fighters.


Should a nuclear power station rebrand itself to avoid being associated with Hiroshima? I really don't get what you are trying to say.


If you choose your words carelessly, you get unintended results.

Telling me about the AI in your HR system that hunts for the best candidates brings along the cultural context of stories about AI. Telling me about the rules engine that ranks incoming CVs does not.

"terrorists, guerillas, rebels, revolutionaries, freedom-fighters" are all the same group of people being referred to in different ways depending on how the speaker wants you to feel about them. Once you start using a particular word, you adopt the same viewpoint.

"AI" is too loaded with cultural contexts which will cause people to make mistakes.

