
"The widespread misimpression that data + neural networks is a universal formula has real consequences: [...] in what students choose to study, and in what universities choose to teach."

I think this is a problem that deserves more attention. A large portion of CS students nowadays choose to focus on Machine Learning, and many brilliant CS students decide to get a PhD in ML in the hope of contributing to a field they expect to keep developing at the pace we have seen over the last 7 years. That expectation, I think, is not justified at all. The challenges that AI faces, such as incorporating a world model into the techniques that currently work well, will be significantly harder to overcome than most researchers openly admit, and the rush into ML will probably come to the detriment of other CS research areas.




This is a pet peeve of mine as well. A lot of undergrads are not really looking around any more. Universities offer their first data science courses in year one, and undergrads are mapping out their paths toward becoming research scientists with dollar signs in their eyes.

I think this is not a wise career bet at all. There will always be spots for a select few 'pure ML' grads, but at the post-PhD level the really good jobs are getting pretty thin unless you come from the right adviser + right school + right publication track.

On the other hand, if you add a secondary useful skill, like having a really good understanding of distributed systems, or networking, or databases, or embedded systems, or being 'just' a solid overall software engineer, you (i) have something to fall back on if ML dreams do not work out, and (ii) have an edge by being just the right fit for specific teams, while also still competing in the general pool.

I think there will be a rude awakening on this in 3-4 years for many.


That's true, but how many people in the field started out with the right specialty to begin with?

They might be disappointed graduating post-ML bubble, especially if they have their hearts set on half a million dollar salaries, but they'll do fine with a PhD in CS.

Then again, I'm not part of the SV world, so YMMV.


> That's true, but how many people in the field started out with the right specialty to begin with?

What's hot in technology changes all the time. The smart strategy for an undergraduate is to:

1. cover the fundamentals

2. take the honors track (where the professor and the students want to be there)

3. avoid "hands-on" classes, which you can learn on the job as required

4. take at least 3 years of math. Really. Don't avoid it. If you can't do math, you can't do engineering or science.

As for me, my degree is in Mechanical Engineering, with a side of Aero/Astro. And yet I work on compilers :-)


Yes, always play dual-class, so to say. For every pair of important fields X and Y, there are numerous people who can do X very well but can't do any Y, and vice versa. If you do both X and Y well enough, you are at a huge advantage.

Trivially, being good at frontend coding and at UX design (cognitive aspects, etc.) helps. Being good at designing software and at speaking the language of your business is super helpful; that is what's required at the director / VP Eng level.

And yes, do take the math. After just 2 years of university math (as an embedded systems major), I later had to teach myself some missing algebra, some category theory, etc., to grasp SICP on one hand and Haskell on the other. This helped me become seriously better at writing and understanding code in industrial languages.


I will say that I noticed the honors courses in college were sometimes taught by the absolute worst professors.

I have a theory it was because, "they are honors students, they'll learn the subject regardless, and we need to put this bad teacher SOMEWHERE".

I once got out of a differential equations honors class because it was very obvious in the first week that the teacher was awful. Later, my fellow honors students told me how "lucky" I was to have switched. They were suffering through the course, while I was learning and enjoying my new class.


Never went to college, but I can agree with everything you said. I would add: actually go out and build something. A portfolio is immensely valuable and has opened doors for me that otherwise never would have opened.


Could you expound on #4 some more?

Would you recommend a specific branch of math (discrete, linear algebra, etc.)?

Also, which math or general skills/fundamentals would you say has helped you the most when implementing compilers?


I've followed the same path, out of curiosity do you mind if I PM you asking a few questions? I'm a couple months out of undergrad and I'd like to pick your brain a bit.


sure


> A lot of undergrads are not really looking around any more.

My friend is doing a CS degree now and is being told by profs that you won't get a job without an ML background.


Speaking as faculty myself, the only job advice that you should listen to from an academic is how to get a job in academia. Even the ones that had a job in industry prior to their academic job have a reason for preferring their academic job. Their firsthand knowledge of the industry is stale at best, and their current information is all hearsay.

If you want to get a job in industry, you need to get advice from alumni that have (recently) gotten a job in the industry you are interested in.


The newly hired faculty at my school have mostly been ML/AI people. While it seems completely wrong that ML is needed to break into industry, is it true that research roles strongly favor ML right now?


I'd say that with some caveats, a freshly minted PhD in CS/stat/math/etc. with an ML dissertation will have an easy time finding an academic job. The caveats are that you might not get the job you really want. You might have to do a post doc first. Even if you don't have to do a post doc, you might not be at the most prestigious university, might not start out as tenure track, and might not even be in a CS/stat/math/etc. department.

You could end up at a medical school or a business school; a ton of mediocre PhDs take these jobs because Stanford and MIT never called them back. The medical and business schools think they hit the jackpot: they know they got someone mediocre, but at least they found someone. The mediocre PhDs usually end up happy because they found a job, and medical and business schools usually pay well.

I'd say that a lot of the jobs that favor ML are from schools/departments that are playing catch up and trying to cash in because they saw a wave of grant money going to ML research. They are either smaller and less prestigious, or interested in the application to their field, rather than any direct interest in ML itself.

An ML focus probably isn't the boost you may have thought if you want a job at Stanford. The big schools have their pick of the litter and are already flush with ML talent, so being "an ML guy" isn't a magic serum that hides all your other flaws. You still have to be smart enough and positioned well (good school, good advisor) to get a job at Stanford.


I am in such a situation now. I'm about to graduate with a PhD in ML on Bayesian networks, and I do not know what to do next.

Originally I learned programming, so I can write classical software while working from home. I never wanted to do anything else, but that does not pay. I could only get funding for AI research.


I don't think that's true. The vast majority of software development jobs require no ML at all. I don't see that changing in the next 10 to 20 years.


Even further, the majority of the work on ML-driven products is not related to the ML model itself.


this 1000%.


I don't think it is true either. But my friend tells me that multiple profs tell him that he needs to learn ML.


I learned ML in school 13 years ago. I've been happily employed since then at a major tech company with increasing responsibility and the only time I've really needed that background was to call BS on a given project needing a custom ML component.


Maybe those profs are just huge fans of ML–you know, the original ML, i.e., Standard ML, OCaml, etc.


“Who needs a neural net? OCaml can already match patterns!”


That's ridiculous. There are tons of jobs in everything from devops to pure frontend to writing shit in COBOL.

ML is sexy and pays a lot, sure, but it's far from the only thing going.


Another tactic for an undergrad looking for a leg up in their career might be to do a double major of CS + some other engineering field or finance. You can really outperform your peers and become a unique asset in a given industry by having an expert understanding of the domain paired with expertise in software development.


Good combinations - CS + Finance, CS + Operations Research, CS + Economics, CS + Physics


>CS + Operations Research

this is me. I can't recommend it enough.


I loved doing OR for my final project at uni. It was great solving the travelling salesman problem for a taxi network, routing taxis to the correct people and picking people up in real time around an imaginary map of points. I remember thinking as I took a cab into halls from the station, "wouldn't this be amazing if I could have a device that told me exactly where each cab was at any point and maybe hook into the satnav somehow". In 1999 [1].

[1] https://www.google.com/search?rlz=1C5CHFA_enGB720GB720&sxsrf...


talk about having the right idea at the wrong time!


Ah, I would never in a million years been enough of a shitbag to build Uber so it's a moot point. But still.


In my school all CS students took at least one OR class.

It was one of the best classes I've taken and some of the things still feel like magic.


Was it linear programming?


Do you have any suggestions for people who want to self-study in OR? I would like to know more in general but am specifically interested in applications to healthcare


My recent previous employment was in healthcare (on the insurance side) so I can give you some relevant insight

I do want to mention that on top of any self-studying, try to attend a talk or two, or start following some feeds online that are close to actual healthcare operations. Operational teams are the ones who have to figure out what to do even when there is no good answer. The easiest way to keep up with which topics are most valuable right now is inside knowledge.

There is no purity in the OR field outside of PhDs doing their research - it is entirely about getting shit done efficiently, however possible, to the extent that the operations team can understand. That last part of the sentence is a big catch. For example, if your 'solution' has interns making judgement calls on data entry (because moving the work upstream is efficient!), you are fucked if you assume that data will be accurate.

BUT there's obviously plenty of skillset stuff you can learn to help you in a general way, so here are some important areas:

1. Linear & nonlinear programming (tool: AMPL)

2. Markov chains (good knowledge)

3. Statistics & probability (necessary knowledge)

4. Simulation (start with Monte Carlo [it is easy and you will be surprised it has a name])

5. Databases: SQL / JSON / NoSQL

6. Data structures and algorithms (big-O notation / complexity)

OR work in general overlaps a lot with business analysis. The core stuff they teach you in school is listed above.

Healthcare right now has a big focus on Natural Language Processing, and on applying standardized codes to medical charts - and then working with those codes. The most common coding standard in the US is ICD-10, I believe.

Other than that it is mostly solved logistical items like inventory control systems that need a qualified operator. You do not want your hospital running out of syringes. You do not want your supplier to be unable to fulfill your order of syringes because you need too many at once. You do not want to go over budget on syringes because there's a hundred other items that need to be managed as well.

Now, the important thing to keep in mind is that almost all existing companies/facilities have solved their operations problems to SOME extent, since if they hadn't, their operations teams would have fallen apart and failed. So in practice it is rare that you will implement some system from scratch or create some big model. You're probably going to work with a random assortment of tools and mix data together between them on a day-to-day basis to keep track of when you need to notify people of things. With a lot of moving parts, you will have to task yourself with finding improvements and justifying them. Expenses are very likely to be higher than optimal, and you can earn extra value for yourself by finding incremental improvements.

No one is going to say: "work up a statistical model for me". They are just going to be doing something inefficiently with no idea how to do it better, and you are going to have to prove to some extent why doing it another way will be better - and also be worth the cost of training people to do it a new way. It will be monumentally difficult to convince anyone to make any sort of major change unless the operations team is redlining, so your best skill will be resourcefulness: adapting to the way things are and improving THAT mess - not creating a new way of doing things.

Databases house the data you need to make your case. SQL was the norm, but a lot of stuff is JSON now. You might need to work with an API; add Django to the requirements list.

Simulations let you test multiple scenarios of resource allocation on a complex system with moving parts (for example, resources include the number of hospital rooms as well as employees and their work schedules). Statistical analysis lets you verify whether the output of your simulations is meaningful or not. There are proprietary simulation programs that do the A-Z of this for you if you know how to configure them (ARENA), and there's pysim + numpy + pandas + ...
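To make the Monte Carlo point concrete, here is a minimal sketch (made-up numbers, plain numpy rather than ARENA or pysim) that estimates how often a ward with a fixed number of rooms turns patients away under random daily demand:

    # Monte Carlo sketch with hypothetical numbers: the room count, arrival rate
    # and horizon are illustrative assumptions, not from any real system.
    import numpy as np

    rng = np.random.default_rng(0)
    rooms = 20            # rooms available (assumed)
    mean_arrivals = 18    # average patients per day (assumed)
    days = 100_000        # number of simulated days

    arrivals = rng.poisson(mean_arrivals, size=days)
    overflow_rate = (arrivals > rooms).mean()
    print(f"Fraction of days demand exceeds capacity: {overflow_rate:.3f}")

Re-running it with rooms = 22 or 25 is the "test multiple scenarios" part; the statistical analysis step is then checking that the difference you see isn't just simulation noise.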

Markov chains are related to building a model for your system. It's theory stuff, but it helps wire your brain for it. Laplace transforms are "relevant" somewhere in this category.
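As a toy illustration of the Markov chain idea (all numbers hypothetical), the model is just a transition matrix, and the long-run share of time spent in each state falls out of iterating it:

    # Hypothetical patient-flow chain: states are [waiting, in_treatment, recovery];
    # entry (i, j) is the probability of moving from state i to state j in one step.
    import numpy as np

    P = np.array([
        [0.6, 0.4, 0.0],
        [0.0, 0.3, 0.7],
        [0.2, 0.0, 0.8],
    ])

    # Power iteration: push a starting distribution through the chain until it
    # settles on the stationary distribution (long-run occupancy of each state).
    dist = np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        dist = dist @ P
    print(dict(zip(["waiting", "in_treatment", "recovery"], dist.round(3))))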

(Non)linear programming is the calculator for the field's distribution problems. In practice you create 2 files: a model file and a data file. The model file expects the data file to be in a certain format, and is written in a modeling language for processing the dataset.

For example, if you manufacture doors and windows at 3 locations, and sell them at 10 other locations, and you have a budget of $10,000: how much wood and glass do you buy for each manufacturing location, and how many doors and windows do you make and ship to each selling spot from each factory? The answer depends on data - the price each sells at, the cost for each plant to produce, the capacity of each plant, the cost of transporting from factory to selling location, etc. So you make a model file for your problem and then you put all the numbers in a data file. You can change a number, run the model again, and see if the result changed. You can script this to test many different possible changes.
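A scaled-down sketch of the same thing in Python (scipy's linprog instead of AMPL, with made-up numbers: one product, two factories, two shops). The cost/capacity/demand arrays play the role of the data file; the constraint setup plays the role of the model file:

    # Transportation-style LP with hypothetical numbers. Decision variables are
    # the four shipment quantities, flattened as [f0->s0, f0->s1, f1->s0, f1->s1].
    from scipy.optimize import linprog

    cost = [4, 6, 5, 3]          # shipping cost per unit (assumed)

    # Factory capacities: shipments out of each factory <= its capacity.
    A_ub = [[1, 1, 0, 0],        # factory 0
            [0, 0, 1, 1]]        # factory 1
    b_ub = [80, 70]

    # Shop demands: shipments into each shop == its demand.
    A_eq = [[1, 0, 1, 0],        # shop 0
            [0, 1, 0, 1]]        # shop 1
    b_eq = [60, 50]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 4)
    print(res.x, res.fun)        # optimal shipments and total cost

Changing a capacity or a cost and re-running is exactly the "change a number, run the model again" loop described above.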

Data structures and algorithms: There are a lot of different optimization algorithms, all with different use cases, and there's no real upper limit on learning them, so this area can be a good time sink... since someone else will have already coded the implementation, you are providing value in knowing how to use it. Therefore, you don't need to learn how to build the algorithms or what magic they perform, beyond whatever helps you understand what each is good at. Outside of research, it's unlikely this stuff will really get you anything other than maybe being able to impress at an interview - BUT who knows, maybe you find a use case for some random algorithm that is career-defining.

I know I ranted a bit, and I didn't proofread, but I hope there was some helpful info in there.


Thank you for the really in-depth reply! This is very helpful and gives me a lot to think about.


No problem! I'm glad it was helpful


My double major was CS + Applied Mathematics. Highly recommended.


I did CS + Finance, then got an MBA. It has set me up very nicely.


CS + Something medical, at least in the US


I majored in geophysics and it doesn't really help me as a software engineer in Houston (although I haven't tried to really exploit it for the most money, because I felt guilty about working in oil and left). What's killer though is having a master's in a field and programming experience.


I'm not in Houston or oil/gas, but trying to find a niche with the same background. I'm not seeing a great demand for this combo commercially unfortunately.


It's super easy to get a general programming job, though.


Did you have to leave Houston too?


Nope, happily employed here. What prompted you to leave?


Nope, haven’t arrived yet. Just scoping it out.


Nah, Houston is fine and there are plenty of enterprise jobs, and they are constantly hiring. If you need a reference, send me a PM and I'll give you my email address.


I’m not sure how to PM you on here...


Yeah, sorry, I'm used to reddit. Leave your email? Leave a temporary email? I would list mine but I don't like linking my online identities to my real one.


gmail is my HN handle


This is true. My first job as a dev was in fintech, but I know literally nothing about finance, nor do I care to. I was not a very good developer in that arena, because you really need domain knowledge to be truly effective in some fields.


> A large portion of CS students nowadays choose to focus on Machine Learning, and many brilliant CS students decide to get a PhD in ML

These claims are massively exaggerated. Students don't "choose", "decide", etc.; admissions committees do. In the US, let's be generous and say there are 50 top universities with good CS/ML departments. After students beg, plead, and submit their GRE scores, recommendations, transcripts, etc., each department chooses on average 20 students for its incoming PhD cohort. Of them, easily 20% will wash out. So at most 800 top students graduate with a PhD in ML each year (50 × 20 × 0.8). Let me caveat that by saying there are nowhere close to 50 top universities offering ML PhDs or having an intake of 20 per department. So overall, the number is probably closer to 200-300 students for the whole USA, not 800.

So all this handwringing over what some 200 kids will do? I don't care if they do a PhD in basket weaving; in the grand scheme of things, 200 is not even a drop in the bucket.


I agree that it might not be a large portion of all CS students, but it does appear that many of the incoming CS PhDs have chosen to go into ML (of my class, maybe 1/3), which seriously detracts from interest in other subfields. It's actually a joke in my department that all of the new students are pursuing ML, while almost no one is pursuing theoretical CS or other less popular subfields.

I'm also not sure about why the exact number matters - 200 kids matters a lot when future professors are drawn from the pool of students who have successfully completed a PhD.


I think the problem applies to non-PhDs students as well. I'm seeing a lot of interest from recent non-PhD grads in subjects other than CS wanting to steer their career toward data mining. My concern is that this re-focusing will probably lead them down paths that may leave them undistinguished relative to peers who stay within the more stable but less shiny domains where they're better prepared to succeed.

I saw the same thing happen in the late 1990s as everybody and his dog got into web design while it was hot. Few of those folks are doing that now, nor did that skill translate well into other roles since the skill set isn't fundamental to other careers.

I'm not sure ML is any different, especially deep learning, since few companies have anywhere near the necessary amount of data to successfully play that game and win.


I don’t know much about industry but it seems to me there are two ways to look at ML. First is you learn some of optimization, stats, math, parallel computing, numerical methods. The second is that you learn a hell of a lot about fiddling with different network architectures and applying things to specific problems. I wonder whether the first (more fundamental) approach doesn’t have different prospects. At least in this case it can lead to career paths like a national lab.


The first path lets you become a quantitative problem solver, which is widely employable. The second path leads to being very good at specific deep learning tasks that I don't think will be in large demand in 5 years, at least outside the largest tech companies with all the data. Other companies will fulfill their business needs with fewer ML specialists, AutoML, and pretrained models.


I've also noticed a trend of college graduates who, in interviews, struggle with general software engineering practices and more fundamental coding skills and CS knowledge, but have knowledge of and proficiency in ML. If you're a CS BA or MS, and don't plan to do an ML PhD, I'd suggest asking yourself whether you want a data science job or a software engineering one. If the latter, remember to focus more on that. Surface knowledge of ML is a bonus, but the rest is more important.


My partner has ditched cross-training into data science and ML because of how ridiculously 'duct tape and string' the entire sub-industry is. Tooling that is pre-2000 quality, if it even exists; people with math skills yet only the most rudimentary ability in best practices around data storage and maintenance, coding, domain knowledge, and understanding of how much bias they're introducing into results. Her view is that it's just a free-for-all of people who have little to no need to justify their output, because everyone is waiting and hoping for the magic to happen.


Larger organizations (by size and longevity) tend to have staff that predate the data science and ML/AI hype train and have therefore worked out more robust tooling. Unfortunately the popularity of ML is driving a lot of adoption of trendy but not necessarily best practices.


There was a lot of this sort of thing with the web application boom as well. Lots and lots of decrying how JS libraries and frameworks were ruining the software industry.

I haven't followed that very closely in a decade, though. Did that whole area end up maturing?


I've noticed this myself but wasn't sure if it was just me. Almost every CV we get has a ML slant but the candidates struggle to write a simple SQL query.

From a pure numbers perspective 95% of jobs in our industry have nothing to do with any reasonable definition of machine learning. Even those applications of ML have a large amount of traditional CS going into acquiring, reformatting and storing large datasets.


That’s definitely something I’ve seen too. Most of the CVs I get through, especially from younger devs, talk about ML. I’ve got my work cut out for me keeping my existing codebase understandable without deliberately introducing opacity!


Most of the ML useful in industry is getting commoditized at a fast pace, and not enough people understand that. The focus on modeling when applying ML in a business environment is as incongruous as people believing programming is graph algorithms and compiler techniques. It sometimes is, but rarely.

In my experience, it is much easier to learn the basics of ML if you have good software engineering skills than the other way around. The one thing that takes time to learn is experimental design and quantitative analysis, and this is rarely taught well at university before the PhD level.


It's also a problem that students are focusing so much on neural-network-based approaches, neglecting other techniques. There's what, like, 10 places in the world where you'll have enough compute to be able to train cutting-edge neural networks? Real-world problems are just as well solved with other techniques like XGBoost, which wins pretty often on Kaggle, for example.
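For a sense of how low the barrier is compared with training a big network, a minimal sketch (synthetic data, xgboost's scikit-learn wrapper; all parameters are illustrative, not tuned) looks like this:

    # Gradient-boosted trees on synthetic data; dataset and hyperparameters are
    # placeholders for illustration only.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

This runs on a laptop in seconds; no GPU cluster required.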


>There's what, like, 10 places in the world where you'll have enough compute to be able to train cutting-edge neural networks

You can just use a smaller amount of compute with transfer learning style stuff.

I can probably fine-tune a transformer model with my pocket money and beat any NLP solution from two years ago.
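For example, a minimal fine-tuning sketch (assuming the Hugging Face transformers and datasets libraries; the model name, dataset, and hyperparameters are placeholders, not recommendations) fits in a few lines and runs on a single cheap GPU:

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)

    # A small slice of IMDB sentiment data keeps the run within pocket money.
    dataset = load_dataset("imdb", split="train[:2000]").map(
        lambda ex: tokenizer(ex["text"], truncation=True,
                             padding="max_length", max_length=256),
        batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=16),
        train_dataset=dataset,
    )
    trainer.train()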


It may not be ideal for fundamental research but demand for ML practitioners is only likely to increase over time.

ML is pure alchemy for a business operating at scale. It’s as if a coal plant could turn its waste emissions into profit. You have this “data exhaust” from all the activity happening within your system that can be used to optimize the system at least a few percentage points beyond what is otherwise possible. A team of 5 ML engineers can improve an ad targeting system by 5%, and if the company is Google, that’s billions.

ML creates feedback loops of improvements in product that improve usage that lead to more data which further strengthens the moat a business has.

It totally makes sense to jump on this train. It won’t solve AI but will make a lot of people wealthy optimizing systems.


I think that there are a pretty large number of companies that put out "data exhaust" as you put it that could increase their efficiency by 5% or so, sure. Not every company, but some.

It's very unclear to me that there are a large number of companies that can increase their efficiency by much more than 5% through ML. And it's not clear to me that there is more than a couple-year's-worth of projects in turning data exhaust into money for each company.

So if I were a new grad looking to get into ML, that might somewhat concern me.


>> A team of 5 ML engineers can improve an ad targeting system by 5%, and if the company is Google, that’s billions.

Not going to pick on your arbitrary 5% constant here, but please elaborate on how these ML engineers are any different from anyone with a quantitative background?


I think that's the key here. It's not ML magic that's driving success, it's data literacy. Steve Levitt (of Freakonomics fame) said his advice to students would be to ensure they have base knowledge in understanding data analytics regardless of their domain. ML is just the sexy subset that gets all the attention.


And ML is a tiny part of the whole data science toolchain and process. Getting the data to the point where you can use it for something interesting is probably the hardest part.


My point was that cutting edge ML can add a bit more on top of what data literacy can achieve. At scale that’s worth a lot.


What I meant was that applying, e.g., deep neural networks to large data sets can give you a few percentage points of improvement over systems implemented without them.

At scale it makes a big difference.


>> It may not be ideal for fundamental research but demand for ML practitioners is only likely to increase over time.

If research stagnates, application will also stagnate. For the industry money to keep pouring in, there has to be growth, and for there to be growth there has to be progress: scientific progress.

So if something is "not ideal for fundamental research" it is also "not ideal" for business.


not everyone wants, or needs, an ad targeting system


> " The challenges that AI faces"

Isn't this a good thing? I would think that if it were easy, we wouldn't need graduate degrees.


There are certainly challenges, but "incorporating a world model" has been going well recently: "Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model"

https://arxiv.org/abs/1911.08265


Not exactly. Incorporating a world or domain model usually means taking pre-existing declarative knowledge in some form (e.g. from a semantic net) and using it to aid learning.

To quote the article, "Model-based reinforcement learning aims to address this issue by first learning a model of the environment’s dynamics, and then planning with respect to the learned model."

So they've sped up learning from examples by learning a model first, but it's still learning from examples.


That seems a fairly specific meaning of the term that may not be in wider use, see eg: https://arxiv.org/abs/1803.10122


I wouldn't underestimate the power of good tools here. All the software libraries for ML are very easy to get started with, and make it very easy to prototype cool things. It seems like in other applied areas it's a lot more work for less impressive results.


I think that's exactly the problem with ML. You can get interesting looking results fast with very little effort. But then, getting from an impressive demo to something actually useful in the real world is much harder, and will lead you down endless rabbit-holes as you try to improve your results, but things only get worse, not better...


After 5 years of growth in data mining at my giant pharma, this is exactly what I'm seeing. Most ML projects remain toys while the number of them that advance into something useful can be counted on one hand.

(Of course it's a bit hard to assess the impact of a revolution like ML (esp DL) when your company already has hundreds of statisticians who have been employing similar data/experiment analysis techniques for decades, thereby diluting the signal of how disruptive novel forms of ML are within the enterprise.)


CPUs have hit their limits and are not growing. Software has stagnated, recycles concepts from 50 years ago, and is not growing. What's left to study? Either world-scale clouds or the growing field of NNs. Out of the two I'd pick the second because it's either more fun or more unpredictable. Students are making a rational choice.


Even if those fields completely stagnated, there's great progress to be made in high performance computing, security, bioinformatics, and human computer interaction (just to name a few).


Deep learning still at least doubles every two years; it can be estimated from https://scholar.google.com/citations?user=kukA0LcAAAAJ


One thing, though, is that lots of deep learning is becoming simplified to the point where anyone can do it, with tools like Azure ML Studio and whatever the AWS offering is called (I forget).

Of course, there will always be a need for people with deep expertise, but unless you work in a very technical or prestigious field, how many compiler experts have you worked with?


"how many compiler experts have you worked with"

Did you mean CS PhDs? Because that's a better analogy. My answer would be: with many, because I am one of them.


Honest question: how common is this in tech hubs? Is the talent level that much higher? I work in the enterprise world and few people have advanced degrees.

I mean, I know there are some shops that are doing really advanced stuff, but I'm talking about your normal tech hub employer.


I don't think that people are ranked by talent. This is also true for academia. You don't have to be more talented than the next guy to show better results according to some metric. Right place + right time + avoid confrontations = a huge boost for your success.

See http://data-mining.philippe-fournier-viger.com/too-many-mach...

Roughly speaking, about 50k papers are published every year on arXiv/ML by perhaps 20k different researchers. Since some people still don't publish there, you might say there exist about 50k ML/AI/DL researchers.

Say 5k of them are pushing the field forward, most of them have a PhD. The remaining 45k deliver various local optimizations/adaptations, many of them don't have a PhD. Now it depends on the size of your normal tech hub employer. If it is big, then you have a few people from 5k and many from 45k. Otherwise, a few people from 45k.


I don't work in a tech hub but in a city with a world-leading university. Having worked for a local consultancy and a startup, I'd say over half of my colleagues have had technical PhDs and it's rare to meet people without at least a Masters. I now work for a remote company and it's probably closer to 65% with over 200 staff, but we are CS research oriented.

I'm currently studying a part-time MSc in CS because not having one is notable here (and being around academic people has inspired me to learn more).


PhD doesn't imply talent level is higher, just knowledge.


Pretty rare at your average non-tech company. Increasingly common as you get into major regional tech hubs and software as the product companies.


But how do you estimate the quality of those publications? Is knowledge about DL exploding, or is it saturating?


This person is the most cited scientist in machine learning (when measured by # of citations per year) and is about to become the most cited scientist in the whole world. He has a lab of about 100 people, and most of his works are at least good. I just provided an estimate. Once you see his citation count saturate (probably in 5-10 years), you may guess that the hype is over.


But I suppose that the amount of citations can be large even if the underlying research outcomes approach a horizontal asymptote.


It can be. Still, he made some contribution to the progress in that field. Personally, I don't like the 'Canadian Mafia', I tend to prefer Schmidhuber.


Sounds like someone reeeeeallly good at nudging envelopes and writing grant proposals. Will be mostly forgotten a couple hundred years from now. Academic playwrights thought Shakespeare was a joke in his day.


"a couple hundred years from now" by whom? He contributed to AI, will not be forgotten by AI.

Edit: if you don't get what I mean by "not be forgotten": he will be the most cited scientist, so why would AI exclude all his papers from some of its training sets (assuming that at some stage of its development AI will learn how humans do research)? Sounds unlikely.


By human beings. And he did not contribute to AI capable of contemplating human research like a human does, because no such AI exists yet. When it does exist (if ever), it will surely not consider this person or his hundred lab workers to have contributed to AI. It will consider them to have contributed to applied statistics.

Edit: I realized these comments might be very mean to you (assuming you're the lab director with the hundred workers). I should disclaim that what I say is something I assume is true about most people who gloat about their citation counts, but it's certainly not true about all such people, and might not be true about you. Myself, I'm a bit resentful because I'm so bad at the whole academic game, so that probably makes me quite biased.


>> assuming you're the lab director with the hundred workers

You make wrong assumptions and proceed. I am not that person and thus I don't get why you attack me personally.

While AI does not exist yet, that guy contributed to its development, regardless of your opinion about it.

Don't attack people on HN.


>Don't attack people on HN.

Don't attack people on HN.


True. I edited it to be nicer.


What do you mean by "Deep learning still at least doubles?"



This narrative has also taken over the VC world.



