
"Two large YC companies (both with machine learning teams) have told us that they consider interest in ML a negative signal."

I wonder why this is. Since ML/AI are currently "hot", maybe those programmers are trend followers? Or maybe interest in ML is correlated with being a junior programmer (more senior programmers specialized back when ML/AI were not so cool and are consequently in different domains)?




Not at a YC company, but yeah, my guess would be that it's hard to get the trend followers on board with other stuff. Anecdotally, I've seen a lot of candidates (especially at the recent-undergrad/recent-MS level) who've taken some ML courses because it's trendy and sounds interesting, but who (a) don't have a serious enough interest in it, or enough knowledge of how to apply it, to be a good fit on our ML teams, and (b) aren't open to other roles, because those roles sound less cool or because they have a perception that the day-to-day work will be more tedious.


Author of the post. I think this is exactly right. I don't know what motivated the companies to put that policy in place (they just told us that they had this preference), but I can speculate. There is an epidemic of interest in ML: four out of five college grads we speak to list it as an interest. I think that interest has grown to the point where it's no longer any kind of signal about technical strength, and is perhaps a signal that the candidate will not be flexible about what they work on.


I'd be curious to hear about the inverse. Have you found skills/disciplines that companies are highly interested in but that few candidates have?


What about those of us academic programmers who have real experience and knowledge of ML? Is that still a negative signal? Or does the academic part make it worse? :)


Has age, race and gender discrimination been looked at?


Totally agree with this.

Many candidates I see that have a "strong interest" in machine learning have no idea wtf machine learning really entails; they are listing it as an interest because it is a buzzword and "sounds hard". Most of them have just used scikit-learn once or twice, and have no idea about statistics.

(also not at a YC company)


Most likely because if you are doing ML and do not have a PhD (or previous experience), you are just looking at calling a library function that you do not understand. The majority of 'machine learning meetups' (not in the Bay Area) are attended by programmers who are looking to figure out how to call an R package that will give them recommendations or group similar items in a list (clustering).

edit: I just read the other replies to this post. I believe that most startups with machine learning teams are doing more than just calling R libraries; most development work that I've done for myself and for teams has been tooling and operationalizing data infrastructure (i.e. data engineering, not data science). However, if you just need a simple recommendation feature for an app, then calling library methods without 100% understanding may be enough. Still, calling library methods without understanding what's underneath is a bad trait in a programmer, e.g. calling the sort() function without understanding quicksort.
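To make that concrete, here's a hypothetical sketch (made-up data, scikit-learn instead of R) of the kind of one-line library call I mean:

```python
# Clustering similar items with a single library call, the way a
# "just call the package" programmer would. The data is made up.
import numpy as np
from sklearn.cluster import KMeans

items = np.array([[1.0, 2.0], [1.1, 1.9],   # two similar items
                  [8.0, 8.2], [7.9, 8.1]])  # two more, far away
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(items)
# The first pair lands in one cluster and the second pair in the other,
# with no understanding of k-means required from the caller.
```

That one line works fine here, which is exactly why it's tempting to stop there.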


I am a statistician working in data science. I see lots of programmers who show interest but do not have depth of knowledge in mathematics and statistics. They can apply libraries and do fine 80% of the time, but they do not have the educational background to sidestep assumptions and pitfalls.


>Most likely because if you are doing ML and do not have a PhD (or previous experience), you are just looking at calling a library function that you do not understand.

I'm assuming that's your interpretation of the industry mindset, and not a view you personally hold.

IMO it's a naive assumption, and it would be trivial to test for it in an interview.

I have an interest in ML for a specific domain, no PhD, and it wouldn't cross my mind to try to use it as a set of shrink-wrapped library functions.


I can't speak on their behalf, but I can see how this would be interpreted as a negative signal. If someone is really excited about ML stuff, and you aren't going to hire them for your ML team, then I would be afraid that they will be disappointed that the work we give them is less about solving complex problems and more about getting stuff done. There's also the issue that this person is probably going to jump ship the second they get an offer to work on ML stuff.

As someone with a strong interest in math and theoretical computer science, I think their bias is fair. I think I'm a pretty good programmer, better than most I've met with similar experience, but I'll admit that I don't care as much as people who are really passionate about building stuff. They will write sloppy code sometimes, but they'll also focus on getting stuff shipped, whereas I naturally want to focus on solving interesting issues like that bug which only seems to happen in 1 of 20 unit-test runs but that no customer has reported.

It took me some time to learn how industry differs from university programming, and if I were recruiting, I don't know if I would want to deal with the hassle. Now that I know, I accept industry for what it is and make sure I do my best, even when it doesn't necessarily align with my own interests, but that takes some maturity (not that I am particularly mature), and I can see why hiring managers would rather avoid the risk when hiring and firing is very expensive and annoying.


Because ML/AI are feature enablers. They make a good product better, but they won't make a product successful. It's a signal that people are more interested in solving technical problems than solving business/product problems.


Also, a large percentage of ML/AI projects are scams. And even the people who aren't scammers tend to massively underestimate the amount of work required to make something good. It doesn't surprise me at all that interest in these technologies could be a red flag on multiple levels.

Sure, there are plenty of great startups built with these technologies, but they also tend to be the 'and then a miracle occurs' step of the tech industry.


So would Google without good AI still be as good because of the neat interface? Would self-driving cars be just as good without good AI?

I get what you mean, and it might be true for a lot of products. But there are also products where good AI is the core.


Use cases with ML out front are rare. I would argue that "self-driving" is just a feature of an automobile; the expensive part is building and delivering a half-ton hunk of precision-engineered metal. People were buying cars long before they were self-driving; and I doubt that self-driving will add a whole lot to the cost of a vehicle. Even then, the hard part of building a business around autonomous cars is obtaining safety certification and improving public perception of autonomous driving. All the major self-driving algorithms will be largely public domain before that happens.

Likewise with Google; there were search engines long before Google. Hell, Google first appeared as the search technology powering Yahoo! long before it had its own presence. Granted, in this case ML enabled the "killer app" of generating relevant results and allowing ad targeting, but use cases where ML is as critical to the product as it is for Google are rare. More typical are things like Netflix's recommendation engine: the value of the service is in the video library; the recommendation engine is just another avenue for content discovery, and it is increasingly curated rather than automated, for promotional reasons.

All of this matters. ML is great, but ML results are often so narrowly scoped that you need to identify your product scope first, then find an ML solution that helps. And even then, at small scale you can often "fake" the impact of ML via manual labor or "doing things that don't scale" (i.e. operating the service via manual labor at a loss with the hopes of adding an AI component to handle that function later in a scalable fashion). If the product doesn't resonate with the market, all the ML in the world won't help it succeed.


My experience is that programmers I've worked with who strongly identified as being interested in ML tended to be prone to wanting to build very complex ML-based systems when much simpler (albeit less sexy) solutions would suffice. Certainly this is a generalization, but I've seen it enough times to be wary.


I don't identify as an ML person, and I fight this type of complexity all the time. Start simple, then move to complex. Starting complex is simply overkill for so many problems.
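Concretely (with hypothetical numbers): before reaching for a model, I'd try the dumbest baseline that could possibly work, e.g. predicting tomorrow's number as the mean of recent history.

```python
# "Start simple": a mean baseline as a first forecast.
import statistics

history = [120, 132, 128, 125, 130]  # e.g. daily signups (made up)

def baseline_forecast(values):
    """Predict the next value as the mean of the past values."""
    return statistics.mean(values)

prediction = baseline_forecast(history)  # 127
```

If that baseline already meets the product need, a complex ML pipeline only adds cost.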


A lot of ML work is currently ad hoc, which is not what you want in most software design and development. Accomplishment in ML and interest in ML are very different things. With the hype these days, an interest in ML is almost like an interest in making the computer do magic.


I wanted to ask the same question.

Harj, is it people with ML/AI interests but without experience? Does this also include PhDs? On that note, are "academic programmers" CS PhDs or from other fields?


Author of the post. We did not distinguish by experience level in the question that we asked about tech / product. But experience is VERY helpful in a job search, so an experienced ML/AI person would almost certainly get more interest than a product person fresh out of school. We've not seen much weight given to CS PhDs; someone fresh out of a CS PhD program is viewed much like someone out of a BS program. Industry experience is a big help.


Hmm, I think you may find different responses based on experience level with something like ML in particular. I'm a product focused engineer (former PM) with a master's degree and an unfinished PhD in machine learning. I've also been a mentor at a 12-week boot camp where students did machine learning projects, so I totally get the "it's sexy but nobody actually bothers to understand it" argument. But with my experience level (several years real world experience and a Big Name) I think I've had more interest due to my specific background, not less.


They think ML people aren't interested in infrastructure. That's not entirely true. Many people with actual production ML experience understand and want to work on the data infrastructure, because it's all too easy for someone who isn't going to build products on top of it to fuck it up.


In addition to what's been said, I think that ML/AI, especially AI, attract people who are interested in the Big Questions and aren't exactly detail people.


Probably because ML is the hot new thing. People who are interested in a hot new method (especially if they're "excited to learn!") rather than in the results that method can produce (a better product) are prone to running down technical rabbit holes instead of shipping.


Maybe they are looking for more practical programmers at the moment?



