Where I live, far away from tech hubs, it's almost impossible to land a job in ML/AI/DS unless you have (at minimum) a Masters degree in something relevant. Preferably a Ph.D. and solid experience to show for it. I know because I work in the field, and lots of F500 dinosaurs are just now waking up, but they're unfortunately still clinging to their old ways of hiring people.
Schools all over are also picking up the slack, starting to offer specialized graduate degrees in those domains. When I got my degree in ML, it was a sub-field at my school's engineering department, mixed in with the signal processing and control theory groups.
When I was first trying to get a job, the main problem was explaining what I could actually do and bring to the table, since a lot of the recruiters and managers had no idea what Machine Learning was. Then you said "It's basically Artificial Intelligence" and they were instantly wooed.
I'm a senior data scientist at a VC-backed startup, in a hybrid data scientist / machine learning engineer role where I build and train ML and deep learning models and also build the scaffolding to support their production usage. But my previous roles included business analyst, project manager, and research analyst. My undergrad education was in Creative Writing and the social sciences.
While I kind of accidentally transitioned into this career, how I got here is similar to most folks coming from a different background: lots of self-study and experimentation. I think one of the challenges of transitioning into ML and deep learning is that there are so many applications, domains, and input formats. It can be overwhelming to learn about vision, NLP, tabular, time-series, and all the other formats, applications, and domains.
Things solidified for me when I found a space I found compelling and was able to dive deep into it. You kind of learn the fundamentals along the way through experimentation and reflection. My pattern was: pick up a model or architecture, learn to apply it first to get familiar with it, experiment with different data, and then go back and build it from scratch to learn the fundamentals. That, and I read a lot of papers related to problems I was interested in. After a while, I started developing intuitions around classes of problems and how to engage with them (in DS you rarely ever solve the problem; there's always room to improve the model...)
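To make the "build it from scratch" step concrete, here's a minimal sketch of the kind of toy reimplementation I mean, using logistic regression in plain NumPy (the model choice and the synthetic data are just illustrative assumptions):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_logistic_regression(X, y, lr=0.1, epochs=1000):
        """Fit weights with plain batch gradient descent."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            p = sigmoid(X @ w + b)           # predicted probabilities
            grad_w = X.T @ (p - y) / len(y)  # gradient of the log loss w.r.t. w
            grad_b = np.mean(p - y)
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    # tiny synthetic sanity check
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    w, b = train_logistic_regression(X, y)
    print("train accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))

Writing even something this small by hand forces you through the fundamentals (the loss, the gradient, the update rule) in a way that calling a library never does.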
I have a serious question (not for bashing)
Can you please describe what part of your job CANNOT be automated?
Really, none of it is automatable. I'm working on developing NLP features for our product (question answering, search, neural machine translation, dialog, etc.). Our customer data is diverse and in different formats, and their use cases are all distinct. So most of my work is novel applied research and development.
I do assume that the data formats are different (although I also assume that they're all some sort of text file with known fields and types).
But after you set up the dataset definition and define the schema, can't the rest be based on neural search?
Moreover, isn't there a state-of-the-art architecture for each of the tasks? E.g., seq2seq for machine translation. Can't you just use that as a baseline and let a NAS engine search the hyperparameters, etc.?
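For concreteness, here's a minimal sketch of the kind of automated search I mean, using Optuna over a scikit-learn baseline (the model and the search space are just illustrative assumptions):

    import optuna
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, random_state=0)

    def objective(trial):
        # the search space here is a made-up example, not a recommendation
        params = {
            "n_estimators": trial.suggest_int("n_estimators", 50, 300),
            "max_depth": trial.suggest_int("max_depth", 2, 16),
        }
        model = RandomForestClassifier(**params, random_state=0)
        return cross_val_score(model, X, y, cv=3).mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=20)
    print(study.best_params)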
Most of our problems don't cleanly map to existing NLP tasks, and the state of the art often isn't as high as you'd think on many of them. For example, consider machine translation in relation to a beta feature we're building that lets you ask questions of arbitrary single tables (kind of like WikiTables), where we don't know the schemas in advance or the questions users may ask. Beyond the issue of having quality annotated data (which we often don't, the classic cold-start problem), we need to do more than simple model tuning. It requires building custom architectures.
But even when you consider known tasks, state-of-the-art models often don't produce those same results on real-world data. Putting aside data quality issues (another huge challenge for us), in the context of question answering the training data rarely captures the distribution of natural language in the wild. People ask questions differently and use language that doesn't match the content in our knowledge base.
I could go on, but the short answer is that it's not as straightforward as you think. Even at Google scale, machine learning is not solved. For everyone else, with less data and domain-specific use cases, it's even harder.
As you mentioned, some tasks in NLP, like full conversation, are not solved and will likely never be solved by deep learning alone (at the level of the whole conversation). There should be some sort of symbolic AI or taxonomies/knowledge graphs (like RDF) in combination with deep models.
Sure, but hyperparameter tuning and architecture selection take such an insignificant amount of any competent ML practitioner's time that they're pretty much irrelevant.
At least for me, my time is mostly spent:
1. Understanding (or designing) the process that generated the data.
2. Organizing the training schema.
3. Understanding the customer's business problem so that an appropriate ML system can be designed.
4. Doing an initial design of the ML system based on that understanding and then iteratively designing new components for said system based on customer feedback.
5. Developing or researching how to measure model performance (see the sketch at the end of this comment).
6. Searching for alternative data sources.
7. Answering customer and stakeholder questions about the ML system.
8. Implementing the ML system in code.
None of these can be automated with current technology, and there's a reason for that: if it were possible to automate a task, our team already would have.
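To make item 5 concrete, here's a minimal sketch of the kind of custom, cost-weighted metric that has to be designed by hand; the cost values are made-up placeholders, since in practice they come out of the customer's actual business problem (items 3 and 5 above):

    import numpy as np

    def cost_weighted_error(y_true, y_pred, fn_cost=5.0, fp_cost=1.0):
        """Toy business metric where false negatives cost more than false positives.

        The cost values are illustrative placeholders, not real numbers.
        """
        y_true = np.asarray(y_true)
        y_pred = np.asarray(y_pred)
        fn = np.sum((y_true == 1) & (y_pred == 0))  # missed positives
        fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
        return (fn * fn_cost + fp * fp_cost) / len(y_true)

    print(cost_weighted_error([1, 0, 1, 0], [0, 0, 1, 1]))  # one FN, one FP -> 1.5

Deciding what those costs should actually be is exactly the unautomatable part.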
I recently replaced a classifier at work that was using a neural net with a decision tree and some hand-chosen features. It performs a bit better, takes way less time to train, and is significantly more explainable: my teammates asked why it sometimes misclassifies a certain edge case, and because the features and model properties were so easy to understand, fixing the issue was a couple of hours' work and not a case of "who knows".
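For illustration, a minimal sketch of that kind of swap, assuming scikit-learn and stand-in data (the real features were hand-chosen domain features, not these):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # stand-in dataset; imagine hand-chosen domain features instead
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # a shallow tree: fast to train, and every split is human-readable
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))

    # this is where the explainability pays off: you can read the whole model
    print(export_text(clf, feature_names=[
        "sepal_len", "sepal_wid", "petal_len", "petal_wid"]))

Being able to print the entire model is what turned "who knows" into a couple of hours of debugging.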
The cost of errors varies drastically across domains and use cases, so an important part of the job is understanding how and why different models typically fail and making tradeoffs there.
If it's helpful, I dropped out of both schools — the vast majority of my knowledge is self-taught!
This is so incredibly important for me and, based on my conversations, many others as well.
The other thing I struggle with is the feeling that many of the problems I wish to solve are likely also solvable with simpler statistical methods and that I'm just being a poser by trying to pound them home with the ML hammer.
(For reference, I’m an undergrad looking to get into this field)
I think when you shift into pure research, yes, a deep probability, information theory, linear algebra, and calculus background is needed. But at that level, you're rarely writing code and are more likely working at a theoretical level.
1. Most of your time is spent transforming data. Very little is spent building models. (See the sketch below.)
2. Most of the eye-grabbing stuff that makes headlines is inapplicable. My application involves decisions that are expensive and can be safety critical. The models themselves have to be simple enough to be reasoned about, or they're no use.
You might argue that this means what I'm actually doing is statistics.
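As a minimal sketch of what point 1 looks like day to day, assuming pandas and a made-up messy CSV:

    import io
    import pandas as pd

    # a messy sample in the spirit of real-world inputs (values are made up)
    raw = io.StringIO(
        "sensor_id,reading,timestamp\n"
        "A1,3.2,2023-01-05\n"
        "A1,,2023-01-06\n"
        "a1,3.9,2023-01-07\n"
    )

    df = pd.read_csv(raw)
    df["sensor_id"] = df["sensor_id"].str.upper()               # normalize inconsistent IDs
    df["timestamp"] = pd.to_datetime(df["timestamp"])           # strings -> datetimes
    df["reading"] = df["reading"].fillna(df["reading"].mean())  # naive imputation
    print(df)

None of the model-building glamour is in here, and yet this is where most of the hours go.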
It's also one critique I have of the world of academia. When learning ML in academia, 9 out of 10 times you work with clean and neat toy datasets.
Then you go out into the "real world" and instantly get hit with reality: you're gonna spend 80% of your time fixing data.
With that said, I think that 10 years from now, ML is going to be almost exclusively SaaS with very high levels of abstraction, with very little coding for the average user. Maybe some light scripting here and there, but mostly just drag-and-drop stuff.
What's the difference?
Also, most folks I know who are making practical deep learning contributions are doing so by combining their pre-existing domain expertise with their new deep learning skills. E.g., a journalist analyzing a large corpus of text for a story, or an oil & gas analyst building models from well plots, etc.
Also, I really appreciated that one of the training goals for ULMFiT was to be trainable on a single GPU. With these large-capacity models, training is getting crazy expensive and out of hand. Any chance that your future work will keep the single-GPU training goal?
With math (on paper, say), it is hard to tell if you are doing it right or wrong. You can trick yourself quite easily. A compelling proof can have a huge hole.
You can still trick yourself programming -- in a sense, that is what a bug is -- but it is much harder.
The upshot is, I think it is easier to teach yourself math that is applied to a computer program than math on a piece of paper.
Too many people going into ML could skew the supply/demand into making it a worse job option (more work, less pay), like game programming or academia.
The caveat is that I've worked on ML in the past, and I think the work is maybe less intellectual than software engineering: with complex enough models, they become impossible to understand, and you start to just try out ideas based on random intuitions. The thing I mostly like about it is the ability to use math and the more independent style of work: no scrum, less need for cooperation with other team members, etc.
What's happening, at least in Australia right now, is that contract rates (a good indicator of the supply/demand ratio) have halved for ML engineers. Which means (a) a lot of people want to be ML engineers and (b) there aren't that many jobs for them.
It makes finding good positions really hard.
Also thanks for telling us how you became a practitioner. It's definitely relatable and not a humble brag at all.