This. Universities and online challenges provide clean, labeled data and score you on model performance. The real world will provide you... “real data” and score you (hopefully) on impact. Real data work requires much more than modeling: understanding the data, the business, and the value you create is essential.
As per #6, better data and model infrastructure is crucial in keeping the time spent on these activities manageable, but I do think they’re important parts of the job.
I’ve seen data science teams at other companies working for years on topics that never see production because they only saw modeling as their responsibility. Even the best data and infrastructure in the world won’t help if data scientists do not feel co-responsible for the realization of measurable value for their business.
Training integrative data professionals could be a great opportunity for bootcamps. Universities will (understandably) focus on the academically interesting topic of models, while companies will increasingly realize they need people with skills across the data value chain. I know I would be interested in such profiles. :)
Most people figured that with such a simple assignment (not significantly harder than the first one, which was also easy-ish) they could put off doing it until the last moment.
Most people failed.
This real world data needed hours upon hours of cleaning before it was in any way useable. Of course, the teacher knew this, gave bonus points to the ones who did start in time, and then extended the deadline as he had expected to from the start.
Never again will I underestimate the dirtiness of real world data. One of the best teachers I had.
Internships are supposed to fill this gap, but it'd be nice if all students could get a taste of real-world systems and data. For tech students, schools could perhaps partner with their own IT departments to expose them to real, messy data. Maybe there are some teaching datasets with over a billion rows that people could play around with.
This times 1,000.
The biggest surprise to me when I got out of school was how messy things were - data, systems, management, priorities...everything.
When I went back to grad school, we had arguments about the assumptions. It was a total 180 from undergrad, and much more useful. So when I came out of grad school, I was able to deal with the ambiguities - maybe even thrived because I understood them.
Hard to believe this. Don't these degrees require rigorous laboratory assignments where the student learns to differentiate best-case scenarios from real-world uncertainties? STEM is not just some IT certification.
The problem is that most real-world problems take too much time to solve properly to fit into any modern curriculum.
In an ideal world, we'd have separate tracks for people entering industry versus academia/research, but that's a long way off.
Not to miss the point, but I don't think "all my students failed" is the mark of a good teacher. It sounds like the teacher failed to prepare their students for the nature of the assignment. Perhaps he was as surprised as they were when they all failed, as I doubt failing most of his class was his intention.
I think what he meant is they 'failed' to get it completed on time and it was meant as a teaching lesson.
I don't get why people consider building a model to be the "fun" part. That's mostly feeding data in, watching a loading screen, and then observing the output.
That's not fun, that's boring. The fun part is looking at the data and gleaning all these potential patterns from it, seeing what potential is there and what could be. Likewise, learning the business side and seeing what is possible that no one has considered is great fun too.
My favorite part is feature engineering. Pre-processing and cleaning is fun too, but morphing the data into formats that extract a diamond from coal is a lot of fun, and what data science is all about. Clicking go on some ML algo is just icing on the cake, seeing it reveal bits maybe even I overlooked in the data.
If you like ML, why not be an MLE? That's what MLEs do, and it's a more desirable job. DS is all about the research: discovering and learning new information, and making the impossible possible.
Everywhere else, there is only DS, and it involves everything.
To answer your first question though, the training and testing of these models is fun because it feels like a puzzle game: did all my understanding and preparation of the data (and the business) pay off and the model does its job as expected? Is there something I’m missing? What’s the simplest model + configuration I can use that produces acceptable results and what does that say about the problem space? Can I combine models in some way to get the results? Is nothing working because it’s an ultimately fruitless exercise and our hypothesis is wrong? Or is there something we’re missing that is in turn the reason the model is missing something? Etc etc.
Then as the output you get something that ingests some data and then makes a decision with it! That’s cool to me.
Also, I know what is possible and impossible before I start writing code (if you don't count EDA code). There are exceptions, like when something should be possible but the data turns out to be bad in ways the EDA didn't reveal; thankfully I've never had that happen. I always perform a feasibility assessment before anything else.
Not to imply what you're doing is somehow incorrect. Problems can vary quite a bit and I recognize that. For example, there have been times where I've had to mine to see if anything is there, doing ML over it to validate a hypothesis then using that information to create a new hypothesis, rinse and repeat. That's scary, because I could turn up nothing. I haven't done a lot of mining I admit though. Usually my problems are much more obvious from the get go, or much more research intensive.
One time I did three months of reading papers on arxiv.org just to figure out if something was feasible and how to best do it. Though that was definitely not a standard problem.
Exactly! This is the reason why I love my job. It gets even better when you uncover a non-intuitive insight.
This is a simple yet very powerful rule that helps us quickly discard ideas that:
1. Do not have a robust testing mechanism. No model is useful unless it performs in the real world. Measuring this is a severely non-trivial problem with multiple operational considerations.
For example: are you able to run and manage true control/test groups? How do you build a “reverse” data pipeline to verify your models? And if you are required to update model weights constantly, where and how will you update the model parameters?
2. Conversely, some of the most impactful products I worked on were probably delivered in simple Excel sheets or had just under 20 lines in my Jupyter notebook. Not every business problem demands a deep learning network. For example, we worked on a data-driven capacity-forecasting exercise for a call centre; the sophistication of the model was the last thing on my mind, as I had to focus on careful interpretation and data collection.
3. Data science departments should sit closer to the business than appears to be the current trend, at least for business data science teams (as opposed to technical data teams focusing on product analytics, performance, etc.). Courses and academic programs, I think, have developed a bias towards tools and techniques without the underlying analytical and interpretative skills needed to work with data. For example, a new data scientist on my team delivered excellent code but couldn't detect logical misses in the data (e.g. losing data during processing, or using columns with almost all values missing).
On the other end of this spectrum, we are still at the lagging end of the hype bubble, so there are many top leaders expecting to plug in “data science” and realise billions of dollars in savings, new sales, etc.
Now with "data science" you've taken a step further, and instead of applying the math to lab reports on meticulously filled out forms, you're going to aggregate all the messy sources you can get your hands on. Of course your headaches will multiply.
The first homework assignment in the stats class I teach is to clean data that the class generated with directions they all perceived as clear. It's just about the most hated assignment I have ever given. Amazing how many ways there are to encode the gender of an experimental participant.
Male, M, m, male, Man, ...
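To make that concrete, here's a minimal sketch of the kind of normalization this assignment forces (the values and the canonical mapping are illustrative, not the actual class data):

```python
import pandas as pd

# Hypothetical free-text gender entries, like those collected in the class exercise.
raw = pd.Series(["Male", "M", "m", "male", "Man", "FEMALE", "f", " woman ", None])

CANONICAL = {
    "m": "male", "male": "male", "man": "male",
    "f": "female", "female": "female", "woman": "female",
}

cleaned = (
    raw.str.strip()     # stray whitespace
       .str.lower()     # case variants
       .map(CANONICAL)  # anything unmapped becomes NaN, flagged for manual review
)
print(cleaned.value_counts(dropna=False))
```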
This article mentions that leadership often doesn't support data science, but I think it actually doesn't go far enough. Leadership doesn't just have to support the data scientists, it has to actually tell other teams to prioritize data science projects over what they are currently doing. Since these data science projects are riskier than standard projects, it makes sense that leadership doesn't often do this (and focusing on the standard projects could be the right call). However, it also means that it's very hard for data scientists to create business value.
The really interesting research type work (Bayesian modeling, convolutional neural networks, etc.) takes a long time to implement and may produce no useful results, which is a really bad outcome at a company that measures performance in six month units of work and highly values scheduled deliverables and concrete impact. Many of the data scientists I work with tend to stick to methods that are actually quite simple (e.g., logistic regression, ARIMA) because these at least deliver something quickly, despite the fact that many of my coworkers come from research-heavy backgrounds.
In my org, there's nothing stopping anyone from pursuing advanced machine learning; for the most part we set our own agenda (in fact, determining priorities is part of the job role). And some people do in fact go after state-of-the-art ML, with some really cool results to show for it. But in terms of career progression and job safety, the risk is just way too high, at least for me personally. I save the highly mathematical stuff for a hobby.
Edit: while this may sound a bit negative, I will add that my description of data science isn't a complaint per se; I am mainly trying to inform those who are seeking a career in data science of what to expect compared to what is often promised. The work that is most valuable to a business is not exciting all of the time, but I don't think there is another job in the tech industry that I would find more enjoyable than my current one at the moment.
I think the sad truth is that this is the reality of work no matter if you are a Data Scientist or not. What you thought you would be doing to show your worth and climb the ladder gets blurred in with KPIs you didn't set, politics you didn't create, goals and deadlines you had no input into, etc. One of the unique challenges you can face as a Data Scientist is that you may interface with people in many different groups, all of which have different goals which may be in conflict with each other. Compare this to other roles where you ultimately only follow the goals of the organization you report into.
It's often the case, I remember when that stupid Amazon infographic was going around about decreased load times meaning big upswings in conversions.
A client paid for a significant project to reduce load times, which we succeeded at to a huge degree, with most pages going from 1.5-3 seconds down to 250-500 ms. Absolutely no meaningful swing in conversions at all. I've done this a few times since, but I've never seen conversions move at all after performance improvements.
Nada, zilch. I honestly think it's absolute bullshit. I've always suspected since that it was someone massaging figures in Amazon to justify their job.
People played it mostly during breaks: lunch breaks (our peak load was during lunch hours in the US), "smoke breaks", etc. So they didn't have a goal; they just had time to spend doing something. Each gameplay took anywhere from 1-5 minutes, and users averaged 5 plays per day. Our guess was the extra load time pushed people past exactly the wrong threshold, so they were able to play one less game during their time allotment.
Edit: we were curious and A/B tested it and saw the effect too. We didn't run it for too long, but a 15% difference is quick to verify when you're measuring something that happens 35 million times per day.
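For intuition on why a 15% difference verifies quickly at that volume, here's a back-of-the-envelope power calculation with statsmodels. The 10% baseline rate is an assumption for illustration; only the 15% lift and the 35M/day volume come from the story above:

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.10           # assumed baseline rate (made up for illustration)
lifted = baseline * 1.15  # the 15% relative difference reported above

effect = proportion_effectsize(lifted, baseline)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, power=0.8, alpha=0.05)
print(f"~{n_per_arm:,.0f} observations per arm")  # a tiny slice of 35M events/day
```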
Well, the first rule should be to look skeptically at anyone whose "analysis" involves something their core business provides/sells. Facebook and Google have been pushing data-driven narratives about how effective their advertising is, and yet as a data scientist working at a large Fortune 500 company, we were never able to show meaningful impact anywhere close to what was claimed. This was met with pushback: before my team was created, the company relied on external analytics vendors who always came back with results that were magically what everyone was expecting/hoping for. But when my team tried to recreate what they had done, they would withhold information, claiming it was "trade secrets", or what they did provide was riddled with egregious errors.
I actually think that is the biggest argument for why every company should have some kind of data science team. There are certainly important predictive models and analytics to be built, but the most consistent ROI would be keeping the company grounded and not dropping huge sums of money on the trendiest snake-oil analytics/AI solutions being hawked by vendors.
This was why I left my last job managing a data science team at a large company. It's nearly impossible to compete with a slide deck from an external vendor that shows exactly what people want to see. Especially when decision-makers and check-signers move on to different jobs in 2 years, so there is nobody left to answer why it was done in the first place. Arguing against those vendors brings out the worst in the interested parties, and you become the bad guy.
I tend to suspect that the effect of pricing makes a 2.7-second difference in load time negligible. A 3-second load just isn't a large cost, even if you run into it repeatedly.
The use case you've described has a defined problem and a measurable metric. Problem: we think load times influence conversions. Metric: measure load times and see if they are correlated with conversions. Maybe in your case somebody decided to skip the research part and just paid to reduce load times.
Imagine a totally different scenario. You work for an established (30+ years old) company that sells consumer goods. Executives approve a $25 million budget to "improve the customer experience" over the next 3 years.
This directive goes to all the various organizations: Sales and Marketing, Product Development, Technology, Customer and Market Research, Customer Support, etc. The various orgs have 3 months to come back to executive management to justify how much budget they need and their execution strategy. Each org thinks they are the key mover in improving customer experiences and wants as much of that budget as possible. Every org works at a different speed and with different philosophies (e.g. all work is done in-house versus some or a lot of work done by external agencies).
Let's add some more reality into this. Even if the CXO of an org thinks they don't need to be in this process, it looks bad if they don't say they have a strategy and need budget. There's also a significant chance that 9 months into this project somebody will get restless and the whole initiative will get restructured with different timelines and goals.
I could go on, but armies of analysts and data scientists will get pulled into this to drive "data-driven decision-making." A lot of the expectation will be that the "smart people" will show that each org's particular bias is the most important one and deserves the budget.
It's hardly an environment for anybody to do rigorous analysis or for anybody in an Analytical role to shine. Think the scenario sounds insane and made up? It's not. Welcome to big-co.
This probably describes just about every job in a for-profit business.
Applying and interviewing for data science jobs is a total nightmare. You are competing against 100s or even 1000s of applicants for every job posting because someone said it was one of the sexiest careers of the 21st century. Further exacerbating this, everyone believes that data is the new oil, and that large profit multipliers are just waiting to be discovered in the virgin data companies are sitting on. All that is missing is someone to run some neural network or deep learning algo on it to discover the insights that nobody else can see.
The reality is that there is an army of people who know how to run these algos. MOOCs, blogs, YouTube, etc. have been teaching everyone how to use these Python/R packages for years. The lucky few who get that coveted data science job can't wait to apply these libraries to the virgin data, only to find they have to do all kinds of data manipulation to make the algos even work, which takes days and weeks of mundane effort. Finally they find out the data is so lacking that their deep learning model provides very little actual business value. It is overly complicated, computationally expensive, and in the back of your mind you know you can get the same results with some simple logic.
Managers who don't understand data science fundamentals learn from the news and have their data scientists implement those buzzwords so they can look good in front of their bosses.
I think there is a place for data scientists who understand the fundamentals of the models out there and know when you should not use them. Data science is also increasingly a subset of software engineering, and a good data scientist in a tech company should be able to code well. I also think there is not some huge unmet demand for data scientists, just a huge amount of hype and managers wanting to look good by saying they managed a data science team.
You see, decision-making involves (1) getting data, (2) summarizing and predicting, and (3) taking action. Continuous decision-making -- the kind that leads to impact -- involves doing this repeatedly in a principled fashion, which means creating a system around the decision process.
For systems thinkers, this is analogous to a feedback control loop which includes sensor measurements + filters, controllers and actuators.
(1) involves programmers/data engineers who have to create/manage/monitor data pipelines (that often break). This is the sensor + filters part, which is ~40% of the system.
(2) involves data scientists creating a model that guides the decision-making process. This is the model of the controller (not even the controller itself!), which is ~20% of the system. Having the right model is great, but as most control engineers will tell you, even having the wrong model is not as terrible as most people think because the feedback loop is self-correcting. A good-enough model is all you need.
(3) involves business/front-line people who actually implement decisions in real-life. This is where impact is delivered. ~40% of the system. This is the controller + actuator part, which makes the decisions and carries them out.
Most data scientists think their value is in creating the most accurate model possible in Jupyter. This is nice, but in real-life not really that critical because the feedback-loop inherently moderates the error when deployed in a complex, stochastic environment. The right level of optimization would be to optimize the entire decision-making control feedback loop instead of just the small part that is "data science".
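As a schematic only (all names are illustrative; this is the shape of the loop, not anyone's production system):

```python
# Schematic of the decision-making feedback loop: sense -> predict -> act -> correct.
def run_decision_loop(get_data, model, act, measure, update, n_steps=100):
    for _ in range(n_steps):
        x = get_data()               # (1) sensors + filters: the data pipeline, ~40%
        decision = model.predict(x)  # (2) the model guiding the controller, ~20%
        act(decision)                # (3) controller + actuator: business acts, ~40%
        error = measure(decision)    # observed outcome vs. expectation
        update(model, error)         # feedback: even an imperfect model self-corrects
```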
p.s. data scientists who have particularly low-impact are those who focus on producing once-off reports (like consultant reports). Reports are rarely read, and often forgotten. Real impact comes from continuous decision-making and implementing actions with feedback.
Source: practicing data scientist
That is akin to a sensor input, and one that is surprisingly important. Without a good CRUD form, data either doesn't get entered at all, or is entered in crude, unvalidated ways, like loose Excel files with formatting that is all over the place.
Yeah, well there's your problem, my dude. I've been doing what might be described as "data science" since I quit physics in 2004. Aka before the term existed. It's a great area to work in for intelligent people who want to use their brains to impact the real world; vastly better than what people get paid to do in physics. If customers don't know what the tools can do, it's because you as the data scientist have failed to explain it to the customer. If your work product isn't in front of the decision makers, you've also failed: they can tell the bottom line impact and will reward you accordingly. Sometimes there is no data in their data; they should know that up front.
As for whining about poor data quality: n00b. What do you think they're paying you for? Nobody gives a shit what people do in Kaggle competitions.
> If your work product isn't in front of the decision makers, you've also failed: they can tell the bottom line impact and will reward you accordingly.
This one in particular stood out. There is an aspect of salesmanship (or navigating corporate hierarchies) to the role. Things will not be obvious to the decision makers. Perhaps the data scientist has to take some responsibility in bringing their work to the fore.
The worst cases I have seen are when executives take a problem and ask data scientists to "do some of that data science" on it: looking for trends, patterns, automating workflows, making recommendations, etc. This is high-level, pie-in-the-sky stuff that works well in pitch meetings and client meetings, but when it comes down to brass tacks it leaves very little vision of what is trying to be achieved, and even less of a viable execution path.
More successful deployments have had a few things in common:
1. A reasonably solid understanding of what the data could and couldn't do. What can we actually expect our data to achieve? What does it do well? What does it do poorly? Will we need to add other data sets? Propagate new data? How will we get or generate that data?
2. The business case or user problem was understood up front. In our most successful project, we saw users continuously miscategorizing items on input and built a model to make recommendations. It greatly improved the efficacy of our ingested user data.
3. Break it into small chunks and wins. Promising a mega-model that will do all the things is never a good way to deliver aspirational data goals. Little model wins were celebrated regularly and we found homes and utility for those wins in our codebase along the way.
4. Make it accessible to other members of the company. We always ensure our models have an API that can be accessed by any other services in our ecosystem, so other feature teams can tap into data science work. There's a big difference between "I can run this model on my computer, let me output the results" and "this model can be called anywhere at any time." (A minimal sketch follows at the end of this comment.)
While not exhaustive, a few solid fundamentals like the above I think align data science capabilities to business objectives and let the organization get "smarter" as time goes on as to what is possible and not possible.
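As a minimal sketch of point 4, assuming FastAPI and a scikit-learn model serialized as model.joblib (both are illustrative choices, not a prescription):

```python
# A model behind an endpoint any service can call, instead of a local notebook.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    return {"prediction": model.predict([features.values]).tolist()}
```

Served with uvicorn, any feature team can POST to /predict without knowing what's inside.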
Not utterly without merit, but fairly blind fishing nonetheless.
I've been in various external and internal facing Data Science roles for 8+ years, and this is spot on. IME it's the #1 reason Data Science projects "fail." If you can replace "do some of that data science" with "do some of that black magic", that probably means nobody actually checked that the data and the problem made sense in the first place. But somebody somewhere already committed to it, so the Data Science team has to deliver it.
While I agree on the point, there's a case that's arguably worse: When those executives hire Data Scientists and then ask them: "So what can we do with Data Science?"
The point of data scientists and the related roles listed in the article is not to just churn out the fun stuff, but to wade through the institutional and technical muck and mire it takes to bring the fun stuff to bear on a relevant business problem, and to communicate the results in a way that people of all walks can understand.
But even in this day and age with ML being the new hotness, you will find people who are quite happy to work on infrastructure and don't have a huge amount of interest in training models themselves, and it is probably a lot easier to hire them than people who can do both, and you may get better results from actual specialists.
I suspect, if there are lots of relatively simple ML problems, then a generalist with integration chops will be more effective in getting them out quickly and "good enough". The specialist may take too long on models that are too heavy and impractical.
If there's one big ML problem (Google search, Netflix recommender, Amazon search, etc.), where an additional 1% makes a difference, then yes, a specialist DS/modeler is probably preferred.
Larger, older org/heavier existing infra/more specialized culture will also tilt the scale towards specialists.
I also think it's unfair to specialists to say they will always overcomplicate things more than others, I've seen plenty of generalists with researcher envy do the same thing.
The problem with all this data talk isn't just about implementation or bad structure, the limitations of putting all your bets on inductive reasoning are systemic.
The insight that economists had in the 70s and 80s was that reasoning from aggregated quantities is extremely limited. Without understanding the generators of your data at a structural level, trying to create policy based on outputs is like trying to reason about the inhabitants of a city by looking at light pollution from the sky.
My guess why data science so rarely delivers what it promises is because you can't get any value from historical data if your circumstances change to the point where past data is irrelevant. Which in the world of business happens pretty quickly. To have a competitive advantage, one needs to figure out what has not been seen yet.
And trying to exploit signals suffers from the issue laid out above. There was a funny case of an AI hiring startup trying to predict good applicants, and the result was people putting "Oxford" in their applications in a font matching the background color.
In my mind I see more data scientists being ignored or turned into “yes men”(https://www.interviewquery.com/blog-do-they-want-a-data-scie...)
As other commenters here have posted, without the integration of data science into both the business needs and the rest of the existing tech stack it will remain a fun school course activity.
Can you elaborate on this, please?
At a high level, it argued that basing predictions on historical data is problematic. The details of the argument are somewhat specific to economics, but the principle is more general. That's also why people recommending stocks say "past performance is no guarantee of future results."
One of the key issues is that circumstances change, and information about such changes will often be external to a data set.
In the Lucas critique, policy changes are an example of this. You can't predict future economic performance based on past economic performance if relevant policies have changed. But any complex situation has such factors that are external to the data that one can easily collect about it.
This came under heavy attack during what is called the cognitive revolution, which put focus on understanding mental processes at a structural level (for the reasons outlined in the post above).
Economics went through a similar process. Up until the 70s, Keynesianism was very dominant, which mostly focuses on using aggregate quantified economic data, i.e. output, unemployment, capital and so on, to make policy suggestions. This began to be attacked and supplemented with what are called "micro-foundations", which aimed not just to look at quantified data top-down, but to model fundamental behaviour and interaction from the individual up, i.e. the actual entities that generate the aggregate data.
There was also a similar movement to this in linguistics starting (mostly) with Chomsky at about the same time applying the same criticism to how we model language.
This was certainly the vibe that I got from "design of experiments" when it was the statistical method du jour. Then from "Bayesian everything" and now "data science." I remember "design of experiments" studies being conducted with great fanfare and success theater, while producing zero results.
The long term theme is that science is hard for reasons that managers don't understand, can't manage, and are reluctant to reward.
Some other industries have been doing "data science" for ages. Credit Risk Modelling, insurance and so on.
Every time I read one of these articles, I feel it's just an individual who entered a kind of crummy situation and they're learning what it means to work in a corporate environment. Some are better than others. Some are more motivated than others. Some have better cultures than others. Some are more willing to make technology a key part of their business strategy. Some are more data driven than others.
My recommendation is to always ask the fundamental question before joining: what are you trying to achieve with data science, and is it actually achievable?
I agree wholeheartedly with your recommendation. Like any other job, each company has different needs and expectations and if you want something else out of the role you'd best avoid that company.
A lot of DS can be boiled down to some sort of statistical testing or inference (A/B testing email marketing, for example) or applied ML (classification, regression). I'd argue that's science (if done right).
Data Analysis, the plot a few charts and put it in a slide deck kind? Totally agree with you. Definitely not science.
I certainly do, but I've been doing data science for the better half of a decade. It seems that starting around 2015, when Data Science became a sexy title, a lot of the fresh blood has been overgrown software engineers wanting the title, not knowing what they're getting into, or having faulty expectations. I don't consider this class of "data scientist" a data scientist, which is why the community has started shifting its job title away from Data Science to Research Scientist to better differentiate.
The good side of this is that I'll come into a company with them expecting me to be an overblown software engineer, and it gives me the opportunity to show off and go above and beyond what companies expect, letting me come off like a superhero. Though it's definitely an uphill battle, and knowing how to work with upper management is an absolute necessity.
Exploratory data analysis is often overlooked and underrated.
Doesn’t yield the right results? Clearly not enough data.
Still doesn’t work? Change to whatever the latest model google or fb is using and try again.
So yes, there definitely exist software engineers who require licensing by their state or country. Most of us just aren't actual software engineers, that's my point.
Actually I'd say that's as common as the failed ML projects
And no reasons relating to technical or data science know-how on the part of the team/company ;) I already knew coming in that the industry is technologically backwards (big healthcare co)
- It's essential to have/develop domain expertise in your industry.
- Beware plausible, but incorrect (or poorly interpreted) data that supports yours (or others') assumptions/biases.
- To add on to #4: at least as bad as this is having well-intentioned people on your team who "know enough" (a bit of SQL or a low/no-code data tool) to be dangerous. Um, why are you joining unnecessary tables, or using a different alias for the same columns/tables in different queries, with no comments or standard formatting?
- Hold your nose, but anything you do in SQL/R/Python/an even fancier tool or language is going to pass through MS Excel at least once sooner or later, which can irreversibly bastardize CSVs (even just opening without saving!), truncate precision to 15 digits, change data types, etc.
- So glad for the callout in #7 - there are clearly devs/data folks out there who are happy to take on an "interesting programming project at a great paying job" - that isn't serving the best interests of humanity.
I'll just add one: the business absolutely doesn't care how you get your answers, only whether they're reliable enough (hand-grenade close is better than most companies have today).
While this seems obvious enough to anyone with a few years under their belt, for the new DS grad who has their time-series analysis canned in favor of slapping a simple moving average in place and shipping it, this can be rather disillusioning.
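For what it's worth, the "slapped-in" baseline being described is often literally a few lines of pandas (synthetic data here; the window size is arbitrary):

```python
import numpy as np
import pandas as pd

# Synthetic daily series; in practice, whatever metric the business tracks.
y = pd.Series(np.random.default_rng(0).poisson(100, 365),
              index=pd.date_range("2023-01-01", periods=365, freq="D"))

forecast = y.rolling(window=28).mean().iloc[-1]  # the humble moving-average "model"
print(f"Tomorrow's forecast: {forecast:.1f}")
```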
Sometimes a young startup wants to advertise to the board, and they want you to make a presentation. I've made the mistake of showing near 100% accuracy solving a difficult problem important to the business, and expecting a strong positive result.
Instead I got "But are we using deep neural networks?" type comments.
Sometimes a company just wants to market, be it to customers, or to the board. It's important to know your audience.
One of the challenges with this is that "reliable" can mean a lot of things when the goalposts of success are constantly moving in large projects with many stakeholders, all of which are clawing for attention. I've seen politics derail so many Data Science projects and destroy the morale of Data Scientists.
It's only natural that a lot of people will realize that a moving average that confirms what people wanted to see anyway will lead to more success (whatever that means).
That's pretty sad when you think about it, but it's painfully true.
Not sure this is true in practice. In some situations, Holt-Winters (i.e. algorithms in the ETS family) may do better, and it's often a good idea to try both.
There's a claim that Holt-Winters is a special case of ARIMA (the claim is ARIMA is more general), but this is actually not the case. There is equivalence in only a subset of cases. 
I've fitted Holt-Winters models that beat ARIMA models. ARIMA models can have trouble generalizing from training data with long horizons because they tend to overfit to the distant past. Holt-Winters on the other hand has a natural "forgetting factor" built-in which moderates this.
As well, my experience is that stacked models with well-chosen exogeneous variables (if you have causal variables) tend to outperform pure time-series methods because they are anchored on more independent variables than just t. Pure time-series models bank on the assumption that patterns have a repeatable time-dependence, and most of the time this is just not true, so have to be augmented with other variables.
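A minimal version of that comparison, using statsmodels on a synthetic seasonal series (the orders and smoothing settings are illustrative, not tuned):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with trend + seasonality.
rng = np.random.default_rng(0)
t = np.arange(120)
y = pd.Series(10 + 0.1 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120))

train, test = y[:-12], y[-12:]

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
arima = ARIMA(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit()

# Compare 12-step-ahead forecasts on the held-out year.
for name, fc in [("Holt-Winters", hw.forecast(12)), ("ARIMA", arima.forecast(12))]:
    mae = float(np.mean(np.abs(np.asarray(fc) - test.values)))
    print(name, "MAE:", round(mae, 2))
```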
1) I am hearing about Data Science teams being furloughed during these times. That isn't happening in my function (Corporate Finance). I am glad to be secure even though I enjoy much of the data sci work.
2) I’m able to apply Data Science concepts in my current role, and it’s adding a lot of job security and providing me with exposure. I am much less interested now in moving to straight Data Science and instead am applying my learnings in my current role as a sort of in-house Data Science guy. But I have a lot to learn to be honest.
3) There seem to be a lot of "thought leaders" acting like they are big experts in the area who really don't know anything many of us amateur scientists don't know. They pull perfect, clean datasets and show magic transformations they just copy from others to get YouTube hits or Twitter followers. That just never happens in real life, and many leaders are seeing this and losing interest in this function, given the returns they are getting from standalone data science folks.
Neither of those quite matches the article's title; perhaps it just refers to the author's personal expectations. Neither of them seems that specific to data science, or without parallels in other software jobs. And neither of the points reads like a slight towards data science to me, as some of the other commenters here suggest.
Progress may only come slowly, ideally through products bought from 3rd parties whose results are understood and controlled by management.
-- write discovery requests
-- review production, and check out data and documentation
-- write supplementary discovery requests
[repeat as needed]
-- analyze data, and write deposition questions
-- help attorneys wring answers from deponents
-- analyze data, and produce required output
-- write parts of briefs and expert reports
I generally did that in consultation with testimonial experts and their data analysts. Sometimes that didn't happen until we'd documented the case enough to know that it was worth it. And occasionally small cases settled with just me as the "expert".
It's a small industry, and not easy to get into, unless you know key players at key firms. But the money's pretty good, and the work can be exciting. I loved being that guy in depositions whispering questions to the attorneys :)
This all involved pretty simple calculation of damages, through comparing what actually happened vs what would have happened but for the illegal behavior. But-for models were typically based on benchmarks.
After data cleanup in UltraEdit, I did most of the analysis in SQL Server. I used Excel for charting and final calculations.
But generally you're right. It was just simple calculations using sales records. But lots of records, at least several gigabytes, and sometimes several hundred gigabytes.
The second 90% is basic math at high speeds.
And yes, speed. Aggregating hundreds of gigabytes was nontrivial to do quickly. I started with Access, and then learned to manage and use SQL Server. And eventually a multi-Xeon server with lots of RAM and SAS-attached storage.
If it's just data folk by themselves getting dumped with org data and told to find pirate gold... then it's a crapshoot.
I know this because I've been on that journey. But there's no reason to expect some department head that's never been exposed to DS to know this. They just copy/paste some other company's job req. If you're more junior, here are my tips:
- If it's a "new DS team" that supports a variety of teams: beware. Bolt-on DS doesn't work well, as it's really hard to build a meaningful solution that's not deeply integrated.
- If it's an old company or in a conservative industry: beware. There are likely to be data silos and difficult ownership models that make it nearly impossible to get and join the data you need.
- If it's a small company: beware. You're likely going to need a broad set of knowledge that's won with several years of experience to be able to build end-to-end solutions that are integrated into the rest of the tech stack.
- If it's not an engineering-driven culture: beware. DS will often be used to provide evidence for someone who's already made up their mind while they pretend to be data-driven, and you'll be the disrespected nerd who's expected to do what it takes to deliver the answer they want. Most companies claim to be "data-driven"; few are, and even fewer understand that data-driven isn't always possible or desirable.
Industry is still trying to figure out how to use ML and are still learning that it's not as easy as hiring someone that knows about all the algorithms, but rather it takes deep technological changes to data infrastructure to enable the datasets that can then be used by the ML experts. But you don't have to be the person that helps them figure this out the hard way (i.e. by being paid to not accomplish much due to problems outside of your control). Better to find a place with a healthy data science team that can help you learn and contribute. They exist.
I'm at a place that is both, and both are huge pains.
On the engineering side, it's a bit different though: technical roles are looked down on, and there is no engineering culture, e.g. for data. Data is just a bunch of flat files everywhere, across many silos, with no leadership to put it together into logical buckets for easy access and interoperability.
For what it's worth, my first job was as a solo data scientist at a series B startup. It was a nightmare and I sucked, but boy did I learn a lot.
For example, I'm working on a tool to make data management easier and convert datasets into a structured representation. If you find that you spend a lot of time preparing and analyzing data, and that it is tedious, please reach out to me (michael at heartex.net); I would love to get your feedback on the product we have built so far.
I agree. More so, I sometimes feel that in the end the field will break up once things start settling down. Some roles will migrate more towards engineering, some will go back towards data analysis.
The expectation that a data scientist is a funnel that can turn anything into magical insights and tools can't last forever.
You can't compress information until you have it in a format that is appropriate for compression.
You can't compress (apply/create algorithms) information (data) until you have it (instrumented data collection) in a format (schema) that is appropriate for efficient compression (structured logging/cleaning).
99% of that is Data Engineering and building good engineering practices which have good data practices as a priority.
For any organization that has more than a handful of employees and more than one product, that is a non trivial task and gets more difficult the larger the organization gets.
It's not quite 99% of the effort but close enough ;)
Search "data science hierarchy of needs"
As noted in the submission, there's a lot of flexibility in what a "data scientist" is. Normally that's good and healthy for the industry. However, it contradicts a lot of optimistic bootcamps/Medium/YouTube videos, and many won't be prepared for the difference.
It's overall hurting my ability to build my personal brand and seek roles that are a fit for my existing skillset and aspirations.
What exactly does 'ML Engineer' communicate to employers in terms of baseline skills? Is the role closer to that of a data engineer or an analyst?
- ML Engineers build software infrastructure to scale machine learning inference and training.
- Data Engineers focus on data infrastructure and pipelining into model inference, training, or other business intelligence platforms.
- Analysts consume the product of the data engineer in the BI platform or Excel, where the results are consumed as a report in some form.
- ML Researchers invent novel machine learning algorithms to deploy on the ML infrastructure managed by the ML Engineers.
- Data Scientists deploy well-known ML algorithms or statistical inference on varying datasets, on the ML infrastructure or as a slide deck.
Once you have 10s of datastores + pipelines, 100s of reports and a "data lake" in the TBs you'll likely be needing specialized people.
Unfortunately I'm at a point where I have "jack of all trades, master of none" syndrome, and it's causing me to fall between the cracks professionally. I'd like to move to a larger company where I can develop deep expertise in a narrow topic.
Most of those roles a software engineer can do.
High Data Scientist salaries and expectations combined with a shortage of qualified people often mean you're expected to be a one-person band, which I find to be miserable.
I feel this in my bones lol.
The frustration when the results of weeks/months of hard work are glossed over with an "oh, that's nice" in favour of endless praise for the front-end team putting a picture backdrop on the search page or something.
Didn’t matter how many times we sold them on the benefits, or explained the work that went into it (at both executive summary level and detail) or did all those things you’re supposed to do, if it was more than one step away from directly causing it, or slightly more abstract than “we moved the button” it was wasted on leadership/management.
Spent a couple of weeks fixing data pipelines and ETL/database infrastructure and processes; now everything runs faster on a smaller, cheaper cluster, and as a result I managed to put together some analysis and modelling of customer behaviour showing that if you do xyz, you'd expect to see an uptick in this thing. Doesn't matter: Bob changed where the button sits and we saw 20% more sales. Good job, Bob. Everyone: be more like Bob.
I suspect a lot of people want the sexy Data Science job title, which is why there has been such a push for it, and why most new "data scientists" take the title but do Data Engineer / Infrastructure Software Engineer work or MLE work instead.
I think MLE is sexier in a lot of ways, and it often pays better than DS work, so it's odd that many haven't flocked to that job title. Maybe the whole software-engineer part turns people off for some reason.
Me, I'm more a classic data scientist / research engineer, which involves a lot of digging through data and research and generalized learning, then presenting my findings. I'm not using any ML on the job right now, but often I have in the past. It's just a tool, not an end.
Nothing has summed up my entire working experience more than this, it’s almost painfully accurate.
On one hand it’s an exciting challenge, you learn a lot and you get good at adapting to these situations.
On the downside I have practically no senior data science people to turn to for help when I do need it, which is frustrating.
I don't mean manufacturing (which is doing really well), but companies like Microsoft, Google, Facebook (and even Apple) and others do encourage you to try to compete against their founders (or maybe society does that) rather than focusing on being solid mathematically. Yes, Google pays people well with those skills, but movies portray mostly their founders, emphasising how rich they are, while mathematicians are generally portrayed as weird. Society as a whole puts more emphasis on Bill Gates than on fundamental researchers.
In fact, if you really want to have a rich representative, you can pick the Simons guy. (See, I don't even know his name.) His Medallion hedge fund was built on mathematics. Ironically, Bill Gates is these days one of the biggest financial supporters of people with science skills that he doesn't have.
It is a fad to be a techie. Mathematics is not a fad, although it does have internal fads.
At least initially, DS was a lot about machine learning. While those methods may be statistical, it was the computer science field that drove and embraced the ML revolution. Currently, it's mostly ML engineers who make the impact (i.e. deploy ML), and these are mostly CS folks. Statisticians still can't code themselves out of a box (2013 MS Statistics here, from a top school).
Also, there has been a lot of innovation in managing data at scale (tools, infra, etc.). This, again, has been done by engineers, not statisticians. But it is still related to the science of data.
So the difference between the new (data science) and the old (stats) is about culture and about some of the methods for dealing with data at “scale”.
In other words, statistics is just a part of data science, but not the whole.
Data science is more closely related to "statistical learning", and the knowledge required overlaps with, but looks quite different from, that of conventional statistics.
An easy way to get a sense of the difference is to compare the table of contents of a book like ISL (free as a PDF) to the undergraduate curriculum of a statistics program. You'll find that the focus, and indeed culture, of data science is really quite different from that of statistics.
Leo Breiman wrote about this in his paper "Statistical Modeling: The Two Cultures". Conventional statistics belongs to one culture, and statistical learning/data science veers toward the other (though not completely).
Much has been made about how "data science" is just statistics dressed up to look new, but I'm not convinced this is true. I'm also not convinced that pure statisticians have the right training to be data scientists -- additional training and mindset changes are needed. The reverse is also true: most data scientists lack the rigor and epistemological training to be statisticians.
Korzybski formulated these principles, among other things.
Most data science models are as wrong as astrology and numerology. They have no connection to reality, or rather an inadequate one.
This principle explains the abysmal failures of all model-based "sciences", from financial markets up to virus-spreading models.
A simulation of a non-discrete, non-fully-observable (AI terminology) system has exactly the same relationship to underlying reality as a Disney cartoon has to the real world.
This is why expectations will never be met, except for natural (non-imaginary) pattern recognition.
A drop of proper philosophy is worth years of virtue signalling.
The problem is that this can all too easily become motivated reasoning: one provides a stakeholder with support for the decision he already made. From his point of view, this is a valuable service, but it does the organisation a disservice: decisions should be made after considering the data, rather than consider only those data which support a decision.
Also, while ethical issues certainly arise, I think that Greyball is not a good example. Uber evading police enforcing the taxi monopoly is no more unethical than the Underground Railroad evading fugitive-slave agents. The taxi monopoly is itself unethical, and evading it increases the common good.
I have never had hopes about the potential impact of being a Data Scientist. I felt every company should be a "data company", but everything I knew told me that companies are political institutions bounded by the pressures of late-stage capitalism. Anyone who thinks differently is dim; anyone who blogs about it is a moron.
My expectations did meet reality.
Where did my expectations come from?
I attended a four-year Computer Science degree, followed by four and a half years earning a Ph.D. I then spent 20 years in industry. For 19 of those 20 years, the focus was not on machine learning (ML) or artificial intelligence (AI).
I figured I'd spend most of my time buried in code and data. I was right: I had to find shit buried in it and dig it out with my teeth. Executives hated me because I was a threat, but they needed me, so I continued to get paid. I continue to be able to create insight and predictions that almost no one else can, and until this stops I will get a 200k-a-year salary, benefits and a Tesla.
All of this happened, I can't be bothered to waste my time commenting on this moronic blog post.
From about 2012 to 2018 I went round a lot of universities, conferences and companies doing presentations and I used to often ask the audience for a definition of data science (in the hope of getting a good one). The best one I heard came at the University of Bath where someone (I know who, but he didn't say it to back it with his reputation so it's not fair to name him - it wasn't me though) said "Just drop the data, it's science".
I totally think that. Data science is about doing science with found and evolving data sources. We aren't often able to construct our experiments from scratch, though we often get to augment them; we always start from the data we are given, which is why it's a sub-field.
In any case - the Ph.D's I have employed have almost all known how to do Science, and it has really helped. Some people without a Ph.D. learn to do it. Experimental Ph.D's are best.
Maths and theoretical Physics Ph.D's are generally not able to do this!
I'm not fully convinced that data science with ML and more modern techniques are applicable across domains out of the box. I think there is value to be added if data scientists can specialise in domains.
If we take humans as an analogy, even with the kind of general intelligence we have, we need domain expertise to be able to have advanced intuitions and make predictions about the future. I believe this is true for data science as well.
So here's what I think I did right:
1. Provide indisputable, obvious business value every month. You should consider yourself an in-house consultant to whichever cost center your salary is drawn from. If you're product development, prove value to them. If you're operations, or sales, or marketing, prove value to them. After about two months, you should be able to justify your existence in two sentences. Just remember, most of your company probably thinks of you as an optional add-on.
Your first few projects should attack high-impact pain points with the simplest solutions possible. My first projects were basically ETL into some basic regression into a dashboard (see the sketch at the end of this comment). No machine learning required. But it was better than what they had (which was often nothing), and it was STABLE and RELIABLE. And that leads to the next point...
2. Build trust. With my dead-simple models, nothing ever blew up, there were no nonsensical answers, and there wasn't much brittleness when new categorical features or more cardinality was added. It mostly just worked. And that built my reputation for me. They didn't have to understand what was going on in the model, but they knew, from experience, that they could trust the result. Once I had the credibility, I could start building more complex, more elaborate models, and asked them to trust those as well. If they don't trust your models, then no business value has been created, and your job is worthless.
3. Recognize that data science is being done everywhere in the organization, and respect it. Every department has someone who has built a monster spreadsheet that contains more embedded domain knowledge than you could hope to learn in a month. As data scientists, we like to think that we're helping the organization by building critical metrics to improve performance. But here's the catch: if the metric was truly critical, someone has built it already. It might be ad hoc, use poor methodology, and be somewhat wrong, but it works and is good enough. You have to find that person, learn from them, and improve on it.
4. Be as self-contained as possible. Ideally, your critical path should not depend on other teams doing things for you (except for IT setting up data access). You should be able to do it all, from front-end dashboards, to ETL, to DevOps. Remember, you're an in-house consultancy. You should be able to take problems and just handle them, rather than be a perpetual bother and distraction to other teams.
There's more, but if you do these four things, I think you can build a reputation in your company for creating useful, accurate data tools that help other people do their jobs better. After that's achieved, people will be breaking down your door to get your help. That's where my team is now: we've got a backlog of at least 18 months, with our work priorities often being set directly by the CEO.
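A minimal sketch of the "ETL into basic regression into a dashboard" pattern from point 1 (the file names and columns are made up):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical ETL: load and lightly clean operational data.
df = pd.read_csv("orders.csv", parse_dates=["date"])  # made-up file
df = df.dropna(subset=["units", "spend"])

# "Basic regression": units shipped as a function of spend, nothing fancier.
model = LinearRegression().fit(df[["spend"]], df["units"])
df["expected_units"] = model.predict(df[["spend"]])

# Feed the dashboard: a flat file the BI tool picks up on a schedule.
df.to_csv("dashboard_feed.csv", index=False)
```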
In fact, they actually don't need a data scientist. At best they need someone who cleans data and creates pie charts, or, even worse, they relabel the database admin job as "data scientist".
I personally love it but am doing more pure software engineering now as the infrastructure is not there and I need to build it myself.
An astonishingly large fraction of Data Science output goes to die in pretty presentations.
From what's left, a large fraction ends up in Spreadsheets.
A disappointingly small fraction ends up in live services.
I view all of such work as applied math.
My experience is that applied math, from the fields I mentioned and some more recent ones, and more, with emphasis on the more, can be valuable and result in attention, usage, and maybe money.
I've had such good results and have seen more by others.
(1) Airline fleet scheduling and crew scheduling long were important, taken seriously, pursued heavily, with results visible and wanted all the way up to the C-suite.
(2) Similarly for optimization for operating oil refineries: So, here is the inventory of the crude oil inputs and the prices of the possible outputs. Now what outputs to make? The first cut, decades ago, was linear programming, and IBM sold some big blue boxes for that. More recently the work has been nonlinear programming.
(3) The rumors are, and I believe some of them, that linear programming is just accepted, used everyday, in mixing animal feed.
No surprise, and common enough. IMHO, what really talks is money: if you can save significant bucks and clearly demonstrate that, then you can be taken seriously.
But from 50,000 feet up, tough to get rich saving money for others. If they have a $100 million project and you save them $10 million, then maybe you will get a raise.
What's better, quite generally in US careers, is to start, own, and run a successful business. If that business is to supply the results of some applied math, and the results pass the KFC test, "finger lick'n good", then charge what the work is worth.
Maybe now Internet ad targeting is an example.
I'm doing a startup, a Web site. The crucial enabling core of what I'm doing has some advanced pure math and some applied math I derived. Users won't be aware of anything mathematical. But if users really like the site, then it will be mostly because of the math. So, it's some math -- not really statistics, operations research, optimization, machine learning, artificial intelligence, or management science -- it's just some math. The research libraries have rows and rows of racks of math; I'm using some of it and have derived some more.
Generally I found that the best customer for math is US national security, especially near DC. E.g., now some people are building models to predict the growth of COVID-19. Likely the core of that work is continuous time, discrete state space Markov processes, maybe subordinated to Poisson processes. Okay: One of the military projects I did was to evaluate the survivability of the US SSBN (ballistic missile firing submarines) under a special scenario of global nuclear war limited to sea -- a continuous time, discrete state space Markov process subordinated to a Poisson process. Another project was to measure the power spectra of ocean waves and, then, generate sample paths with that power spectrum -- for some submarines. There was some more applied math in nonlinear game theory of nuclear war.
Here's some applied math, curiously also related to the COVID-19 pandemic: Predict revenue for FedEx. So, for time t, let y(t) be the revenue per day at time t. Let b be the total market. Assume growth via virality, i.e., word of mouth advertising from current customers communicating with remaining target customers. So, ..., get the simple first order differential equation, for some k,
y'(t) = k y(t) (b - y(t))
The solution is the logistic curve, which can also be applied to make predictions for epidemics. This little puppy pleased the FedEx BoD and saved the company. Now, what was that: data science, AI, ML, OR, MS, optimization? Nope -- just some applied math.
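For completeness, separating variables and integrating gives the standard closed form (with initial value y(0) = y_0):

```latex
y(t) = \frac{b}{1 + \frac{b - y_0}{y_0}\, e^{-k b t}}
```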
I have high hopes for the importance, relevance, power, and fortunes from applied math, but I can't pick good applications like apples from a tree.