I agree with the general sentiment of the article, but this seems like a poor example, since a more sophisticated approach can add a lot of value to a recommendation system. How do you know whether a customer is likely to want more than one item in any of those categories? If they already purchased sunglasses, wouldn't they be more likely to purchase, say, a sunglasses case and/or sunscreen? If they purchased a book, do you recommend the same book again? And if not, how do you choose which book(s) to include?
Of course, you could technically still handle this in SQL with a bunch of CASE statements, but obviously that doesn't scale well across a wide range of products. The whole point of ML/AI in that use case is to scale that type of nontrivial decision making.
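Just to make the scaling problem concrete, here is roughly what the CASE-statement version looks like (a toy sketch; the table, categories, and suggested items are all made up, and sqlite3 is only there so the SQL actually runs):

    import sqlite3

    # Hypothetical rule table: every new product category needs another WHEN branch.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE purchases (customer_id INTEGER, category TEXT)")
    con.executemany("INSERT INTO purchases VALUES (?, ?)",
                    [(1, "sunglasses"), (2, "book"), (3, "pillow")])
    rows = con.execute("""
        SELECT customer_id,
               CASE category
                   WHEN 'sunglasses' THEN 'sunscreen'
                   WHEN 'book'       THEN 'book light'
                   WHEN 'pillow'     THEN 'pillow case'
                   ELSE 'gift card'
               END AS suggestion
        FROM purchases
    """).fetchall()
    print(rows)  # e.g. [(1, 'sunscreen'), (2, 'book light'), (3, 'pillow case')]

Every nontrivial rule (don't recommend a book the customer already owns, pair sunglasses with other travel items, and so on) means another branch or another join, which is exactly the decision making that doesn't stay hand-maintainable for long.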
In fact this is a perfect example of how NOT to do purchase-history-based suggestions, which unfortunately also seems to be how most companies do it. They see a big purchase (or search terms relating to one) and spam you with options for that purchase. But if I just bought a car, or a drone, or a laptop, then the last thing I want to see is ads for other cars or drones or laptops.
Even applying just a little intelligence and showing ads for accessories (floor mats? spare batteries? bluetooth mice?) would make things substantially more useful.
So you know... I don’t think it’s unfair to say that for smaller vendors, the cost/effort of setting up an ML model may dwarf the fractional improvement it offers over just having one person writing SQL queries by hand.
The point is, this isn’t like machine vision or voice, where it’s almost exponentially better than traditional approaches.
It’s just... a bit better. Which is worth it only if the fractional improvement pays for the setup cost.
Of course this can also fail if the pre-trained generic models don't offer enough value and you end up having to develop your own models, but we'll see how it goes.
Btw, I've published a short Kindle book that aims to provide an overview of these pre-trained services currently available on various clouds; it can be found on Amazon by searching for AI ML Managed Services 2018. It attempts to save you the trouble of scanning through all the online documentation to find out what they do.
How does the query make a good decision on the specific item?
It's a lot less stupid than recommending that they buy the same item again and again.
Here's a different way to think about the situation with current AI/deep learning: if the current upsurge of methodologies were getting close to general AI, it would be getting closer and closer to a hammer that really did let you treat everything as a nail. I.e., it would be general purpose.
But I think I can say we're not seeing that, even though deep learning seems to be continually expanding the domains it can operate on. How is that? This OpenAI analysis is very eye-opening: "We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time (by comparison, Moore’s Law had an 18-month doubling period)."
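Just to put those two doubling times side by side (back-of-the-envelope, assuming the analysis covers roughly the six years from 2012 to when it was published):

    # Growth implied by the quoted doubling times over ~6 years (72 months).
    months = 72
    ai_compute_growth = 2 ** (months / 3.5)   # roughly 1.6 million-fold
    moores_law_growth = 2 ** (months / 18)    # 16-fold
    print(f"{ai_compute_growth:,.0f}x vs {moores_law_growth:.0f}x")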
Essentially, with a rather brute-force-y method, we have shown we can expand deep learning's impact to larger and larger domains, but not at all in the fashion of humans learning new tricks (where the new trick isn't that much harder than the old one).
Maybe, in this process, a better algorithm that adjusts to new situations without increased costs will surface. But until then it seems new and old methods will need to coexist.
* How does every additional coupon-dollar affect the total amount a customer buys?
* What is the relationship between customer age and retention for my store?
* Does giving a customer more purchase options help or hurt their chances of making a purchase?
My experience is that each of these questions can be solved, in part, using 3 lines of Python code:
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X, y)  # X: the explanatory column(s), y: the outcome you care about
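For the coupon question, for instance, a minimal sketch (the DataFrame and its column names are made up for illustration):

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical order-level data: coupon dollars applied and total spend.
    orders = pd.DataFrame({
        "coupon_dollars": [0, 5, 5, 10, 10, 20],
        "total_spend":    [40, 55, 48, 70, 62, 95],
    })
    lr = LinearRegression().fit(orders[["coupon_dollars"]], orders["total_spend"])
    print(lr.coef_[0])  # estimated extra spend per additional coupon dollar

It only captures a linear effect, but that's often all you need to sanity-check the question.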
As a workaround, you could look for high VIF to detect multicollinearity, use some sort of stepwise selection / penalized regression, or use something like relaimpo (https://cran.r-project.org/web/packages/relaimpo/index.html) - not sure of a Python equivalent - to judge overall feature importance in the model.
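For the VIF part at least, statsmodels has a helper; a minimal sketch (the feature columns here are hypothetical):

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # Hypothetical features; a VIF well above 5-10 flags collinear columns.
    X = pd.DataFrame({
        "coupon_dollars": [0, 5, 5, 10, 10, 20],
        "discount_pct":   [0, 5, 4, 9, 11, 21],   # nearly the same signal as above
        "customer_age":   [34, 51, 29, 42, 38, 60],
    })
    Xc = sm.add_constant(X)
    for i, col in enumerate(Xc.columns):
        if col != "const":
            print(col, round(variance_inflation_factor(Xc.values, i), 1))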
The author describes using SQL to pull facts from history: who was the number one customer last week, who abandoned online orders, and so on.
The premise should instead be how to fit a model onto your business data so that you can better guess who will be the number one customer next week, what (s)he will order, and so on.
The problem that ML addresses is how to arrive at that model, under the assumption that you can use historic data to either pick a model or parameterise one.
SQL has its merits, as does the relational database model, but this has nothing to do with creating models (even though we are modelling the data itself). The author gives some examples that are, frankly, trivial.
But he has a good argument around namedropping "hot" technology when your business need does not incorporate distributed trust (blockchain), modelling behaviour (or some such) using ML and so on.
When I worked with machine learning many years ago, we learned that it was no better than the heuristics already in place. The thing is, it's much easier to diagnose a well written and understood heuristic than a machine learning model.
For example, the author refers to a shopping newsletter where you personalize suggestions for certain products after a customer buys a particular product. This is very often a machine learning 101 example, but really there's nothing preventing you from writing those heuristics yourself, no ML involved (e.g. if a customer buys a pillow, suggest pillow cases).
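A hand-written version of that heuristic really is just a lookup (a sketch; the product names and the fallback are made up):

    # Hard-coded cross-sell rules, no ML involved.
    CROSS_SELL = {
        "pillow": ["pillow case"],
        "sunglasses": ["sunglasses case", "sunscreen"],
    }

    def suggest(purchased_item):
        return CROSS_SELL.get(purchased_item, ["bestsellers"])

    print(suggest("pillow"))      # ['pillow case']
    print(suggest("lawn chair"))  # ['bestsellers']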
Machine learning does make sense for something like that if your website is Amazon, but it is definitely overkill if your website is an e-commerce site for house garments.
The funny thing is that usually you will end up writing those heuristics implicitly since you need to label your data anyways.
You can’t just throw an algorithm (even one like AutoML) at a problem and expect to be able to do magic with no knowledge of the domain. The technology simply doesn’t work like that.
And just because you have a small website doesn’t mean you have to behave like you have a small website.
A few years ago I was called in to save a dying project. They had built some big Hadoop cluster, had consultants on site, etc.
End of the day, they were doing something similar to assessing fines on library books. I wrote a prototype in about 3 hours.
Kafka, Kubernetes, and things like Spark and machine learning are basically the next stage of the "data is King" hype cycle that Hadoop was a few years ago.
The comment lists at least two "questions" that can be answered easily with SQL and a graph and even in ways that give more nuance than linear regression can capture.
It's the second article I've seen here that uses it over the last few days, but I'm not sure if it's the same site or not.
font-feature-settings: "liga", "dlig";
So just remove that clause from your stylesheet and you'll be rid of that ligature.
Shame the advice is to turn them off completely...
But then I don't inflict them on anyone but myself and people who look over my shoulder...
There are so many problems you can solve with a neural network. Should Waymo ETL sensor data and do a WHERE NOT IN for bicyclists?
This blog post is pretty dismissive. Statistics software has been in use since the beginning; see SAS. Financial institutions, actuaries, etc., have been using these methods with SQL data as the input, and it’s the only reason they’re still in business.
If this blog post simply suggested hiring a BI Analyst in your startup, I wouldn’t disagree.
SQL is a language that helps retrieve the data you're looking for.
ML/AI helps you predict the future (using past data).
Maybe this is directed towards product people? But it has SQL in the title so it can't be. I'm confused as to who the audience is here.
SQL can apply a human understood model to data points. AI lets us develop new models and adapt them.
AI lets us solve problems that have abstraction, or problems that change over time. You can't have SQL detect cats in an image or drive a car.
What OP suggests, the so-called SQL approach, is basically a heuristic-based system. When done properly and carefully, it can of course work very well, and is indeed often used as a baseline model to bootstrap an ML system. However, eventually the rule-based system will hit a wall, and ML will be the savior of the day, pushing the metric further by a margin of 20-30%.
So yes, when you are small and have little data, ML is irrelevant. But the same thing could be said about too many things in the software industry: you probably won't need Docker/Big Data/fancy JS either, if you are building a small-scale online store.
Choose your tech stack wisely based on your problem, but the title is needlessly sensationalized.
For things like figuring out who your biggest customers are, SQL probably is the right tool for the job. Whale-spotting probably gives a decent bang per buck, and isn't particularly complex.
But when he gets onto recommendations, it starts to look like it's the author who's attached to the wrong tool for the job. His example of recommending sunglasses to people who buy sunglasses is terribly blunt. If someone in my locale, who doesn't regularly buy sunglasses, buys sunglasses, they're probably going on vacation - there's not much sun at home for them. Surely there's a whole raft of things someone excited for their summer holidays would impulse-buy, but the sunglasses they just bought are no longer on the list.
If ML can match them up with a "going on summer holidays" demographic, and BI wants to sell them the only thing we know they no longer need, it's no longer making a strong case for blunt instruments.
That doesn't make it not a rule-based system. There is no learning component in SQL.
Ironic that machine learning is 'simple', but that does seem to be the case at times, especially with the 'throw blockchain or machine learning at it' approach when a proper algorithm could do it far more efficiently. The funny thing is that both approaches have their place. If turning it off and on again fixes a rare issue faster than tracing through every instruction down to the machine code, you are better off restarting it occasionally - unless it is a critical application where doing so will cost millions of dollars or lives.
Not that I necessarily disagree with the OP but I find it deeply uninspirational.
What's the difference between using ML/AI for problems traditionally solved by some other tool and using any other tool to solve the same problem unconventionally? Both can be "hacking". I guess my issue with this is the word "need", don't do what you need to do but what you want to do if you are looking for inspiration. After all, mankind never needed to leave the garden of Eden but left it anyway.
From the article:
> I hear these days for you to close that funding round quickly and early enough, you must throw in “Blockchain” even if it has no relevance in the grand scheme of things. A while ago, it was Machine learning and Artificial Intelligence.
Right on. No, blockchain won't help you with your corrupt voting system. If you don't understand the technology, you can't reason about its applicability, and there are more buzzword-chasers than serious technologists.
In fact the majority of tools in this space are exclusively CPU based.
If current ML/AI is the future and reveals more than anything else could, then it's logical for everyone to be piling onto it whether it's applicable at the moment or not.
If current ML/AI is just another tool, then it's reasonable to use it if and only if it's applicable. Sure, not doing ML means you don't get ML insights, but doing SQL means you still get SQL insights. Back in the day, I recall clever queries could reveal interesting things, find outlier data and so forth. Certainly, you don't get the powerful ad-hoc statistics that ML gives. But I suspect that power requires extremely large datasets.
I would also call out the NoSQL hype train here.
NoSQL has its place, and largely its place is when SQL cannot tolerate the intensity of the traffic or the size of the dataset. You can look at the Dynamo paper for an example of the engineering rationale.
Postgres can take enormous amounts of data at quite decent rates - without spending too much time on tuning even.
Also, it's nice to plop JSON, Avro, CSVs, Parquet, or whatever data in storage and just query/join/analyze it. No need to put the story on hold because you are waiting for the Oracle DBA to increase space again.
I mean, the author is talking about how SQL is good old 40-year-old tech. Meanwhile, one of the simplest ML algorithms, linear regression, is about 200 years old, even older (AFAIK) than Ada's program for Babbage's machine. It's very easy to understand and implement, and even Excel has it as a standard function.
Sure, linear/logistic regression or naive Bayes won't help you tag pictures with text à la Facebook ("this is a picture of a young man dancing with a red shirt"), but the vast majority of ML use cases are way easier anyway. So yes, most of the time, you can easily find "talents" that will solve your ML problems. And if you really want to, you can implement it in SQL.
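For the single-variable case, the whole "algorithm" is two closed-form sums; a sketch in plain Python (the same sums translate to SQL aggregates readily enough):

    # Ordinary least squares for one feature: slope = cov(x, y) / var(x).
    def simple_ols(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx  # (slope, intercept)

    print(simple_ols([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]))  # roughly (1.94, 0.15)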
Also if you think you have unstructured data but you need to interpret every bit of that data, then it's not unstructured.
1. Shortcuts, such as "foreign key chasing" - i.e., if "a" in table x is a reference to field b in table y, then "a.c" is "select c from x inner join y on (x.a=y.b)". If you have a star schema, it cuts down queries and errors by 90% (and makes life simple for the optimizer). Of course, you can chase through as many tables as you wish in an expression, making items in tables look a lot more like records.
2. Embracing order: the relational model has no ordering among tuples; SQL mostly pretends that's the case, but order does emerge through "ORDER BY / TOP" and "ROWID", just not very usefully. kdb+ embraces order and makes e.g. "first record that ..." very simple and intuitive (and also easier for the optimizer).
3. Embracing time series (not independent of embracing order) - when you have e.g. records with a "from .. to" validity range, it becomes exceedingly simple, as does "all records that are different from the previous one on this field" (see the sketch below).
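kdb+/q syntax aside, here's what the "different from the previous one" query from item 3 looks like with an order-aware tool (a pandas sketch with made-up columns, purely to illustrate the idea):

    import pandas as pd

    # Hypothetical ordered status history; keep the rows where the value changed.
    hist = pd.DataFrame({
        "ts":     pd.to_datetime(["2018-01-01", "2018-01-02", "2018-01-03", "2018-01-04"]),
        "status": ["open", "open", "closed", "closed"],
    }).sort_values("ts")

    changed = hist[hist["status"] != hist["status"].shift()]
    print(changed)  # the first row plus every row whose status differs from the previous one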
Both times, the eventual consensus was that SQL was simpler to implement and use, but maybe marginally slower. Then machines got faster, so SQL dominated the market for decades.