If your ML model were able to perfectly predict what consumers are going to buy, the revenue lift would be zero.
Let's say I go to the store to buy milk. The store has a perfect ML model, so they're able to predict that I'm about to do that. I walk into the store and buy the milk as planned. So how does the ML help drive revenue? The store could make my life easier by having it ready for me at the door, but I was going to buy it anyway, so the extra work just makes the store less profitable.
Maybe they know I'm driving to a different store, so they could send me an ad telling me to come to their store instead. But I'm already on my way, so I'll probably just keep going.
Revenue comes from changing consumer behavior, not predicting it. The ideal ML model would identify people who need milk, and predict that they won't buy it.
This is incorrect. You can predict many things that drive incremental revenue lift.
The simplest: Predict which features a user is most interested in, then drive them to that page (increasing their predicted conversion rate) -> purchases that occur now that would not have occurred before.
Similarly: Predict products a user is likely to purchase given that they made a different purchase. The user may never have seen these incremental products. For example, a user buys an orange couch, so show them brown pillows.
The same actually works for entirely unrelated product views. If a user views products x, y, and z, we can predict they will be interested in product w and advertise it.
Or we predict that a user was very likely to make a purchase but hasn't yet. Then we can take action to advertise to them (or not advertise to them).
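The co-purchase idea above can be sketched in a few lines. This is a minimal co-occurrence recommender, not any particular production system; the basket data and product names are made up for illustration:

```python
from collections import Counter, defaultdict

def build_co_counts(baskets):
    """Count how often each pair of products appears in the same basket."""
    co = defaultdict(Counter)
    for basket in baskets:
        for a in basket:
            for b in basket:
                if a != b:
                    co[a][b] += 1
    return co

def recommend(co, purchased, k=2):
    """Score products the user hasn't bought by co-occurrence with what they did buy."""
    scores = Counter()
    for item in purchased:
        for other, n in co[item].items():
            if other not in purchased:
                scores[other] += n
    return [item for item, _ in scores.most_common(k)]

# Hypothetical purchase history.
baskets = [
    {"orange couch", "brown pillows"},
    {"orange couch", "brown pillows", "rug"},
    {"orange couch", "lamp"},
]
co = build_co_counts(baskets)
print(recommend(co, {"orange couch"}, k=1))  # -> ['brown pillows']
```

A real system would use something like matrix factorization or a learned embedding rather than raw counts, but the revenue mechanism is the same: surface products the user would not otherwise have seen.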
ML is useful for many things. I'm asking the question of whether prediction is useful, and whether it is accurate to describe ML as making predictions.
The reason to raise those questions is that for many people, the word prediction has connotations of surveillance and control, so it is best not to use it loosely.
The meaning of the word "predict" is to indicate a future event, so it doesn't make grammatical sense to put a present tense verb after it, as you have done in "Predict what features a user is most interested in." Aside from the verb being in the present tense, being interested in something is not an event.
You can't predict a present state of affairs. If I look out the window and see that it is raining, no one would say that I've predicted the weather. If I come to that conclusion indirectly (e.g. a wet umbrella by the door), that would not be considered a prediction either because it's in the present. The accurate term for this is "inference", not "prediction".
The usage of the word predict is also incorrect from the point of view of an A/B test. If your ML model has truly predicted that your users will purchase a particular product, they will purchase it regardless of which condition they are in. But this is the null hypothesis, and the ML model is being introduced in the treatment group to disprove this.
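That A/B framing can be made concrete: the null hypothesis is that conversion is the same with or without the model, and the test measures the lift the model *causes*, not what it predicts. A minimal two-proportion z-test sketch, with entirely made-up numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for H0: treatment does not change the conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 500 conversions out of 10,000 users.
# Treatment (with the ML model): 600 out of 10,000.
z = two_proportion_z(500, 10_000, 600, 10_000)
print(round(z, 2))  # -> 3.1, i.e. the lift is unlikely under the null
```

If the model merely predicted purchases that would happen anyway, both arms would convert at the same rate and z would hover near zero.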
You can predict a present state of affairs if it is unknown to you.
I predict the weather in NYC is 100F. I don’t know whether or not that is true.
This is really a pedantic argument, but to satisfy your phrasing you can reword my comment as "We predict an increase in conversion rate if we assume the user is interested in feature x more than feature y."
That is a normal usage in the tech industry, but that's not how ordinary people use that word. More importantly, it's not how journalists use that word.
In ordinary language, you are making inferences about what users are interested in, then making inferences about what products are relevant to that interest. The prediction is that putting relevant products in front of users will make them buy more - but that is a trivial prediction.
Exactly. I know someone who does this for a certain class of loans, based on data sold by universities (and more).
Philosophically -- personally -- I think this is just another way big data erodes our autonomy and humanity while _also_ providing new forms of convenience. We have no way of knowing where suggestions come from, or which options are concealed. Evolution provides no defense against this form of manipulation. It's a double-edged sword, and an invisible one.
If the store knows you will want to buy milk, it will have milk in stock according to demand. If it doesn't have a perfect understanding of whether or not people want to buy milk, the store will over/under stock and lose money.