That's not such a big problem. I think we can get good at inferring values from behavior and at using values in marketing. Then if you download an app that has positioned itself around certain values, the designer can reasonably assume you hold those values.
Disagree. People are terrible at aligning behaviors with stated "values."
Economists frame this dissonance in terms of "revealed preferences" - the preferences implied by what you actually do. That may be the best tool we have for describing behavior, but it's poorly aligned with self-image, which is really what stated preferences, aka "values," are.
As a result, designing systems around revealed preferences might be advantageous in the short term, but it conflicts with people's long-term self-concept.
This ends up being a normative economics (< a rare thing anymore) and philosophy question.
Some people in the AI safety community have explored this idea of consolidating values as seed goals for artificial general intelligence, but it proved impossible.
I'm super familiar with the problems of revealed preferences (that's literally chapter 1 of my PhD thesis). But we don't need to naively interpret every action as an expression of "values." For example, consider Netflix playlists. I shouldn't infer your values from what's in your playlist, but I probably can infer your values from the movies that go into your playlist but never get watched. You wish you were the kind of person who watched that documentary on the failing school system. That's a value inferred from behavior.
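To make that concrete, here's a minimal sketch of the "queued but never watched" heuristic. Everything here is hypothetical - `Title`, `VALUE_TAGS`, and `infer_aspirational_values` are made-up names, not any real Netflix API - it just illustrates inferring stated values from the gap between what you queue and what you actually play:

```python
# Minimal sketch of the "aspirational gap" heuristic, under assumed data shapes.
# All names here are hypothetical, not a real streaming-service API.

from collections import Counter
from dataclasses import dataclass

# Hypothetical mapping from catalog themes to the "values" they signal.
VALUE_TAGS = {
    "education-documentary": "cares about education",
    "foreign-arthouse": "values cultural breadth",
    "reality-tv": "entertainment",
}

@dataclass
class Title:
    name: str
    theme: str              # one of the keys in VALUE_TAGS
    minutes_watched: int = 0

def infer_aspirational_values(playlist: list[Title]) -> Counter:
    """Count value signals from titles that were queued but never watched.

    The inference runs on the gap: adding a title expresses who you wish
    you were; never pressing play means revealed preference went elsewhere.
    """
    signals = Counter()
    for title in playlist:
        if title.minutes_watched == 0 and title.theme in VALUE_TAGS:
            signals[VALUE_TAGS[title.theme]] += 1
    return signals

if __name__ == "__main__":
    queue = [
        Title("Failing Schools", "education-documentary"),            # never watched
        Title("Palme d'Or Winner", "foreign-arthouse"),                # never watched
        Title("Binge Show S1-S4", "reality-tv", minutes_watched=900),  # actually watched
    ]
    print(infer_aspirational_values(queue))
    # Counter({'cares about education': 1, 'values cultural breadth': 1})
```

Note that the watched title contributes nothing: what you actually binge stays on the revealed-preference side of the ledger, and only the unwatched aspirational picks count as value signals.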