Behavior-shaping techniques aren't necessarily morally wrong, though companies like Facebook and Google are almost completely incentivized to use them against users' interests.
For example: If I decide I want to exercise more, maybe I would appreciate an app which helps me become addicted to exercise. That might be good for me.
However, what if the app pushes me to exercise too much, and I start developing overtraining injuries? Or what if the app is built on bad exercise science and suggests routines that harm me? Now the addictions the app created are working against me; the app isn't being irresponsible by helping me exercise, but it is being irresponsible by modifying my behavior and decision-making processes to favor using it.
Similarly, is Instagram "good" for you? Probably not. But, per your comment: "there's nothing wrong with giving humans the tools they want". People might want to be addicted to Instagram; that doesn't mean it's good for them, or that Instagram should deliver. People want to be addicted to nicotine and alcohol, and that's exactly why we put regulations around them.
If we look at something like the activity circles on the Apple Watch, that's safe enough in my mind to be pretty well in the white area.