This organization is disgusting and is evidence enough that our industry has no sense of ethical responsibility. When massive regulation lands on Silicon Valley and we whine about the impact it has on innovation, remember companies like Dopamine Labs who truly deserved it.
Not so much remorseless; we were just shooting for contrarian and attention-grabbing. And here we are, knee-deep in front-page HN hate mail...
Behavior shaping isn't necessarily morally wrong to use, though companies like Facebook and Google are almost completely incentivized to use it against users.
For example: If I decide I want to exercise more, maybe I would appreciate an app which helps me become addicted to exercise. That might be good for me.
However, what if the app pushes me to exercise too much, and I begin to experience health problems from overtraining? Or what if the app is built on bad exercise science and suggests routines that harm me? Now suddenly the addictions created by the app are working against me; the app isn't being irresponsible by helping me exercise, but it is being irresponsible by modifying my behavior and decision-making processes to favor using it.
Similarly, is Instagram "good" for you? Probably not. But, per your comment: "there's nothing wrong with giving humans the tools they want". People might want to be addicted to Instagram; that doesn't mean it is good for them, or that Instagram should deliver it. People want to be addicted to nicotine and alcohol, so we put regulations around them.
If we look at something like the activity circles on the Apple Watch, that's safe enough in my mind to sit pretty well in the white area.
We make cars with both gas pedals and brake pedals because sometimes you want to go faster and sometimes you want to go slower. There are some behaviors that you want to see yourself doing more frequently, and others that you want to see yourself doing less frequently. So we make products to serve both of those use cases.
It can be unnerving to see yourself as programmable; it affronts our sense of free will. But once we get to the point that these technologies are possible, the question isn't whether to use them, but how. We're trying to lead that conversation. And I am genuinely interested to hear your thoughts on how to use this technology to encourage human thriving.