This organization is disgusting and is evidence enough that our industry has no sense of ethical responsibility. When massive regulation lands on Silicon Valley and we whine about the impact it has on innovation, remember companies like Dopamine Labs who truly deserved it.
Not so much remorseless; we were just shooting for contrarian and attention-grabbing... and here we are, knee-deep in front-page HN hate mail...
Behavior-shaping techniques aren't necessarily morally wrong to use, though companies like Facebook and Google are almost completely incentivized to use them against users.
For example: If I decide I want to exercise more, maybe I would appreciate an app which helps me become addicted to exercise. That might be good for me.
However, what if the app pushes me to exercise too much, and I begin to experience the health problems that come with overtraining? Or what if the app is built on bad exercise science and suggests routines that are bad for me? Now suddenly the addictions created by the app are working against me; the app isn't being irresponsible by helping me exercise, but it is being irresponsible by modifying my behavior and decision-making processes to favor using it.
Similarly, is Instagram "good" for you? Probably not. But, per your comment: "there's nothing wrong with giving humans the tools they want". People might want to be addicted to Instagram; that doesn't mean it is good for them or that Instagram should deliver. People want to be addicted to nicotine and alcohol, so we put regulations around them.
If we look at something like the Activity rings on the Apple Watch, that's safe enough in my mind to be pretty well in the white area.
We make cars with both gas pedals and brake pedals because sometimes you want to go faster and sometimes you want to go slower. There are some behaviors that you want to see yourself doing more frequently, and others that you want to see yourself doing less frequently. So we make products to serve both of those use cases.
It can be unnerving to see yourself as programmable; it affronts our sense of free will. But once we get to the point that these technologies are possible, the question isn't whether to use them, but how. We're trying to lead that conversation. And I am genuinely interested to hear your thoughts on how to use this technology to encourage human thriving.
Just "trust" us. There are obviously many teams working on this, but none so audacious. These things are definitely not a value add to society, yet very profitable; like heroin, if that doesn't sound too hyperbolic.
It's probably the central problem of capitalism. Monetary systems draw no distinction between wealth creation and wealth extraction, and the second law of thermodynamics guarantees that the latter will always be much easier. When you design chips, cure diseases, or build rockets you are fighting entropy. When you addict, misinform, and con people entropy is on your side.
An unethical AI has more degrees of freedom. It will always beat an ethical AI.
It's literally impossible to measure because it's subjective. If it makes money it adds wealth, so by your standard anything goes.
I've never quite heard 'entropy' used as an argument, and though you might be a little bit correct in some sense of the word ... I don't think it holds true either physically or as an analogy :)
But bon-bons for trying.
That's my favourite 'most HN comment of the day'.
Or the addiction-feeding elements of healthcare -- opiates and other-than-therapeutic interventions.
We are building something evil and reprehensible, but we are open about what we are building, and therefore we are better than the other guys who are doing it surreptitiously.
Fine, but in that case your product is basically performance art. Bravo, you did a good job raising awareness: now go actually work on fixing the problem instead of pretending like you aren't part of it.
What does need work is their messaging. Suggesting they can control users does not sit well. Instead, they could say they make games so much more fun that users boost their in-game time.
They need a lesson in marketing and messaging; the idea itself is fine.
Frankly, I feel bad that we are piling the heat onto this company. I think a lot of the people in this thread are frustrated and upset, and don't know where to apply their energy to start solving the problem that they are upset about. I am one of them; I had a very visceral reaction to the OP.
But I think this is a good wake up call for just how angry and frustrated we all are. We should be thanking this company for putting us over the edge. How many of the posters here railing against addictive media checked Instagram or Facebook in the same hour that they read this post? Time to start putting our money where our mouth is.
Considering the amount of behavioral science being used to accomplish this, it's actually quite "evil".
As much as we humans fancy our intellect and "free will", we are still just very predictable biological automatons. Put us in a very well designed Skinner box, and we'll be pushing that button until our bodies rot away without us even noticing.
There is a very fine line between making something "engaging", for the sake of building something exciting, and making something "engaging" for the sake of keeping "engagement going".
I've played quite a few games that were addictive but not fun. The difference is hard to spot when you're in the middle of it, but stark from the outside. Typically they start off being plain fun to draw you in, and then gradually segue into addictive after a few hours to keep you there.
But that wasn't my point. My point is that it's not terribly evil to make a game addictive. Addictive qualities are only bad to the extent that the addictive thing hurts you (e.g., smoking, drugs). These games are all pretty harmless.
That doesn't mean it's wrong for a game to be so damn fun that it just happens to become addicting. What is beyond despicable to me is to create a game that deliberately manipulates the human mind to become addicting without being of any real value.
If nothing else, it's a matter of opportunity cost. You are stealing attention away from hobbies, friends, family, and more likely, more wholesome forms of entertainment, possibly even more intelligent video games. Through in-app purchases you are funneling people's income from their bank accounts to yours and your employees'. This is fine when people are making a choice to waste their resources, but when you take people's choice away by addicting them, then you are effectively enslaving them.
We're actively working on all of those. I don't think it's possible to pretend that these techniques don't exist. So instead we're flamboyantly advertising that they can be used for good.
What behaviors do you wish you did more of or did more regularly?
This basically covers all of American business and politics. I think the anger in this thread is misplaced.
I'm just not sure which one is which. Is Dopamine Voldemort or Umbridge?
Contrast the new 'social credit' system in China - it's (IMO) pure evil and a disturbing example of government mind control, but presents itself as orderly virtue.
It’s the perfect media juncture to do it: force the conversation with obvious names, have fun with the media, and in so doing carry out the public service of shining the brightest light on the darkest spots.
And it’s not like anything is illegal. People need to be horrified before they react and then figure out exactly what is acceptable and what is not.
But the whole IT industry itself seems to be in desperate need of a code of ethics and behavior. These guys can always just dodge by pointing out that they are hardly the worst fish in the sea. Just the most self aware and well labeled.
It is inherently a people problem.
The issue is simply that any fix (by code or by competition), will only move the conflict point from its current location to a new place on the board.
Without being able to approach the bad actors and their motivations/incentives directly, you will not be able to effect change.
And there is always going to be an incentive to manipulate minds, easily and at scale.
This isn't just going to go away, it has to be fought.
Yup, people always take a high moral stand... until there is money involved.
I think that's exactly their elevator pitch.
I have to say, I'm not convinced.
Yes, of course, if you say that your product is similar to what a drug dealer would offer, then people are going to get interested because of the potential for returns on their money. You're just confirming my suspicion that branding things this way was in order to help get investment. I don't see how you are being contrarian. What is the popular opinion you are rejecting? That addiction is scary/harmful/a terrible thing to inflict on others?
You're in a tricky position now as I personally don't see how you can reverse course on presenting the company this way (after thinking about it for a couple of minutes :) )... but I hope you can figure it out!
Dopamine is about learning, and learning only matters because it changes us. The same brain mechanisms and technologies that make Facebook captivating can be used to make fitness, a good diet, and spending time with people who love you engaging too. When we're done building this tech stack, it will be possible for people to learn themselves into whoever they want to be.
Disclaimer - I'm the CEO of Dopamine
In other words, how would you go about solving the 'just trust us' problem others in this thread have described better?
Startups are dead -> they're all rock bands now.
VC's are record labels.
If there is a technical definition of illegal influence, how would the test of legality work? Who would enforce it? Who would be in violation of those laws today? Are there any archaic laws about advertising/mind control that could be applied to today’s tech/media/security firms?
Finally, in a hypothetical scenario where a state actor was able to influence the mentally fragile into turning into mass shooters, how would nations and individuals start to protect themselves against such attacks?
It has an edgy sounding name and just enough marketing buzz to scare you.
This company won't be around in 5 years. Good UX > whatever they are selling.
You say you only work with certain clients based on their ethics. Can you name some examples of an application for this that has the potential to be profitable? I, and it seems many others here, have a hard time understanding how this science can be used both ethically and profitably.
This is the kind of thing that I fear will prompt a backlash from the people and government and destroy the entire online ads regime that underpins the current Internet.
It also doesn't strike me as benevolent to offer a service to fix the problems your other service caused.
You'll know when a user of a Dopamine-backed app installs Space, and that's useful to you. Measuring people trying to give up an addiction feeds back into understanding the causes of addiction, and in your case presumably gives you quantifiable information you can use to make Dopamine more effective. Saying you made it and give it away out of the goodness of your heart seems a little underhanded.
Your company is still about making shit addictive. Its customers will inevitably be those with the money and inclination to make their apps addictive: social media and mobile games. It's morally pretty close to adding sugar to junk food or selling cigarettes.
You were going to clear up "mob misconceptions" about ethics, but it turns out you had nothing to say, just an "invitation for discourse"... while most of the tech world has been in on this particular discourse for years, with constant media coverage, and we know it's a problem.
But I'm pretty sure you know all of that. Don't expect public approval for making glorified/AI-powered slot machines, they're still just slot machines.
His work on why you, me, everyone on this thread, and every species in the subphylum Vertebrata learn the way we do shaped the course of thinking, from contemporary machine learning techniques like Q-Learning all the way to us!
Not evil, have done our reading though ;)
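(For anyone curious about the Q-Learning name-drop: it really is a direct descendant of reinforcement ideas. Here is a minimal tabular Q-learning sketch; the toy 5-state "chain" environment, constants, and reward structure are my own illustrative assumptions, not anything from Dopamine.)

```python
import random

# Toy chain MDP (my assumption): states 0..4, action 0 = left,
# action 1 = right; only reaching state 4 pays a reward of 1.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic toy dynamics: 1 moves right, 0 moves left."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):  # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # the Q-learning update: reward plus discounted best future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, "right" should dominate in every non-terminal state.
print(all(Q[s][1] > Q[s][0] for s in range(N_STATES - 1)))
```

The same trial-and-error credit assignment B. F. Skinner described in animals is what the update rule above does in code, which is the commenter's point about the lineage.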
I think I truly get what you are doing, but I definitely think you are going to have to very seriously explain to people here WHAT the problem is and how you are going to help.
So some straight up advice -
1) People here need to understand, with examples from your research, just how bad the effects of unnoticed conditioning can be.
2) Highlight your mission statement, and very strongly defend the part where your tools do good. Preferably with illustrations.
If you want to take on anti-hero branding, remember that it only works if people are intimately familiar with the inner workings of the anti-hero's mind and thus his motivations.
As such you may want to put this very prominently out there -
>We watched and read too much scifi growing up. And for every 1 dystopia movie, there's 3 better utopia novels. So now we can't get 2 determinately optimistic ideas out of our head: (1) the future is going to be awesome (and awesome for everyone: not just white bearded educated heteronormative californian male technocrats: that would suck). (2) The relationship between humans and our machines must be one of mutual thriving and improvement. The little magical slabs of glass that live in our pockets and proliferate across the globe are tools for human thriving.
So we built Dopamine to help that future become manifest.
>Technology is becoming more (addictive / persuasive / coercive) every day and there isn't much we can do to reverse it. And it sucks that the technologies that are becoming persuasive the fastest aren't necessarily ones that are great for human thriving. You deserve a better world than one in which the most persuasive technologies demand from you your eyeball hours and brand loyalty in exchange for cat videos. That's dystopic.
People will also want to know that it's not marketing speak. So you will need to show that you take pains to actually move us to the world we need to get to.
I can't, on something like this. Talk is cheap.
Welp, there's another one on the 'never touch with a 10-foot pole' pile. Yikes, your promotional account sounds like a grinning glassy-eyed sociopath. Serious goosebumps.
I think they’re very aware of the problems and would rather be on the good side of the issue.
I think he just doesn’t know how to convert this or talk about it in a forum setting well.
When's the last time you said more than 'hello' to a neighbor?
The only substitute for Social "Media" is a Social Culture. The reason most mature adults are checking their phones all the time is nobody interacts to any significance anymore in the public sphere. When we're in public we are: A. trying to get somewhere else, and frustrated at being impeded by other people, B. Trying to buy something to consume or C. trying to sell something.
The best way to fight the screens and the apps is to provide a better alternative, IRL.
On one hand, it's important we get a better understanding of what makes us tick, on the other hand, I fear what such understanding might enable us to do to each other.
That's the exact same way I feel. There is definitely an end game here where everyone has more autonomy and dignity, and that's the one we're fighting for.
It's not like you are doing anything new here, the only new thing here is your approach of being dead honest about what you are doing, kudos to that.
But besides that I don't see how any of what you are doing there results in an endgame of "more autonomy and dignity for everybody", it rather seems to be reinforcing the current trend where such knowledge and practices are sold to anybody who can afford it, leaving those who can't out in the rain.
I think around the 14 minute mark or something you made a statement that people need to be aware of the apps they choose.
Prior to that you had argued that (mostly for the first world) behavioral choices are the root of the major causes of morbidity.
Your redeeming speech comes at around the 20 minute mark.
1) You assume people have a choice, and that inherently discounts the outsize impact of bad actors and antagonists.
This point alone fundamentally changes the constants underpinning the model you tacitly must be running to support your other predictions.
2) People are terrible at making a lot of choices, on average. People choosing good apps over bad is similar to hoping people make good lifestyle choices.
Unlikely, and far too path-dependent. Unless you are highly informed, educated about Skinner/conditioning, and tech-aware and cynical, you won't succeed.
3) a lot of those behavioral problems have arisen from large firms using old school behavioral influencing to create that situation.
There is no basis for hope that more behavioral tools will solve the problem.
From 20 minutes onwards, though, you're on a much better wicket, at least up to the point I’ve watched.
Which is another thing. Very few people are going to watch a 30 minute video of you speaking, especially when we’re on a text based forum.
The reactions here should tell you that this is a really Bad Idea. Working on or investing in this is a terrible choice and will forfeit the baseline level of consideration you could normally expect from other people.
No, it absolutely is not. Some of the most horrifying abuses in the history of the human race have occurred under the guise of "improving" other people for their own good.
Analogously, when someone designs a diet app, they're creating the boundary conditions of a new behavior pattern for users. Their use of Dopamine aligns well with their end user's goals: in the same way that you hold friends close because you know they rub off on you, so too should we think about our software. Having an app on your phone you chose to let close to your mind and behavior - to be a new venue of your thought and future self - is a decision about who you want to become. And if that app can use good design, a good interface, and persuasive tech to help you achieve your health or diet goals: you win.
That's an alignment of what tech wants and what humans want.
Even if they were today, your VCs and future investors will not allow you to turn down profits and only sell to health or self-improvement apps.
If your product is successful, you have a fiduciary duty. If you fail in that duty, you will either be removed, or potentially even sued. And then it will be too late, because you already built it, and someone else will have taken the wheel. And they will maximize it, inevitably weaponizing it against the population.
There are better things you can be working on. Just walk away. Humanity does not need this.
There is, of course, an argument that any manipulation of human behaviour is inherently evil, but it's a very weak one - if you manipulate a person to eat well, exercise, donate to charity, call their parents and vote for a [insert a political cause that you believe to be inherently moral], is it still an evil thing to do?
Fact is that morals are relative and software companies shouldn't be the arbiters of that.
Also, if this is designed to release dopamine or any other neurotransmitters, is there a risk that this will ever be classified as a drug? Or at least be treated legally like a drug?
This whole topic is gonna be huge soon; it's in the same vein as the increased use of gambling-style "loot boxes" in video games designed to be Skinner boxes.
What if it were a productivity app that you were spending more time in or interacting with more efficiently? Or what if it were a fitness app that got you to do extra stretches throughout the day? Is that just economic dead weight?
Disclaimer - I'm the CEO of Dopamine.
We are. And have been for a while now.
1138 is one of my favorite documentaries about the future. There are a lot of naysayers around here, but only because of the high fidelity mirror you present them with. It is only their own self loathing.
I'm wary that this won't improve anything at all, and will instead make things more annoying and addictive, as opposed to actually increasing productivity.
Founder (Brown) here. Appreciate the traffic!
Also, would love to clear up mob misconceptions about:
-Why did we pick lightning-rod branding?
-Why was this the most humanitarian use of a neuroscience MS/PhD?
-What quality of people must we be to do this?
-What is design's ethical imperative, given that 100 years ago most people died of infectious diseases, yet this year the majority of people over 50 will die of strongly behavior-based diseases?
-How come the Founders of the Push Notification teams aren't part of the cultural dialogue?
-What does Tristan Harris think?!
-What is our pricing model and can your startup get a discount? (YES!)
Because we don't sell to apps that would hurt people. See manifesto on usedopamine.com/team/index.html.
~100 years ago, the most frequent causes of death in the US were pathogens for which we barely had a name: pneumonia, flu, cholera, fevers. And it was only after we developed a rigorous technology of the body (modern medicine) that we lifted millions of people out of suffering simultaneously.
Today, if you are under 50, you're most likely going to die of opiates. Over 50? Type-2 diabetes, stroke, cardiovascular disease, obesity and its complications, and stress-related illness.
Every single one of these has strong behavioral components.
Building a smartphone-first, AI-powered rigorous technology of the human mind gives us all an above-the-table, democratized chance at designing scalable technologies that stop this. It spreads better across national, sex, gender, and SES lines than most other behavior-change oriented solutions. And as we enter an age of an excess of cheap energy, food, and data, we NEED a rigorous way to help us better align modern aspirations with an ancient brainstem.
Note that you're using the word 'addiction'. Many would argue that is hurting in and of itself - regardless of what one would be addicted to.
And that last paragraph is absolutely haunting. We need companies controlling our minds because our brainstem isn't evolved enough??
I hope you get sued into the ground. It's time to start holding people accountable for their effect on people's brains.
Lightning rod: because now we're on HN.
Humanitarian use: see below about 100 years ago vs. today.
Quality of people: that's why Space is free.
What's design's ethical imperative: also, see below about 100 years ago vs. today.
Founders of push notification companies: Ask the CEOs of Leanplum/Marketo/Intercom/Vizurly/Kahuna why they haven't released an antidote for push notifications? Please. Try to get them to talk about it.
Tristan: great friend, mutual supporters
Pricing model: $0.05/MAU for qualifying startups.
(Other than ever attempting to re-brand before the heat death of the universe.)