Really, this doesn't go far enough. Why are we allowing people to discriminate at all when it comes to intimate relationships? What about factors like personality, physical appearance, marital status, age, gender, or political opinions? Really, dating apps should just randomly assign you someone to have sex with and refuse to give you more matches until you provide evidence that you have done so. Given that people might be unwilling to comply with these directives, it probably needs to be enforced by the state, at gunpoint if necessary, for us to achieve the unbiased utopia that all right-thinking people must want.
We're now in such a crazy world that it took me far too long to realize you were being sarcastic.
Dating really is the one place where it's fine to be as discriminating as you like. It's no one else's business what constitutes one's notion of attractiveness.
1. The world has gone mad - a claim made by people of every political group, essentially constantly, and about every single kind of political issue.
2. You are too willing to buy into satire if it makes you feel good.
That's not what's outlandish. What's outlandish is suggesting that you fail to recognize satire because somehow discourse has become so ridiculous that reality is indistinguishable from fiction.
This is basically the premise of Brave New World, you are really not supposed to turn down people (within your own caste) for sex there. This is helped by the fact that everyone is approximately equally attractive through selective breeding.
It's fascinating that this [satirical] comment is the top post while the article itself has simultaneously made the front page. As the comment above alludes to, the article is absurd, yet there seem to be many who take it seriously (presumably the researchers studying "queer HCI", whatever that is)?
The thing is that both viewpoints are basically worthwhile [0]. But this "algorithmic" optimization that gets out ahead of individuals needs to choose one uniform "correct" framework for everybody, forcing these disagreements front and center. It used to be that people could go their own ways with live-and-let-live, with change happening at a gristly organic boundary - e.g. an older generation pressuring a newer generation to conform to its biases, and the newer generation individually choosing to defy or follow. Now we've got these centralized companies putting themselves in the position of "knowing best", and given that the thing they're really optimizing for is their own bottom line, we know they're going to get it wrong.
[0] I've only read the abstract of the article, so I can't tell if its authors step over the line into unreasonableness. But I am willing to give them the benefit of the doubt, as there is a novel issue here.
edit: Oh jeeze. It seems my "benefit of the doubt" was wholly undeserved. I still stand by my general point, but in relation to some imaginary article where the authors explored users' mild preferences being extrapolated into much stricter "filter bubbles". These stakes of choosing "one true answer" are really bringing the tripe out, as well as fueling the reactions to it.
Absurd but also predicted, no? I am sure I recall people joking here on HN about how de-"biased" dating sites would surely arrive at some point soon, and here we are.
Bear in mind this comes from the same school of thought that brought us "air conditioning is sexist".
If anything, I find myself agreeing with that author in a strict logical sense: if there are two comfortable temperatures and it's set to the colder one that suits men, you can argue that's sexist by the dictionary definition, albeit also practical. But it's still kind of absurd.
Researchers also got unhappy that an AI trained on bulk text learned that doctors are usually men and nurses are usually women. Their solution wasn't to try to encourage more men into nursing and vice versa, but rather to edit the AI so it believed in equal outcomes.
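For concreteness, here's a toy sketch of the kind of edit that means, roughly in the style of the word-embedding "hard debiasing" work (Bolukbasi et al.) - the vectors and numbers below are made up for illustration, not the researchers' actual code or data:

    import numpy as np

    def gender_direction(emb):
        # Estimate a gender axis from a definitional pair like he/she.
        d = emb["he"] - emb["she"]
        return d / np.linalg.norm(d)

    def neutralize(vec, axis):
        # Remove the component of `vec` that lies along the gender axis.
        return vec - np.dot(vec, axis) * axis

    # Toy 3-d "embeddings", purely for illustration:
    emb = {
        "he":     np.array([ 1.0, 0.1, 0.0]),
        "she":    np.array([-1.0, 0.1, 0.0]),
        "doctor": np.array([ 0.6, 0.8, 0.2]),  # leans toward "he" in the raw data
        "nurse":  np.array([-0.6, 0.8, 0.2]),  # leans toward "she" in the raw data
    }

    axis = gender_direction(emb)
    for word in ("doctor", "nurse"):
        emb[word] = neutralize(emb[word], axis)
        print(word, np.dot(emb[word], axis))   # ~0.0: both words now gender-neutral

That's the whole trick: the learned association is simply projected away, regardless of what the underlying text said.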
I 100% disagree with the original article but upvoted it because I think it's important that people see that there are attempts to push platforms to manipulate their users for ideological reasons. It's good to shine the flashlight under the rock.
I don't think it needs to be legitimized with the general audience to be implemented. The more people who see this, the less likely it is to come to fruition. These types of programs don't get pushed through by popular request; they get snuck through the back door based solely on the political motivations of employees and outside activists.
Shhh, that's the improvement they roll out after a few years of the Mandatory Anti-Discrimination (MAD) dating system. The people will welcome it with open arms.
> While it may strike us as normatively acceptable to encourage intimate platform users to be open to more diverse potential partners, we might find some categories more palatable for such intervention than others. For example, it might seem inappropriate to suggest that a Jewish user seeking other Jewish people "expand her horizons" past those preferences, which might be based on a number of religious and cultural considerations. [..] Intimate platforms can be very useful for minorities looking to meet others who share their background and values.
I guess some identities are more equal than others when it comes to preservation. Everyone except the majority (within a country) is allowed to prefer their own.
This might save you a click: The problem they are addressing is that some people prefer not to date others of specific ethnic background or countries of origin.
I'm 100% for eliminating language that is unduly offensive, exclusionary, or racist from socio-technical platforms. That said, aren't we eliminating the concept of desire itself once we make "unrestrictedness" part of it? The whole notion of desire is a directed wanting - that is to say, I want this, not that. Whether this wanting is shallow or not isn't really up to us to decide. This is a technological manipulation of wants and needs (something we've been subject to since the 20th century, though usually those orchestrating it aren't upfront about it).

I think urging people to step outside their comfort zones and expand their horizons is a great thing, but this is throwing a technological restriction/solution at a non-technical social problem. If you really want people to broaden their notions of whom they might be intimate with, start by putting funding into community activities and inclusive communal structures and go from there - that's a much better way to get people talking than to lump some restraints onto what are already arguably dehumanizing, distancing technical outlets.

I know I'm talking past the authors of this paper somewhat (since this isn't their point), but sometimes limiting, reducing, or restricting the application of technical solutions to certain domains is a better solution than committing all this effort to properly technicise elements of human life that shouldn't have been so heavily technicised in the first place.
I don't ever want a platform attempting to change my behavior. I'm quite happy with how I make decisions and I'm super uninterested in having other people's morals and opinions covertly enforced on me through data manipulation, user interface decisions, hiding of opinions...
I don't use dating apps but this statement goes for all platforms. Don't mollycoddle your users. You are not a moral authority. You do not know what your users want better than they do. This is not an opportunity for you to push your ideology on people that may like your platform for non-ideological reasons.
There's way too much editorializing, and desire to editorialize, going on in our industry. We need to collectively take a step back and realize that not only are we as technologists not responsible for adjusting people's points of view, attempts to do so are immoral, unjust, ineffective, and just bad user experience. Give people what they want and get out of the way.
What's your response to the article's point that a platform, any platform, already changes your behaviour?
This is the focus of the discussion - do people know what they want? If, as the authors write, desire emerges from interaction with a platform, then there's no sense in talking about "what people want". There's no sense in even trying to criticize manipulating what people want.
Particularly on browsing platforms, like social media or online shopping, what the user wants to be presented with is not a single, well-defined goal; even when restricted to just one user. Platforms always editorialize, even if it is unintentional. Platforms are already a moral authority, even if just by reinforcing norms.
Technologists who say "I'm neutral, I just listen to the data/user/profits/shareholder/specification" fail to recognize that software acts on people, and being willfully blind to what influence you have does not absolve you of that influence.
> Technologists who say "I'm neutral, I just listen to the data/user/profits/shareholder/specification"
This is not being neutral - these are all justifications for changing user behavior based on the interests of the company.
The real answer is to separate dataset (aka community) from software (aka policy). Then every individual is better able to opt out of choices made for them, rather than the current state of outrage groups pushing companies to adopt their team's desired policies for everyone.
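Roughly what I have in mind, as a sketch (every name here is hypothetical, just to illustrate the separation):

    # The community's data lives behind one shared interface; users then pick
    # whichever client/policy they prefer on top of it.

    class InMemoryCommunity:
        # The shared dataset: one pool of profiles, independent of any client.
        def __init__(self, members):
            self._members = members
        def profiles(self):
            return list(self._members)

    class StrictPreferences:
        # A client/policy that honors the viewer's stated filters exactly.
        def present(self, viewer, candidates):
            return [c for c in candidates if c["city"] == viewer["city"]]

    class BroadDiscovery:
        # A rival client/policy that deliberately shows everyone.
        def present(self, viewer, candidates):
            return list(candidates)

    community = InMemoryCommunity([{"name": "A", "city": "Oslo"},
                                   {"name": "B", "city": "Lima"}])
    me = {"name": "me", "city": "Oslo"}

    # Same community, two policies -- each user opts into whichever they
    # prefer, instead of one company's policy being imposed on everybody:
    print(StrictPreferences().present(me, community.profiles()))  # just A
    print(BroadDiscovery().present(me, community.profiles()))     # A and B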
>The real answer is to separate dataset (aka community) from software (aka policy).
The result of that would be software that doesn't aim to serve the user. This is exactly what software was like before the current trend of analytics and user-experience research - and the reason those have become so popular is that believing you could make a product without feedback on how your users wanted to use it turned out to be a much worse approach.
Instead, you're proposing to make all software so powerful that the user is in complete control of all choices, which is really another way of suggesting that the user become a programmer of their own software. Designing something for end users involves pruning choice paths.
Every paradigm carries inherent biases, but if you simply present it as-is you're at least not exacerbating them. Once you go down the road of "optimizing", you're moving away from neutrality.
> The result of that would be software that doesn't aim to serve the user. This is exactly what software was like before the current trend of analytics and user experience
Um, analytics are a large part of what is feeding this trend of opinionated software that pushes users into behaving certain ways. Analytics optimizes for the company's goals - the users' goals can only be subservient to that.
> Instead, you're proposing to make all software so powerful that the user is in complete control of all choices, which is really another way of suggesting that the user become a programmer of their own software
You're shoehorning my argument in order to use an old, ignorant put-down of Free software. Yes, Free software has been outpolished by surveillance-as-a-service - invasive control is inherently more lucrative, and so it attracts capital. Now that centralized services are moving from "acquisition" to "imposition", their downsides are becoming a lot more apparent.

When the community is bundled with the software, Metcalfe's law restricts competition between software options - likely leading to two attractors: the most popular option, and the fed-up dissenters.

Not being an übermensch programmer means you can't write software that perfectly reflects your own preferences (i.e. nobody can), but the point is being able to choose from a plurality of competing options that come closer to matching them.
> What's your response to the article's point that a platform, any platform, already changes your behaviour?
I strongly disagree with it. Especially when it comes to mate selection, people have a lifetime of experience to draw on when making a decision.
> Platforms are already a moral authority, even if just by reinforcing norms.
I don't agree that "reinforcing norms" (aka giving people what they want) is a moral position, unless that position is: we'll let our users decide for themselves what's best for them.
> Technologists who say "I'm neutral, I just listen to the data/user/profits/shareholder/specification" fail to recognize that software acts on people, and being willfully blind to what influence you have is not absolving you of influence.
Everything acts on people. Every input you take in is processed and your brain decides what to do with it. By staying as neutral as possible, you let the end user make the decision on their own. I also don't think this sort of manipulation is even effective.
How else, then, do you explain the article's claim that desire (as expressed by user actions) can and does change even over the short time of using a platform? Platforms that guide users through an experience (as UX designers aim to do) can therefore have some influence on the user.
>(aka giving people what they want)
How do you define this? The article says that the assumption that people approach dating or hookups with a rigid desire is wrong - so is "what they want" how they feel before they use the application? Why are changes in desire invalid? Are services based on discovery (shopping/advertising/social media/etc) immoral for not giving you exactly what you ask for?
>By staying as neutral as possible, you let the end user make the decision on their own.
But you're presupposing that neutrality is possible. That's exactly what the article argues against - a platform is an active influence on its users, even if that influence isn't deliberately designed.
If you limit someone's selection, then yes, they have no choice but to take the "best" option of what you've offered them. The question is: are you limiting that selection to help them find what they like, filtering out what you think they wouldn't like? Or are you limiting it because you want them to like what you like, stripping away everything you don't?

The first is fine (although it can be achieved with varying degrees of success). The second I would consider immoral, but at the very least it's bad UX.
I think the best option is to offer suggestions based on what you think someone may like but also offer access to the raw data/all data so they can decide for themselves with 100% transparency.
For a dating site example, imagine that someone puts in that they're only looking to match with people over 6 feet tall. That could be viewed as either discriminatory or personal preference. If the platform views it as personal preference, it goes ahead and filters out all people under 6 feet tall. If it views it as discriminatory, it continues to show the user people under 6 feet in hopes that they'll "learn the error of their ways". My argument is that platforms have no business doing the latter.
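In code terms the difference is just this (hypothetical names; a toy sketch, not any real app's API):

    def visible_profiles(profiles, min_height_cm, honor_filter=True):
        # honor_filter=True:  treat the stated preference as a personal filter.
        # honor_filter=False: the "discriminatory" reading - record the
        #                     preference but keep showing non-matches anyway.
        if honor_filter:
            return [p for p in profiles if p["height_cm"] >= min_height_cm]
        return list(profiles)

    profiles = [{"name": "A", "height_cm": 185},
                {"name": "B", "height_cm": 170}]

    print(visible_profiles(profiles, 183))                      # preference honored: A only
    print(visible_profiles(profiles, 183, honor_filter=False))  # "corrective" mode: A and B

Same data, same stated preference - the only difference is whether the platform respects it.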
Your example is meaningless because the article doesn't make any kind of argument for that situation. It suggests that those filters should be more difficult to use, or completely absent - but not that they should be present and ignored. This also wasn't the direction of your original objection against "data manipulation, user interface decisions, hiding of opinions".
More generally, the article doesn't agree with your general idea that "what they like" is a well-defined concept, or that a platform is able to get a good idea of what it is. Giving a user the power to search for specific preferences doesn't mean that a platform will be able to return the best results for that user - just the results that have been searched for. What that means for the authors is that a preference search isn't necessary or meaningful to have.
> It suggests that those filters should be more difficult to use, or completely absent - but not that they should be present and ignored.
This is semantics. It's about intent. Does the platform intend to give the user what the user wants (whether it can or not is beside my point), or does it intend to show the user what it, for political reasons, wants the user to see? This can include taking away tools that would let the user search for particular preferences.
I also strongly disagree with the article that people don't know what they want. I think they do, especially when it comes to mate selection.
This hints at the problem I have with a lot of censorship efforts. Censoring a bigot doesn't make them any less of a bigot, nor does it make them disappear. Censored bigots just form their own separate communities, where they're less likely to be exposed to ideas that change their minds.
Conversation is a two-way street. If we refuse to let bigots converse, we have to realize that we're also losing the ability to converse with them. A lot of people's gut reaction is "I'm okay with not talking to bigots", but I simply can't accept that. Every advance in reducing bigotry has come from having a conversation about it.
"Circassian girls were described as fair and light-skinned and were frequently enslaved by Crimean Tatars then sold to Ottoman empire to live and serve in a Harem. They were the most expensive, reaching up to 500 pounds sterling, and the most popular with the Turks."
Here the Turks were doing the colonizing, so it can't be said that these preferences were somehow imposed on them by a colonizer.
I would say your example is more like how "the other" is fetishized by the society doing the colonizing. We see that today with Asian women being highly sexualized by Western media.
Aren't e.g. white people just as much "the other" from the point of view of black people? We're just picking the explanation we prefer depending on the case, without any real justification. If it's a minority that prefers some other ethnicity, it's colonized desire, but if it's a majority, it's fetishizing "the other"?
This is what you get when your school of thought holds group conflict as its axiom of morality and justice rather than the rights of the individual. It is a narrative of victimhood which can never be rectified because its perpetual existence is baked into the worldview.
The hosts of this seem to be pushing the idea that non-whites dating whites is bad, because...reasons? There's nothing wrong with interracial dating for those who want to do it, and there's nothing wrong with those who don't want to do it, either.
Uh, is this a real article, or one of those AI-generated ones used to show that you can publish anything as long as you add enough quasi-progressive verbiage to it?
I tend to use the phrases "unjust discrimination" or "prejudicial discrimination" when talking about the sort of discrimination that ascribes negative or stereotypical characteristics to a person based on their membership in a class.
But discrimination on its own is a neutral action, and as you point out, being discriminating by judging things on _relevant_ attributes is perfectly legitimate and often a mark of wisdom.
So I would reject what you say about interpersonal relationships being the only area where being discriminating is laudable.
Employers are discriminating when interviewing job applicants -- but hopefully they are practicing appropriate discrimination, weeding out people based on their skills and capacity to do the job, rather than inappropriate discrimination based on prejudicial reasoning.
Similarly, consumers are discriminating when they look for products or services that are of high quality, and there's nothing wrong with that.