It's a film that was intended as a joke, and uses eugenics as its premise. Yes, the Internet has made idiots louder, but it has also helped intelligent people become smarter. The next 4 years will be like the last 8, minus the pandemic.
We still haven’t restored the part of the US federal government that stopped SARSv1 (they operated out of China and other countries with the cooperation of local authorities). Trump disbanded them before SARSv2 (aka COVID-19), so they weren’t around to respond to it.
Also, we’re still funding the biological weapons research programs that almost certainly created COVID (according to documents from multiple departments in the Biden administration).
On top of all that, RFK’s trying to switch everyone to raw milk in the middle of a bird/cow flu pandemic. That creates a new disease transmission vector that’ll probably help it cross to humans.
I like the movie a lot, but the beginning is a little problematic from a modern viewing, iirc. It discusses how the poor and uneducated produce more kids than the higher classes, resulting in a dumber population after many generations.
The correlation between a higher standard of living and having fewer kids is factually true. However, that exact line of argument has been used as a dog whistle against other "undesirable" groups in the past. The movie's beginning implies it would be better if we decided who gets to have kids.
Overall a great movie, but I think that part has aged poorly.
How you go from "smart people should reproduce more" to "cull the population of the unfavorable" is on you, but that's certainly not the conclusion I would draw.
If ignorance is rewarded, then people will willingly choose to be ignorant. The advice in this article sounds nice, but the reality is the rats only want the reward. If learning Kubernetes gets you a six-figure job tomorrow, people will chase it while ignoring networking and OS fundamentals.
I'm sorry you went through that. I had an interview recently that asked no technical questions, only logic puzzles like "princess is behind door number 1, monster is behind door number 2" scenario shit. I mentioned I'm extremely bad at these, but I have ten years of experience that I can speak to. I went ahead and did the quiz and got ghosted anyways. The silver lining is we get to watch companies like this become landfills. Cheers.
Next time just refuse to dance and end the interview on the spot. I do that and it feels great to see the light dying in their eyes when they realize you're the one "dumping" them.
Feels good for 15 minutes while you burn a bridge and take yourself out of the running. Presumably they have other candidates, and they probably don't even remember you a couple of days later.
It's a matter of not wasting each other's time and energy on futile things. If it's not going well for one reason or another, better to politely end it and let each other do something else with their day.
I for one think that logic puzzles are a reasonable filter for many positions.
I used to be very good at logic puzzles and have steadily been getting worse. Yes I have wider perspective now, but my raw problem solving ability has deteriorated. I can understand if a company wants to maximize problem solving for certain roles.
I'm the same way, but I use a Hobonichi journal with a FriXion erasable ink pen. It flows much more smoothly. I used to use a Moleskine with a Fisher Space Pen, but I kept having to go over lines again. Maybe it works better in space.
If you're already using the FriXion pens, you might like Rocketbook notebooks as well. They work with the FriXion ink so that the pages wipe clean with a damp cloth, and there's also an app that will scan your notes to your cloud storage or email.
I used to use Rocketbooks back in school, and they were quite nice. This was a few years ago so I can't comment on them now, but I really enjoyed them then. You could set up various places for them to save to: there were boxes at the bottom of each page you could check, so if the first box was checked the scan got sent to Drive, the second box could go to Dropbox, etc. I had all mine set to different Google Drive folders for my classes. It scanned everything quite well and was honestly pretty seamless. I feel like there were more features, but I can't remember now; even if there weren't, I still liked the organizational structure of it all.
I haven't had any issues with residue after a lot of use. The only thing to watch out for is writing too hard and leaving a scratch in the paper, but that's easy enough to avoid.
Anand would do well to read the room. This is why soft skills are so valued. If he sees he’s annoying the vp with too many questions, maybe dial it back. Does Anand want the job or want to be right?
He wants to be right. A job interview is not like an exam, it's like dating. An opportunity for two parties to decide if they can work together. Hiding who you really are to 'pass' is a bad foundation.
That's why I don't like the conclusion of the article. Anand didn't fail the interview because the criteria were wrong - the interviewer failed because they suck at being an engineering manager.
It’s nothing like dating. Employees need a paycheck. It’s an asymmetric relationship. This becomes very apparent during economic downturns!
There’s really no use searching for the perfect job when the market is flooded with candidates and you have young children and mortgage payments due, yet you have to act like every job you’re applying to is some gift you’ve been searching for your entire life!
> Employees need a paycheck. It’s an asymmetric relationship. This becomes very apparent during economic downturns!
While this is certainly true, there still are usually enough jobs that fit the broad parameters of "pays enough to eat and take care of my rent". If you have a choice of a good boss at lower pay, or an a-hole boss at higher pay, you should take the good boss every time.
We spend 1/3, or more, of our lives at work, and life is far too short to spend any of it taking orders from an unpleasant, incompetent, or abusive a-hole. Not only will this make your time at work miserable, it will also lead to poor sleep and spending far too much of your "free time" brooding about your job. In that type of situation, you're not giving them 8 hours per day, you're giving them 24 hours per day. Eventually that's gonna take a toll on your physical and/or mental health. Don't do it.
This is also important if one wants to grow in their career. It's hard to grow in a position where you're only asked to do what you're told. As a result, you won't progress to a higher-level position and will lose money in the long term. In other words, short-term thinking loses out, even economically.
To be fair, not all of us can be socially competent mentally healthy people who enjoy life and regularly stop to smell the flowers. Some of us just suck, some of us are miserable, so we have to put on a fake persona in order to entertain the other person enough for them to actually give us a chance. In a way, dating is all about entertaining and being entertained.
You are implicitly assuming that there's some finite amount of work that could make every such person "dateable without deception". While it may or may not be true in any individual case, as a general statement it sounds to me more like "well, it's _your_ fault because you didn't work on yourself enough, so you deserve to be lonely".
I've encountered this too: I was interviewed by many of the team I would be working with and was considered a good fit, except by the one high-standing academic type, who didn't agree. In that situation, I knew I wasn't impressing that person; I was just being myself. Maybe I was subconsciously testing their openness, because I remember I would use a certain word or expression, they would rephrase it, and instead of mirroring their phrasing I would say it the way I thought of it. It wasn't so much about being right or wrong, but maybe about not rocking the boat or just 'being like them'.
Anand could very well be moving across countries and investing immense amounts of time to start a new life. He could also be leaving a great job to seek a new challenge in a company he wants to invest years of his effort in. The VP has also dedicated time specifically for Anand's interview. He also probably has the power to fire him on short notice and with limited severance.
Anand should ask all the annoying questions. This way he's saving both himself and the VP the wasted time of hiring him and then having a broken relationship later because things weren't clear. It also lets the VP see what concerns the potential employee and whether they have a potential mismatch in expectations.
Finally, even if we disregard all that I said above: if it's Anand's working style to ask a lot of questions and he hides this style during the interview, then he will invariably clash with management over it later if he's hired. Then he may have more to lose than just a potential future job.
I'm not sure why it's only on Anand. If the VP was getting annoyed, why couldn't the VP steer the conversation? He's the VP ffs. Isn't "having conversations" one of the key job requirements for any VP?
The problem with "just read the room" is that you can very easily misread someone when you're meeting them for the first time, for example during a several-hour-long interview session.
It's easy to misread people after several hours of interviewing, especially if everyone else is open to questions and things go well. I'd actually apply criticism to the VP ahead of the candidate. My expectation is that someone operating at the VP level has some degree of self-awareness, which this person clearly lacked.
The article says that being an effective contributor is more about bringing a fresh perspective to the table, rather than making leadership comfortable. It also says that Anand was ultimately hired and was effective in the role.
It's sad when "read the room" and "soft skills" are equated to agreeableness and making management comfortable. A company that thinks this way has only one real brain, the one in the head of the leader. Everyone else's job is to agree with that brain, not use their own brains to question it. If the leader happens to be a world-historical genius, this might work, but that's not true in most cases, so such a dictatorial structure doesn't seem likely to be the foundation of a successful company.
And actually, there's an even more devastating problem with being a dictator who surrounds themselves with yes-folk. Dictators don't like information that contradicts their perspective, and the yes-folk learn that. So eventually the dictator is only getting information that confirms what they think, and the org becomes incapable of adaptation.
Personally, I would not enjoy working for someone who was annoyed by me asking questions. Questions mean someone is interested and paying attention. To learn about something is to invest effort in it; it's a great sign. Anand dodged a bullet here IMO.
OTOH maybe the VP feels challenged, which is why the questions are 'annoying'.
I mean, in this climate where people are being asked to win X-factor competitions for the privilege of a job, how sure are we that those already onboard would pass if they had to run the gauntlet?
I often try to explain this concept to people who want to work in IT engineering jobs. They see qualities like high salary, work from home, no physical labor, no college degree. I explain to people who read less than one book a year that you have to read a lot, every day. Or that you're going to have to change your lifestyle to balance out the negative health effects of sitting for 9+ hours. Then they ignore me, get Security+, and complain no one wants to hire them. I should just send this article in the future.
This article has almost nothing to do with what you're talking about; the people you're talking to want to get into IT specifically for the attributes of the job itself, not because they enjoy consuming IT products. They're much closer to the mark than the aspiring coffee shop owners are.
I'm sorry, but honestly fuck Meta. Conducting psychological experiments on users without consent, and enabling psyops from organizations like Cambridge Analytica, is enough for me to never use their services. I hope they implode as a company. I happily said no when people from Meta approached me on LinkedIn.
We can commend the good behaviors and condemn the bad ones. It is important to do both, because if all we do is condemn, then there is no pressure to do the right thing. If all you can do is evil then you'll only ever be evil, and criticism will fall on deaf ears.
A lot of people here agree with you about Meta's faults, even me. But that doesn't mean we can't appreciate the good they've done. Even if it isn't much and even if it is vastly outweighed by the bad. We should still use positive reinforcement to pressure companies to do the right thing.
I agree. Thanks for bringing that to the discussion. The risk is that it creates a false equilibrium where the perception is that the good equals, outweighs, or justifies the bad.
But you are totally right. We must celebrate the good in others and in life, or our critical observations lose all merit.
Yeah, I think we just have to recognize that complexity exists in the world around us. I think a lot of problems we face today are directly related to this issue. We're sold simple solutions which fall short. Our brains weren't designed to think in complicated ways, and we want simplicity. But as a species we need to move beyond this. After all, one of the unique qualities of humans is being able to override our instinctual behavior (easier said than done).
Well, yes, but a good fraction of the good they've done would have been done by other companies if they weren't around.
What's frustrating is that they currently have a near-monopoly on friend-to-friend publishing, yet unlike about 5 years ago, they now gatekeep a lot of those communications -- many times I post something on my Facebook or Instagram that I want friends to see, and Meta doesn't show that content to my friends, just because it isn't a "Reel". Friends in my age range (+/-10) who grew up in North America tend to use only Meta's products, so I basically feel like I'm talking to a wall most of the time.
Not saying it makes it okay, but every company in existence does this to some extent. Everyone does A/B testing with the intent to alter user behavior (aka psychology) to increase their profits.
And without A/B testing, every product you use would be worse. Not only would it be less profitable, but it would also be harder to use, less useful, and less productive.
A/B testing isn't a new thing - I'm sure the inventor of the wheel experimented with different shapes, and the buyer of the hexagonal wheel probably didn't have the best user experience.
Multiply that by the number of people in the world and the number of products people use, and A/B testing is really up there as possibly one of the most beneficial ideas ever.
I really don't understand those who claim it should be banned - I see no way that testing two different versions of a website with people who desire to use that website can bring sufficient harm to outweigh those massive benefits.
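For concreteness, here's a minimal sketch of what a basic two-variant test actually computes. The visitor and conversion counts are made-up illustrative numbers, and the two-proportion z-test shown is just one common readout, not a claim about how any company mentioned in this thread analyzes its tests:

```python
# Minimal sketch of a two-variant A/B readout. Visitor and conversion
# counts are made-up illustrative numbers, not data from any real test.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return p_a, p_b, z, p_value

p_a, p_b, z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=530, n_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z: {z:.2f}  p: {p:.3f}")
```

The mechanics themselves are neutral: the test only reports whether B's rate differs from A's. Everything contentious in this thread is about which metric gets plugged in.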
I'm sorry, you could make this case about some kinds of telemetry, but specifically not A/B testing. Speaking from work experience: A/B testing doesn't look into the nuances of usability or productivity, it looks at easy-to-quantify metrics like conversion rates and money spent. These metrics rarely align with a better experience for the user (outside of like, prettier buttons and stuff), and instead tend to result in less-informative, less-agentic software (information and choice often distract from conversions!)
This is complete BS. I run hundreds of a/b tests each quarter and I specifically refuse to run the types of experiments you allude to. My a/b testing is all about helping users achieve the things (the outcomes) that they want to achieve by using our product in the first place. If we can help them do that, with more ease, then we are creating a better experience.
Perhaps you should just agree that, "not all a/b testing is the same".
Did you even read my whole comment? It's BS because they made a blanket claim without any nuance, which I tried to add with my comment plus an example!
Quote - "Speaking from work experience: A/B testing doesn't look into the nuances of usability or productivity, it looks at easy-to-quantify metrics like conversion rates and money spent"
This is an interesting example, and perhaps pushes part of the blame and dislike for A/B testing onto tech companies' incentives.
If you're building a tool to make life easier for the user, something that gives them a better experience is your optimal outcome. This seems like a scenario where A/B can produce a good outcome.
The challenge is when you throw in an ad-based revenue model, and the A/B testing is then optimized for the opposite (eyeball-hours, linear metres scrolled per session, ad spots passed, ads clicked) - engagement-based business models end up (I'd argue) A/B optimizing for the opposite of what their users want, to get them to spend longer doing a task they could have done quicker.
> The challenge is when you throw in an ad-based revenue model
The funny thing is, the ad-based revenue model is not the only possible variant. Last time I checked, Facebook's profits per user were $7 per quarter, that is, $28 a year. At the same time I am paying LiveJournal $25 a year for the ad-free version. Just taking my money looks like a much better model in many respects:
- less overhead: a lot of people doing these studies on how to force me to look at something I do not want to look at would be free to do something more useful to society;
- streamlined relationship between me and my publisher: in this model there is no advertiser who can say “I do not like these texts, no revenue for you”.
That's why I prefer paying for some Substack authors, like Matt Taibbi and Glenn Greenwald, to fishing their texts out for free amid some sea of "clever" advertising (hey, A/B testers: I bought this thing already, what's the point of forcing it on me again and again?).
I kinda wish the Brave model (my money distributed between the sites I visited) got more traction. It looks much healthier.
There are much better methodologies for improving worker productivity than A/B testing. A/B testing is designed to extract information from people with whom you can't do more complicated tests, such as eye tracking or motion studies.
The major issue with A/B testing in the workplace is that it causes confusion and slows people down when you change things, which makes these tests really expensive even if they are seemingly easy to perform. So I would call it useful but flawed.
As someone who’s run literally hundreds of A/B tests, many of them on the backs of UX research with users in the field, people have no idea what they want. The anecdata is a place to investigate, but never the end of the journey.
The fear with direct user research is that, unless you have a team and budget for getting a large enough sample, one-on-ones might not only be unhelpful but actively harmful, if you implement something that solves that customer's problem but otherwise gets in the way for other customers.
I'm having a difficult time imagining a situation where people's actual productivity using a piece of software can be so easily measured. I'm sure it happens, but I think it's safe to say this is the exception to the rule when it comes to A/B testing.
The specific test that did it for me: Facebook ran an experiment where they logged users out and then wouldn't let them log back in despite the correct password, just to toy with them and see how long/hard they would keep trying to log in, in order to see how addicted they were to Facebook.
I was in the "B" group, and felt so humiliated at how many times I tried to reset my password to get into Facebook.
Wow. That is insanely user-hostile and borderline gaslighting/psychological torture. That is truly one of the most insane experiments I've ever heard of someone running.
It was a footnote in the wake of the main psychological experiments Facebook ran on its users back in 2014, which is overshadowing my searches for this particular detail.
I had this happen to me, I think, and another case where half my links/screenshots would randomly get censored. Not just links to random blogs/tweets/forums, but even to super mainstream stuff like Wikipedia/CNN/BBC.
We can look at the different demographics of people who tried to log in less than others, try to determine why they aren't as hooked as others, and work to rectify that by improving their experience!
There's also a neat sleight of hand here. Your inventor of the wheel surely tested multiple variants to optimize for the utility of his invention to the user. The A/B testing that's problematic is about optimizing for taking advantage of the user. That doesn't lead to a better experience, but the opposite. This is what's increasingly popular, and this is what people complain about or want to see banned.
Related: attention economy is predicated on bad user experience, because it makes money from friction.
There's a reason Tristan Harris called upon SV to avoid "A/B testing ourselves into the 'gradient descent of mankind'".
My qualms are not so much with the method as with the morals that guide it. It's agnostic, but when operationalized within a faulty moral framework it can definitely lead to bad results.
> And without A/B testing, every product you use would be worse.
The primary goal of A/B testing is to see what's more profitable.
If that happens to result in better UI that's a side effect.
In fact, it could result in less usability (relevant to this conversation, it probably resulted in the frustrating "algorithm-based" timeline at FB/Twitter/etc).
> The primary goal of A/B testing is to see what's more profitable.
Well, I think I'll provide a disagreeing opinion. :)
I assume this opinion probably comes from your past experiences, and I believe it is true in many cases. Since I'm not American and have never worked in an American corporate environment, I can't say what is true over there... but my experience in the EU and Canada with A/A/B, A/B/C, and typical A/B testing (as well as building such testing tools for others) was not like that.
For example, when building tutorials for users, profitability is far from being the primary objective. Same goes for building documentation, programming languages, open-source software, internal tooling and other such things.
Of course, I get that in the end, profitability is the primary goal of the company (with some exceptions). But I maintain that not all A/B tests have profitability as their primary goal, which makes the previous statement an incorrect generalization IMO.
A/B testing led to the development of effective "dark patterns" in UI that trick users into doing things they don't want or don't understand, and then make it difficult to undo.
My thinking is that in this alternate universe where pi = 3, circles (with circumference 2 * pi * r) will look like hexagons (which have perimeter 2 * 3 * r), so wheels would have to be hexagonal.
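For anyone checking that geometry: a regular hexagon inscribed in a circle of radius $r$ has side length $r$, so

$$P_{\text{hex}} = 6r, \qquad \frac{P_{\text{hex}}}{2r} = \frac{6r}{2r} = 3,$$

i.e. the perimeter-to-diameter ratio of the hexagonal "circle" really does come out to exactly 3.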
Certainly, because it probably never existed as a functional wheel. Do you have any evidence that the "well duh" criterion wasn't used, i.e., that hex wheels show up in the archaeological record?
> A/B testing isn't a new thing - I'm sure the inventor of the wheel experimented with different shapes, and the buyer of the hexagonal wheel probably didn't have the best user experience.
Consider yourself that inventor: would you A/B test hexagonal vs. round?
That sounds like a clear no, you already know the answer to the hypothesized A/B test.
As for the other aspects they sound like great targets for testing within different use-cases, but I'm not sure why that'd be an A/B test as we think of them now.
> And without A/B testing, every product you use would be worse. Not only would it be less profitable, but it would also be harder to use, less useful, and less productive.
Do we live in the same universe? As far as I can tell, software keeps trending worse. Usability is terrible, options and settings keep getting moved around and hidden, software is less responsive than it used to be...
You are free to keep using MS-DOS for your computing needs...
But the majority of people choose to use a modern computer, presumably because they find it overall more useful than their old MS DOS computer and software.
Sure - there are gripes, but they must be outweighed by something pretty big for 99.99% of people to choose a new computer over a 30 year old one.
> And without A/B testing, every product you use would be worse.
Worse for whom? I feel like a lot of the A/B testing results in more revenue, a more addictive app, and less user satisfaction, because they're not testing for anything beneficial to the user, because at least with FB, you're not the customer, their advertisers are the customer.
This is so wrong in its conclusion that it's hard to know where to start. First, we should be clear that we are talking about involuntary, undisclosed A/B testing.
I have not experienced a product become better for the user as a result of involuntary A/B testing in my entire adult life.
Producers and consumers have both an adversarial relationship and a mutually beneficial relationship, and the distinction between the two is essentially the split between voluntary A/B tests and involuntary ones. In the adversarial component, the producer is trying to figure out how to extract more money from the consumer without improving the product. Alternatively (and equivalently), how to make the product cheaper, but also worse, in a way that the customer doesn't notice (with their wallet). A proactive version of the "market for lemons".
For instance, if you A/B test your cancellation process to minimize the number of people who cancel their subscriptions, you will almost certainly do something that makes you some additional money, and is also unambiguously evil.
Any A/B testing that is of mutual benefit to consumers and producers can be done with consent, by volunteers. And the minuscule amount of scientific rigor you would lose by doing so is not worth the tremendous sacrifice we have seen in the quality of consumables in the past 2 decades (probably longer, but I do not have the personal experience to go back further).
You might be compelled to describe involuntary A/B testing as a strategy for maximizing evil subject to the constraint that it be legal, but it often dips its toes into seeing what is illegal but still profitable, and is capable of fundamentally undermining our legal system and even our political system.
The technology has grown more powerful. The addition of computers that can optimize essentially arbitrary objective functions has serious existential implications for humanity.
A blanket ban on the practice, incurring the total dissolution of any corporate entity found guilty of the practice of involuntary A/B testing, would be a start.
Well, for starters, there would have to have been a product that improved at all. Those are already rare enough that I can enumerate them, and in each of those instances involuntary A/B testing can be ruled out for other reasons.
Take when Craigslist added the map that shows you where all of the people offering the thing you are interested in are. That was a very good change, but that's pretty far from how Craigslist usually operates.
When Domino's stopped serving hot glue on cardboard, it's pretty easy to see that it didn't come about through furtive A/B testing. They were pretty confident people would like the new pizza more than the old pizza, so they told them about it. Boy, did that work for Domino's.
That actually speaks more generally to my point. If you're making a change that you think people will like, you tell them about it, because even if it turns out that they don't like it more, the fact that they thought they would and you did it generates quite a lot of goodwill.
UX is orthogonal to the thing most web-based software companies optimise for. Unless users are paying for the service, what the company cares about is user engagement, not user experience. It's not that every software company ever doesn't know what they're doing, it's just that what they're doing is _at best_ tangentially related to improving UX. It is clearly not in the user's best interest that they feel compelled to check Facebook frequently throughout the day, or spend hours scrolling through their feed. You can spin that as the company just improving the experience so that people want to keep using it, but fundamentally it is intentional psychological manipulation.
Within a university, research with human subjects is required to pass an ethical review before it is allowed to proceed. Given the scale and impact of the research conducted by Facebook on its users, it is entirely reasonable to hold them to the same standard.
> what they're doing is _at best_ tangentially related to improving UX . . . You can spin that as the company just improving the experience so that people want to keep using it, but fundamentally it is intentional psychological manipulation.
So we agree that A/B testing is good for optimizing toward a numerical objective. You then seem to think that either:
A) There are simply no numerical objectives that correlate with good ux or
B) Every software company ever is optimizing toward perverse incentives by which they take more money from their users while making their products worse
It's probably B that you believe, and this is such a myopic and paternalistic view. There are a couple of cases where it's a problem, e.g. cancellation flows. But this problem is orthogonal to A/B testing (try cancelling your newspaper subscription in 1994). A/B testing is mostly just trying to improve the rate at which people sign up or buy something, and in this case your objection hinges on the hidden premise that people are idiots.
> Within a university, research with human subjects is required to pass an ethical review before it is allowed to proceed. Given the scale and impact of the research conducted by Facebook on its users, it is entirely reasonable to hold them to the same standard.
obviously "A/B testing" is not "hiring psychologists", because one is in the category of experimental methods the other one is about human resources
yet there are obvious connections. A/B testing is used to increase "conversion", which is profitability. which is the same fuckin' thing as addictiveness in case of a site where you pay with your eyeballs
That doesn't make any sense. You decide what they read. You would want to know if it's harming their mental state. Closing your eyes to the impact you have does not negate the impact.
It isn't just the one thing; it's a pattern: when given a choice between respecting a sense of ethics and decency or taking more money, Facebook as an org has, in every instance that is publicly known, taken the money. The high salaries seem to be justified not by technical skill but by a willingness to do what they are told for money, without regard to conscience.
Read the whistleblower report; witness the evolution from content your friends posted, in a less addictive chronological feed, to addictive content from around the internet that your friends liked, sorted for addictiveness. Hell, the site started as a PHP hack to creep on pretty women. They sold a bunch of data to foreign adversaries. For years, they let people sell ads to Nazis. They don't give the people faced with the psychologically brutal job of moderation benefits. They have been a platform for genocide and government surveillance.
I might be missing some examples of them passing up money to do the right thing, but nothing comes to mind.
A/B testing (which is getting users to respond/react to a UX event, and choosing which outcome is more suited to the business) is considerably different from "can we manipulate users up front, to perceive or react to things assertively and programmatically, even if against their interests?"
You have a good point. But let's not forget that the Nazis and the Japanese used to do incredibly invasive medical tests on human beings in the name of "science". (Even the Americans have done political and medical experiments on their citizens, using the CIA, on African Americans and criminals.) All of these are condemned today by the scientific community because they caused great harm (or even death) to subjects who never gave their consent to such experiments. The psychological experiments conducted by FB on its users were equally bad because they looked to trigger emotions in the users (a useful feature for an advertising platform), some of which could cause users to go into depression. I don't know if the FB people conducting those experiments were aware that even mild depression causes great stress on an individual, and serious depression triggers suicidal impulses.
(This is not an attack on you or your otherwise valid point. Just a reminder that people should be mindful of their ethical obligations to get informed consent and not cause harm to others with their experiments).
I doubt that the mentioned 'psychological experiments' are just A/B testing. There is a very strong case to be made that Trump and Brexit would not have happened without Facebook, and those are just two examples.
That is such a poor reason to not like Meta, among many, many reasons. All Meta did was publish their results. The backlash ensured that no other companies shared results, but A/B testing is bigger than ever across the industry.
Now if you had said “fuck Meta because my feed is filled with an unintelligible mix of baby photos and political screeds,” I’d totally follow.
"On 6 December 2021, approximately a hundred Rohingya refugees launched a $150 billion lawsuit against Facebook, alleging that it did not do enough to prevent the proliferation of anti-Rohingya hate speech because it was interested in prioritizing engagement."
I'm sorry, but the "A/B" responses to this sentiment are some of the worst euphemistic cope I've ever seen. You all really need to get out of your "tech" shells and take seriously this idea that "running nonconsensual psych experiments on humans" is a fundamentally evil thing to do.
I am interested in this line of logic, but I'm not sure I would go so far.
I feel engaging in commerce in any way at all is impossible without being "evil" under this definition.
Aren't advertisements at their core unconsented psychological manipulation? What about retail store design? Is providing customer service altogether just manipulation?
I think I take issue with the word "evil." It seems to imply a certain malice or intent to harm, which just isn't logical, given the context.
> Aren't advertisements at their core unconsented psychological manipulation?
Yes. 100% yes.
> What about retail store design?
Also yes.
Advertising exists to try and make you purchase product “x”, regardless of whether you need it or it meets your requirements. Even “harmless” display advertising exists to gently push the mind into forming the desired emotional association with a product.
Store layouts are well documented for being deliberately anti-efficient: they make you walk past everything else to find the things you need, with the goal of exposing you to more advertising and product placement. Combine that with strategies like putting sweets and other “low friction” products at the checkout, where you’re more likely to make a spur of the moment emotional decision under pressure, or be hassled by your children for sweets, and you have something that is inherently exploitative and morally questionable at a minimum.
Evil is probably too high-modality, but unethical and exploitative are definitely suitable.
I agree wholeheartedly with everything you were kind enough to share. You even elaborate on the things which I was too lazy to expand on myself. So I think we're on the same page.
I only took issue with the choice of "evil" because I am working on curbing my own hyperbole and sweeping generalizations.
I don't care much what the word (or any word) means, I care what it does.
Nothing wrong with calling "Facebook" evil. It's not a person, it's an artificial entity. It can take it. (I too would be more reluctant when it comes to people)
That's a really bad and useless definition of evil, and we should have figured that out a long time ago. You can't just say "we were following orders," or "that's just business."
And no, I can't give you a clear brightline test right now to determine this. It takes homework.
But.
If you want to A/B test whether people prefer blue chewing-gum to green, be my guest.
If you want to A/B test for literal depression and sell stuff to people based on that, you're probably evil.
The line's somewhere in between there. Now we know where Meta has chosen to go on some of this; and again, this is from WHAT IS KNOWN TO BE PUBLIC.
The safest thing is to assume the worst, then. May they rot.
Thanks for the clarification, specifically the examples. I agree with the spirit of your conclusion, but it's far outside my expertise (psychology) and experience (haven't used FB since about 2012).
"They" is unnecessarily vague I guess. It's not fair to write off the entire population of Facebook staff... There are surely a substantial chunk of employees who were entirely unaware of these tactics.
It's easy to claim an organization is evil. Less so for individuals, and with good reason. There's something to be said for remembering to humanize people, even those most "undeserving."
But it doesn't have to be within your expertise for you to criticize it anyway; that's the point I want to make.
There is no harm in me saying out loud "Facebook is an evil company" even if I'm completely wrong, because they are so powerful. It would be valuable for everyone to scream at Facebook for being evil, because that would force them to show and prove that they are not.
We shouldn't get stuck on being "accurate." Throwing rocks is not just okay, it's a good idea if it forces the powerful party to act/respond.
As for the individuals -- it's their job to defend the company. If that hurts their feelings or whatever, too bad. I don't care and neither should you (presuming that Facebook is potentially very harmful.)
That's not a psychological experiment - it's barely even an experiment. If someone chooses one over the other, what have they learnt? Are they controlling or even measuring any other variables? It's also not non-consensual. If you go to a store you expect to have options of things to buy - that's the entire point of a store. It's not even an A/B test! Everyone sees both options and makes an informed choice.
I'll cede no ground to anyone on loathing the corporate world, to the extent that I've all but abandoned work and the cash economy (at the cost of considerable personal privation).
But humans are complex beings, and it's just realistic to view us all with some nuance rather than casually tossing everyone into good/evil baskets. The GP outlined an example of decent behaviour, and it remains just that regardless of our attitudes on other grounds towards Zuckersnuffles and/or Meta. Even murderers can be kind and decent people in some contexts (witnessed at first hand).
I understand the sentiment. I agree there should be accountability.
With that in mind, I'd like to point out that the root comment merely expressed an anecdote about above-and-beyond humane treatment of employees by the CEO. The parent described the hacker-friendly culture. Those appear to be first-hand accounts.
How does your comment contribute to that discussion?
Further, how does over-the-top, genericized outrage and wishing harm on a cohort of innocent people advance society in any meaningful way?
Your argument is essentially that the company treated its employees well, so who cares what it did to users. Who gave you the right to decide the discussion is framed around how Meta treats employees rather than what it does for the entire world?
I made no such argument. It is counterproductive and disingenuous to state otherwise.
I asked a question; no conclusions drawn. If I had a point, it's that the comment was off-topic with regard to the comment it replied to and warrants its own tree of discussion. It was a rant; there wasn't even an attempt at a segue or a good-faith effort to provide contrast.
What "gives me the right to decide how the discussion is framed" are the hn guidelines (which you violated yourself by presenting a strawman).
But is it compulsory to be so Manichaean? Have you really never seen someone (or group of people, or company) behave well in one context, and badly in another?
I understand perfectly well that they are both depending on context.
All I'm saying is that the harm they inflict on billions of people outside the company outweighs the good they do for thousands inside. Just because there's both good and bad doesn't mean the degree of each cannot be weighed at all.
Your redoubled insistence on a clearly irrelevant totalising good/bad judgement as a purported response to a quite specific comment about labour practices is more than a tad onanistic.
Not sure I really intended it as a burn, though you're right I do love words, sometimes to a fault! That was a generous response on your part anyway. Not one of my better days. Thanks.
Work culture and values both matter. I found @Leires comment insightful because it jarred me to the reality that while we are appreciating the good in someone, we shouldn't forget their capability to be bad. Can a Gandhian be comfortable working at Hitler's gas chamber? Both had amazing leadership qualities. What attracts us to either of them are the values we think they represent.
The comments were specifically about Zuckerberg's character. The replies should be about Zuckerberg's character. The reply in question didn't even mention him.
So, Cambridge Analytica. Facebook has been used as a source of electioneering data for years before that. See for example Eitan Hersh's testimony. In fact, I've read articles bragging about how inventive the political technology using social networking profiles is, for a couple of electoral cycles before that - they may still be somewhere in HN archives even. And of course, selling the very same data to advertisers, maybe repackaged a bit differently but the same source and same data set, is the whole business model of Facebook. And it somehow never bothered anyone until Cambridge Analytica. Why is that?
Most don't have a problem uploading address books and contacts into these platforms. I think it depends which team it is. Companies? Cool! A political party you agree with? Sure! They mined the social graph, and Zuck reached out to them and said they're on the same team and didn't restrict access while they mined 50 million people. Don't forget Zynga!
A yet bigger concern is the government blatantly violating the constitution by instructing Facebook to do that, and nobody stopping them. At least Facebook's bad behavior is within the law. The supreme law of the land explicitly prohibits the government from restricting speech based on its politics. They do it anyway, and there are no consequences whatsoever. I think it's concerning.
It's naive of you to think that this is a Meta problem. Just by surfing the internet you're subject to all the same psyops by organizations that are arguably even worse than CA.
You seem very sure of what you know, and very confident about how the people working at Meta think and act. Do you have any direct experience? Or know anyone who works there?
I never stop being amused by lazy oversimplification.
I never stop being amazed by the level of trust some have in lazily oversimplified media.
Meta is a city, not a person.
They gave specific examples of negative things that Facebook has done. This is a pretty shallow dismissal of their criticism, even if their comment is a bit heavy-handed.
I won't rehash the A/B discussion, there's plenty of other people talking about it.
The Cambridge Analytica thing probably didn't have a meaningful impact on the elections. Facebook was dumb to allow partner apps access to so much data and rely on those partners to follow Facebook's policies, but they recognized the mistake and probably overcorrected.
All the other coverage probably serves to reinforce and amplify negative sentiments about Facebook, and a ton of it wasn't deserved. People cite the Rohingya, I remember also a news cycle about Facebook profiting from hate speech.
Those things happened, but Facebook had also built probably the most expensive and effective hate-speech filtering operation in the world. That it be 100% effective is not a reasonable goal. With billions of pieces of content, even 99.999% effectiveness will result in examples that the press can point to in order to reinforce a narrative about Facebook profiting from hate speech.
I doubt there ever was any profit in serving hate speech. The ad revenue from filter misses would not have been big enough to pay for the filtering operation itself.
> The Cambridge Analytica thing probably didn't have a meaningful impact on the elections.
That's a pretty big claim to make without evidence. At the very least I think we can agree that political parties wouldn't be investing so much money into social media campaigns (analytics, marketing, etc.) if they didn't think it was impactful.
There is ample evidence that CA had no impact. They were charlatans trying to sell snake oil; IIRC the underlying algorithm was based on a simple factorization of a User x Likes matrix that did not have substantial predictive value.
There is plenty to lay at the feet of Meta, but I've always thought the CA scandal was an overreaction.
> Sumpter analyzed the accuracy of Cambridge Analytica’s regression models in his book ‘Outnumbered.’ He used a publicly available dataset created by Michal Kosinski and his colleagues, a psychologist, who created an anonymized database of 20,000 Facebook users. Of the 20,000 Facebook users 19,742 were US-based, and of that amount 4,744 had registered their preferred political party Democratic or Republican, and had also liked over 50 Facebook pages. Sumpter first aimed to test the accuracy of regression models in general, and so created a model which predicted political party allegiance based on Facebook page likes. He concluded that the regression model worked “very well for hardcore Democrats and Republicans” but “does not reveal anything about the 76 percent of users who did not put their political allegiance on Facebook” (Sumpter, pg. 52–53). He also describes how just because the model may have revealed, for instance, that Democrats tend to like Harry Potter, it does not necessarily mean that other Harry Potter fans like Democrats. Therefore, a strategy employed by Democrats to aim to get Harry Potter fans to vote, may not necessarily benefit them.
You'd probably have better spent your money by using party-owned voter databases to create custom audiences based on email addresses. Everything I've seen points to Cambridge Analytica being mainly an operation to grift campaign dollars. From the same article:
> CEO Alexander Nix himself corroborates these results. In his testimony to members of the British parliament’s ‘Digital, Culture, Media and Sport’ committee, he contended that Kogan’s dataset wasn’t very useful and made up a tiny part of their overall strategy for the 2016 United States presidential election. How do we reconcile this admission with his presentation at the Concordia summit, in which Nix openly bragged about the ability to wield Facebook data to tune an incredibly powerful instrument that significantly impacts elections? The answer to that came from Nix himself in his testimony, claiming that he has in the past used hyperbole when pitching his company to potential clients. This view is corroborated by Kogan, who mentions how “Nix is trying to promote (the personality algorithm) because he has a strong financial incentive to tell a story about how Cambridge Analytica have a secret weapon” (Sumpter, pg. 54).
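To make that concrete, here's a toy version of the kind of likes-based regression model Sumpter describes. The synthetic data, feature construction, and use of scikit-learn's LogisticRegression are all illustrative assumptions on my part, not what Kosinski or Cambridge Analytica actually ran:

```python
# Toy likes-based party classifier in the spirit of the analysis quoted
# above. All data here is synthetic; nothing is drawn from real users.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5_000, 200

# Sparse user x page "like" matrix: each user likes ~5% of pages.
X = rng.binomial(1, 0.05, size=(n_users, n_pages))

# Synthetic labels: only the first 10 pages carry any party signal.
signal = np.zeros(n_pages)
signal[:10] = 1.5
y = (X @ signal + rng.normal(size=n_users) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")

# Sumpter's caveat applies directly: a model like this is fit only on the
# minority of users who disclose a party label, and says nothing about
# the ~76% who don't.
```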
Billions of people use the service for "free". As long as everyone knows they are what's for sale, the rest makes sense. Any company of this size is going to sway discourse, so it's time to accept that. I for one do not accept it, and thus deleted our family accounts long ago.
Take my downvote. "Psychological experiments" on users are what every company does: user and market research. Enabling "psyops" is simply building a platform; this isn't the first platform that has been misused or scraped for other purposes. You get spam robo-dialers? They didn't come from Meta... etc. etc. etc.
It's much, much worse than just psychological experiments. Meta currently stands accused as an enabler of a genocide in Myanmar [1], and provides a platform to spread massive hate against Muslims in India and elsewhere [2,3,4,5]. For folks trying to do bothsidesism here and bring up how great they've been: I am sorry, but there is no excuse for enablers of fascism.
Conducting “psychological” experiments without “consent” is the only way to do science. Perhaps in this instance the knowledge gained is used in the ad industry and not benefiting society, but the nature of the research is to be admired and replicated.
Either they charge for social media, or they attempt to grow by ad targeting, or social media is maybe banned through regulation to stop what you describe.
I think maybe your beef is with the nature of technology today and/or our current culture that enables/glorifies its mass use. What's the solution?
This statement is a false equivalency.[0] The Obama 2012 campaign did not violate the Facebook TOS, and received permission from users to access the data.
Please stop trying to use what-aboutism to fuel a partisan divide.
Yes; however, the Obama campaign was very different from CA.
>The Obama campaign collected data with its own campaign app, complied with Facebook’s terms of service and, most important in my view, received permission from users before using the data.
>And numerous other developers, including the makers of such games as FarmVille and the dating app Tinder, also used the same Facebook developer tool that Cambridge Analytica used.
>Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked, as well as more limited information about friends who had their privacy settings set to allow it.
>Although Kogan gained access to this information in a legitimate way and through the proper channels that governed all developers on Facebook at that time, he did not subsequently abide by our rules. By passing information on to a third party, including SCL/Cambridge Analytica and Christopher Wylie of Eunoia Technologies, he violated our platform policies. When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. Cambridge Analytica, Kogan and Wylie all certified to us that they destroyed the data.