Maybe be nice to kids and don't harm them? You are teaching and encouraging the behavior "blackmail to get what you want" by doing that.
Instead you could use nonviolent communication to get by. Tell them how you feel, how their behavior hurts you. Maybe they'll start talking about how being stuck 8h in the car hurts them? And maybe both of you will feel compassion and find a way to minimize the damage.
Like I said elsewhere in the thread, do not actually do this: candy is toxic to lots of birds and animals, and even when it isn't toxic, it's sticky and can cause them to choke.
I mean, sure, if your end goal is to give your children anxiety, eating disorders, and unhealthy emotional associations with food (especially w/ sweets). Not only that, but then you have to buy a bag of candy and actually hand it over at the end of every trip, or they learn that your "punishment" means absolutely nothing because you never give them "their" candy anyway. Oh, and wait and see how that scales when you have multiple children and they start abusing each other to get the other to act out and lose candy. That'll be fun.
But I guess it'll work well to keep kids quiet in the short term, sure. Advice is a form of nostalgia - a way of picking up the past, dusting it off, and selling it for more than it's worth; and, like nostalgia, it's never as good as it is remembered.
I love the vision of trying to manage this with multiple kids who have different candy likes and dislikes. It'd be a free-for-all to see who could get you to throw away the most of their sibling's favorite candy. Honestly, I think they'd have a ball.
Plus you're littering all over the place. That sucks.
> Oh, and wait and see how that scales when you have multiple children and they start abusing each other to get the other to act out and lose candy. That'll be fun.
Learning about cause, effect, reaction, and manipulation is a vital life skill. Knowing and understanding them does not make you a sociopath, either. They're widely useful in many aspects of life. Makes you a better poker player too.
That's one heck of a justification. Childhood trauma and abuse also sometimes result in functioning, well-adjusted adults with decent coping skills and without crippling mental illness - that doesn't mean they're the best way to achieve that goal.
You should very much know that the null hypothesis for a proposed behavioural modification technique is that it has no benefit, doesn't work, and may be harmful. It's on the proposer to show otherwise.
What's less harmful when it comes to human health and safety in non-essential activities: to assume no risk of harm or injury, or to assume risk of harm or injury? I assert that the null hypothesis is that it has no benefit and is harmful. This is not an assertion of fact, simply a position on likelihood and a judgement that inflicting physical and emotional punishment on a person is generally harmful, and that doing so must be balanced against ensuring a just outcome. This is not an unreasonable stance. You, on the other hand, if I understand properly, are saying that we should be able to do whatever we like to punish a person unless we show it directly causes harm. Down that road lie such human rights abuses that holding such views should be considered unconscionable.
> What's less harmful when it comes to human health and safety in non-essential activities, to assume no risk of harm or injury or to assume risk of harm or injury?
I agree with this statement. Assume a risk.
> I assert that the null hypothesis is that it does not have a benefit and is harmful.
Why did the word 'risk' disappear here? I don't think that's a good null hypothesis at all.
> You, on the other hand, if I understand properly, are saying that we should be able to do whatever we like to punish a person unless we show it directly causes harm.
And you need to make sure you're looking for both positive and negative effects.
"It's harmful" is a huge bias for a null hypothesis.
> Then what are you saying?
That you shouldn't assume it is harmful.
I didn't say anything about what parents should be "able to do". I didn't even give an opinion on the candy thing. I just think your justification is a big overreach.
But also a null hypothesis implies you're currently doing the testing, so that's a different scenario too...
Meta owns over 50% of the VR headset market (over 60% of the headsets used on SteamVR). With the amount of money they are pumping into the space, combined with how invested Zuck is personally, there is no doubt that they will dominate this market. They are investing huge sums of money just to get developers to build for their platform. If you are long on VR, you should be long on Meta.
That's a massive hindrance for innovation, and my #1 argument above. No matter how much money he throws at it or how many people he hires, it's still just this same guy with the same viewpoint.
The ability to innovate is not a function of money and resources.
Solving concrete issues in VR is mostly money and resources. Go listen to an interview with Zuck regarding VR; he understands the problems in the space very well. He also understands what brings users to a platform, made evident by Meta's complete dominance of the social network space.
Not sure what you're alluding to by his viewpoint, but his company has a userbase of over 3.5B which is the vast majority of the internet connected population of the world (excluding China, for obvious reasons). You might not like him, but he knows how to drive adoption.
Data collection, antitrust, and privacy policies. Right now it's the wild west, and consumers have no idea what is being collected or how they are being profiled for ads.
https://www.youtube.com/watch?v=wqn3gR1WTcA
I think it's pretty reasonable to argue that they're effectively monopolies in certain market segments and also that their businesses are causing extreme societal externalities that wouldn't exist if they didn't exist.
I don't think Facebook or Instagram is a monopoly in any sense of the word. Lots of people happily don't use either and there's a boatload of competing social media platforms.
If your goal is to thwart software monopolies (a pretty low impact use of FTC time imo) Google search or Visa & Mastercard (duopoly) should be a much higher priority.
The social externalities piece gets into philosophical territory. We "protect" people from cigarettes by taxing them heavily. Yet lots of people still choose to use them. Some would argue we should do the same for social media, others would argue that individuals should be free to decide for themselves. Still others would argue that comparing the health impact of cigarettes to that of social media is ridiculous.
IMO disincentivizing carbon emissions is a much more clear-cut externality to fight against!
I'm not necessarily talking about bots clicking ads; it could just be fake profiles for whatever reason: inflating the number of likes on a page, writing reviews, phishing attempts, propaganda, etc.
Let's say FB has 200 million bots.
Do you think businesses could tell that there are actually 1.7b users and not 1.9b?
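Quick back-of-the-envelope sketch of that hypothetical (both numbers are just the made-up figures from above, not real stats):

```python
# Hypothetical: if 200M of 1.9B reported users were bots,
# how big is the overstatement?
reported_users = 1.9e9   # hypothetical reported user count
bots = 200e6             # hypothetical bot count
real_users = reported_users - bots

inflation = bots / real_users              # overstatement relative to real users
share_of_reported = bots / reported_users  # bot share of the reported figure

print(f"real users: {real_users / 1e9:.1f}B")          # → real users: 1.7B
print(f"inflation over real base: {inflation:.1%}")    # → ~11.8%
print(f"bot share of reported: {share_of_reported:.1%}")  # → ~10.5%
```

So under those made-up numbers, roughly one in ten "users" an advertiser pays to reach wouldn't exist.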
As someone who has bought ads there, I can tell you this is not as true as you think. You pay for the ad and business picks up a bit, but the reported conversion numbers, at least for me, never made sense. It's really about making more money than you put in; that's what keeps you going. Given the ethics around Facebook now, I don't do it anymore. People say it's all about CPC; I admit that for me it wasn't.
I agree with your assessment, and I'm a long-term holder. I'd love to hear your rebuttal to the following bear case:
- $10 billion to Reality Labs R&D in 2021, expected to increase annually, will eat into earnings
- Last year saw the first-ever decline in the US Facebook user base, which is expected to continue shrinking (albeit slowly)
- Apple’s 2021 opt-in privacy change permanently makes FB/Insta/Messenger/WhatsApp ads less effective for the highest-margin demographic (iPhone users)
- Hugely negative public sentiment* hurts adoption and makes continued litigation headwinds likely
* to be clear, I think the negative sentiment is unfair towards Meta. You use their service and in exchange they show you ads based on your behavior. They’ve always been transparent about that, there’s nothing particularly secretive or nefarious going on.
Ads power most of the free internet stuff we enjoy. YouTube, Snap, Google Search, TikTok - they all do the same thing! But for whatever reason, people really seethe when it comes to Mark Zuckerberg and FB.
In fact, it’s pretty insane that TikTok isn’t under more scrutiny in the US from a national security perspective. At least the other ad-powered entertainment apps aren’t controlled by a foreign adversary!
> - $10 billion to Reality Labs R&D in 2021, expected to increase annually, will eat into earnings
Seems like that's a knob that can be turned down at any time to deliver more profit.
> - Last year saw the first-ever decline in the US Facebook user base, which is expected to continue shrinking (albeit slowly)
Including Instagram? Regardless, this was right after the pandemic; some people were bound to switch away from screens.
> - Apple’s 2021 opt-in privacy change permanently makes FB/Insta/Messenger/WhatsApp ads less effective for the highest-margin demographic (iPhone users)
This will be worked around - the impact isn't as large as people make it out to be.
> - Hugely negative public sentiment* hurts adoption and makes continued litigation headwinds likely
Doesn't seem to really be hurting IG adoption that much. The real risk is litigation.
> - Apple’s 2021 opt-in privacy change permanently makes FB/Insta/Messenger/WhatsApp ads less effective for the highest-margin demographic (iPhone users)
> This will be worked around - the impact isn't as large as people make it out to be.
And you think Apple will stay put and do nothing? You can expect that as soon as they find a workaround, Apple is gonna screw them again. They don't stand a chance.
Depends on the workaround. I don't think it's possible to get between-app tracking data from some side channel; Apple will definitely block that through legal if not technical enforcement.
But they can use data from Android to infer user behavior for Apple users.
Also, they can still use tracking data from their own properties (IG, FB, Whatsapp).
Fewer and fewer people are using FB... perhaps IG and WhatsApp, yes, although you have to consider that privacy regulations can only get more stringent over time.
Will they go bankrupt? I don't think so. Will they be able to grow at the rate of the last 10 years? I don't think so.
If two apps can both identify a user then they will be able to combine that data. Apple took away the one identifier they controlled. What are they going to do next?
I disagree about there being nothing nefarious. It took people years to realize the extent to which FB was tracking your outside-of-Facebook activity, and I'd guess many still aren't aware. Shadow profiles, etc.
We probably don't agree on tracking being nefarious.
I'm personally just not concerned about being represented by a user ID in a database with info attached to it. Every service I've ever used has this on me and in general I trust US-run public companies with that level of data.
Tech businesses more than most! Equifax leaked my personal info lol, no Silicon Valley companies have.
Equifax leaked your social security number and some financial info. Facebook has much more valuable data about your deepest interests, browsing histories, intent, network, future goals and plans. In aggregate they know more about you than your family, parents, maybe even yourself. Sure it may never be leaked. But bits and pieces can leak out in the form of ads.
Do you think that FB is unique in being something that people use without adding value to their lives or do you think some form of reckoning is coming where people stop doing all of the things they do that don't add value to their lives?
Not the previous commenter, but there will certainly be a reckoning whenever the knowledge that social media is harmful reaches a critical enough mass. Like when cigarettes went from being smoked by everyone everywhere to being widely known to cause cancer - now it's somewhat rare to even run into a cigarette smoker.
I don't think "social media" is anywhere well defined enough to compare its effects with the link between cigarette smoke and cancer.
It's far more difficult to prove specific effects of social media use, because it's so hard to separate it from all other social and economic dynamics of our time. There's no proper control group.
The way in which different people use social media may well make all the difference between it being beneficial and being harmful. Social media is not a substance that has a specific effect regardless of how we use it as individuals and as a society.
I think it's a mistake to expect social sciences to work like natural sciences.
The smoking gun you're hoping for may never be found. The entire question could end up in the same basket as worries about the alleged harmfulness of reading novels or comic books, watching TV, watching porn or playing computer games. If anything "social media" seems even more vague than all of these.
It doesn't seem fair to compare Facebook to the other companies you listed, with respect to negative public sentiment. People rightly criticize FB for the numerous scandals, data leaks, and lack of ethics, and as Zuckerberg runs the company, he deserves that criticism.
It's one thing to buy the stock and be bullish, and another to turn a blind eye to the horrors this company has created.
YouTube has its problems, Google Search is going down the drain, and a ton of other websites run on ad revenue, yes. But those services are not responsible for organizing genocides, giving political analytics companies access to unauthorized data, or running unethical experiments on users to test the impact on their mental health.
- 2018, the Cambridge Analytica scandal broke, revealing that a third party had harvested data on millions of Facebook users.
- 2020, Facebook announced it won't fact-check political ads during the looming election.
- 2020, ex-Facebook data scientist Sophie Zhang revealed internal practices of profiting from political disinformation and manipulation.
- 2021, ex-Facebook employee Frances Haugen leaks documents revealing unethical business practices for profit.
Given Facebook's rotten practices, it seems we'll get at least one major scandal per year, but without serious consequences - so don't sell your stocks.
This is the big one, and it was a bona fide scandal.
That said, it happened in 2016 and the story broke in 2018. Cambridge Analytica was very clearly the bad actor here; they took advantage of a too-permissive FB API that has since been patched.
Facebook was rightly fined for negligence.
> 2020, Facebook announced it won't fact-check political ads
This is not a scandal lol.
Ad platforms are under no obligation to fact-check political ads. Quite the opposite - federal law prohibits television broadcasters from refusing to run an ad from a qualified political candidate for any reason!
Google does not fact-check any ads. Remember those insane Tai Lopez YouTube ads?
> 2020, ex-Facebook data scientist Sophie Zhang revealed inner practices profiting from political disinformation and manipulation.
I hadn't heard of this one. From Wikipedia:
>>> Zhang reported that most of these subversive networks use Facebook's organization pages, configured with human names and photographs to mimic human accounts in order to successfully evade Facebook's emerging efforts to counter fake users.
She was on a team dedicated to stopping stuff like this, and she reported that people were figuring out ways around their automation? That's the point of the team! Staying ahead of bad actors!
> 2021, ex-Facebook employee Frances Haugen leaks documents revealing unethical business practices for profit.
Lots of allegations here, no legal consequences (unlike Cambridge Analytica, which resulted in billions in fines). Similar to the Zhang story, the whole claim is "Facebook should be doing more than it already does to combat the bad actions of others," not "Facebook is actively doing bad stuff."
From Wikipedia:
>>> Haugen told The Guardian she was motivated to focus her work at Facebook on addressing misinformation because of her experience with losing a friend... after the friend visited online forums and became a proponent of conspiratorial beliefs that included white nationalism and the occult.
>>> In December 2021, Little, Brown & Company announced a book deal with Haugen for her memoir... After leaving Facebook, Haugen relocated to Puerto Rico, and has invested in a cryptocurrency company.
Facebook knew the Myanmar genocide was being orchestrated via ads on their platform. This wasn't even people posting and sharing, Facebook was accepting money in exchange for boosting content that promoted genocide. And they were told about it plenty of times without doing anything.
Not sure I agree here. People love to blame social media for stoking hatred. Plenty of people use social media like... well, normal people. Life updates, baby pictures, levity.
Some people don't. Social media is a mirror.
People aren't zombies - you're not powerless before an omnipotent algorithm.
You should watch The Social Dilemma, if you haven't. It's quite a good summary of issues with social media, all of which are present on Facebook/Instagram.
Social media exploits human behavior, often in the worst ways. You say people aren't zombies, but people aren't rational actors either - they're people. Human behavior lies somewhere between zombie and rational actor, rarely at either extreme. This is where the responsibility of a megacorp controlling the world's largest social media platform comes in. People can have their behavior manipulated on these platforms, and advertising is the proof - why would advertisers spend if it didn't work?
Social media is definitely more than a mirror, and FB deserves the general negative sentiment it gets because it consistently fails to self regulate in the interest of its users or the general public. FB has two objective functions due to its corporate structure: 1) shareholder profit; 2) Zuckerberg's agenda (currently the Metaverse, previously user growth).
The problem is that hateful, outrageous content attracts clicks. It's a bug in human nature. Social media discovered it early - by A/B testing, machine learning, whatever - and is now exploiting this bug for profit. At this point I think it's clear the algorithms of most big social networks are biased towards this kind of content, spreading it and radicalizing people.
> People aren't zombies - you're not powerless before an omnipotent algorithm.
You have to make a deliberate effort not to interact with polarizing content. Facebook constantly puts poisoned bait in your feed in the hope you get hooked. Happy people don't engage as much as angry people. Think of seniors who don't have an IT background, children who don't have enough common sense, poor people who don't have enough education to know Facebook tracks 50K+ traits to better manipulate their behavior, etc.
It looks like you're not considering all the different kinds of users of Facebook in all countries where it operates. You can start looking at funny memes and slowly end up with paid divisive government propaganda.
They don’t sell data; they let anyone who buys ads choose who the ads go to based on anonymized data.
They also made it way harder to run political ads after public sentiment changed in 2016.
You can go buy an ad from them right now. So can “foreign groups.” Neither of you will get access to anyone’s data.
Same goes for YouTube, Twitter, Snap, etc.
If you’re worried about “foreign groups” there’s this Chinese app called TikTok that knows everything about every American under 30 that no one seems to be concerned about.
> They don’t sell data, they let anyone who buys ads choose who the ads go to based on anonymized data.
You're using the current supposed state as if it excuses or negates previous behavior. It doesn't. Facebook has sold data in the recent past. https://www.bbc.com/news/technology-46618582
> If you’re worried about “foreign groups” there’s this Chinese app called TikTok that knows everything about every American under 30 that no one seems to be concerned about.