Wow, this brings back memories. I bought a little house and had a bakery outside Trinsic. You could hire a vendor NPC who would sell your wares while you were away. One of my friends had a tower or something, all decorated on the inside with furniture. It has taken 20+ years for World of Warcraft to get housing, and while the current setup is very good, it's instanced, not alive and part of the world the way UO's housing was.
We can’t? Are south Florida, southern California, Hawaii, Puerto Rico, are they not “here”? There is literally a banana variety called California Gold.
This has also been my experience. I don't really have any problems with it. It works fine, it doesn't have the weird telemetry and monetization issues that Windows has.
Just off the top of my head, Canada, Switzerland, Iceland, Norway, Denmark, and Sweden would all seem to be pretty good counterexamples to your assertion.
It’s like coders (and now their agents) are re-creating biology. As a former software engineer who changed careers to biology, it’s kind of cool to see this! There is an inherent fuzziness to biological life, and now AI is also becoming increasingly fuzzy. We are living in a truly amazing time. I don’t know what the future holds, but to be at this point in history and to experience this, it’s quite something.
The issue is that for most things we don't want the fuzzy nature of biology in our systems, yet some people try to shoehorn it into everything. It is OK for chat or natural-language features directed at a human, but most other systems we would like to be 100% reliable, not 99% reliable or failing after a few years. At the very least we want them to behave predictably, so that we can fix any mistakes we made when writing the software.
As a middle aged (gen x) woman, my facebook feed is pretty good. It's filled with posts from friends and interest groups that I am a part of. The reason I no longer use FB has nothing to do with the feed, it's because Mark Zuckerberg is an awful person, and I refuse to use his product. The cognitive dissonance is great here, because I still use WhatsApp; it's the best way to stay in contact with my relatives in Europe, and I still use IG, albeit mostly for work, and sparingly.
I'm still a FB user even though most friends and relatives have disengaged due to toxicity. But what I've noticed consistently is that any group on FB with more than 1000 members will end up surfacing so much toxic sentiment that I have to unsubscribe. I'm talking about innocuous topics such as local road conditions. That one became full of rants about out-of-state drivers, drivers who don't understand English, people posting license plates of bad drivers, etc. This has led me to a theory that humans just can't behave nicely beyond some threshold group size.
> This has led me to a theory that humans just can't behave nicely beyond some threshold group size.
I think what happens is that the risk of including a critical mass of "toxics" (for lack of a better word), enough that they can keep a conversation going among themselves, increases with FB group size. Without active moderators it doesn't take much.
I think it is important to remember that only a tiny, tiny fraction of most facebook groups is actually posting, commenting, or even viewing the group at any given moment. Most people who view don't post/comment. (True of reddit and other social media as well.)
And the thing about poorly moderated groups (especially on platforms with rage-boosting algorithms) that let assholes go off without consequences is: the people who both a) actually look at the group ever and b) aren't assholes either leave entirely, stop looking at the group, and stop posting/commenting to the group (if they ever did in the first place). They go find places to hang out where there aren't a bunch of assholes. Nobody wants to hang out with the assholes when they can easily just not.
And at the same time, the assholes all gravitate to the same few places because they get kicked out of all the other places. Or if they don't get kicked out outright, they get shouted down or ignored, which they hate. So instead they congregate where they can get away with or get praised for saying whatever vile things they want.
> But what I've noticed consistently is that any group on FB that has more than 1000 members will end up surfacing so much toxic sentiment that I have to unsubscribe.
It depends on the group and how well it is moderated.
I live in an area where everything depends on Facebook. There are multiple FB groups for the town, the largest of which has 80k members. Not perfect, but not toxic. The same in other similar groups.
I am an admin of another with 30k members. It has a tight focus (exams and qualifications for home ed kids in the UK - GCSEs/IGCSEs mostly, but other things too), membership is only for parents of such kids (there are membership questions), the group is private, posts require approval, irrelevant comments get deleted, repeat offenders get kicked out. We do not have a lot of problems (some attempts at spam by tutors, but they get kicked out).
> This has led me to a theory that humans just can't behave nicely beyond some threshold group size.
I think you're generalizing far too broadly. The problem you're describing is more-or-less exclusively a problem with online, open-membership groups.
Consider: if the groups you describe were in-person groups, these ranters would constantly be getting disengaged/off-put/disgusted reactions from the "silent majority" of the people in the group. And just these reactions — together with a lack of any positive engagement — would, almost always, be enough to make them stop or go somewhere else.
(Or, to put a finer point on that: "annoyed, judgemental silence, and then turning away / back to the person you were talking to" would always put off the vast majority of people, with just a few — people who have trouble understanding non-verbal signals — persisting because they aren't "getting the message." And in an in-person context, these few would still eventually be taken aside and given a talking-to, because if they're butting into other in-person conversations with this behavior, they're being far more disruptive than "random new conversation threads" tend to be felt as. Even though "random new conversation threads" can kill a group just as dead.)
The problem with decorum / respect-for-purpose in unmoderated online open-membership groups seems to mostly stem from the fact that people underestimate the importance of non-verbal signals in moderating/regulating behavior. And so there is a dearth of such signals available in such groups. Our brains didn't evolve to play the game of socializing without these signals, any more than ants evolved to coordinate without pheromones. So many people's brains begin to play the game in degenerate / anti-social ways.
From what I've been able to gather, from personal interactions with many people who admit to being "Internet trolls" at some point in their lives... their behavior was almost never intentional maliciousness/active-disregard-for-others on their part. It's rather an emergent behavior — something they "just ended up doing" — given a lack of (non-verbal-signal-alike) calibrating feedback.
And why is there so little non-verbal-signal-alike communication online?
Well, for one thing, we often aren't even aware we're giving off such signals; and so, if we need to consciously choose to communicate them (as we do in online contexts), then we simply fail to do so, because the majority of these signals never even rise to our conscious attention as something to be communicated.
And even when we do become aware of them, we often don't feel them to be important enough to be "worth" going to the effort of translating into some more conscious/explicit/non-subtextual form of communication.
And then, even when a strong desire to communicate a nonverbal signal does bubble up within us... most online chat/forum systems are horrible at transmitting such signals with any degree of fidelity, when they transmit them at all. Especially the kinds of signals used for intra-group behavior regulation.
Facebook, for example, has reaction emojis on both posts and comments — but no reaction emoji that transmits a sentiment like "I disapprove of you saying this; please stop" (e.g. U+1F611 EXPRESSIONLESS FACE or U+1FAE4 FACE WITH DIAGONAL MOUTH). Rather, the only reaction emoji available are those meant to react sympathetically to the emotive content of the post/comment — e.g. with anger, sadness, etc. (People do try to use the "anger" reaction to express disapproval of posts; but when the content itself is often "ragebait" / meant to evoke anger, the poster won't necessarily understand that these reactions are being directed at them, rather than at their post.)
Further, no chat system or forum I'm aware of has participant-visible signals of "detach rate" — i.e. there's no way for people to know when others are clicking on their posts, reading one line, doing a 180 and running away as fast as they can. (YouTube videos expose this metric to their creators; I think it's actually very helpful for them. It could do with being implemented far more widely.)
(And, to be a conspiracy theorist for a moment: I think, in both cases, this is probably intentional. The explicit purpose of signals that "regulate behavior", after all, is to make people engage less in certain anti-social behaviors. Making any such tools available will therefore inevitably make any kind of platform-aggregate "engagement metrics" go down! If they were ever temporarily introduced, they'd have been quickly removed again with this justification.)
Great analysis. I do not think it's conspiracy-theorizing to believe it to be intentional, or at least a result of KPIs.
One thing I think you are missing is that in-person groups are usually far smaller. Anything with 1,000 people would be organised, and there would be rules of behaviour, moderation of discussion, etc. Most often, if something is that big, it's mostly an audience.
I think the other difference is that FB groups have no real community or relationships. If you annoy people in real life it has consequences; in an FB group there are none.
> One thing I think you are missing is that in person groups are usually far smaller.
Yes, but — an online group with 1000 members isn't really equivalent to an in-person group with 1000 members. It's actually more equivalent in "activity" / "number of expected novel pairwise interactions" to an in-person group with, say, 150 members.
(Why? Because the "members" of an online group, as reported by most chat/forum systems, are just the number of people with access to the chatroom/forum, or who are subscribed to updates to the chatroom/forum, etc. Most of these people have never posted. Many more have only ever posted once. Whereas, in common parlance, you wouldn't really describe someone as a "member" of an in-person group unless they actually regularly attend the group's in-person meetings. [And that goes double for formal in-person organizations, which often have membership fees or dues. Nobody bothers paying to maintain membership in these if they aren't intent on attending!] So the word "members" here really refers to two very different metrics: for online, the number of passive readers; for in-person, some upper bound on the number of people you might expect to encounter at the average in-person event. We need to do some unit conversions here in order to make valid comparisons!)
Let's say, for the sake of argument, that the average online group with 1000 "members" might have ~100 regular posters. (It's probably less, actually.) And let's also say that the average (geographically-based) in-person group with 150 "members", has events attended by ~100 people. And let's assume "regular posters" and "regular event attendees" are roughly equivalent in how they cause interactions that drive (dis)affection / (dis)engagement within the group.
I believe we both already agree that an in-person group where events regularly see ~100 attendees, tends to do just fine without rules of behavior / explicit moderation / etc.
And yet, it seems to me that an online group with "just" ~100 regular posters, almost always tends toward falling apart, unless it does have such rules, and moderation to enforce those rules.
That's the more specific, apples-to-apples-ish distinction that I had in my head in my GP post: that it's weird that when we take basically the same "level of expected interactions" from in-person + synchronous, to online + asynchronous, that it tends toward a different equilibrium state.
---
I do also agree with the lack of community / real relationships being a major driving factor. If you take a bunch of people who are already in the same community, and give them a closed-membership unmoderated online forum to speak in, the resulting interactions don't seem to tend toward awfulness/collapse nearly as badly.
But I would argue that this isn't just due to "consequences" (i.e. posters knowing they're impacting their position in the equivalent real-world community.)
Rather, I think a large part of what makes online forums "backed by" shared pre-existing communities more robust, is that the community provides its members with an implicit shared context for "recovering" an assumed set of nonverbal signals that "would go along with" others' textual wording choices... which in turn regulates behavior exactly as if those nonverbal signals were being explicitly communicated. People don't need to actually convey that they're frowning at you, if everyone in the community (including the poster!) knows exactly what subtextual meaning is carried by a reply of e.g. "Well bless your heart."
This is a testable proposition: it implies that closed-membership forums "bound to" a community offer no benefit, if 1. the community itself is open-membership and 2. new people join the community itself frequently enough that few community resources are being invested per new member on giving them a thorough enculturation into the community (incl. awareness of the community's wording-subtext equivalences.)
- So you would expect that, if there's an online community forum for e.g. a small village, where the only way to move there is to marry into an existing household there — then that forum will be robust and self-moderating, because every newcomer to that community gets a thorough dose of community enculturation.
- Whereas, if there's an online community forum for e.g. the congregation of a church in a particular urban neighbourhood of a city, where anyone can just rent an apartment in the neighbourhood and start attending the church... then that forum might be quite awful, despite every member being aware that what they say there will impact how the congregation sees them. Because there's no enculturative "speed limit" preventing absolute newcomers from immediately posting in that forum.
My Facebook feed is great, my X feed is great. I don't use Facebook and X because I like Mark Zuckerberg and Elon Musk but because I genuinely read interesting things and I interact with people I like.
That being said, I don't spend too much time on social networks because I have lots of other things to do.
It's working too. All my friends stopped using Facebook for similar reasons. My feed went from a 24/7 pleasant reunion to a fetid swamp and now I also have stopped using it.
I agree with you, but it's a tool that should only be used very sparingly because tariffs can be incredibly difficult to get rid of. See for example the "chicken tax" for light trucks which was instituted in 1964 (because the Europeans tariffed US chicken exports).
Green algae, which are essentially being farmed by the fungus, are closely related to Plantae and are often included in the kingdom in the broad sense (Plantae sensu lato).
> LLMs aren’t built around truth as a first-class primitive.
neither are humans
> They optimize for next-token probability and human approval, not factual verification.
while there are outliers, most humans also tend to tell people what they want to hear and to fit in.
> factuality is emergent and contingent, not enforced by architecture.
like humans; as far as we know, there is no "factuality" gene, and we lie to ourselves, to others, in politics, scientific papers, to our partners, etc.
> If we’re going to treat them as coworkers or exoskeletons, we should be clear about that distinction.
I don't see the distinction. Humans exhibit many of the same behaviours.
There's a ground truth to human cognition in that we have to feed ourselves and survive. We have to interact with others, reap the results of those interactions, and adjust for the next time. This requires validation layers. If you don't see them, it's because they're so intrinsic to you that you can't see them.
You're just indulging in a sort of idle, cynical judgement of people. Lying well even takes a careful, truthful evaluation of the possible effects of that lie and the likelihood and consequences of being caught. If you yourself claim to have observed a lie, and can verify that it was a lie, then you understand a truth; you're confounding truthfulness with honesty.
So that's the (obvious) distinction. A distributed algorithm that predicts likely strings of words doesn't do any of that, and doesn't have any concerns or consequences. It doesn't exist at all (even if calculation is existence - maybe we're all reductively just calculators, right?) after your query has run. You have to save a context and feed it back into an algorithm that hasn't changed an iota from when you ran it the last time. There's no capacity to evaluate anything.
You'll know we're getting closer to the fantasy abstract AI of your imagination when a system gets more out of the second time it trains on the same book than it did the first time.
Strangely, the GP replaced the ChatGPT-generated text you're commenting on by an even worse and more misleading ChatGPT-generated one. Perhaps in order to make a point.