
This also applies qualitatively to physical devices. It takes some effort to determine if a vehicular accident was caused by a fault in the vehicle or a driver error or environmental causes.

Some key inherent differences from older engineering fields are that software can be more complex than physical devices, and that its functionality can be obfuscated because it is written as text but distributed as binaries.

However, the main problem is that software has not been subjected to enough legal regulation. Ultimately, all law does is draw lines somewhere in the gray between black and white, but in the case of software there are few lines drawn at all, due to many political and economic reasons. Once we draw the lines, most issues will be resolved.


Software is already subject to enough regulation. The stuff that's actually safety critical like medical devices or avionics is already heavily regulated.

There is a difference between

- software company decides to release a new version and auto installs it for everyone who has the old version (like Google Chrome)

- software company decides to release a new version. The Debian package maintainer checks that the update is fine and compatible with Debian policies, then includes it in the package repositories.

In the first, there are no checks. In the second, there are.


Yes, and it is precisely that kind of curation that makes Debian as valuable as it is.

What's your opinion on FreeCAD's scripting abilities [1]? The link [1] claims

> FreeCAD has been designed so that it can also be used without its user interface, as a command-line application. Almost every object in FreeCAD therefore consists of two parts: an Object, its "geometry" component, and a ViewObject, its "visual" component. When you work in command-line mode, the geometry part is present, but the visual part is disabled.

[1] https://wiki.freecad.org/Python_scripting_tutorial
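
To make that concrete, here is a minimal sketch of headless use, assuming the standard Part workbench and FreeCAD's bundled interpreter (freecadcmd); the file path is just for illustration:

    import FreeCAD

    # Create a document with a parametric box. Only the geometry
    # Object exists here; in command-line mode no ViewObject is built.
    doc = FreeCAD.newDocument("Headless")
    box = doc.addObject("Part::Box", "MyBox")
    box.Length = 20
    box.Width = 10
    box.Height = 5
    doc.recompute()

    print("Volume:", box.Shape.Volume)
    doc.saveAs("/tmp/box.FCStd")

In command-line mode the visual half is simply absent (box.ViewObject is reportedly None), which is exactly the Object/ViewObject split the wiki describes.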


There's a lot of alpha to be had in experimenting with LLMs and writing OCCT kernel CAD code, imo.

CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.

The correct way to organize social media is in a federated way, where each server holds on average only a few hundred or a few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media will be 100x suppressed because banning people is way easier on small servers.

Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.


Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies. The abusers just have to find the weakest link, and that weakest link will have fewer resources than multi trillion dollar companies. You would also likely not hear many news stories about it, because they won't have the expertise to even detect it.

That's a tradeoff you can choose to make, but you need to enter into it with open eyes.


> Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies.

It doesn't matter how many are shared but how many are viewed. On a small server, community policing works just fine: bad actors are easier and faster to block, and to top it off, the smaller reach of each server makes it unprofitable to target multiple servers, fish for their weak points, etc. The dirty jobs become unprofitable, which is what matters most.

With the help of AI, small players can do a better job at removing CSAM.


> With the help of AI, small players can do a better job at removing CSAM.

Chicken/egg. How do you expect that AI to be able to detect CSAM without appropriate training, which requires appropriately classified training data?


>That's a tradeoff you can choose to make, but you need to enter into it with open eyes.

No it's not. It's certainly not my choice. No one asked me if it's okay for Facebook to distribute CSAM because you insist it would be worse if it didn't.


I don't really care if you classify it as a choice or not. One set of actions results in more CSAM than others. Just because you don't like the implication of there being tradeoffs doesn't mean there aren't tradeoffs.

You classified it as a choice, not me.

> or not

So you don't care that you're wrong? Not a surprise coming from someone handwaving away the mess Meta made.

In what regard is it incorrect that a single, larger entity that is at least notionally committed to avoiding the existence of any specific type of content on their platform is more likely to successfully avoid the existence of that type of content on their platform than smaller entities with less resources?

Now consider that some of those smaller entities might not be even notionally interested in avoiding the existence of that specific type of content on their platform, and are small enough for regulators to be unaware of its existence?


Trying so hard not to say child porn.

The problem generalizes, so there's no need to focus on this specific type of content.

Care to address the actual issue?


But its not a general problem, it's a problem specifically about child porn that we are discussing. The idea that there is no point in discussing the child porn problem on Facebook is exactly what I'm disputing.

Content moderation is a general problem, whether you're talking about child porn, content intended for mature audiences, or memes about Winnie the Pooh.

What I and others are trying to tell you is that your obsessive focus on Facebook as if they are the root cause of the problem is incorrect. There is no magic solution I'm aware of because each of them have some sort of tradeoff.

The most extreme version of content moderation I can think of is that a human being examines and approves every single message of any kind before it is published, any image of a minor is banned because it's too hard to objectively define child porn (that still leaves the open question of how to determine if someone is a minor visually), and no accounts for anyone under the age of legal majority are allowed, as verified by a legal ID that is checked by a human being.

Even in that case, kids will find some way to get an account or just use their parent's account, and the door is cracked open again. And the pedophiles will just go elsewhere, probably using a service with significantly less resources available to attack the problem, which is probably worse than the status quo.

This doesn't even touch on the privacy concerns that most people would have with every message being reviewed.

As I said before, I would welcome you to share the solution that you imply exists which addresses every issue above.


I don't see others trying to tell me what you are.

> Content moderation is a general problem,

It's easy to reframe any problem in a more general manner. That doesn't make your discussion any less dishonest.

>As I said before, I would welcome you to share the solution that you imply exists which addresses every issue above.

It's not really my burden to come up with a solution. That's ridiculous. It's Facebook's problem, not mine. You haven't even disputed that they could not do a better job. Your argument was that it's better for the child porn to be on Facebook than smaller websites, which is specious at best.


There's nothing dishonest about my attempts to have a conversation with you about this.

You've decided that there's some relatively easy solution to a problem that existed before Facebook and will exist after Facebook that Facebook should be implementing to solve what appears to be an unsolvable problem to basically everyone else on earth, yet you have no ability to describe this solution and don't seem to have put much effort into thinking about it beyond assuming it exists.

No one is arguing that it's better for child porn to be anywhere. What myself and others have said is that there are tradeoffs to be made concerning content moderation, and you basically refuse to even contemplate the theoretical benefits and downsides of different approaches and their outcomes.

I don't know what your motivation is, whether you just have some irrational hatred of Facebook, are a zealot concerning child porn, both, or there's some other explanation for your obstinate ignorance, but attempting to talk to you appears to be a complete waste of time.


> You've decided that there's some relatively easy solution

I never used these words either. That's where the dishonesty is. Look back at our thread, how many times have you done that? You ask me to define basic words and then don't respond when I do... everyone else on earth agrees with you? Just read this thread. There is literally someone else in this very thread here agreeing with me.

>No one is arguing that it's better for child porn to be anywhere

You did. You argued it's better to be on Facebook than on smaller sites and audaciously asked me how I could disagree?

> I don't know what your motivation is, whether you just have some irrational hatred of Facebook, are a zealot concerning child porn, both, or there's some other explanation for your obstinate ignorance, but attempting to talk to you appears to be a complete waste of time.

It's much more telling that you think those are the only two reasons why someone would think "Facebook should really do something about its child porn problem already."

>but attempting to talk to you appears to be a complete waste of time.

Good riddance


> I never used these words either.

You don't use any words, other than repeatedly saying "Facebook should be solving this problem they created", so people have to fill in the gaps because that is a very strange perspective and you refuse to elaborate.

> That's where the dishonesty is. Look back at our thread, how many times have you done that? You ask me to define basic words and then don't respond when I do... everyone else on earth agrees with you? Just read this thread. There is literally someone else in this very thread here agreeing with me.

You don't define basic words, that's the issue.

I never said everyone agrees with me, and the one person "agreeing" with you is just as clueless about the pros and cons of a centralized vs distributed system.

> You did. You argued it's better to be on Facebook than on smaller sites and audaciously asked me how I could disagree?

I did not. You're either confusing me with someone else (and twisting their words) or just imagining messages, just like you're imagining that you've diligently responded to every request for clarification on your ill-defined yet adamant stance.

> It's much more telling that you think those are the only two reasons why someone would think "Facebook should really do something about its child porn problem already."

Again, feel free to elaborate.

> Good riddance

Likewise


>You don't use any words, other than repeatedly saying "Facebook should be solving this problem they created", so people have to fill in the gaps because that is a very strange perspective and you refuse to elaborate.

Thinking that Facebook should solve its own child pornography problem is not a weird perspective at all. What is weird about that? What do I need to elaborate on? That's my position. Are you saying it's unfounded?

>You don't define basic words, that's the issue.

I did, you asked me to and didn't respond.

>I never said everyone agrees with me, and the one person "agreeing" with you is just as clueless about the pros and cons of a centralized vs distributed system.

Oh, excuse me, not everyone, just "basically everyone else on earth". Again, incredibly dishonest on your part.

>I did not. You're either confusing me with someone else (and twisting their words) or just imagining messages, just like you're imagining that you've diligently responded to every request for clarification on your ill-defined yet adamant stance.

There's nothing ill-defined about my stance. It's very clear. Meta should clean up its child porn mess.

>Again, feel free to elaborate.

Well, I think it's incredibly disingenuous to act as if the only reason one could come to such a belief is because of an extreme opinion. I'm willing to bet you that most people would agree with me that Facebook should do something meaningful about its child porn problem. For no discernible reason you jumped to the conclusion that what I stated is an extreme opinion only shared by zealots. I'd bet most parents would agree. I'd bet most people would agree. In fact, you haven't at all explained what is extreme about that opinion. I think most people think child pornography is a problem, and I think most people think that Facebook, a website which facilitates the proliferation of child pornography and enables predators to get in touch with children, shouldn't. That all seems fairly self-evident, actually. I'm not sure where you spend most of your time such that you think people don't think child pornography is a problem and that only zealots care about it. What a weird place that must be.

> Likewise

Yet you came back to respond again. Either engage in a conversation honestly or fuck off.


> Thinking that Facebook should solve its own child pornography problem is not a weird perspective at all. What is weird about that? What do I need to elaborate on? That's my position. Are you saying it's unfounded?

Again, the person you originally were talking to about this and myself have pointed out that it's not just Facebook's problem, it's society's problem, and all I have said is that there are tradeoffs, which you deny for inexplicable reasons (probably because you have no idea what you're talking about, but feel free to correct that assumption).

In a similar vein, I asked you what specifically you'd like Facebook to do, and you didn't have any meaningful answer (probably because you have no idea what you're talking about, but feel free to correct that assumption).

> I did, you asked me to and didn't respond.

Why does this thread end with my question: https://news.ycombinator.com/item?id=47987893

Where is your comment where you've defined these basic words and got no response?

> Oh, excuse me, not everyone, just "basically everyone else on earth". Again, incredibly dishonest on your part.

I'll restate to "basically everyone on earth with a clue about the differences between centralized and distributed systems".

> There's nothing ill-defined about my stance. It's very clear. Meta should clean up its child porn mess,

The first obvious question is: How (what is the definition of "clean up")? The obvious question after that is: If they do so, where do the pedos go next, because Facebook didn't create their interest in child porn? The obvious question after that is: Is that better than the status quo?

Yet you have literally no comment on this. Why are you so adamant about your position when it's apparently so uninformed?

> Well, I think it's incredibly disingenuous to act as if the only reason one could come to such belief is because of an extreme opinion. I'm willing to bet you that most people would agree with me that Facebook should do something meaningful about its child porn problem.

See the link above where I asked you to define meaningful and you didn't respond. They aren't doing nothing now from what I can tell, and they certainly could be doing more, to the point of shutting down their service entirely. What is "meaningful" to you?

> For no discernible reason you jumped to the conclusion that what I stated is an extreme opinion only shared by zealots. I'd bet most parents would agree. I'd bet most people would agree. In fact, you haven't at all explained what is extreme about that opinion. I think most people think child pornography is a problem, and I think most people think that Facebook, a website which facilitates the proliferation of child pornography and enables predators to get in touch with children, shouldn't. That all seems fairly self-evident, actually. I'm not sure where you spend most of your time such that you think people don't think child pornography is a problem and that only zealots care about it. What a weird place that must be.

I live in a world where Facebook is used for a lot of things, just like every other service on the internet. I recognize that those services are far from the root cause of any issue related to the creation or distribution of undesirable content, understand that they are not able to solve the root cause, and accept that the only way for them to fully eradicate any specific type of content from their service is to shut it down, with the end state being no internet once this is applied to all services that host content.

If you see that state as acceptable or desirable, then just come out and say so. If not, then you need to accept that online services will end up hosting some objectionable content at some point. You rejected both of these options previously when stated slightly differently, and have yet to describe a third state that must exist for that rejection to be valid, which is what leads me to believe you might be some sort of zealot (as they are known for rejecting reality). Feel free to describe why this rejection of the only two options I'm aware of is valid at any time, beyond just saying "Facebook needs to do more".

> Yet you came back to respond again. Either engage in a conversation honestly or fuck off.

You could start by sharing a coherent thought beyond "Facebook bad" on this topic. I've presented numerous comments and questions that have gone unanswered.


> You could start by sharing a coherent thought beyond "Facebook bad" on this topic.

There's nothing incoherent about that thought, which is also not what I expressed anyway. It's not my burden to solve Facebook's problem just because I pointed out that it should be solved. The proposition that it is my burden is, again, ridiculous.

>Why does this thread end with my question: https://news.ycombinator.com/item?id=47987893

It doesn't. It ends with me providing the common definition of meaningful, which is apparently something you were otherwise incapable of determining yourself?

https://news.ycombinator.com/item?id=48009016

>I'll restate to "basically everyone on earth with a clue about the differences between centralized and distributed systems".

Hah. It'd be funny if it were intentional on your part.

>In a similar vein, I asked you what specifically you'd like Facebook to do, and you didn't have any meaningful answer (probably because you have no idea what you're talking about, but feel free to correct that assumption).

What don't I know about?

> The first obvious question is: How (what is the definition of "clean up")? The obvious question after that is: If they do so, where do the pedos go next, because Facebook didn't create their interest in child porn? The obvious question after that is: Is that better than the status quo?

Oh, so you are making the exact argument you just said you weren't? Okay. Not going to bother, because none of that is a reason why Facebook shouldn't "clean up" their problem. What do I mean by clean up? Again, make an impactful change to the ability for predators to interact with children and distribute child pornography on their platforms. You act as if I'm the only person in the world making this criticism; it's a bit bizarre. Here's some material for you that I found with 3 seconds of using Google (because, again, you apparently can't?):

https://www.biometricupdate.com/202601/meta-pledges-complian...

https://www.bbc.com/news/business-67640177

https://www.bbc.com/news/technology-39187929

https://childhood.org/news/facebook-a-hidden-marketplace-for...

> I live in a world where Facebook is used for a lot of things, just like every other service on the internet, recognize that those services are far from the root cause of any issue related to the creation or distribution of undesirable content, understand that they are not able to solve the root cause, and that the only way for them to fully eradicate any specific type of content from their service is to shut it down, with the end state being no internet once this is applied to all services that host content.

Facebook does not have to solve the root problem of child pornography. No one suggested that they did. This is a total strawman argument. Again, if you're going to be dishonest, disingenuous, and frankly just rude, please don't bother responding to me as you previously insisted you would.

> You rejected both of these options previously when stated slightly differently

No, I didn't. I never suggested that there was a world where Facebook would have absolutely zero amounts of child porn or predators or facilitating their actions. It's so weird how you keep making up arguments to knock down.


> It doesn't. It ends with me providing the common definition of meaningful, which is apparently something you were otherwise incapable of determining yourself?

Your post was marked as dead by the mods, and I didn't have deadlinks enabled. I'm glad we now have external confirmation that you are commenting in bad faith and the substance of your comments is obnoxious and useless enough that this site hides some of them by default.

I think that's a good ending point for this thread, because you have absolutely nothing of substance to say on any topic remotely related to technology. I'm betting you're a lawyer?


Non-responsive slop from someone also willfully misrepresenting how dead posts work here and a pejorative jab too! I'll play your game: enjoy your child porn on Facebook!

What your opponent is saying is, "there are mutually exclusive A and B", A being widespread CSAM and B being that somebody needs to look at CSAM to remove it.

Can you elaborate on what exactly is wrong there? Do you see a third alternative C, so that A and B aren't the whole choice? Or are you saying A or B does not exist and therefore there's no choice? Please name C, or tell us why A or B doesn't exist (or isn't acceptable), or explain how your view doesn't fit into these options.


Some people are not okay with actively facilitating harm to people, even if inaction results in harm to other people. See: the trolley problem. This is totally okay, but the point made above is that

>That's a tradeoff you can choose to make

is not correct: It is a tradeoff that one specific person can choose to make, but not one that I or we can choose to make, because we don't control facebook. Mark Zuckerberg controls facebook. He alone can choose to make that tradeoff, or not, on behalf of society.


He can't do it alone because Facebook is under US jurisdiction.

Yes, facebook is under US jurisdiction.

Yes, he alone can make the choice. Not you or I.


This isn't an either-or. X isn't the only place CSAM is; there are gazillions of other sources. It's probably the easiest place to find it tho.

> Server moderators should be legally responsible for content on their server.

And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.

Child abusers are twisted people, and I really don’t care much what happens to them, but making it impossible for them to use the internet means sterilizing the whole thing.


>And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.

This is already the case. There is a lot of lawful, useful, medical or educational content that is actively censored on social media because it includes words or pictures of organs, while the same social media actively encourage and develop algorithms to push underage girls (and possibly boys) posting pictures of themselves in sexual poses, attire and contexts.

Big tech and social media networks love and push CSAM; they just hide the genitals, but the content really is the same.


> a lot of lawful, useful, medical or educational content

Like what? It’s all there on Wikipedia, and for all of Wiki’s faults, I have trouble imagining what kind of useful, educational, medical information you will find on social media that is better than that.


You don't necessarily reach the same population, and some people, believe it or not, are afraid to read, unable to read, or have difficulty reading, yet are online.

Afraid to read Wikipedia? Unable to? What hellhole are you describing?

20% of adults in the USA read below a 5th-grade level.

https://www.thenationalliteracyinstitute.com/2024-2025-liter...


So?

If you find Wikipedia too complicated, you probably aren’t looking up your medical diagnoses.


We aren't talking diagnoses but prevention, wellbeing and awareness campaigns.

You are effectively saying that physical life doesn't function either. People get banned or removed from all sorts of informal and formal groups all the time for completely illegitimate reasons. That's just human politics, embedded so deeply in our psychology it will never go away. They simply move to different groups - and similarly, online they can move to a different federated server.

But that's not possible in today's oligopoly of social media. An invisible algorithm will ban you, and there is no way back, and few alternatives. Big Social Media is way worse from a sanitizing perspective than some federated social media.


I have no deep problem with exclusion; as you say, that’s human nature and unfixable. Making mods personally legally liable for everything that appears on their board is just insane. How many minutes are acceptable for them to see and review content? Or does everything have to be pre-approved?

I know a local blog that pre-approves every comment. He lets a lot of stuff through, because he lets people be dumbasses. If he were personally liable, the conversation would get a lot quieter.


Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.

No. It's a legitimately difficult problem because not all naked pictures of kids are illegal. The false positive problem is bad for business, but also generally bad even if big social media were benevolent.

Moderators need to actually understand the context of the picture/video, which requires knowledge of the culture and language of the people sharing the pictures. It's really difficult to do that without hiring moderators from every culture in the world.

But small federated servers can often align along real world human social networks, so it's easier for the server admin to understand what should be removed.


The amount of CSAM online is completely out of control. There's already nation-level and sometimes international cooperation to catch any known images with perceptual hashing (think: the opposite of cryptographic hashing) as well as other automated and manual tools.
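
For a rough sense of how perceptual hashing differs from cryptographic hashing, here is a minimal sketch of the simple "average hash" variant, assuming Pillow is installed; the hash size and match threshold are arbitrary illustrations, and production systems (PhotoDNA and the like) are far more robust:

    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to a tiny grayscale thumbnail so recompression,
        # resizing, and small edits barely change the result.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:  # one bit per pixel: brighter than the mean?
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a, b):
        # Small distance => likely the same underlying image,
        # the opposite of a cryptographic hash's avalanche effect.
        return bin(a ^ b).count("1")

    # e.g. treat hamming(h1, h2) <= 5 as a match for 64-bit hashes

Unlike SHA-256, where one flipped pixel changes the whole digest, similar images here land a few bits apart, which is what makes matching against databases of known images possible at all.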

My impression is it would take Manhattan-Project levels of effort and funds to come close to "solving" this problem, especially without someone getting on a watchlist for having a telehealth-first primary care provider insurance plan and asking for advice on their toddler's chickenpox.

Human review? Meta already has small armies' worth of content moderators, who tend to burn out with psychological problems and have a suicide rate where you're probably better off going to fight in a real war. (This includes workers hired by Sama in Kenya, to link back to the OP.)

I will reluctantly grant Meta that they're up against a really hard problem here.


>I will reluctantly grant Meta that they're up against a really hard problem here.

It is a problem of their own making.


They created the concept of CSAM?

No, being so large that it's such a problem for them.

Seems like your blame is quite misplaced.

It certainly is not.

Yeah, I agree with you. Of course, it's not Meta's fault that the CSAM actually exists, but filtering it at Meta's scale, while often called extremely difficult, is a problem that is solvable; it just fundamentally requires changing how the platform works, and would likely require a lot more money to be spent.

Exactly, no one put a gun to Meta's head and ordered them to make Facebook.

So what would satisfy you?

Them to actually undertake meaningful efforts to reduce the child porn on their platform. Crazy, I know.

Define meaningful then? It's quite odd how you seem to be adamant that there is a perfect solution available yet seem to be talking around it.

Isn't this more about disincentivizing the posting of it in the first place by increasing the chances of getting banned? Once you have to remove it, it's too late.

>CSAM on social media will be 100x suppressed because banning people is way easier on small servers.

No it isn't. Small servers often don't have paid security or moderation, are run in anonymous fashion, and have no profit motive that can even be used to incentivize them against hosting illegal content.

That's visible when it comes to porn. There are a million bootleg porn sites hosted on the internet that show off illegal content. The only site that was ever forced to curate its content was Pornhub, because they're sufficiently large, operate in a jurisdiction that has laws, and can be held accountable. From a content moderation standpoint, going after a million web forums is an absolute pain in the ass compared to going after Facebook.

Which is the first argument any decentralization advocate always brings up (and they're correct to do so), censorship is harder and evasion of law enforcement easier when dealing with a network of independent actors.


What stops Humbert Humbert from joining hundreds of small servers?

You now have 100x the total human effort for mods to review and ban him.


> Server moderators should be legally responsible for content on their server.

So if you want to send someone to jail, just talk your way into joining their server, upload some illegal content, and report them for it?

> Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.

Why would someone join a server with active moderation if they wanted to share CSAM with their social media friends?

They would seek out one of those servers that was set up specifically for those groups, where it was known to be a safe space.

This is what many people don't get about federated networks: The people in those little servers DGAF if you block them. They want to be surrounded by their likeminded friends away from the rules of some bigger service like Facebook or Twitter. Federated social media is the perfect platform for them because they can find someone who set up a server in some other country with their own idea of rules and join that, not be subject to the regulations of mainstream social media.


Right, and you have other users on the fediverse who notice that server leaking and, if the content is bad enough, report the service to an authority. Having all of the pedophiles and other creeps on a tiny subset of servers, isolated islands of them; well, that ought to make enforcement easier.

It also makes it relatively easy to avoid, as server admins share blocklists. I know a dozen servers offhand that i'd block if i ran another fediverse server.

Fosstodon fediverse server doesn't have this issue, for example.

I replied this way because the way you wrote it, it sounds like an indictment of a system that's designed to avoid advertisers getting user profiles, over all else.

The problem is the people who participate in this (the illegal and immoral), and not "the network."


> well, that ought make enforcement easier.

Because of course the people congregating to do illegal stuff online are going to do it in your jurisdiction where prosecution is guaranteed


So people congregating to do things online will do such in places where it isn't illegal to do such things?

We're all aware that it is possible to run a private website, forum, or chat server (irc-like or discord-like), including "federated" servers that don't actually federate? In fact, Element, a chat client, has a parent company that even sells "completely private, encrypted chat", which will never "leak."

I'd much rather have leaky CSAM federated servers than every bad actor behind a VPN. I don't want to see the shit, but i can null route the entire domain and be done with it, or i can send links to my local authorities and let them deal with it.

A similar thing is racism: would you rather have someone be openly racist, or just privately racist? This was said, i believe, about Joe Biden, and about why people tend to trust Trump more, since he wears everything on his sleeve and Biden speaks out of both sides of his mouth. Like how Carlin said Clinton won people over by saying "Hi folks, i'm completely full of shit and what do you think about that?" and folks said, "well, at least he's honest."

Sunshine is the best disinfectant.

And authorities will know if there's new CSAM, and will crop the images and send them to the groups that track down where the clothes came from, what the decor in the room means, and whether there's anything else identifiable. None of this is possible if it's underground and "non-federated."

I'd rather CSAM ceased to be a thing; but again, i'd much rather have idiots announcing it publicly than on an E2EE private network.


The one thing I will throw out here that I can add to this conversation is that I think the government simply does not care, either. It mainly gets dealt with at the law enforcement level only in response to mass public outrage, or when someone is a political target.

Anecdotally, when I was a young adult I was a volunteer moderator for a large forum. We got reports of CSAM several times a month and had a process for escalating and reporting it to the FBI IC3 - we retained a lot of information about the users that posted it.

One of the administrators of the website mentioned to me that over the years since the inception of the forum, they'd reported almost a thousand incidents of CSAM distribution - and the FBI followed up with them to get information less than 10 times in total.


That seems reasonable though. The FBI isn’t interested in busting one perv in a closet, they want the ones making the stuff.

The FBI is interested in busting perverts in closets. That's often how they work their way up the "supply chain" when it comes to CSAM. Consumers lead them to distributors, who lead them to producers.

A fair point. But it still seems reasonable that only about 1% of suspect posts lead to a formal inquiry. Doesn’t mean they aren’t taking the report into account. You have to figure that they already have leads on most of them.

Do we have to figure that?

Do we really have to give the benefit of the doubt to the agency that was literally running one of the largest CSAM distribution outlets in the world for years as a honeypot?


If you want to argue that the FBI is a fundamentally flawed agency that on balance is a net negative, I won't fight you that hard. But during the civil rights struggle, they were the only force that could be trusted at all.

Of course, that was 60 years ago.


Wasn't the FBI doing some pretty questionable stuff with regards to MLK during said civil rights movement?

Eg https://en.wikipedia.org/wiki/COINTELPRO

I guess "other forces were worse" can certainly be true, but then how low are we holding the bar?


As I said just above, I am not a fan. They are not completely evil, though.

Yes, that was 60 years ago. No one involved at that time is still there - and in fact, most of them have passed. I don't know why you think there's a shred of relevance there.

If I didn’t mention it, someone would complain that I was ignoring the one time they were on the side of the angels.

Yep. If you cannot both safely and legally provide the thing you are selling, you are no longer a legitimate company; you are a criminal enterprise profiting off of exploitation.

If car manufacturers cannot bring car related deaths to zero, they too should no longer be legitimate companies.

A better comparison would be that if a car company can’t meet preexisting crash/safety standards, they need to shut down.

These are pretty clear laws established by a democratic government with a pretty good record for rule of law.


Sure, then they can go demand said standards for social media platforms, including an expected amount per N posts, just as car companies are not expected to have car fatality rates be 0.

The fact is that sheer scale means that there will always be something, no matter how abhorrent. Small scale doesn't change this; it just concentrates it.


Do car companies sell cars without air bags, or seat belts? What about cars that haven't been crash tested? What happens to them if they don't do this do you think?

Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?


We're talking about CSAM right? Which all platforms remove proactively, build models to remove and essentially always respond to when informed.

Demanding some perfect immediate magic response there is the equivalent of asking car manufacturers to prevent all deaths.


Do they remove it and respond really though?

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Here it's said that it's the users' fault. I disagree. Completely. Staying on topic, many of these companies have laid off the employees who tried to prevent things like this:

https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html

https://www.zdnet.com/article/us-ai-safety-institute-will-be...

https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok...

The list of not even trying anymore goes on and on. Mechahitler was also fun.


When Ford dngaf with the Pinto and the Corvair (like tech companies do not gaf), they deservedly got this same level of contempt and demand for oversight. A dude named Ralph Nader went on a huge crusade about it. And they got a ton more oversight, safety requirements, etc. put on them.

So yes, yes, let's do like we did with cars.


I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either. Maybe he got out when mudslinging became de facto in elections.

I am not sold on the federated thing to solve CSAM or similar issues.

Actually, companies should be bullied about privacy and copyright so they are unable to share any content at scale with third parties. Thus they have to solve it on their own and are forced to realize their business model is shit.


> Banning people is way easier on small servers

Big “citation needed” here. My bet is that Meta have far better moderation systems than any other social media company on the planet.


when i ran a fediverse server for myself and 3 people, but allowed public signups if someone came by, it was very easy to ban people, and very easy to null-route entire swaths of the fediverse, because i didn't want their content on my service.

That's more what i got from that pull-quote. I know a company that has hundreds of individual forums, and those are all moderated quickly and correctly (last i heard). They're moderated so effectively that they often get DDoSed by Russian IPs for banning users for scam posts from that country.


I feel like for Project 1 at least, the old dashboard is better than the new one.

The problem with a set of mutually conflicting laws like this is that good designers are able to intuitively understand which ones to ignore and which ones to use for a particular project.


What's the price per use compared to a standard industrial metal one?

There is in fact just such a repo, maintained by Terence Tao and other mathematicians [1], who are actively using LLMs to try to find solutions to the problems.

[1] https://github.com/teorth/erdosproblems


…and this problem was in fact sourced directly from that list!

The Greasemonkey Firefox addon that allows you to run site-specific JS has been around for two decades [1].

[1] https://www.greasespot.net/2005/03/


And they even have a name: userscripts! [1]

Chrome also used to natively support userscripts back in 2010 [2], but they mostly killed it off.

[1] https://en.wikipedia.org/wiki/Userscript

[2] https://lifehacker.com/chrome-4-supports-greasemonkey-usersc...


There's also Violentmonkey [1][2] (which is said to be more lightweight) and Tampermonkey.

Tampermonkey has analytics enabled, and is closed source.

Here's a comprehensive summary on all three. [3]

[1] https://addons.mozilla.org/en-US/firefox/addon/violentmonkey

[2] https://violentmonkey.github.io/

[3] https://news.ycombinator.com/item?id=26057106


It certainly is great to have first-party support for such a simple feature. It doesn't have to support the whole GM_ API.

Yes. But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language).

The impact on people's time, money, and the environment is proportional.


> But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language)

Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?


Yes. I can run entire 3D games... ten of them in the memory footprint of your average browser. Even fairly decent-looking ones, not your Doom or Quake!

And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit as the OS libraries are already loaded - but they're loaded anyway if you use the browser too, so it's not like you can shave off of that.


> Yes. I can run entire 3D games... ten of them in the memory footprint of your average browser.

What about in QML, which uses Web technologies like CSS, JS and even basic HTML? The whole KDE Plasma 6 desktop is built around these technologies now and I (and many others) consider it light and high-performance.

If you saddle those technologies with the full browser everything, then it will get larger, yes, but nothing requires you to do this, just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed.

Plain JavaScript can be very fast and still come with relatively low resource demands, and the same is true of HTML and CSS. Many "plain desktop-native" applications often end up reinventing their own variants of HTML and CSS in the course of designing the UI anyways.


It's better, but it's still quite bloated, to be honest. Linux is generally more memory-hungry than Windows because of how modular it is, and having no Win32 equivalent really hurts. Although they've started doing UI in React Native over there too...

Qt is much lighter than your Chromium-based stacks but all the waste kind of adds up.

"just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed" Containers are hungrier than running stuff on bare metal...


Yeah, React Native is apparently how Claude Code operates (even in the terminal), so it wouldn't surprise me to see it being useful in a native GUI context as well, if we can get more bindings than Skia.

> Containers are hungrier than running stuff on bare metal...

Containers are tremendously lightweight compared to VM. You might as well point out that running a full multiuser security-protected OS like Linux is hungrier than running on bare metal with DOS too. It's just as true, and even proportionally as true.

In any event a full Fedora container with all packages installed is going to be tremendously larger than a distroless hello-world "built" around Alpine, for instance, even though they both use container technologies. Same applies to Web technologies, you can certainly go and easily add a lot of waste using them but they are not themselves inherently wasteful.


I believe Firefox uses separate processes per tab, and most of them are over 100MB per page. And that's understandable when you know that each page is the equivalent of a game engine with its own attached editor.

A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.


> You have to take a topic you find interesting and read all possible related work in it

This is definitely the wrong way of going about a research project, and I have rarely seen anyone approach research projects this way. You should read two or at most three papers and build upon them. You only do a deep review of the research literature later in the project, once you have some results and you have started writing them down.


The usual justification is that if you don't do at least a breadth-first literature review, you can get burned by missing a paper that already does substantially what you do in your work. I've heard of an extreme case where it happened a week before someone went to defend their dissertation!

Excuse my naivety, but isn't it good if the same results get proven in slightly different ways? This is effectively a replication, but instead of just repeating the experiments, you also replicate the thought process by taking a slightly different approach.

It would be good (especially with the replication crisis), but historically, to earn a PhD, especially at a top-tier institution, the criterion is conducting original research that produces new knowledge or unique insights.

Replicating existing results doesn't meet that criterion, so unknowingly repeating someone's work is an existential crisis for PhD students. It can mean that you worked for 4-6 years on something that the committee then can't/won't grant a doctorate for, effectively forcing you to start over.

Theoretically, your advisor is supposed to help prevent this as well by guiding you in good directions, but not all advisors are created equal.


And here we once again see an example of misaligned incentives baked into another one of our most hallowed institutions.

The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.

It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”


> The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.

> It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”

Or are they trying to require enough rigor and discipline so that out of 100,000 people who want to be the next Einstein, the process washes out the 99,000 who aren't willing or able to do more than throw out half-baked 'creative' ideas and expect the world to pick them up and run with them.

There's only finite attention and money for funding research, so you gotta do SOMETHING to filter out the larpers who want to take it and faff around.

I think at this point the system has eaten its own tail a bit, but there's good reason to require some level of "show me" before getting given the money to run your own research.


For humanity? Yes, it's generally good. For that particular researcher's career? Not really. Who wants to pay for research into something that's already known?

My imagination was leaning more toward the educational side than the research side of university. I see how that wouldn't be appreciated by a patron, but when you get research grants, isn't the topic discussed before starting and paying for the research? Also, that is kind of the point of why topics are cleared with the chair-holding professor, who is expected to already be experienced in the subject and to know where the knowledge needs to be expanded.

Well, if you don't care about not being able to do your defense after 4 years of work because someone managed to do it just before you...

Unless you're already an expert in the topic a literature search is literally step 1 since you have to check if your idea has already been done before.

That's where your supervisor comes in. In most cases, they should be an expert in the field, and guide you towards a useful and novel problem.

Moreover, I am not suggesting you don't look at other papers at all. But Google Scholar and some quick skimming of abstracts and papers you find should suffice to check if someone has already done the work. If you start fully reading more than a handful of papers, your ideas are already locked in by what others have done, and it becomes way harder to produce something novel.

