Also, the ads were common in the UK where I was growing up, but as I recall the Super Bowl wasn't broadcast on normal TV (we didn't have Sky or cable, though those existed, so possibly it was on those), to the extent that I had to google the event to find out which sport the Super Bowl even was. I'd incorrectly guessed the other famous US sport, baseball, which is also not a big thing in the UK.
I don't think I've ever even noticed American Football being on UK TV at any point.
It was a Bud Light ad campaign that originally ran for three years everywhere that Anheuser-Busch ran ads. Wikipedia claims it started on Monday Night Football, not the Super Bowl: https://en.wikipedia.org/wiki/Whassup%3F. But in any case, that is hardly the only time it ever ran.
Why it became popular may as well be asked of any pop-cultural phenomenon that ever becomes popular. 20 people will give 40 opinions. Nobody really knows.
If a visibly armed convoy of folks that oppose your vote and your existence slow-rolled past your house, you might consider options other than staying and voting.
Your feelings are within your control. So: be an adult and control them. In this case that oftentimes means: Do research. Consume media from as many sources as possible. Do research. Try to find the good. Do research. Channel your feelings into improving the world, not retreating inside yourself or to somewhere else.
It's also okay to not share the feeling that the other person on the internet is having. Just saying. I would even argue it's mildly impolite to dispute the feelings of other people when you have a different vibe about the situation.
I am a progressive. I'm obviously dissatisfied with the results of this election. But I'm even more dissatisfied with the reaction I'm seeing from the left. I cannot and won't respect the "vibes" of "we're all gonna die, there's roving gangs in F150s in the streets, I'm leaving for Canada, the country hates me". I won't respect that. It is not indicative of any reality outside today, or in the immediate future (and I live in an extremely right-leaning place).
The reality that I want the left to adopt is: It's not that bad. The world isn't going to end. Both Trump and Kamala received fewer votes this year than in 2020. Progressive policies are majority-popular in the United States. Kamala was just a (very) bad candidate. Don't let that energy you're feeling go to waste by planning your escape to Thailand or lying in bed all day wrapped up in a cocoon. Educate the people around you. Be out in your community. And prepare for an even more important election in 2028.
A number of people from marginalized groups literally fear for their lives right now. That's based in reality, coming from Trump's bigoted rhetoric and the people he allies himself with. Whether you think they should feel that way or not, it's the way that they do feel and it's valid.
It's a reasonable reaction when you believe your life is in danger to try and get yourself out of that situation. You saying "it's not that big of an issue" and "where's the evidence" is both tone-deaf and beside the point. This person is scared. They have bigger problems than your opinion that they should stick around and vote. Help or stay out of it.
You are deluding yourself. This country is finished. Why on earth would they allow there to be legitimate elections in 2028? You know they have "elections" in Russia, China, and Hungary, too.
The people who voted for Trump want people like you and me to die. (And fair enough, I want THEM to die!) You can choose how to respond to that, but you cannot deny it. For me, I don't think I have good prospects for leaving the country, but I am investing in a weapon. "Be out in your community" -- fuck that. I am acting for me and mine alone at this point, EVERYONE else can go to hell.
If you’re looking for a monitor with high pixel density and a ton of real estate, you can also buy a monitor. 5k2k’s are pretty sweet. I’m driving one of these nowadays and it’s fabulous, without all the quirks of adapting a huge TV for computer use: https://www.dell.com/en-us/shop/dell-ultrasharp-40-curved-th...
HiDPI, two 4k monitors without a bezel, 120Hz, and no need for a separate thunderbolt hub.
HN’s association with YC has felt looser every year for over a decade at this point. If not for the Jobs link, the subtle username colors, and the domain, it’d almost be forgettable.
Best of luck to the author! My understanding is that anything that makes large file sharing easy and anonymous rapidly gets flooded with CSAM and ends up shuttering itself for the good of all. Would love to see a non-invasive yet effective way to prevent such an incursion.
For Firefox Send, it was actually malware and spearphishing attacks that were being spread.
The combination of limited file availability (reducing the ability to report bad actors) and Firefox URLs being inherently trusted within orgs (bypassing a lot of basic email/file filtering/scanning) was the reason it became so popular for criminals to use, as we've seen in the spearphishing attacks in India[1].
For a case when file sharing is intended between individuals or small groups there's an easy solution:
Anyone who got the link should be able to delete the file.
This should deter one from using the file sharing tool as free hosting for possibly bad content. One can also build a bot that deletes every file found on public internet.
Imagine a computer-lab session with a class of high school kids, where a teacher has to send them a file... there would be maybe three full downloads, max, before someone presses the "delete" button.
Sending files anonymously and sending files easily seem like mutually exclusive goals. If it's easy and anonymous, it's too easy to abuse. The teacher should just be using file storage that is tied to an account: it's not as though they're trying to stay hidden from their students.
For a lot of use cases, simply sending the address of the deleter to whoever sent the file would suffice. Next time, just don't send it to them, or apply real-world consequences.
Sure, it wouldn't work for a large public setting... but it'd work for many other settings.
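The "anyone with the link can delete" idea above can be sketched in a few lines. This is a hypothetical, in-memory illustration (the `ShareStore` class and its method names are invented for this sketch, not any real service's API): the same random token that grants download access also grants deletion, so any recipient can take the file down.

```python
import secrets

class ShareStore:
    """Toy share-link store: holding the link is the only credential."""

    def __init__(self):
        self._files = {}  # token -> file bytes

    def upload(self, data):
        token = secrets.token_urlsafe(16)  # the token IS the share link
        self._files[token] = data
        return token

    def download(self, token):
        return self._files.get(token)

    def delete(self, token):
        # No separate owner credential: any link holder can delete,
        # which is what deters using the service as free hosting.
        return self._files.pop(token, None) is not None

store = ShareStore()
link = store.upload(b"homework.pdf contents")
assert store.download(link) == b"homework.pdf contents"
assert store.delete(link) is True    # any recipient can do this
assert store.download(link) is None  # file is gone for everyone
```

A real service would persist files and rate-limit, but the core design choice is just that deletion requires nothing beyond the link itself.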
If governments and big tech want to help, they should upload one of their CSAM detection models to Hugging Face, so system administrators can just block it. Ideally I should be able to run a command like `iscsam 123.jpg` and have it print a number like 0.9 to indicate 90% confidence. No one but them can do it, since there's obviously no legal way to train such a model, even though we know that governments have already done it. If they won't give service operators the tools to keep abuse off their communications systems, then operators shouldn't be held accountable for what people do with them.
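No such public model exists, so the proposed CLI can only be sketched as an interface. In this hypothetical skeleton, `score_image` is a stub placeholder (a real tool would load a vetted classifier there) and `BLOCK_THRESHOLD` is an arbitrary illustrative cutoff; the wrapper just standardizes the contract the comment describes: print a confidence in [0, 1] and expose it via the exit code for scripting.

```python
import sys

BLOCK_THRESHOLD = 0.5  # arbitrary cutoff, chosen only for illustration

def score_image(path):
    """Placeholder: a real implementation would run a classifier here."""
    return 0.0  # stub always reports "no match"

def main(argv):
    if len(argv) != 2:
        print("usage: iscsam IMAGE", file=sys.stderr)
        return 2
    confidence = score_image(argv[1])
    print(f"{confidence:.2f}")
    # Nonzero exit when over threshold, so shell pipelines can block:
    #   iscsam upload.jpg && accept_upload upload.jpg
    return 1 if confidence >= BLOCK_THRESHOLD else 0

# Demo: the stub classifier prints 0.00 and exits "clean" for any path.
assert main(["iscsam", "123.jpg"]) == 0
```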
This would potentially let somebody create a "reverse" model, so I don't think that's a good idea.
Imagine an image generation model whose loss function is essentially "make this other model classify your image as CSAM."
I'm not entirely convinced whether it would create actual CSAM instead of adversarial examples, but we've seen other models of various kinds "reversed" in a similar vein, so I think there's quite a bit of risk there.
Are you saying someone will use it to create a CSAM generator? It'd be like turning smoke detectors into a nuclear bomb. If someone that smart wants this, then there are easier ways for them to do it. Analyzing the detector could let you tune normal images in an adversarial way that'll cause them to be detected as CSAM by a specific release of a specific model. So long as you're not using the model to automate swatting, that's not going to amount to much more than a DEFCON talk about annoying people.
If you have a csam detection model that can run locally, the vast majority of sysadmins who use it will just delete the content and ban whoever posted it. Why would they report someone to the police? If you're running a file sharing service, you probably don't even know the identities of your users. You could try looking up the user IP on WHOIS and emailing the abuse contact, but chances are no one is listening and no one will care. What's important is that (1) it'll be harder to distribute this material, (2) service operators who are just trying to build and innovate will be able to easily protect themselves with minimal cost.
You are mandated to report what you find. If the g-men find out you've not only been failing to report crimes, but also destroying the evidence they will come after you.
Note that this is specific to CSAM, not crimes in general. Specifically, online service providers are required to report any detected actual or imminent violation of laws regarding child sex abuse (including CSAM) and there are substantial fines for violations of the reporting requirement.
Wow. I had no idea. That would explain why no one's uploaded a csam detection model to Hugging Face yet. Smartest thing to do then is probably use a model for detecting nsfw content and categorically delete the superset. Perhaps this is the reason the whole Internet feels like LinkedIn these days.
Someone sends you a meme that looks like a meme; you share it through a messenger; the meme looks like something else to the messenger, and the messenger reports you to NCMEC. NCMEC is NOT the police, but they can forward it to the police. As a side effect, NCMEC gets overloaded, letting more real abuse continue.
> Perpetrators will keep tweaking image till they get score of 0.1
Isn't this - more or less - already happening?
Perpetrators that don't find _some way_ of creating/sharing csam that's low risk get arrested. The "fear of being in jail" is already driving these people to invent/seek out ways to score a 0.1.
My understanding is that Microsoft runs such a tool and you can request access to it (PhotoDNA). As I understand it, you hash an image, send the hash to them, and get back a response.
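PhotoDNA's actual algorithm is proprietary, but the general idea it relies on, perceptual hashing, can be illustrated without it. The simplified "average hash" below is a stand-in, not PhotoDNA: reduce an image to a tiny grayscale grid, record which cells are brighter than the mean, and compare bit strings; near-duplicate images land a small Hamming distance apart, so a service can match against a blocklist of hashes without seeing the image itself.

```python
def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit int hash."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for v in flat:
        # One bit per cell: is it brighter than the image's average?
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A synthetic gradient "image" and a barely perturbed copy hash close...
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in img]
tweaked[0][0] += 3  # tiny perturbation, e.g. recompression noise
assert hamming_distance(average_hash(img), average_hash(tweaked)) <= 2
# ...while an unrelated (here, inverted) image lands far away.
inverted = [[255 - v for v in row] for row in img]
assert hamming_distance(average_hash(img), average_hash(inverted)) >= 32
```

Real perceptual hashes are far more robust to cropping and recoloring, which is also why, as discussed above, attackers invest effort in perturbing images until they slip under the matching threshold.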
The government doesn't need more dragnet surveillance capabilities than it already has. Also this solution basically asks the service operator to upload illegal content to the government. So there would need to be a strong guarantee they wouldn't use that as proof the service operator has committed a crime. Imagine what they would do to Elon Musk if he did that to run X. The government is also usually incompetent at running reliable services.
"The government" in the UK already basically shields big internet operators from legal responsibility from showing teenagers how to tie a ligature. But I wouldn't characterise them as the problem — more public oversight or at least transparency of the behaviour of online operators who run services used by thousands of minors might not be a bad thing. The Musk comment also speaks to a paranoia that just isn't justified by anything that has happened in the past 10 years. The EU is in fact the only governmental organisation doing anything to constrain the power of billionaires to distort and control our public discourse through mass media and social media ownership.
I don't understand this comment. Are you implying Prince Andrew was _in_ or part of the UK Government? This would be a weird misunderstanding of our system.
If it's just a general cynical "all gubernment is bad and full of pedos" then I'm not sure what the comment adds to this discussion.
Pretty sure Apple already scans your photos for CSAM, so the best way would be to just throw any files a user plans on sharing into some folder an iPhone or iMac has access to.
Because some people would tell them. For example, the FBI would look at a child porn sharing forum and observe a lot of people sharing Send links. Then they would go to the operators of Send servers, and "strongly suggest" that it should shut down.
"We know that this link includes material that is illegal to possess, and it is on your server."
"We don't know the contents of the files on our server, so we can't know that is was illegal"
"Fine, delete that file, and we won't charge you for possession this time. Now that you know your service is used for this illegal material, you need to stop hosting material like that."
"How, if we don't know what's in the file sent to our server?"
"... maybe don't take random files you don't know about, and share them on the open web with anonymous users?"
That’s not how csam reporting works at all. You aren’t punished for csam being on your server, as long as you do something about it. You can easily set up cloudflare to block and report csam to the NCMEC (not the FBI) and it will all be handled automatically.
I wonder how that'll play out in this case, since everything uploaded here expires after at most 3 days. Maybe they can "handle" abuse reports by simply auto-responding in 3 days that it has now been removed.
Do we know whether this uploading is motivated by actual pedo reasons, by anti-pedo honeypot reasons, by sociopathic trolling reasons, by sabotage reasons (state, or commercial), or something else?
It's discouraging to think that privacy&security solutions for good people might end up being used primarily by bad people, but I don't know whether that's the situation, nor what the actual numbers are.
It is just pedophiles. A user posted here on HN a while ago that they ran a Tor exit node and the overwhelming majority of its traffic was CSAM or other cybercrime. Here in Germany they busted some underground forum and a single individual had 35TB worth of it at home. There's no great conspiracy; the criminal underworld is huge and they use every service that doesn't clamp down on it in some form.
How would they know their traffic was csam? Traffic passing through an exit node is traffic between a Tor user and the open Internet. Who's running csam servers on the open Internet instead of as hidden services? And without https? Especially when Tor Browser enforces https? This story doesn't add up at all.
They did bust a site owner despite Tor, though. That story's true.
That's discouraging, if true. I don't mind doing the occasional on-principle thing (like running Tor Browser for general-purpose browsing, and Tor as the VPN plugin on my phone), and maybe it's a citizen-technologist obligation to do my share of that. But I'd rather not have some Orwellian Bayesian system flagging me for using Tor.
Oh. We're having to explain 90's humor as though on a museum placard.
I feel ancient.