
> A man poses with a mop on his head at Cyberia, the world’s first cyber cafe. This was very indicative of the humor of the times.

Oh. We're having to explain 90s humor as though on a museum placard.

I feel ancient.


One day we shall have to explain that no, wearing an onion on one's belt was never the fashion at the time.

But the 90s had many strange things, and I think I may never understand "wazzzzzzzzaaaaaaaaap" phone calls… or was that the early 00s?

And, having recently described the period as "around the turn of the century", I agree entirely with the sentiment.


Wazzzzzappp is easy. It was just a Super Bowl commercial that was popular.

That origin doesn't tell me why it was popular.

Also, the ads were common in the UK where I was growing up, but as I recall the Super Bowl wasn't broadcast on normal TV (we didn't have Sky or cable, though those existed, so possibly it aired there?). I had to google the event just to find out which sport the Super Bowl even was; I'd incorrectly guessed the other famous US sport, baseball, which is also not a big thing in the UK.

I don't think I've ever even noticed American Football being on UK TV at any point.


It was popular because it was funny. I think you will have a hard time ever finding a final "why" for social phenomena.

It was a Bud Light ad campaign that originally ran for three years everywhere that Anheuser Busch ran ads. Wikipedia claims it started on Monday Night Football, not the Super Bowl: https://en.wikipedia.org/wiki/Whassup%3F. But in any case, that is hardly the only time it ever ran.

You may as well ask why any pop-cultural phenomenon becomes popular. 20 people will give 40 opinions. Nobody really knows.


don't fret, it wasn't funny then either. they're just trying to exacerbate the delta T.

If a visibly armed convoy of folks that oppose your vote and your existence slow-rolled past your house, you might consider options other than staying and voting.


Has this happened in a non-isolated fashion? Please cite examples.


It's okay to rely on feelings and vibes alone. Feelings and vibes are real and important, and they precede outright indiscriminate violence.


Your feelings are within your control. So: be an adult and control them. In this case that oftentimes means: Do research. Consume media from as many sources as possible. Do research. Try to find the good. Do research. Channel your feelings into improving the world, not retreating inside yourself or to somewhere else.


It's also okay to not share the feeling that the other person on the internet is having. Just saying. I would even argue it's mildly impolite to dispute the feelings of other people when you have a different vibe about the situation.


You're misunderstanding my motivation.

I am a progressive. I'm obviously dissatisfied with the results of this election. But I'm even more dissatisfied with the reaction I'm seeing from the left. I cannot and won't respect the "vibes" of "we're all gonna die, there's roving gangs in F150s in the streets, I'm leaving for Canada, the country hates me". I won't respect that. It is not indicative of any reality outside today, or in the immediate future (and I live in an extremely right-leaning place).

The reality that I want the left to adopt is: It's not that bad. The world isn't going to end. Both Trump and Kamala received fewer votes this year than in 2020. Progressive policies are majority-popular in the United States. Kamala was just a (very) bad candidate. Don't let that energy you're feeling go to waste by planning your escape to Thailand or lying in bed all day wrapped up in a cocoon. Educate the people around you. Be out in your community. And prepare for an even more important election in 2028.


I think you misunderstand.

A number of people from marginalized groups literally fear for their lives right now. That's based in reality, coming from Trump's bigoted rhetoric and the people he allies himself with. Whether you think they should feel that way or not, it's the way that they do feel and it's valid.

It's a reasonable reaction when you believe your life is in danger to try and get yourself out of that situation. You saying "it's not that big of an issue" and "where's the evidence" is both tone-deaf and beside the point. This person is scared. They have bigger problems than your opinion that they should stick around and vote. Help or stay out of it.


You are deluding yourself. This country is finished. Why on earth would they allow there to be legitimate elections in 2028? You know they have "elections" in Russia, China, and Hungary, too.

The people who voted for Trump want people like you and me to die. (And fair enough, I want THEM to die!) You can choose how to respond to that, but you cannot deny it. For me, I don't think I have good prospects for leaving the country, but I am investing in a weapon. "Be out in your community" -- fuck that. I am acting for me and mine alone at this point, EVERYONE else can go to hell.


Depends on the state. Colorado does do this, for what it’s worth.


I thought federal law required paper backups as of 2016-ish? No pure electronic voting today, IIRC.


It appears you're correct, albeit somewhat more recently (2021): https://www.nbcnews.com/tech/security/u-s-election-commissio...


If you're looking for a monitor with high pixel density and a ton of real estate, you can also buy a monitor. 5K2Ks are pretty sweet. I'm driving one of these nowadays and it's fabulous, without all the quirks of adapting a huge TV for computer use: https://www.dell.com/en-us/shop/dell-ultrasharp-40-curved-th...

HiDPI, two 4K monitors' worth of space without a bezel, 120Hz, and no need for a separate Thunderbolt hub.


A little tangential, but 538 is no longer affiliated with Nate Silver; he sold it to ESPN in 2013.


it's true that 538 is no longer affiliated with nate silver, but he left in 2023, not with the espn sale


There may be examples of this, but picking on Venkat Rao for not being sufficiently prolific is a laughable argument.


HN’s association with YC has felt looser every year for over a decade at this point. If not for the Jobs link, the subtle username colors, and the domain, it’d almost be forgettable.


Best of luck to the author! My understanding is that anything that makes large file sharing easy and anonymous rapidly gets flooded with CSAM and ends up shuttering themselves for the good of all. Would love to see a non-invasive yet effective way to prevent such an incursion.


For Firefox Send, it was actually malware and spearphishing attacks that were spread.

The combination of limited file availability (reducing the ability to report bad actors) and Firefox URLs being inherently trusted within orgs (bypassing a lot of basic email/file filtering/scanning) was the reason it became so popular with criminals, like we've seen in the spearphishing attacks in India[1].

[1]: https://www.amnesty.org/en/latest/research/2020/06/india-hum...


For the case where file sharing is intended between individuals or small groups, there's an easy solution:

Anyone who got the link should be able to delete the file.

This should deter one from using the file-sharing tool as free hosting for possibly bad content. One can also build a bot that deletes every file found on the public internet.
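
A minimal sketch of the scheme, assuming a toy Flask service (all names here are illustrative, not any real service's API), where the unguessable token in the link doubles as the delete capability:

    # Toy sketch: the share token in the URL doubles as the delete capability.
    # Hypothetical throughout; no auth, quotas, or abuse handling.
    import os
    import secrets

    from flask import Flask, abort, request, send_file

    app = Flask(__name__)
    STORE = "/tmp/shares"  # uploads live here, keyed by token
    os.makedirs(STORE, exist_ok=True)

    @app.post("/upload")
    def upload():
        token = secrets.token_urlsafe(16)  # unguessable: the link IS the secret
        request.files["file"].save(os.path.join(STORE, token))
        return {"link": f"/f/{token}"}

    @app.get("/f/<token>")
    def download(token):
        path = os.path.join(STORE, token)
        if not os.path.isfile(path):
            abort(404)
        return send_file(path)

    @app.delete("/f/<token>")
    def delete(token):
        # Anyone holding the link can wipe the file; that's the deterrent.
        path = os.path.join(STORE, token)
        if os.path.isfile(path):
            os.remove(path)
        return {"status": "deleted"}

The cleanup bot is then just a crawler that issues a DELETE against every such link it finds posted publicly.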


Or the link expires after a download.


Sucks for people that have a shitty connection


The server would know when a download has completed.
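
Sketched below, continuing the same hypothetical Flask toy from upthread: stream the file from a generator and delete it only after the last chunk is handed off, so an aborted transfer closes the generator early and leaves the file intact.

    # Sketch: expire the link only after a *complete* download.
    import os
    from flask import Flask, Response, abort

    app = Flask(__name__)
    STORE = "/tmp/shares"

    @app.get("/once/<token>")
    def download_once(token):
        path = os.path.join(STORE, token)
        if not os.path.isfile(path):
            abort(404)

        def stream():
            with open(path, "rb") as f:
                while chunk := f.read(64 * 1024):
                    yield chunk
            # Reached only if every chunk was handed off; a dropped
            # connection raises GeneratorExit at the yield instead.
            os.remove(path)

        return Response(stream(), mimetype="application/octet-stream")

"Handed off" isn't quite "received" (the kernel still buffers), so a real version would want a grace period or a client-side acknowledgment; that also answers the flaky-connection objection above.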


Oh, that's pretty clever!


That then ruins perfectly valid use cases, since someone could maliciously delete the file.


But it allows sending. That might be an okay tradeoff, depending on what you're aiming for.

Anonymous file hosting isn't something I'd be keen to offer, given the number of people who would happily just abuse it.


But people would abuse the delete button too.

Imagine some computer work with a class of high-school kids, where a teacher has to send them a file... there will be maybe three full downloads, max, before someone presses the "delete" button.


Sending files anonymously and sending files easily seem like mutually exclusive problems. If it's easy and anonymous, it's too easy to abuse. The teacher should just be using file storage that is tied to an account: it's not as though they're trying to stay hidden from their students.


For a lot of use cases, simply sending the address of the deleter to whoever sent the file would suffice. Next time, just don't send it to them, or apply real-world consequences.

Sure, it wouldn't work for a large public setting... but it'd work for many other settings.


If governments and big tech want to help, they should upload one of their CSAM detection models to Hugging Face, so system administrators can just block this material. Ideally I should be able to run a command `iscsam 123.jpg` and have it print a number like 0.9 to indicate 90% confidence that it is CSAM. No one but them can do it, since there's obviously no legal way to train such a model, even though we know that governments have already done it. If they won't give service operators the tools to keep abuse off their communications systems, then operators shouldn't be held accountable for what people do with them.
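
The wrapper itself would be trivial. A sketch with Hugging Face's transformers, where the model id `gov/csam-detector` and its `csam` label are hypothetical stand-ins for the model that doesn't publicly exist:

    #!/usr/bin/env python3
    # Sketch of the proposed `iscsam` CLI. The model id and label are
    # hypothetical stand-ins; no such public model exists.
    import sys
    from transformers import pipeline

    classifier = pipeline("image-classification", model="gov/csam-detector")

    def score(path: str) -> float:
        # The pipeline returns [{"label": ..., "score": ...}, ...] per image.
        for prediction in classifier(path):
            if prediction["label"] == "csam":
                return prediction["score"]
        return 0.0

    if __name__ == "__main__":
        print(f"{score(sys.argv[1]):.2f}")  # e.g. `iscsam 123.jpg` -> 0.90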


The biggest risk with opening a tool like that is that it potentially enables offenders to figure out what can get past it.


Fair point, but wouldn’t we rather they be spending their time doing that than actively abusing kids?


So they publish an updated model every three months that works better.


So security by obscurity? Man, open source software must suck…


This would potentially let somebody create a "reverse" model, so I don't think that's a good idea.

Imagine an image generation model whose loss function is essentially "make this other model classify your image as CSAM."

I'm not entirely convinced whether it would create actual CSAM instead of adversarial examples, but we've seen other models of various kinds "reversed" in a similar vein, so I think there's quite a bit of risk there.


Are you saying someone will use it to create a CSAM generator? It'd be like turning smoke detectors into a nuclear bomb. If someone that smart wants this, then there are easier ways for them to do it. Analyzing the detector could let you tune normal images in an adversarial way that'll cause them to be detected as CSAM by a specific release of a specific model. So long as you're not using the model to automate swatting, that's not going to amount to much more than a DEFCON talk about annoying people.


I think the point is generating an image that looks normal but causes the model to false-positive, so the unsuspecting person then gets reported.


If you have a csam detection model that can run locally, the vast majority of sysadmins who use it will just delete the content and ban whoever posted it. Why would they report someone to the police? If you're running a file sharing service, you probably don't even know the identities of your users. You could try looking up the user IP on WHOIS and emailing the abuse contact, but chances are no one is listening and no one will care. What's important is that (1) it'll be harder to distribute this material, (2) service operators who are just trying to build and innovate will be able to easily protect themselves with minimal cost.


You are mandated to report what you find. If the g-men find out you've not only been failing to report crimes but also destroying the evidence, they will come after you.


Note that this is specific to CSAM, not crimes in general. Specifically, online service providers are required to report any detected actual or imminent violation of laws regarding child sex abuse (including CSAM) and there are substantial fines for violations of the reporting requirement.

https://www.law.cornell.edu/uscode/text/18/2258A


Wow. I had no idea. That would explain why no one's uploaded a csam detection model to Hugging Face yet. Smartest thing to do then is probably use a model for detecting nsfw content and categorically delete the superset. Perhaps this is the reason the whole Internet feels like LinkedIn these days.


Not "crimes". Child sexual exploitation related crimes specifically.

And not "you" unless you are operating a service and this evidence is found in your systems.

This is how "g-men" misinformation is born.


Someone sends you a meme that looks like a meme; you share it through a messenger; the meme looks like something else to the messenger, and the messenger reports you to NCMEC. NCMEC is NOT the police, but they can forward it to the police. As a side effect, NCMEC gets overloaded, letting more real abuse continue.


Perpetrators will keep tweaking the image till they get a score of 0.1


> Perpetrators will keep tweaking the image till they get a score of 0.1

Isn't this - more or less - already happening?

Perpetrators that don't find _some way_ of creating/sharing csam that's low risk get arrested. The "fear of being in jail" is already driving these people to invent/seek out ways to score a 0.1.


How about the government running a service where you can ask them to validate an image?

Trying to tweak an image will not work because you will find the police on your doorstep.


My understanding is that Microsoft runs such a tool and you can request access to it (PhotoDNA). As I understand it, you hash an image, send the hash to them, and get back a response.


The government doesn't need more dragnet surveillance capabilities than it already has. Also this solution basically asks the service operator to upload illegal content to the government. So there would need to be a strong guarantee they wouldn't use that as proof the service operator has committed a crime. Imagine what they would do to Elon Musk if he did that to run X. The government is also usually incompetent at running reliable services.


"The government" in the UK already basically shields big internet operators from legal responsibility from showing teenagers how to tie a ligature. But I wouldn't characterise them as the problem — more public oversight or at least transparency of the behaviour of online operators who run services used by thousands of minors might not be a bad thing. The Musk comment also speaks to a paranoia that just isn't justified by anything that has happened in the past 10 years. The EU is in fact the only governmental organisation doing anything to constrain the power of billionaires to distort and control our public discourse through mass media and social media ownership.


You mean the government of Prince Andrew?

Yeah I think I understand now why they want the csam so badly.


I don't understand this comment. Are you implying Prince Andrew was _in_ or part of the UK Government? This would be a weird misunderstanding of our system.

If it's just a general cynical "all gubernment is bad and full of pedos" then I'm not sure what the comment adds to this discussion.


Pretty sure Apple already scans your photos for CSAM, so the best way would be to just throw any files a user plans on sharing into some folder an iPhone or iMac has access to.


Check out https://pipe.pico.sh, which is a system for networked Unix pipes using ssh.


I've been using this version for a while; presumably it's just gone under the radar enough. So please don't upvote this too much, haha.


I have been using both Swisstransfer.com and filetransfer.io since Firefox Send shut down.

How have they dealt with this?


If it's truly e2e how would they even know what's being shared on it?


Because some people would tell them. For example, the FBI would look at a child porn sharing forum and observe a lot of people sharing Send links. Then they would go to the operators of Send servers, and "strongly suggest" that it should shut down.


> and "strongly suggest" that it should shut down.

I don't know about that; is there any documented case of that?

I feel like they'd probably just contact them and ask for removal of the file(s) and to forward any logs?


"We know that this link includes material that is illegal to possess, and it is on your server."

"We don't know the contents of the files on our server, so we can't know that is was illegal"

"Fine, delete that file, and we won't charge you for possession this time. Now that you know your service is used for this illegal material, you need to stop hosting material like that."

"How, if we don't know what's in the file sent to our server?"

"... maybe don't take random files you don't know about, and share them on the open web with anonymous users?"


That’s not how csam reporting works at all. You aren’t punished for csam being on your server, as long as you do something about it. You can easily set up cloudflare to block and report csam to the NCMEC (not the FBI) and it will all be handled automatically.


Maybe a judge would see it that way, but FBI agents aren't required to act like judges.


The FBI literally only investigates if NCMEC tells them to.

https://www.fbi.gov/investigate/violent-crime/vcac


I had a Send instance exposed online for years, but I changed it to 1 day retention and I never had any issues.

It was literally just to send large files between friends, so more than 1 day was redundant.


> ends up shuttering themselves for the good of all

mostly because it's difficult to handle all the abuse reports


I wonder how that'll play out in this case, since everything uploaded here expires at maximum 3 days. Maybe they can "handle" abuse reports by simply auto-responding in 3 days that it is now removed.


Do we know whether this uploading is motivated by actual pedo reasons, anti-pedo honeypot reasons, sociopathic trolling reasons, sabotage reasons (state or commercial), or something else?

It's discouraging to think that privacy&security solutions for good people might end up being used primarily by bad people, but I don't know whether that's the situation, nor what the actual numbers are.


It is just pedophiles. A user posted here on HN a while ago that they ran a Tor exit node and the overwhelming majority of its traffic was CSAM or other cybercrime. Here in Germany they busted some underground forum and a single individual had 35TB worth of it at home. There's no great conspiracy; the criminal underworld is huge, and they use every service that doesn't clamp down on it in some form.


How would they know their traffic was csam? Traffic passing through an exit node is traffic between a Tor user and the open Internet. Who's running csam servers on the open Internet instead of as hidden services? And without https? Especially when Tor Browser enforces https? This story doesn't add up at all.

They did bust a site owner despite Tor, though. That story's true.


Naive question. Isn't Tor supposed to be private? How can you know the contents of the communication just by running a node?


If it's an exit node, you know what the user is connecting to but not which user.


That's discouraging, if true. I don't mind doing the occasional on-principle thing (like running Tor Browser for general-purpose browsing, and Tor as the VPN plugin on my phone), and maybe it's a citizen-technologist obligation to do my share of that. But I'd rather not have some Orwellian Bayesian system flagging me for using Tor.


run a node. but never an exit one.


"Work is simply whatever we must do to get from one decision to the next." (Venkatesh Rao, Tempo)


You effectively could do so for ages using PredictIt.

