Hacker News
Freenet 2024 – a drop-in decentralized replacement for the web [video] (youtube.com)
109 points by sanity 18 days ago | 93 comments



From https://freenet.org/faq#faq-2:

> In 2019, Ian began work on a successor to the original Freenet, which was internally known as "Locutus." This project, a redesign from the ground up, incorporated lessons learned from the original Freenet's development and operation, and adapted to today's challenges. In March 2023, the original version of Freenet was separated into its own project, and what was known as "Locutus" was officially branded as "Freenet."

And https://freenet.org/faq#faq-3:

> Anonymity: While the previous version was designed with a focus on anonymity, the current version does not offer built-in anonymity but allows for a choice of anonymizing systems to be layered on top.

So what has been known as Freenet and the new Freenet are quite different apparently.


It was not the first, either: See "Cleveland Freenet" https://cfn.tangledhelix.com/


The original freenet was the Case Western freenet; I still remember my email, as addresses were issued sequentially from aa000 to zz999, skipping some. It was `freenet.edu`

I primarily used it for research and Usenet, most of the time through UUCP to get the updates.



Just a note that this link is not by a Freenet developer.


Is there a 'sanctioned' option or do you have any pointers on how to layer anonymizing systems on top?


No design is fleshed out at this point, but algorithms like mixnets and dining cryptographers should be implementable on Freenet's primitives (if not, we'll improve the primitives).


Locutus as in borged Picard?


Maybe, though it could also be that locutus is a Latin word that essentially means "speak". Adds another layer of meaning to its use in Star Trek. :D


> locutus is a Latin word that essentially means "speak"

Or more precisely “spoken”, the (nominative masculine singular form of the) past passive participle. (In English, other Latin words from the same root get you locution, colloquium, etc.)


Usually Latin past participles are passive, but loquor is a deponent verb which has only passive forms with an active meaning, so locutus means "spoken" as in "I have spoken" rather than "the words have been spoken".

https://en.wikipedia.org/wiki/Deponent_verb


It makes sense because the Borg chose Picard as a representative for all humankind.


and there are some reasons to believe the borg were running some kind of distributed system that was weighting certain nodes at various times.

Perhaps a future fork of freenet will become borgnet? Seems futile, but resistance often can be.


Borg reference (decentralized architecture), and Latin "speaker" - but Locutus was just a working title for what's now called Freenet.


I think throughout Star Trek history the borg have proven themselves to be a noble collective worthy of naming your repo's working title after.

How do I join the collective chatroom? And of course I want to skip The Matrix. Been there, done that. If assimilation is painful, I'll endure.


`cargo install freenet` will work, but not yet.


So apparently this project isn't the old Freenet, that's been renamed to "Hyphanet" - https://www.hyphanet.org. The same team started a new project, originally called "Locutus" and then renamed that to Freenet. A bit confusing.

I remember hearing about Freenet like 20 years ago, I think on Kuro5hin, kind of amazing it's still going.


It’s not the same team. The Freenet Team is developing Hyphanet now.

Ian had not been active in Freenet development for over a decade (he managed the non-profit but didn’t interact with actual development) before he started Locutus. He initially started Freenet in 1999, but had long been absent.

The renaming was done against the explicit objection of the core development team. For further background, see the links in https://www.hyphanet.org/freenet-renamed-to-hyphanet.html

Hyphanet is the original Freenet and the next release is just blocked by finalizing the new signing setup for the Windows installer. See https://github.com/hyphanet/fred/releases/tag/build01498


The rebrand/redesign is explained here: https://freenet.org/faq#faq-4


Hi Rusty! :-)


The last time I played with Freenet, it was glacially slow. Like, it took several minutes to load a basic page. What's the user experience like today?


Note that the linked video is about a complete redesign of Freenet, see https://freenet.org/faq#faq-3


The new Freenet is different from the old Freenet. And people will not necessarily know which one you are talking about.


Better, but still slow. It gets faster over time as you build trust and more peers connect to you. But it is tolerable on startup for an "I'm bored, let's look at crazy" session.



That video is from last September; this video is intended to complement it. Both are worth watching, and there shouldn't be too much overlap.


I’m still convinced the Internet Computer has a real chance at being the replacement for AWS et al


I read much about this project. It seems like really smart people, really good work.

They have some weird-feeling marketing? I don't know what to think about it; maybe it's a "technical people doing marketing" problem?

I also don't know how to trust the "shared/cooperative multiparty cryptography" thing. I don't know of anyone else doing the same thing, and I have some underlying feeling that it can't really work, can it?


Right now we're mostly trying to talk to software developers about building on Freenet or contributing to the project. Mass market comes later.


So how does freenet prevent the spreading of slander, libel, or otherwise abusive content?

I understand not many places have an answer for this question, I'm wondering if Freenet does yet.

I'm asking this in good faith. I actually like Freenet.


Not everyone believes censorship is a feature worth compromising the design of a network.

How does the USPS prevent libel?

How does the global telephone network prevent slander?


I presume you've flipped the bozo bit on me and will take my response in bad faith but I'll do so anyway.

There's a difference when something is intentionally anonymized, distributed, and designed for perpetuity. These properties give Freenet an affordance for potential malicious use.

here's some recent instances, specifically with freenet: https://news.google.com/search?q=freenet%20arrest

There's no server to seize, no way to take it offline, it's there forever.

Last time I tried it, there was no way for me to blacklist content I did not want to be a node for. If a policeman had seized my laptop, who knows what they could have found in the Freenet databases, I certainly had no idea.

Maybe these people in prison are innocent, non-technical, normal computer users who just downloaded Freenet a long time ago, poked around for 20 minutes, and it's been running for years without them knowing.

A jury of 12 non-technical people hears "This person's computer downloaded and distributed gigabytes of child porn" from the testifying police officer, and that's all that's needed, I'd suspect.

You can call this the "4chan" problem. Regardless of the virtues of the people who built it, there weren't enough safeguards to prevent the bad actors from becoming the dominant users and eventually the inmates who ran the asylum.

I want the principles held by Freenet to be preserved and not polluted by propagandists, fabulists, scammers, and pornographers.


> There's no server to seize, no way to take it offline, it's there, encrypted, for perpetuity.

I usually disagree with any argument trying to compromise on anonymity, privacy, and free speech.

Fifteen years ago I was fascinated by Freenet and I2P as a teenager. A censorship-free resilient world-spanning network. One populated by actual anarchists, cypherpunks, and their manifestos, where I could perhaps learn how to make rockets from ping pong balls.

Until one day I clicked on one random nondescript link too many.

After the initial shock I got rid of Freenet, then and there, for good. And I2P. And I have never looked back on my brief cypherpunk phase.

You just reminded me that what I saw that day is still out there, probably will be until the day Freenet dies, and nothing can be done about it. Just the thought of it makes me sick.

I am aware this is anecdotal personal experience, but it’s the kind that makes it harder to remain strongly committed to absolute anonymity, censorship-resilience, and privacy, over all else.


I am sympathetic to the cyberpunk anarchist position, but as the system is currently set up, from my understanding, some non-technical person could, say, purchase a used laptop from Facebook Marketplace where the previous owner didn't remove Freenet, get caught up in a child porn sting, and go to prison, not only having no idea what Freenet is, but also having never engaged with, sought out, or viewed the material, merely because that program was running in the background.

I'm not trying to be a scold here; I've been running Tor for 20 years, and I actually care about this stuff. I just don't think people should be sent to the clink and outcast as a child pornographer for a background process.


This concern seems misplaced. You would have the same result with a used laptop that contained unlawful materials for any other reason. You have no control over what the previous owner put on the machine.

Which is why you can have a look at 18 U.S.C. § 2252A and see a lot of instances of the word "knowingly". Now, could a malicious prosecutor convince a technically unsophisticated jury that you're guilty even though you didn't know it was there? Maybe, but that's also true in all of the other cases. It's the same thing if you're infected with some trojan and then a criminal enterprise is using your PC to distribute their unlawful materials. All it takes is for a lazy prosecutor to decide they'd rather get the notch in their belt by prosecuting a victim than have no case to trumpet because the actual perpetrators can't be identified or are in a Eurasian country that doesn't extradite.

The problem here is not "Freenet exists", the problem is that there aren't enough safeguards against delinquent prosecutors.

If anything the situation would be improved by making things like this more common, so that normal people use them themselves for distributing ordinary content and then realize how it works and put the blame on the actual perpetrators instead of the perpetrator's UPS driver.


Sure but things could be done to ameliorate the situation.

This is just spitballing, but you could have opt-in and opt-out lists whereby, if a certain threshold of nodes opts something out, it becomes opt-out by default, thereby requiring some type of agency to be exercised for more controversial material.

You could also store things using some distributed encryption whereby multiple parties have to be online for the content to reassemble in the clear and only on the viewers machine.

You could add append-style annotations so lies and misinformation at least have the opportunity of being challenged.

You could have a more general, region-specific, illegal-content opt-in for each node that has to be explicitly reaffirmed by the user, say every 30 days.

I'm not saying ban illegal content in the same way that the existence of stabbings doesn't mean you should ban knives, just make sure innocent people don't get flagged for it.


> you could have opt-in and opt-out lists whereby, if a certain threshold of nodes opts something out, it becomes opt-out by default, thereby requiring some type of agency to be exercised for more controversial material.

This would be an immediate DoS/censorship mechanism. Trolls would create a bunch of nodes and then have them all opt out of something they want to DoS so it falls out of the network.

> You could also store things using some distributed encryption whereby multiple parties have to be online for the content to reassemble in the clear and only on the viewers machine.

This is already how some of these systems work. The data distributed on the network is encrypted and to download it you get an identifier (e.g. a content hash) to locate it with along with the decryption key. The decryption key is only a few bytes and it can be included in the equivalent of a hyperlink. Without it anyone hosting the data can't read it.

Mega does something similar to this by putting the decryption key in the URL fragment so it isn't sent to the server but then client-side javascript has the key to decrypt the content with. This has poor security properties in their specific implementation because the server could be serving malicious javascript to extract the key, but new custom protocols don't have to allow that. Moreover, it might be worth something in terms of keeping attackers or insiders from snooping on their customers' data because someone with access to stored content might not necessarily have access to inject malicious scripts into client pages.
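The scheme described above (content addressed by a hash of the ciphertext, with the decryption key carried in the link itself) can be sketched in a few lines. This is a toy illustration with hypothetical function names and a stand-in stream cipher built from SHA-256, not any real network's protocol; a real system would use an authenticated cipher such as AES-GCM.

```python
# Toy sketch (NOT real crypto): content is encrypted with a random key,
# stored under the hash of the ciphertext, and the "link" carries both
# the locator hash and the decryption key, like a URL fragment.
import hashlib
import secrets

def _keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher (toy only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_and_store(store: dict, plaintext: bytes) -> str:
    key = secrets.token_bytes(16)
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    locator = hashlib.sha256(ciphertext).hexdigest()
    store[locator] = ciphertext          # hosts only ever see ciphertext
    return f"{locator}#{key.hex()}"      # key travels in the link, not to the host

def fetch_and_decrypt(store: dict, link: str) -> bytes:
    locator, key_hex = link.split("#")
    ciphertext = store[locator]
    # Content addressing makes the data self-verifying:
    assert hashlib.sha256(ciphertext).hexdigest() == locator
    ks = _keystream(bytes.fromhex(key_hex), len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

store = {}
link = encrypt_and_store(store, b"hello world")
assert fetch_and_decrypt(store, link) == b"hello world"
```

Anyone holding only `store` sees opaque ciphertext; anyone holding the link can both locate and decrypt the content.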

> You could add append-style annotations so lies and misinformation at least have the opportunity of being challenged

This operates at a different layer, e.g. you could have a browser that does this and people could use it if they want to without building it into HTTP.

> You could have a more general region specific illegal content opt in for each node that has to be explicitly reaffirmed by the user, say every 30 days.

Then you would need someone to maintain all the lists even though it would be an exercise in futility rather than resulting in effective censorship of the material, since it would only cause it to be hosted in some other jurisdiction which every client could still transparently access as if nothing happened.

The problem here is that you either have an effective censorship apparatus or you don't. As soon as you have one, Saudi Arabia wants to enforce its blasphemy laws and China doesn't want anyone talking about Tank Man, which means you don't actually want one. Building half of one is just assisting the villains who want to build the whole thing.

> I'm not saying ban illegal content in the same way that the existence of stabbings doesn't mean you should ban knives, just make sure innocent people don't get flagged for it.

Prosecutors generally know the difference between "this is a crime ring" and "this is a Tor exit node". However, some of them are schmucks. The only real way to fix this is to make sure laws and courts don't allow them to ruin innocent lives, because the details of the technology aren't going to matter when someone who shouldn't be in a position of power has a vendetta that goes unchecked.


Everything is defeatable. I was hoping more effort than "absolutely nothing" had been done for the cause here.


Your premise is that something should be done. Censorship resistance is not a bug.


None of this is censorship. If people want to look at dirty photos then I honestly don't care; it's none of my business. I would, however, like to be able to run Freenet without those being served from my computer.


We're planning a reputation system based on the idea of "web of trust" that should ensure you don't get exposed to anything you don't want to be.

It's based on a system[1] by the same name in the original Freenet that was used to prevent spam, and which worked well.

[1] https://github.com/hyphanet/plugin-WebOfTrust


Correction: that is used to prevent spam.

Having bought a used computer, phone, etc, always do a factory reset, or something even stronger. Never trust the contents to be safe.


I may have been unclear.

I meant that my first instinct when you mentioned libel and slander was that neither are worth risking compromising, even slightly, the privacy and censorship resilience these tools offer.

But your comment also reminded me of what I had encountered on Freenet.

And I wasn’t so sure anymore.


Sure but there's harmful slander that can ruin people's lives and cause real harm.

For instance, manifestos by hate crime mass murderers are littered with conspiracy theories and wild lies.

It's faster, more scalable, and less effort for propagandists to lie than for researchers to carefully document and disseminate good-faith efforts at reality. It's also easier to understand manufactured bullshit than messy, complex reality.

It's the fundamental problem. Reality will not sort itself out because fakery is faster and can be made more compelling, so it spreads quicker and is, unfortunately, more believable by more people.

I know fixing it amounts to waving a magic wand, but that doesn't make the current state any more acceptable.


That’s why moderation must scale better than lies.

In the original Freenet (now Hyphanet) it does, because when several people you communicate with block something, it disappears for you, too. When that was added (years ago, because having no centralized power means it was needed much earlier), communication got much more friendly in a pretty short amount of time.

You can create a new ID to see it anyway — or explicitly make a spammer visible — but there’s usually no need, because you’ll actually not want to see stuff your friends identified as lies. And you can see who blocks what, so if you doubt your friends (e.g. because something you like disappeared), you can stop their blocking from affecting you.
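The blocking rule described here can be sketched roughly as follows. The names are hypothetical (this is not Hyphanet's actual Web of Trust code): content disappears for you once enough of the peers you trust have blocked it, and removing a peer from your trusted set stops their blocks from affecting you.

```python
# Minimal sketch of trust-based decentralized moderation (hypothetical names):
# content is hidden for you once enough of your trusted peers block it.
def is_hidden(content_id, my_trusted_peers, blocks, threshold=2):
    """blocks maps peer -> set of content ids that peer has blocked."""
    votes = sum(1 for peer in my_trusted_peers if content_id in blocks.get(peer, set()))
    return votes >= threshold

blocks = {"alice": {"spam-1", "post-9"}, "bob": {"spam-1"}, "carol": set()}
trusted = {"alice", "bob", "carol"}

assert is_hidden("spam-1", trusted, blocks)      # two trusted peers blocked it
assert not is_hidden("post-9", trusted, blocks)  # only one did
# Distrust alice's blocking and her decisions stop affecting you:
assert not is_hidden("spam-1", trusted - {"alice"}, blocks)
```

No central authority is involved: each user evaluates the same rule against their own trust list, so two users with different trust lists can see different views of the network.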


Nothing about the described scenario is remotely unique to Freenet. By a long shot.


what did you see

this reads like a creepypasta from 'secure contain protect'


No creepypasta here. Just a barebones html page full of photographs of children being abused.

I didn’t think it necessary to spell it out loud, but apparently it was.


what's a lot worse than pictures of children being abused is children actually being abused, for example right now in gaza

a photo of this child being abused was published in the new york times https://en.m.wikipedia.org/wiki/Phan_Thi_Kim_Phuc and that photo is generally credited with significantly shortening the war that caused that abuse

many serial killers and other abusers have been caught, convicted, and sentenced because the recordings they made of their misdeeds were preserved

and that is the general pattern. censorship of evidence of abuse perpetuates the abuse and protects the abusers more than the victims

see, it's much easier to have a rational discussion when you spell out your position instead of relying on winks and nudges


Uhhh. Look I think this is sort of an intractable problem. Technology is advancing. Humans are terrible. No technical or political solution is going to stop atrocities. That's of course not to say we shouldn't try to educate, prevent, serve justice, etc.

But... Are you implying that random people being unwittingly exposed to CSAM is somehow going to cause a reduction in production of CSAM? Because good lord, I'm not sure I can understand that train of thought.


hopefully the explanation in https://news.ycombinator.com/item?id=40712506 is helpful to you


While it may not have been your intent, your response felt dismissive and condescending to me. Additionally, the first paragraph seems to engage in whataboutism. Which, should it be the case, doesn’t serve the point you are trying to make.

I will however grant you the benefit of the doubt and engage further.

For you, child abuse might be an abstract concept. To me, it is gut-wrenching memories. I would have spared you the details, but it seems you prefer everything to be spelled out for you. So here it is.

I grew up in a country where impoverished families in rural areas often send their children to the city, supposedly to attend school. Instead of being educated or taken care of, many of these children are sent to beg on the streets, to bring back money, only to be beaten and starved until they meet their quotas. Some are raped. Some run away.

Of those who run away, some return home. When they do, their families often send them back, either because they don’t believe their stories, refuse to believe them, or simply cannot afford to keep them.

Those who don’t return end up living on the streets on their own, forming bands of children begging, stealing, and proudly telling you how they’re the ones buying the solvent to sniff for the group.

Despite witnessing this firsthand, I can only imagine that the full horror of it is far worse than I will ever understand.

To top it all off, when I came back years later to visit some friends, I learned from social workers trying to get these children off the street that, as they grow up, some of the older kids often start raping the younger ones.

So, you see, I really could have been spared the lecture.

These experiences shape my deep concerns about the impacts of absolute anonymity and privacy. While I instinctively value these principles, actually understanding the real-world implications makes it difficult to hold them above all else without question.

There is a significant gap in our experiences. While I am glad many have no first or second hand knowledge on any of these topics, this gap might make it challenging for the two of us to have, on this topic, the fully rational discussion you appear to want.

Ironically, this difficulty perfectly underscores my point: while privacy and freedom of expression are crucial, some experiences compel me to question how we balance these ideals with the need to prevent harm.

Now, onto the censorship of evidence issue.

First, I fail to grasp how exposing unwitting people to this kind of content would, in any way, help find abusers. Your Vietnam war example might have been valid, arguably, if public awareness on the issue of child pornography needed to be raised. If, like during the Vietnam war, public opinion was divided on the topic. Thankfully, it isn’t.

Second, moderating such content doesn’t mean it couldn’t or shouldn’t be reported to law enforcement. Such systems already exist. They have their faults [0], but they exist.

> see, it’s much easier to have a rational discussion when you spell out your position instead of relying on winks and nudges

I assumed the issue was clear and didn’t want to force explicit details onto unwitting readers. I will not apologise for that.

[0]: https://www.hackerfactor.com/blog/index.php?/archives/929-On...


it sounds like this issue is enormously more of an abstract concept to you than it is for me, which may be why you're more interested in whether a proposed policy such as limitations on anonymous communication will result in you seeing pictures of child abuse than whether it will cause child abuse. because i have a lot more knowledge of the issue than you do, i don't want the policies you favor, because, as i said in my previous comment, they perpetuate the abuse and protect the abusers more than the victims

my vietnam war example wasn't some kind of analogy or metaphor. it was, literally, a photo of a naked child being horribly abused. at the time, publishing it was legal, although plausibly today it wouldn't be; if i recall correctly, wikipedia has been blocked in various countries for containing that picture, and it has been blocked on facebook: https://breakingnewsenglish.com/1609/160909-napalm-girl-phot.... public opinion was not in fact divided on the topic of horribly abusing children; it was simply uninformed about who was doing it and how to impede them. you are similarly uninformed today

but you already have enough information to understand that you are in the wrong. when you opened that page, why did you just close it? why didn't you send those pictures to the police, to investigative journalists, or to a private investigator? unlike the http web, freenet was protecting you by ensuring that whoever uploaded the pictures to freenet (probably the abuser) didn't have your ip address or any other way to figure out who reported them. perhaps you made that decision because you're living in a legal regime which penalizes their mere possession. but when you did, you personally became complicit in perpetuating that abuse, in order to comply with the very censorship regime you are defending

and that is the general pattern of how censorship relates to abuse

here's another photo of an abused five-year-old: https://en.wikipedia.org/wiki/Congo_Free_State#/media/File:N... more accurately, it's a photo of all that was left of her after the abuse. this kind of abuse continued for years in the congo free state, only stopping when foreigners with diplomatic immunity were able to visit, document it, and force the king himself to give up the colony where he had institutionalized such abuses. if the congolese had had access to freenet, it would have stopped years earlier

and that is why right now journalists and other people in the gaza strip are being killed when they try to get internet access to share the news of what is happening there. even the un high committee for refugees complains that its own workers there are unable to communicate reliably. even today, systems like freenet remain marginal and relatively ineffective, and mass human rights atrocities—including sexual and even worse abuse of children—are the predictable result of that situation

so no, it wasn't whataboutism


> it sounds like this issue is enormously more of an abstract concept to you than it is for me

Maybe. On the off chance you may have some first hand experience with these topics, I would be inclined to believe you.

However, considering the standard for transparency you yourself have set earlier, I would have needed more than a simple affirmation.

> i don't want the policies you favor

I do not want them either. I do not favour them. Quite the opposite. That’s actually my entire initial point: what I favour is hard to reconcile with my experiences and the emotions associated with them.

You could have offered some interesting, helpful, and constructive perspective on that. And re-reading your replies, I’m pretty sure you tried in your own way. But instead it reads like you chose to come waltzing in and bite my head off.

> when you opened that page, why did you just close it? why didn't you send those pictures to the police, to investigative journalists, or to a private investigator?

I was 16, living in a foreign country.

> so no, it wasn't whataboutism

And I am glad it wasn’t. Thank you.

As for the rest, I definitely understand and agree on the need for means to securely and reliably communicate information that some states would rather censor. Despite our differences, it is clear we both care about these issues.

I am however afraid this is all the common ground you and I will be able to find. So I will stop engaging any further and step away from this conversation.


Blocking content is done on a higher level: on the level of which links are shown.

The default bookmarks only include links to safe content and the core communication tools (Sone, FMS) have decentralized moderation with which people sharing horrific content get blocked without creating centralized power structures. If you see something you think should not be there, you block it, and it disappears for all who trust your judgement.

And yes, it took a lot of work to get to this point.

The advantage here is: as soon as no one accesses that horrific content anymore, it disappears. So this moderation actually removes horrific stuff. Because the original Freenet is not actually for perpetuity: when a media file is not accessed for a month, it is gone.
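The expiry behaviour described above (content nobody requests for about a month falls out of a node's store) might be sketched like this. The class name, method names, and the 30-day window are illustrative assumptions, not Freenet's actual implementation.

```python
# Toy sketch of access-based content expiry (hypothetical names): a node
# tracks when each item was last requested and drops anything that has
# gone unrequested for roughly a month.
import time

EXPIRY_SECONDS = 30 * 24 * 3600  # ~one month, illustrative

class ExpiringStore:
    def __init__(self):
        self._data = {}          # key -> value
        self._last_access = {}   # key -> timestamp of last insert/fetch

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._data[key] = value
        self._last_access[key] = now

    def get(self, key, now=None):
        now = time.time() if now is None else now
        self._last_access[key] = now  # fetching keeps content alive
        return self._data[key]

    def sweep(self, now=None):
        # Drop everything not accessed within the expiry window.
        now = time.time() if now is None else now
        for key, last in list(self._last_access.items()):
            if now - last > EXPIRY_SECONDS:
                del self._data[key], self._last_access[key]
```

With this rule, moderation that steers readers away from content is enough to eventually remove it: once nothing refreshes an item's last-access time, the next sweep discards it.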


> worth compromising the design of a network

That depends on what the design of the network is.

In my mind Freenet is too free for reasons discussed in sibling comments (but in short, literally no safeguards against Not Safe For Life content, by design).

For me, practically, federation provides a far improved middle ground. People can still freely distribute content without being beholden to any particular middle man, but there is more visibility afforded to node operators about what they're hosting and they therefore can both remove content that they don't want to be hosting and remove users from their network who are making problems.

Gossip protocols are another solution which also works for similar reasons.

Ultimately, anonymity is an anti-pattern in the same way that the hot new zero-trust stuff is in the blockchain world. Humans, social beings that they are, fundamentally operate on trust, which fundamentally requires identification. Removing either of those creates more problems than it solves.


> Ultimately, anonymity is an anti-pattern in the same way that the hot new zero-trust stuff is in the blockchain world. Humans, social beings that they are, fundamentally operate on trust, which fundamentally requires identification. Removing either of those creates more problems than it solves.

Quite the opposite. Forced identification is an instrument of fascism. There is a reason the phrase "papers, please" is associated with the villains.

Communities can then be layered on top. But even there, what you need is a persistent identifier with which to build a reputation, not a government tracking number with which to be extracted from your bed at 4AM and shipped off to a prison cell if you're accused of crimethink.

The talk of "community standards" gets to the root of it. To have community standards you have to have a community, and each community will have its own standards. Which means the standards belong in the community and not in a generic protocol at the core of the network used by diverse communities with mutually incompatible ideals.


I think we're agreeing.

> But even there, what you need is a persistent identifier with which to build a reputation, not a government tracking number

The persistent identifier is all I mean. I agree that tying it to a government-issued identification is problematic since it then gives the government the administration/moderation power. As long as there is a persistent identifier (and one the community owns so it can take meaningful moderation/administration action when necessary), then we're good.

By "anonymity" I mean the absence of a persistent identifier (for example, someone uploading something to BitTorrent is anonymous by default, as far as I know).

> To have community standards you have to have a community, and each community will have its own standards. Which means the standards belong in the community

Also agree with this. This is where federation really shines in my book, as it lets each community apply its own standards while also enabling networking across communities with sufficiently compatible ideals while retaining the autonomy of each community.


Most communication tools in the original Freenet nowadays have persistent identifiers, because the experience within Freenet showed that not having any moderation causes constructive communication to break down.

The experience there proves that the methods used there for decentralized moderation succeeded at keeping communication friendly without centralized power or forced identification.

For more background see https://www.draketo.de/software/decentralized-moderation


> As long as there is a persistent identifier (and one the community owns so it can take meaningful moderation/administration action when necessary), then we're good.

> By "anonymity" I mean the absence of a persistent identifier (for example, someone uploading something to BitTorrent is anonymous by default, as far as I know).

But these are separate layers. It's the same way that HTTP or TCP is "anonymous" in the sense that it doesn't assign any names to users (and anybody can connect to the coffee house Wi-Fi or use a VPN etc.), but Reddit (which uses HTTP and TCP) has usernames.

A P2P content-addressable storage system can be "anonymous" and it's not a problem, because all it's doing is hosting raw data. Then you build a P2P Reddit on top of it that has human moderation etc., but that's a separate thing made by separate people, and there could be arbitrarily many of them because they're not fused together.

It's like Netflix and Hulu could both use BitTorrent in the same way that PeerTube does, even though they're independent competitors. The main reason they don't is that Hollywood wants to hold out the pretense that using BitTorrent is some kind of dishonorable activity, not that there is anything actually unsuitable about it.

> This is where federation really shines in my book, as it lets each community apply its own standards while also enabling networking across communities with sufficiently compatible ideals while retaining the autonomy of each community.

A lot of the existing protocols are poorly designed around this though. Like one of the big problems on Mastodon is that users have a "home instance" and can't migrate from it, but then a lot of big instances don't federate with other instances by default, and if your instance dies then your account goes with it. This also makes it impossible for a user to use one account for everything, because there isn't necessarily any instance that federates with all of the communities the user wants to participate in. But since users will want that, and large instances will want dominance and can achieve it by not federating with small instances, it's a centralizing force that encourages the Gmail-ification of the system.

What you want is many diverse communities that anybody can fluidly move between and participate in simultaneously, not recreating the status quo and calling it a distributed system.


> Ultimately, anonymity is an anti-pattern in the same way that the hot new zero-trust stuff is in the blockchain world. Humans, social beings that they are, fundamentally operate on trust, which fundamentally requires identification. Removing either of those creates more problems than it solves.

You will forgive me, "indigochill", if I fail to respect your argument against anonymity made from behind a fake name.


You will forgive me if I find that your attempt to find a flaw in the GP poster's argument is itself flawed.

This use of a nickname is pseudonymous, not anonymous. HN is specifically not anonymous.


Do you have any rhetorical examples that support broadcasting or mass communication? The USPS has significant regulations around bulk mail and commercial mail, including restrictions on the content of the mail. Likewise there is heavy regulation in the United States of robocalling, telemarketing, and mass texting.


Those are laws. It is also illegal to distribute libelous material via Tor or Freenet. The relevant point is that the USPS doesn't unseal your envelopes and read your mail and anyone can stick a stamp on that sealed envelope and drop it into a mailbox with no return address and it will be delivered.


In your analogy, what is the equivalent of unsealing envelopes for Freenet, or for that matter, the World Wide Web? I thought what we were talking about was whether there is (or should be) active enforcement of regulations around the content you distribute on a particular medium.


Huh? This entire subthread is about Freenet and moderating the medium. The analogy is that if USPS were trying to do so, it would be absurd.

It also highlights why I think this is nearly intractable. Distributed, censorship-resistant designs lend themselves to resiliency and permanence. I just have this gut feeling you don't get both. And frankly I think the people trying to shove the cat back into the bag are a bit naive.


But unsealing envelopes doesn’t work as an analogy with FreeNet or the web, which are largely not private communication. We’re talking about removing public content, not reading private messages. And my point is that, while the USPS will (presumably) not unseal envelopes, they will go after you if you send certain content via the postal service.


> Likewise there is heavy regulation in the United States of robocalling, telemarketing, and mass texting

And how is that going? After 100 years there is still very little you can do against phone abuse.

Robocalls Finally Have the U.S. Government's Attention https://time.com/6513036/robocalls-government-action/

> Around 33 million robocalls are made each day to Americans

Robocall Statistics https://worldmetrics.org/robocall-statistics/


They’re incompetent at enforcing it, but that’s not really relevant.


Are libel, slander, and censorship all similar concerns in your opinion?

I see libel and slander concerns over what another person says while censorship is an entity with more authority/power stopping me from what I want to say. The implied power imbalance alone makes censorship a much bigger concern to me, though maybe I'm biased there.


Censorship is required if one wants to prevent libel and slander. There are some people who won't stop doing it unless an entity with more authority/power stops them from saying it.


In your opinion, how do you weigh collateral damage when it comes to censorship and free speech?

I've always been of the mind that I'd rather let 10 guilty people walk free than imprison one innocent person. I extend that to free speech as well. I'd rather let 10 libel or slander complaints work their way through the court system than wrongfully censor one person's speech.


Everyone has more authority over their mailbox than any sender. Hence local spam filters block a lot of content without being censorship. People can share their findings about the patterns of spam, hence collaborative / federated spam rejection lists. They are not dominated by any single entity or a small clique, and every user is free to heed or ignore any part of the list, or none at all. Hence it's not censorship.

These mechanisms formed in the email network long before the current oligopoly, and they still work.

So I hope there is an honest way to filter out generally objectionable content by a collective effort of the users who care, without turning the filter into a censorship tool.
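To make the idea concrete, here's a minimal sketch of an opt-in, federated blocklist of the kind described above. All names and the data layout are illustrative assumptions, not any real protocol: each user subscribes to whichever shared lists they choose, and local decisions always override the shared lists.

```python
# Sketch of opt-in federated blocklists: the user combines shared lists
# into a local filter, and can always unblock (or block) anyone locally.

def build_filter(subscribed_lists, local_allow=(), local_block=()):
    """Merge subscribed blocklists into one local filter.
    Local decisions override anything in the shared lists."""
    blocked = set(local_block)
    for shared in subscribed_lists:
        blocked |= set(shared)
    blocked -= set(local_allow)  # the user can always override a list
    return blocked

def accept(sender, blocked):
    """A message is accepted unless its sender is in the local filter."""
    return sender not in blocked

# Two shared lists; the user disagrees with one entry and unblocks it.
community_list = {"spammer@example", "bot@example"}
friends_list = {"troll@example"}

blocked = build_filter([community_list, friends_list],
                       local_allow={"troll@example"})
```

Here `accept("spammer@example", blocked)` is False while `accept("troll@example", blocked)` is True, because the local allow decision outranks the subscribed list — which is the property that keeps this filtering rather than censorship.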


This actually happens in the original Freenet (now Hyphanet): the communication tools use propagating trust and blocking. So if someone you communicate with regularly sees something objectionable and blocks it, it disappears for you, too.

But you can always create a new identity that does not trust the other person — or set your own local trust — to see the content anyway.

The important point is that this way, resistance against disruption of communication scales better than the disruption (spam, libel, harassment, ...).
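A rough sketch of the propagating-trust idea, loosely inspired by the Web of Trust approach in the original Freenet / Hyphanet. The function names, ratings scale, and one-hop propagation here are illustrative assumptions, not the actual protocol: a direct local rating always wins, and otherwise the viewer inherits the opinions of the peers they trust.

```python
# Hypothetical sketch of propagating trust: a viewer's own rating takes
# priority; absent one, the ratings of directly trusted peers are averaged.

def effective_trust(ratings, viewer, target):
    """ratings: {user: {other_user: score}}, scores roughly in -100..100."""
    direct = ratings.get(viewer, {})
    if target in direct:          # local override always takes priority
        return direct[target]
    votes = []
    for peer, score in direct.items():
        if score > 0:             # only positively trusted peers propagate
            peer_rating = ratings.get(peer, {}).get(target)
            if peer_rating is not None:
                votes.append(peer_rating)
    return sum(votes) / len(votes) if votes else 0

def is_visible(ratings, viewer, author, threshold=0):
    return effective_trust(ratings, viewer, author) >= threshold

# Alice trusts Bob; Bob has blocked a spammer Alice never rated,
# so the spammer's posts disappear for Alice too.
ratings = {
    "alice": {"bob": 100},
    "bob": {"spammer": -100},
}
```

With this data, `is_visible(ratings, "alice", "spammer")` is False because Bob's block propagates to Alice; if Alice sets her own positive rating for the spammer, the local override makes the content visible again, matching the "set your own local trust" escape hatch described above.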


Anonymous libel does little damage because it has no credibility.


Freenet is not a place or a website.

This is literally the same as asking "so how does the internet prevent spreading of slander or libel?", or as another commenter pointed out, asking how the global telephone system does the same. The question has no answer because the question doesn't make sense. It's a question that shows a fundamental lack of understanding.


Basically what you are asking is "how does a free-speech platform prevent the spreading of speech I do not like?"

Any platform that provides an answer to such a question cannot be labeled "free".


I'd say that's the wrong priority to focus on in a project like this.


Even though I dislike and oppose abusive and questionable content, I'd take it over centralized censorship or state backdoors.


There were unofficial lists of content keys for child porn you could use to purge them from your local storage. That was the most you could do.


"slander, libel, abusive content" is a slope we've all seen slipped on. People in power use it to censor views they don't like.


And people use it to paint false images of people or situations to suit their agenda. Lies spread more readily than truth.


And centralized moderation tools gravitate towards favoring these lies.

Even worse: lies about climate change are nowadays being spread via paid Youtube advertisements. This kind of centralized power should never have existed in the first place.


Unfortunately until god comes down from heaven we have no way of figuring out what is slander, libel, or otherwise abusive content.


That's called moderation, dude! :P

I think some moderation is important so things don't get out of control.


So do the Imams in Iran. At least they'd stop hanging homosexuals from cranes.


You're mixing moderation with punishment outside of the platform. Those things are extremely different.


No, they are exactly the same, but taken to their extreme, yet inherent evolution.

Putting you in jail or hanging you for "illegal speech" are just two degrees of punishment for violating the laws of the land, as it were. The degree of punishment is just a cultural thing. The only question is one: are you free, or not?

No one is forcing you to use free-speech platforms, but to criticize them you need to understand what freedom means in the first place.


This thread is not about jail, but preventing you from posting something on a specific platform. (Moderation) If you think that's exactly the same as hanging people, you need to talk to actual people outside...


Again, the Imams would love to moderate your internet.

They don't want to hang you for it.

They just want you to never hear anything positive about homosexuality and the people they are hanging.

Understanding this is very difficult for people who support their government's form of censorship, like everyone working at googleface.


On the other hand, moderation can stop troll farms posting negative things about homosexuality non-stop. And it can help stop things like doxxing and promoting criminal activity and downright evil stuff.

I don't oppose privacy tools existing, especially for edge cases like investigative journalism and oppressive regimes as you mentioned. I mostly oppose people using and promoting unmoderated or inadequately moderated services. I guess what's being discussed here is mostly infrastructure, not services, but I still think the infrastructure may be able to help promote or facilitate healthy services.

I've seen the outsized harm free-for-all spaces (usually "for teh lulz") can do to society, when we thought it was just innocent "shitposting".

I encourage people to participate in spaces where you know there's ethical moderation (that also leaves leeway for cultural differences), avoid otherwise, and don't encourage anyone to participate there. I think HN is a pretty good example of that.


Did you see that there is nowadays information about abortion on the original Freenet (now Hyphanet)?

I for sure did not expect that this would be needed for people in the USA. But that’s what happened.



