
I want to solve this problem.

I've thought about solving this problem too.

One idea I've had is a content-type-restricted network that permits only text. That would allow utterly uncensorable communication: chat, planning revolutions, whatever, but it wouldn't be useful for CP. (Unless you like ASCII-art CP.)

It could support ANSI. That would be neato. It would feel like the old BBS world. Wonder if anyone would still care if a name like ViSiON-X were stolen for it. :)

This has 2 problems:

* You can base64-encode any file, so it'll look like text (see the sketch after this list). Limits on message size might solve that.

* Sometimes, photos and videos are important. Think of the Abu Ghraib torture pictures or the Tiananmen Square Tank Man. Sometimes photos and videos are censored and should be shared with the world.
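To make the first problem concrete, here's a minimal Python sketch (standard library only; "photo.jpg" is just an illustrative filename) of how any binary file passes as text:

    import base64

    # Any binary file becomes plain printable ASCII...
    with open("photo.jpg", "rb") as f:
        raw = f.read()
    text = base64.b64encode(raw).decode("ascii")

    # ...and round-trips back to the original bytes.
    assert base64.b64decode(text) == raw
    print(text[:60])   # gibberish, but it is "just text"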

Yup, that's true.

Maybe binary content propagates differently. Text that meets certain criteria is replicated indiscriminately, but binary content is only replicated when a user votes on it.

Edit: you could apply game theory to this problem. Model the network as a graph and write an agent-based modeling rule set for... say... CP-propagators and non-CP-propagators. Run iterative simulations of different propagation rule sets and weightings/parameters. Now introduce bad actors in the form of, say, government agents trying to suppress political discourse. The difference is that average-joe will cooperate in pushing out CP but will "defect" in a game with the other kind of bad-actor. You're looking for rule-sets and parameters where the CP gets pushed to the margins of the network or excluded but where the other kind of bad-actor is also excluded.
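A toy version of that simulation, just to show the shape of it (graph size, neighbor count, and the propagation rule are all made-up parameters): ordinary nodes cooperate on normal content but defect on CP, and we measure how far each kind spreads.

    import random

    random.seed(1)

    # Hypothetical agent-based sketch: a random directed graph in which
    # ordinary nodes relay normal content but refuse to relay CP, while
    # a small fraction of bad actors relay everything.
    N, NEIGHBORS, BAD_FRACTION = 1000, 8, 0.02
    graph = {i: random.sample(range(N), NEIGHBORS) for i in range(N)}
    bad = {i for i in range(N) if random.random() < BAD_FRACTION}

    def reach(kind, start, steps=10):
        seen = {start}
        for _ in range(steps):
            frontier = set()
            for node in seen:
                if kind == "cp" and node not in bad:
                    continue        # ordinary nodes defect on CP
                frontier.update(graph[node])
            seen |= frontier
        return len(seen)

    print("normal content reaches:", reach("normal", 0))
    print("CP reaches:            ", reach("cp", next(iter(bad))))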

That sounds like an interesting idea. It might be helpful for spam filtering too. Sybil attacks could be a problem, though.

Now you've really got me thinking...

Could Bayesian classification be implemented through a homomorphic cipher?
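In principle, yes, at least for naive Bayes: scoring is a sum of per-feature log-odds, and an additively homomorphic scheme such as Paillier can add (and scale) values under encryption. A toy sketch with deliberately tiny, insecure parameters and made-up word weights (needs Python 3.8+ for modular-inverse pow):

    from math import gcd

    # Toy Paillier keypair (demo-sized primes, utterly insecure).
    p, q = 293, 433
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)                             # valid since g = n + 1

    def encrypt(m, r=17):                            # fixed r: demo only
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Made-up per-word log-odds, scaled to integers.
    weights = {"alpha": 40, "bravo": -25, "charlie": 10}
    features = {"alpha": 1, "bravo": 0, "charlie": 1}   # the client's secret

    # The scorer combines encrypted features with plaintext weights:
    # E(m)^k = E(k*m), and multiplying ciphertexts adds plaintexts.
    enc_score = encrypt(0)
    for word, present in features.items():
        enc_score = (enc_score * pow(encrypt(present), weights[word], n2)) % n2

    print(decrypt(enc_score))   # 40 + 10 = 50, computed under encryption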


But base64 encoding is pretty easy to identify, as are most imaginable encoding schemes. Simply disallow them.

If the encoding schemes become so obscure as to not be recognizable, then the problem is still effectively solved.

How about 256 words (or sets of words) representing each byte value? It's less dense by a factor of ~5, but it would work, be easy to decode, and be very difficult to identify, especially if you used sets of words. You could even cleverly generate the output in a way that is grammatically correct.
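A minimal sketch of the byte-to-word idea (the vocabulary here is synthesized; a real scheme would hand-pick 256 ordinary words, or rotate through sets of them):

    # Hypothetical word-per-byte codec.
    WORDS = ["w%03d" % i for i in range(256)]        # stand-in vocabulary
    INDEX = {w: i for i, w in enumerate(WORDS)}

    def encode(data: bytes) -> str:
        return " ".join(WORDS[b] for b in data)

    def decode(text: str) -> bytes:
        return bytes(INDEX[w] for w in text.split())

    blob = b"\x89PNG\r\n"
    assert decode(encode(blob)) == blob
    print(encode(blob))    # a stream of space-separated tokens, ~5x larger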

You're playing cat and mouse then, and you'll never win. If base64 were banned, I'd base26-encode it (i.e., just letters of the alphabet), then spell that out in the NATO phonetic alphabet: "alpha bravo charlie". The message would be massive, but it would get through.

Limits on message size won't solve that, because the messages will just be broken into hundreds of pieces, as they are on Usenet. In theory you could cap the total amount of content coming from an endpoint, but that's probably not feasible in an anonymous system.
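For scale, the arithmetic on why a size cap alone only inconveniences the file trader (the cap and file size are illustrative):

    # A 5 MB video under a hypothetical 8 KB-per-post cap: base64 inflates
    # the payload by 4/3, and the text is then split across many posts.
    file_bytes = 5 * 1024 * 1024
    encoded_chars = file_bytes * 4 // 3
    post_cap = 8 * 1024
    posts = -(-encoded_chars // post_cap)            # ceiling division
    print(posts)                                     # 854 posts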

You can still encode any data as base64.

Usenet shows this kind of thing in action: it's now used mostly for illegal file trading.

How about a reddit-like ranking system as a portal to the darknet? It would categorize content into socially curated niches. It could even help push the CP crowd further to the margins of the darknet, letting anyone flag the abhorrent niches and focus on the actual benefit: the free exchange of information.

It doesn't get rid of them, but getting rid of them isn't really something we can do (they all existed prior to the internet). It would make the darknet usable.

A reddit-like ranking system would merely ensure that only mundane, low-brow and easily-digested content would be popular.

Which is exactly what we need if this technology is to become popular, and popularity is critical to making this kind of mesh useful: there's a quite literal network effect.

If we are discussing potential solutions, maybe a friend-of-a-friend network could somehow help thwart this issue. Even if CP etc. were traded on the hypothetical foaf network, it would hopefully remain on an isolated island of sorts. Anonymizing a foaf network could be a bit of a challenge, though.

Seems to me the only possible solution is a network of trust. Start small and build outward over time. This was the approach Facebook took to solve the "real identity" problem on the Internet at scale.

Usenet was text-only, and it led to the widespread use of uuencoded images and videos.
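Uuencoding is still one call away in Python's standard library, which makes the point vividly (the payload here is arbitrary bytes):

    import binascii

    data = bytes(range(90))                  # any binary payload
    # uuencode emits printable ASCII lines, 45 input bytes per line
    for i in range(0, len(data), 45):
        print(binascii.b2a_uu(data[i:i+45]).decode("ascii"), end="")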

But couldn't one always use some binary-to-ascii encoding to circumvent this?

Couldn't you just encode the images into text?

Yeah. You'd have to be more clever than that. There's a lot of interesting work on meaning extraction and text analysis. I wonder if you could have some kind of information density threshold.
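A rough sketch of such a threshold (the 4.3 bits/char cutoff is a guess that would need tuning on real data): character-level Shannon entropy already separates English from base64-like data reasonably well.

    import math
    from collections import Counter

    def bits_per_char(text: str) -> float:
        """Character-level Shannon entropy of a string."""
        counts = Counter(text)
        n = len(text)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    english = "we meet at the old bridge at dawn bring the pamphlets " * 4
    encoded = "9j4AAQSkZJRgABAQEASABIAADQklNBAQAAAAAAB0cAVoAAxslRxwCAAACAAA" * 4

    THRESHOLD = 4.3   # hypothetical cutoff
    for sample in (english, encoded):
        print(round(bits_per_char(sample), 2),
              "flag" if bits_per_char(sample) > THRESHOLD else "pass")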

It would also be possible to encrypt text in such a way that claims could be made about its information density but not about its meaning, though that would permit steganography. But it would raise the technical bar for using the system for this purpose so high that it would probably drive away all the chickenboners.

There's also a dumb way: length limits. That would force binary data to be divided across a huge number of posts, making it an annoying medium for file trading. Plotting the Iranian revolution does not require >1 MB forum posts.

That's true. But it's also entirely possible to build a machine-learning algorithm that separates "real text" (text meant for human-to-human communication) from text-encoded images, since they differ in significant ways. Granted, you could always try to design a text-encoding format that resembles real text, but I'm fairly sure the classifier could be constructed in a way that makes such usage infeasible.
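A sketch of what that classifier might look like, assuming scikit-learn and synthetic training data (a real deployment would need far more data and adversarial testing):

    import base64, os
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Synthetic training set: ordinary sentences vs. base64 of random bytes.
    real = ["meet me at the square tomorrow", "has anyone read this paper",
            "the soldiers blocked the northern road", "call your families now"]
    fake = [base64.b64encode(os.urandom(24)).decode("ascii") for _ in range(4)]

    clf = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(1, 2)),  # char n-grams
        LogisticRegression(),
    )
    clf.fit(real + fake, [0] * len(real) + [1] * len(fake))

    print(clf.predict(["they are burning the archives tonight",
                       base64.b64encode(os.urandom(24)).decode("ascii")]))
    # expected: [0 1] (prose passes, the encoded blob is flagged)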

Yes, which is why you keep humans in the loop: you design a system in which the will of humans who want the system to be used only for text prevails and which is resistant to attack by humans with contradictory agendas.

I've thought about this many times. How can you create a system that preserves the anonymity of publishers in persecution situations but excludes the child pornographers? The difficulty comes down to the fact that there is no technical difference between the two classes of publisher, just a moral one. Suppose the persecuted publisher wants to share pictures of children being abused by government forces?

One idea I had was for the system to be semi-anonymous. Publishers would form public groups, and the publication of content would come from the group as a collective. The members of each group are known, but the specific originator of the content within the group is not. This is the Spartacus model of anonymity :)

Oh wow. There's a discussion about an anonymity network being used for horrible stuff, and people say "I want to solve the problem and make a network that can't be used for horrible stuff"?

I want to stop people from doing horrific shit to other people in the first place. Unfortunately, I have no idea where to start...

I think you could make an argument for a 'war on paedos' (for example) being as useful as the war on drugs, crime, terror, or whatever.

The difference with this one is that you can conduct your 'business' entirely online, with almost complete anonymity. And what liberal solution is there that doesn't involve children or agitating the mob?

Technology is definitely the wrong thing to look at, I agree. I think, historically, it would be like blaming speakeasies for allowing people to drink illegal booze.

I think it's pretty easy to argue that. Child molesters cause way more suffering than the rest (well, depending on stats, but still), so I think that's a much better way to spend your resources.

With a sufficient technical background, you could work for the FBI helping to monitor, decrypt, and track these a-holes.

That... is actually a very good idea!

I think all you can do is have some system of classification and rating, validated by a web-of-trust reputation system.

As I evaluate content on the network, I classify it and rate it. My identity associated with those ratings is established, though possibly pseudonymous. Over time, islands of trust will form within the web that can be used to help filter large amounts of content.

If I start a node, I can link to islands of trust to only allow verified content acceptable to my filters to pass through my resources.

It's not perfect. Some will attempt to game the system by building up trust and then attempting to sneak content through. Some will attempt to hide illicit content in innocuous content.
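A minimal sketch of that filtering step under these assumptions (direct trust only; names, scores, and the relay threshold are all made up, and a real system would propagate trust transitively and defend against Sybils):

    # Hypothetical web-of-trust filter: my node weights each rating by how
    # much I trust the rater, and only relays content scoring above a bar.
    my_trust = {"alice": 0.9, "bob": 0.7, "mallory": 0.05}

    # ratings[content_id] -> list of (rater, score in [-1, 1])
    ratings = {
        "post-1": [("alice", 1.0), ("bob", 0.5)],
        "post-2": [("mallory", 1.0)],                # only a shill likes it
        "post-3": [("alice", -1.0), ("mallory", 1.0)],
    }

    def weighted_score(votes):
        # unnormalized, so low-trust raters can't push content over the bar
        return sum(my_trust.get(who, 0.0) * s for who, s in votes)

    RELAY_BAR = 0.3                                  # made-up threshold
    for cid, votes in ratings.items():
        s = weighted_score(votes)
        print(cid, round(s, 2), "relay" if s > RELAY_BAR else "drop")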

What if you only connect to those you know? If you don't trust someone, don't connect to them. It causes other problems, but those are much more solvable.

Edit: If Google can use reputation to solve search, why can't reputation be used to solve this?

I think you would need to limit the scope from an entire decentralized, anonymous network to just a decentralized, anonymous website or discussion forum.

Something like an anonymous, decentralized HN or reddit, with mods that have the ability to ban posts, topics, and users. It wouldn't be as 'free' as Tor or Freenet, but with the right group of benevolent dictators it could be free and useful for a certain niche topic like politics or news.
