
ZeroNet – Uncensorable websites using Bitcoin crypto and BitTorrent network - handpickednames
https://zeronet.io/en
======
freedaemon
Love the ZeroNet project! Been following them for a year and they've made
great progress. One thing that's concerning is the use of Namecoin for
registering domains.

Little-known fact: a single miner controls close to 65% (or more) of the mining power on Namecoin. This was reported in this USENIX ATC '16 paper:
[https://www.usenix.org/node/196209](https://www.usenix.org/node/196209). For
this reason some other projects have stopped using Namecoin.

I'm curious what the ZeroNet developers think about this issue and what their
experience with Namecoin has been so far.

~~~
tokenizerrr
Also, if you ever lose control of a namecoin domain you can say goodbye to it
forever. A squatter will take it instantly and hold on to it forever unless
you buy it from them for actual money.

~~~
marssaxman
Isn't that true of normal domains, too?

~~~
loup-vaillant
Depends on the toplevel suffix. For instance, .fr (France) domains have a "no
taking" period after the expiration date, where nobody can take it from their
previous owners. The owner can then take it back, but it won't be re-activated
for a couple of weeks, I believe. So the punishment for screwing up is a
temporary blackout of your domain name.

.com, .net, .org domains are handled differently, and may be easier to lose
permanently.

------
shakna
Has the code quality improved since I was told to screw off for bringing up
security?

* 2 years out of date gevent-websocket

* A year out of date Python-RSA, which accumulated some worrying security bugs in that time. [0] (Vulnerable to side-channel attacks on decryption and signing.)

* PyElliptic is both out of date, and actually an unmaintained library. But it's okay, it's just the OpenSSL library!

* 2 years out of date Pybitcointools, missing a few bug fixes around confirming that things are actually signed correctly.

* A year out of date pyasn1, the ASN.1 type library. Not as big a deal, but the gap covers some constraint-verification bug fixes. [1]

* opensslVerify is actually up to date! That's new! And exciting!

* CoffeeScript is a few versions out of date: 1.10 vs. the current 1.12, which moves away from methods deprecated in Node.js, fixes problems with path handling under Windows, and adds compiler enhancements. Not as big a deal, but something that shouldn't be happening.

Then of course, we have the open issues that should be high priority from a
security standpoint, but don't get a lot of attention.

Like:

* Disable insecure SSL cryptos [3]

* Signing fail if Thumbs.db exist [4]

* ZeroNet fails to notice broken Tor hidden services connection [5]

* ZeroNet returns 500 server error when received truncated referrer [6] (XSS issues)

* port TorManager.py to python-stem [7] i.e. Stop using out of date, unsupported libraries.

I gave up investigating at this point. Doubtless there's more to find.

As long as:

a) The author(s) continue to use outdated, unsupported libraries by directly
copying them into the git repository, rather than using any sort of package
management.

b) The author(s) continue to simply pass security problems on to the end user
... ZeroNet is unfit for use.

As simple as that.

People have tried to help. I tried to help before the project got as expansive
as it is.

But then, and now, there is little or no interest in actually fixing the
problems.

ZeroNet is an interesting idea, implemented poorly.

[0] [https://github.com/sybrenstuvel/python-rsa/issues/19](https://github.com/sybrenstuvel/python-rsa/issues/19)

[1] [https://github.com/etingof/pyasn1/issues/20](https://github.com/etingof/pyasn1/issues/20)

[3] [https://github.com/HelloZeroNet/ZeroNet/issues/830](https://github.com/HelloZeroNet/ZeroNet/issues/830)

[4] [https://github.com/HelloZeroNet/ZeroNet/issues/796](https://github.com/HelloZeroNet/ZeroNet/issues/796)

[5] [https://github.com/HelloZeroNet/ZeroNet/issues/794](https://github.com/HelloZeroNet/ZeroNet/issues/794)

[6] [https://github.com/HelloZeroNet/ZeroNet/issues/777](https://github.com/HelloZeroNet/ZeroNet/issues/777)

[7] [https://github.com/HelloZeroNet/ZeroNet/issues/758](https://github.com/HelloZeroNet/ZeroNet/issues/758)

~~~
fiatjaf
Well, it is better to concentrate on getting users in than to solve some small
quirks.

Nobody is going to attack ZeroNet if it doesn't have users anyway.

~~~
nickpsecurity
That was OpenSSL's attitude. It resulted in harm to many more users who
would've been better off with something else or with its own developers
actually trying to prevent security vulnerabilities. A project advertising
something to be "uncensorable" based on "crypto" or whatever should be baking
security in from the start everywhere it goes. Or it's just a fraud.

~~~
fiatjaf
You're right.

------
roansh
We need more projects like these. Whether this particular project solves the
problem of a truly distributed Internet* is beside the point. What we need is a
movement, a big cognitive investment towards solving the Big Brother problem.

*I am referring to concentrated power of the big players here, country-wide firewalls, and bureaucracy towards how/what we use.

~~~
fiatjaf
We need multiple internets, a big confusion. Governments can't handle
confusion, but if everything is standardized on Facebook and WhatsApp it's
easy for them.

~~~
antocv
They can.

Well. Look, even if you have multiple internets, decentralized everything,
distributed all systems, no more Google, no more Facebook: what do the
communication patterns in such a system look like? Do you use the system after
work, before going to sleep? Your and everyone else's usage patterns and
traffic can be analyzed. Many of the endpoints would be honeypots run by
spooks, revealing even more of what you are up to and giving you a false sense
of safety, while the spooks could run the entire decentralized inter-network.

So your system would have to fake it somehow: fake requests for some hashes
here and there, fake posted comments and follows. Otherwise, the social data
available when the spooks join your social network, even if it is distributed
like Patchwork on Scuttlebot, defeats its purpose.

That is what Bitmessage does, but then you pay in high bandwidth costs. And
yet, you can't just do random shit; randomness can easily be filtered out, so
you need a more advanced method of finding fake social relations and using
those to generate fake data to actually conceal what you and everyone else are
doing on the interweb.

EDIT: I'm not saying "give up"; it's a very worthy cause. It's just that the
problem is harder and enters the social space quite fast. The problem is the
same as "we are all nice developers and hackers", yet 99% seem to be employed
by the NSA, similar services, or Google, and think they are doing great James
Bond-type jobs, while they are actually anti-hacker and anti-developer; in
fact, anti-society.

~~~
dandelion_lover
I believe I2P solves most of the issues you mentioned with garlic routing.
Though at the cost of speed, of course.

------
eeZah7Ux
The project looks very promising, but it relies on running a lot of JavaScript
from untraceable sources in the browser.

Given the long history of vulnerabilities in browsers, trusting JS from a
well-known website might be OK; trusting JS from ZeroNet is unreasonable.

If ZeroNet could run with JS code generated only by the local daemon, or
without JS entirely, it would be brilliant.

~~~
blitmap
Chrome added a feature a while back that I'd wanted for ages: the ability to
specify the checksum of a linked asset, so that it can be verified as it's
downloaded (and discarded as untrusted if not). I just can't find the docs for
it. :( My Google-fu is not strong.

EDIT:

Found it :D

[https://w3c.github.io/webappsec-subresource-integrity/](https://w3c.github.io/webappsec-subresource-integrity/)

~~~
gog
It's called Subresource Integrity, see
[https://hacks.mozilla.org/2015/09/subresource-integrity-in-f...](https://hacks.mozilla.org/2015/09/subresource-integrity-in-firefox-43/) and [https://www.w3.org/TR/SRI/](https://www.w3.org/TR/SRI/)

Browser support is noted here: [http://caniuse.com/#feat=subresource-integrity](http://caniuse.com/#feat=subresource-integrity)
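
Under the hood, the `integrity` value is just a base64-encoded cryptographic digest of the asset's bytes, so you can generate one yourself. A minimal Python sketch:

```python
import base64
import hashlib

def sri_hash(data: bytes, alg: str = "sha384") -> str:
    """Compute a Subresource Integrity value like "sha384-..." for an asset."""
    digest = hashlib.new(alg, data).digest()
    return f"{alg}-{base64.b64encode(digest).decode()}"

# Hash the exact bytes of the script you intend to embed.
script = b"console.log('hello');"
print(sri_hash(script))
```

The resulting string goes straight into the tag, e.g. `<script src="app.js" integrity="sha384-..." crossorigin="anonymous">`.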

~~~
blitmap
It's kind of a shame they didn't let their imagination fly with that one... I
wish integrity were a global attribute, because I could totally see using it
for things like images and audio/video.

~~~
Ajedi32
It might work (though I'm not completely sure) if you specify a hash in the
img-src directive of the CSP header: [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/img-src)

Another option would be to just use a subresource-integrity protected script
to check the hash of a downloaded image/video before displaying it.
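
The check such a script would perform is simple; here's the same idea sketched in Python (the "image" bytes and pinned value are placeholders):

```python
import base64
import hashlib
import hmac

def verify_asset(data: bytes, expected_integrity: str) -> bool:
    """Check downloaded bytes against a pinned SRI-style value like 'sha256-...'."""
    alg, _, expected_b64 = expected_integrity.partition("-")
    actual = hashlib.new(alg, data).digest()
    # compare_digest does a constant-time comparison of the two digests
    return hmac.compare_digest(actual, base64.b64decode(expected_b64))

image = b"\x89PNG fake image bytes"
pinned = "sha256-" + base64.b64encode(hashlib.sha256(image).digest()).decode()
assert verify_asset(image, pinned)
assert not verify_asset(b"tampered bytes", pinned)
```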

~~~
blitmap
That is clever and I like you.

------
emucontusionswe
I would recommend using Freenet over ZeroNet. More or less the same
concept/functionality, but with 15 years more experience.

Freenet: [https://freenetproject.org/](https://freenetproject.org/)

~~~
fiatjaf
Freenet is a great idea with 15 years of failure to get traction with sane (by
which I mean non-paedophile) people.

Also, it's written in Java.

~~~
mee_too
Just curious: are the gay people insane as well?

~~~
averagewall
I think he was using the word "sane" loosely to mean common people, not a
niche group. I agree though that it's hypocritical to label some sexualities
as a disease or insanity but object to doing the same for other minority ones.

~~~
chobytes
are you actually trying to say that being a pedophile is a legitimate
sexuality?

or even that it's anything like being gay?

------
0xcde4c3db
> Anonymity: Full Tor network support with .onion hidden services instead of
> ipv4 addresses

How does this track with the Tor Project's advice to avoid using BitTorrent
over Tor [1]? I can imagine that a savvy project is developed with awareness
of what the problems are and works around them, but I don't see it addressed.

[1] [https://blog.torproject.org/blog/bittorrent-over-tor-isnt-go...](https://blog.torproject.org/blog/bittorrent-over-tor-isnt-good-idea)

~~~
nonsince
I'm also suspicious, since they say that your blockchain address is used for
authentication - couldn't colluding websites track your public key and use it
to track you between websites?

~~~
Ajedi32
Seems like that's only for publishing new content, not for merely browsing.

Though I guess unless you create a new identity for every site you want to
post a comment on, your comments on one site could be proven to be posted by
the same person as your comments on another site.
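
One standard mitigation (sketched here generically, not as ZeroNet's actual scheme) is deriving a separate key per site from a single master secret, so identities on different sites can't be correlated:

```python
import hashlib
import hmac

def per_site_key(master_seed: bytes, site: str) -> str:
    """Derive a deterministic, unlinkable per-site identity from one secret."""
    return hmac.new(master_seed, site.encode(), hashlib.sha256).hexdigest()

seed = b"my master seed"
k1 = per_site_key(seed, "blog.bit")
k2 = per_site_key(seed, "forum.bit")
print(k1 != k2)  # True: the two identities can't be linked without the seed
```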

------
avodonosov
As for being uncensorable: if the content is illegal, the torrent peers may be
incriminated for distributing illegal content.

~~~
ballenf
Neither argument has been tested, but the defense would be that you were acting
as an ISP with dumb pipes.

Which logically leads to an unrelated question: if ISPs are doing DPI on
every packet, they at least theoretically 'know' whether you're transmitting
'illegal' content. If I were a rights holder, I'd be making that argument
against ISPs. I don't know how I'd sleep at night, but I wouldn't let ISPs
have their cake (valuable user data) and eat it too (immunity based on their
status as a mere ISP).

~~~
mirimir
It's been tested for Freenet. LEA adversaries can participate, and identify
peers. Judges issue subpoenas. Many defendants have accepted plea bargains.
Plausible deniability doesn't work. What works is using Tor.

~~~
j_s
Even Tor isn't a magic bullet, specifically because of other technologies used
in combination, such as a web browser.

[https://www.eff.org/pages/playpen-cases-frequently-asked-que...](https://www.eff.org/pages/playpen-cases-frequently-asked-questions)

~~~
mirimir
Yes, the FBI exploited a Firefox vulnerability to drop NIT malware on Playpen
users. And said malware phoned home to FBI servers, bypassing Tor.

However, any Whonix users would not have been affected, for two reasons. One,
this was Windows malware, and Whonix is based on Debian. Two, Whonix comprises
a pair of Debian VMs, a Tor-gateway VM and a workstation VM. Even if the
malware had pwned the workstation VM, there is no route to the Internet except
through Tor.

~~~
houst0n_
If the workstation VM is pwned, what stops it from hitting the usual home
router internal network address and/or changing the route?

Is there some network isolation going on which prevents that?

~~~
mirimir
The workstation VM has no route to the home router except through the Tor
gateway VM. With Whonix, the gateway VM isn't even a NAT router. Plus there
are iptables rules that block everything except Tor. The gateway VM only
exposes Tor SocksPorts to the workstation VM. You'd need to break the network
stack in the gateway VM in order to bypass Tor.

~~~
houst0n_
Right, so can't I just add one then? In most VM setups I might have a default
route to the other VM running Tor, but I can still talk to e.g. 192.168.0.1
even if I'm not putting traffic through it.

Is this some kind of VM-specific virtual network which can't talk on the real
LAN? Is that implemented in the hypervisor?

~~~
mirimir
Yes, for Whonix it's a VirtualBox internal network. There's no direct routing
through the host, only among VMs. You can do much the same on VMware.

Edit: I forget that I'm writing on HN. When I say VM, I'm referring to full
OS-level VMs, not namespace, Java, etc VMs.

~~~
houst0n_
That sounds like a pretty neat setup. I know I can just google all this, so
please forgive me the inane questions; it depends on VirtualBox though?

That's a bit of a nonstarter for a few of us.

We probably aren't the target base for the project though, so maybe it doesn't
matter...

~~~
mirimir
Yes, it depends on VirtualBox. But there are versions for KVM, and for Qubes.
More of a nonstarter, though. You could even use physical devices, such as a
Raspberry Pi or Banana Pi.

Years ago, I created a LiveDVD with VirtualBox plus Whonix gateway and
workstation VMs. I had to hack at both Whonix VMs to reduce size and RAM
requirements. But I got a LiveDVD that would run with 8GB RAM. It took maybe
20 minutes to boot, but was quite responsive.

------
dillon
There's also GNUNet: [https://gnunet.org/](https://gnunet.org/) As others have
mentioned there's also FreeNet:
[https://freenetproject.org/](https://freenetproject.org/)

I haven't looked deeply into any of these projects, but I do think they are
neat, and I'm hoping at least one of them gains a lot of traction.

~~~
shakna
Considering a Freenet user is currently [jailed indefinitely](https://arstechnica.com/tech-policy/2017/03/man-jailed-indefinitely-for-refusing-to-decrypt-hard-drives-loses-appeal/), there do seem to be some problems.

------
ThePadawan
Can't access this at work, since zeronet.io is flagged as being involved in
P2P activity.

I cannot help but feel disappointed and unamused.

~~~
akerro
[https://zeronet.readthedocs.io/en/latest/](https://zeronet.readthedocs.io/en/latest/)

------
Kinnard
In other news, ZeroNet has been banned from giving its TED talk:
[https://news.ycombinator.com/item?id=14039219](https://news.ycombinator.com/item?id=14039219)

~~~
Neliquat
I quit giving TED my clicks long ago. They occasionally have some good talks,
but many more that are pseudoscience garbage. Don't even get me started on
TEDx. I hope ZeroNet finds a better stage for their talk. Perhaps an organizer
could contact them.

------
jlebrech
I always wondered why you couldn't just download a torrent of torrents for the
month.

~~~
mirap
It would be interesting to create a catalog of torrents that is itself distributed.

~~~
omginternets
Maybe I'm not picking up what you're putting down, but how does this differ
from DHT?

~~~
forgottenpass
It might be possible with the DHT alone, but I think what the grandparent
poster wants would depend on the ability to query the DHT, both in general and
by popularity and insert date.

That might be possible, but given the prevalence of magnet links instead of
everyone using that, I just assumed not.

~~~
icebraining
It's possible to do something fully distributed, but not just with the
existing DHT network: [https://www.tribler.org/](https://www.tribler.org/)

------
lossolo
There is a single point of failure: kill the tracker and you kill the whole
network. And you can get all the IPs visiting a certain site from the tracker,
so it's not so secure if someone is not using Tor.

~~~
the8472
BitTorrent supports more peer sources than just trackers, DHT being the most
important of them.

~~~
lossolo
It's not supported by ZeroNet.

~~~
Ajedi32
You sure about that? Their presentation [says][1] "Tracker-less peer exchange
is also supported". Any idea what that's referring to?

[1]:
[https://docs.google.com/presentation/d/1_2qK1IuOKJ51pgBvllZ9...](https://docs.google.com/presentation/d/1_2qK1IuOKJ51pgBvllZ9Yu7Au2l551t3XBgyTSvilew/pub?slide=id.g9a3b93883_1_0)

~~~
lossolo
That means one peer can send you the peers it knows about. It's called peer exchange.
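
Stripped of the wire protocol, peer exchange is just gossip: each peer hands over the peer list it has already accumulated. A toy illustration (not ZeroNet's actual message format):

```python
class Peer:
    def __init__(self, addr: str):
        self.addr = addr
        self.known = {addr}  # peers this node knows about, including itself

    def exchange(self, other: "Peer") -> None:
        """Both sides merge each other's peer lists: the essence of PEX."""
        merged = self.known | other.known
        self.known, other.known = set(merged), set(merged)

a, b, c = Peer("a:1"), Peer("b:2"), Peer("c:3")
a.exchange(b)  # a and b now know each other
b.exchange(c)  # c learns about a via b, with no tracker involved
print(sorted(c.known))  # ['a:1', 'b:2', 'c:3']
```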

------
daliwali
I don't see how this could decentralize web _applications_ though. Wouldn't
each client have to be running the server software? Someone has to pay for
that, too.

~~~
bachstelze
Yeah, every client has to run the software, or you use a proxy. If you have a
site with many seeders you don't need a running instance of your own. But if
you have an unknown site you would have to run a little server permanently.

------
wcummings
I thought it was pretty easy to disrupt / censor torrents, hasn't that been
going on for a while?

~~~
kbart
_" I thought it was pretty easy to disrupt / censor torrents, hasn't that been
going on for a while?"_

Not torrents themselves, only torrent search engines. Torrents are distributed
by design, but traditional torrent directories/aggregators/search engines are
centralized, thus easy targets for DMCA take-downs, ISP blocks, trials, etc.

~~~
thriftwy
...And that's exactly the first thing they should put in ZeroNet.

~~~
dane-pgp
Yup, torrent search engines are the weak link when it comes to protecting the
public's access to arbitrary large files, and also the front lines in the
battle between the media industries and an uncensored internet.

ZeroNet is perhaps not enough on its own to solve this problem, though, since
a good torrent search engine suffers from the same limitation as a good forum,
which is the need to have some form of community-based moderation. If people
can't remove spam search results, and spam comments, then the medium can be
too easily exploited (using Sybil attacks, etc.) and become useless.

The missing piece which is holding back so many decentralised technology
projects is a lack of a decentralised trust platform. A necessary step towards
this would be a decentralised (and privacy-preserving) identity platform,
which would have the added benefit of removing the "Log in with
Facebook/Google" problem from the web.

~~~
thriftwy
Just sort search results by torrent popularity. People aren't going to seed
bad content.
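
In its simplest form that ranking is a single sort key (the seeder counts below are made up):

```python
results = [
    {"name": "file_a", "seeders": 12},
    {"name": "file_b", "seeders": 4021},
    {"name": "file_c", "seeders": 3},
]

# Most-seeded first: popularity as a crude spam/quality filter.
ranked = sorted(results, key=lambda t: t["seeders"], reverse=True)
print([t["name"] for t in ranked])  # ['file_b', 'file_a', 'file_c']
```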

------
rawells14
Sounds incredible, we'll probably be seeing much more of this type of thing in
the near future.

------
vasili111
Lack of anonymity in ZeroNet is a big problem.

~~~
Ajedi32
Seems like it's just as anonymous as the existing web; you can use Tor to hide
your IP, but that's optional.

------
jwilk
> Page response time is not limited by your connection speed.

Huh? What do they mean?

~~~
emucontusionswe
If you have previously visited a page, then the response time is limited by
your computer's ability to locate and open the correct HTML document.

If you haven't previously visited a page, then the response time is limited by
how many peers are available _and then_ by your connection speed.

------
hollander
Several years ago I had Tor running on a server at home. It was a regular Tor
node, not an exit node. Later I was put on a blacklist because of this. What
is the risk of using this?

~~~
JBReefer
Would you mind defining "blacklist" in this context? That's kind of scary!

------
DeepYogurt
Presumably you only download the site you want when you visit it. If that's
the case then can you view revisions of the web sites or do you only have the
current copy?

~~~
r3bl
If you click on "How does it work?" you get redirected to a short and sweet
presentation[0]. According to the presentation, when you, as the site owner,
push an update, content.json gets updated, the peers get a notification (using
the WebSocket API) that new content is available, and then they download the
new version of content.json, which contains the sitemap of the updated version
of the website. Cleverly thought out!

[0] -
[https://docs.google.com/presentation/d/1_2qK1IuOKJ51pgBvllZ9...](https://docs.google.com/presentation/d/1_2qK1IuOKJ51pgBvllZ9Yu7Au2l551t3XBgyTSvilew/pub?start=false&loop=false&delayms=3000#slide=id.g9a1cce9ee_0_4)
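
The core idea is a manifest of file hashes: when a file changes, its hash (and therefore the manifest) changes, and peers re-fetch only what differs. A stripped-down sketch (the field names approximate content.json, not ZeroNet's exact schema):

```python
import hashlib
import json

def build_manifest(files: dict[str, bytes]) -> str:
    """Map each file path to its hash and size, content.json style."""
    entries = {
        path: {"sha512": hashlib.sha512(data).hexdigest(), "size": len(data)}
        for path, data in files.items()
    }
    return json.dumps({"files": entries}, sort_keys=True)

v1 = build_manifest({"index.html": b"<h1>hi</h1>"})
v2 = build_manifest({"index.html": b"<h1>hi v2</h1>"})
print(v1 != v2)  # True: peers can tell index.html changed and re-fetch it
```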

~~~
shakna
Unless your site is too big: [0].

Then users can end up browsing stale versions of the site. Still an issue as
of just before Christmas last year.

[0]
[https://github.com/HelloZeroNet/ZeroNet/issues/598](https://github.com/HelloZeroNet/ZeroNet/issues/598)

------
mtgx
Speaking of which, what's the progress on IPFS?

~~~
diggan
We're moving forward as always. The latest features include distributed
pubsub, filestore (which allows you to add files without duplicating them) and
interop between browser and desktop nodes. Any specific part you're looking
at?

~~~
ashark
1) What's the status of (supported as a real feature, not just manually
changing the bootstrap nodes and hoping everyone else does too) private IPFS
networks? If it's there already, how stable is its configuration ( _i.e._ if I
get my friends on a private IPFS network will I likely have to get them all to
update a bunch of config in 6 months or a year)?

2) Does filestore also let you store, say, newly pinned files in your regular
file tree? That is, can you pin a hash for a file (or tree) you don't already
have and provide an ordinary file system location where it should go when it's
downloaded? Or do you have to copy it out of IPFS' normal repo manually, then
re-add it in the new location? Also: how does filestore behave if files are
moved/deleted?

3) What rate of repo changes requiring upgrades can we expect for the future?
That is, how stable is the current repo structure expected to be? Is the
upgrade process expected to improve and/or become automated any time soon?

4) Is there a table of typical resource requirements somewhere? I'm looking
for "if you want to host 10TB and a few tens of thousands of files, you need a
machine with X GB of memory. If you want to host 500MB, you only need Y GB of
memory. If you have 2TB but it's in many, many small files, you need Z GB of
memory", or else a formula for achieving a best guess at that. For that
matter, how predictable is that at this point?

The use case I've been excited to use IPFS for since I found out about it is a
private, distributed filesystem for my friends and family. Easy automated
distributed backups/integrity checking on multiple operating systems, access
your files at someone else's house easily, that sort of thing. Filestore
finally landed, which was a big piece of the puzzle (the files _have_ to
remain accessible to ordinary tools and programs or I'll never get buy-in from
anyone else), so that's exciting. Now I'm just waiting for docs to improve (so
I'm not searching through issue notes to learn which features exist and how to
use them) and for a sense that it's stable enough that I won't be fixing
brokenness on everyone's nodes several times a year.

~~~
Kubuxu
1) [https://github.com/ipfs/go-ipfs/issues/3397#issuecomment-284...](https://github.com/ipfs/go-ipfs/issues/3397#issuecomment-284341649)

2) The latter. The former is a nice idea; you should definitely raise it on
the go-ipfs tracker.

3) The repo update is currently automated (run the daemon with the `--migrate`
flag and it will migrate itself).

4) Unfortunately not, but it's a very interesting question. If you could ask
it on [http://ipfs.trydiscourse.com/](http://ipfs.trydiscourse.com/) that
would be awesome.

------
jlebrech
A YouTube replacement in ZeroNet would rock.

------
HugoDaniel
It would be great if a simpler webtorrent version was available just for fun.

~~~
shakna
There's a year-old project called peercloud that might scratch that itch:

* [https://github.com/jhiesey/peercloud](https://github.com/jhiesey/peercloud)

* [https://peercloud.io/](https://peercloud.io/)

------
vitiral
This seems similar to ipfs. What are the main differences?

~~~
diggan
IPFS is more low-level, in that IPFS is a protocol (in reality a collection of
protocols) for P2P data transfer. Together with IPLD, you get a full suite of
protocols and data structures for creating fully distributed P2P applications.

ZeroNet is an application for distributing P2P applications, using BitTorrent
for the P2P layer. In theory, ZeroNet could leverage IPFS to get a better and
more modular stack for the actual connectivity and transferring.

~~~
vitiral
Gotcha, thanks for the explanation. It sure seems like they have many similar
goals, so it makes sense that ZeroNet could leverage IPFS.

------
thriftwy
This is what I've waited for for quite some time.

~~~
dublinben
Freenet has done this for over a decade.

------
arcaster
This project is cool, but I'm more interested in future releases from the
Akasha project.

------
Jabanga
A little known fact: the Namecoin blockchain's cost-adjusted hashrate [1] is
the third highest in the world, after Bitcoin and Ethereum, making it
unusually secure given its relative obscurity (e.g. its market capitalisation
is only $10 million).

[1] hashrates can't be compared directly due to different hashing algorithms
having different costs for producing a hash.
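
In other words, each chain's raw hashrate is weighted by the cost of producing one hash with its algorithm. With purely made-up numbers for illustration:

```python
def cost_adjusted(raw_hashrate: float, cost_per_hash: float) -> float:
    """A comparable 'security spend' figure: hashes/s times cost per hash."""
    return raw_hashrate * cost_per_hash

# Hypothetical chains -- these are NOT real network statistics.
cheap_hashes = cost_adjusted(4e18, 1.0)      # e.g. SHA-256 on commodity ASICs
costly_hashes = cost_adjusted(5e14, 9000.0)  # e.g. a memory-hard algorithm
print(costly_hashes > cheap_hashes)  # True, despite the far lower raw hashrate
```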

~~~
mccoyspace
Namecoin has a number of innovations. It's the first 'alt-coin' fork of
Bitcoin, and it pioneered the technique of "merge mining", where a miner can
do proof of work on both the Bitcoin chain and the Namecoin chain
simultaneously. A lot of mining pools implemented merged mining. Even though
the alt-coin space has become much more crowded and noisy, Namecoin retains
that early hashing advantage. It's a very secure chain.

~~~
glitch003
> It's a very secure chain.

But this guy said that a single miner has 65% of the hashing power:
[https://news.ycombinator.com/item?id=14043038](https://news.ycombinator.com/item?id=14043038)

~~~
Jabanga
IIRC, that's one mining pool, not one miner. The power of mining pools is
relatively limited. If the workers see that the pool is attacking Namecoin and
devaluing their NMC (not to mention ruining a cool project), they're liable to
switch to a different pool.

------
digitalzombie
Anybody read this as Netzero the free internet dial up in the 90s?

------
mirap
zeronet.io is hosted on vultr.com. Why don't they use ZeroNet to deliver their
own website?

~~~
thatcat
Like BitTorrent, ZeroNet requires a client. The client acts as a local server
and displays the pages in your browser.

~~~
mirap
That explains it, thank you.

------
tfeldmann
No comment about ZeroNet itself, but am I alone in the opinion that this
website takes grid layout too far? It looks outright cluttered and overloaded.

~~~
CraftThatBlock
Looks great on mobile however

~~~
mirimir
Yeah, the advantage. Tables on steroids.

~~~
nojvek
I love the design. Looks great on mobile. Not sure about desktop. Loads really
fast too.

