It has a great community too
By the way, I'll mention this here because it isn't evident on the Hearth page right now: You can get your own username on Eternum for free, and it will redirect to the latest hash that Hearth publishes (i.e. the latest version of your website) automatically.
You can do this even if you don't use Hearth (we have an API for it); here's an example with my username:
Beaker does this...
In terms of getting a free username on an HTTP website, https://hashbase.io/
Disclaimer: I haven't used Hearth, so I can't compare, but Beaker doesn't seem like it could be much easier.
- use Beaker to publish/"host" dat:// websites (as you can use Hearth to publish ipfs websites)
- use Beaker or any dat:// client to browse dat:// websites
- use any browser to browse dat:// websites via http through hashbase.io (similar to eternum.io)
Ahh, iframes and tables, so cutting edge :)
How much extra work would it have been to add a "drag some files to a folder and run publish_website" interface, for Linux and Windows users? Hardly any, given that the functionality is already there.
Not only that, it would make the system more easily scriptable. For example, it is probably easier on MacOS to get a Makefile to run an executable than to drag files to a folder.
Easier for whom?
Beta registration was opened last week, although it is still trying to fly a bit below the radar :)
IPFS + Firebase = Lovechild ^
However, for everything else, the criticisms no longer apply.
GUN is a CRDT-based AP (highly available, partition-tolerant), strongly eventually consistent system, and therefore should not be used for banking-like systems.
I hate the jargon, so sorry to reply with it, but it allows for a quick/concise summary of GUN's tradeoffs. Some more resources below:
- Cartoon explainer of academic stuff: https://gun.eco/distributed/matters.html
- CAP Theorem tradeoffs: https://gun.eco/docs/CAP-Theorem
- How to implement the CRDT and how it solves for Split-Brain failures: https://gun.eco/docs/Conflict-Resolution-with-Guns
There are a bunch of other resources, but I'm more than happy to reply/answer any specific questions if you have them. Cheers!
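To make the jargon a bit more concrete, here's a minimal sketch (in Python, not GUN's actual JS API or algorithm) of the kind of last-write-wins conflict resolution a CRDT system uses to heal split-brain: two replicas diverge while partitioned, then converge deterministically on merge. All names and data here are illustrative.

```python
# A minimal last-write-wins (LWW) register merge -- an illustrative CRDT
# sketch only, NOT GUN's actual algorithm. Each field carries a logical
# timestamp; on merge the higher timestamp wins, with a deterministic
# tie-break so replicas converge without coordinating.

def merge(a, b):
    """Merge two replica states of the form {field: (timestamp, value)}."""
    merged = {}
    for field in set(a) | set(b):
        ta, va = a.get(field, (float("-inf"), None))
        tb, vb = b.get(field, (float("-inf"), None))
        if ta > tb:
            merged[field] = (ta, va)
        elif tb > ta:
            merged[field] = (tb, vb)
        else:
            # Tie: pick the lexically larger value so both sides agree.
            merged[field] = (ta, max(str(va), str(vb)))
    return merged

# Two replicas update independently during a network partition
# ("split brain")...
alice = {"status": (1, "online"), "name": (2, "Alice")}
bob = {"status": (3, "away"), "city": (1, "Berlin")}

# ...then converge to the same state regardless of merge order.
assert merge(alice, bob) == merge(bob, alice)
assert merge(alice, bob)["status"] == (3, "away")
```

The key property is that merge is commutative and idempotent, so it doesn't matter in what order (or how many times) updates arrive - exactly why this class of system is AP rather than CP.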
I also read Neo4j was an inspiration. Is there any "human-made" comparison between the two? All I could find was "A vs. B" comparison tables.
- Neo4j is Master-Slave, GUN is Master-Master (or Multi-Master, or Masterless). Basically, GUN is P2P/decentralized, Neo4j is centralized.
- GUN has realtime updates/sync built in, Neo4j does not.
- GUN has offline-first features, Neo4j does not.
- Neo4j has its own query language, GUN has a FRP (Functional Reactive Programming) based JS API.
- Neo4j is over a decade old; GUN has been around since late 2014.
- Neo4j is more monolithic, GUN is more microservice-y.
Otherwise, you will need a dat:// client - Beaker is the main web browser with dat:// support; other clients so far have more specialised uses, e.g. https://github.com/codeforscience/sciencefair
Beaker/Hashbase dev here. Not quite, imo. The key difference is that a service like Hashbase has no special authorship privileges (unlike Google Docs or Dropbox), and is instead an agnostic peer. We think that's an important distinction.
Is there a first HN reference to a valid dat:// website already?
Not really. Need ways of bridging the gap.
The DAT network seems interesting and is new to me, but like you said, needing a central node to cache the data seems like breaking the p2p story.
The main benefit is that users of http browsers can visit your dat:// site.
This comment is just advertising for the Beaker browser. It could at least be open about it and say: "Also check out Beaker, with similar functionality".
It's not nice or intelligent to say "don't do this in a different way because we are already doing it in our own way".
1) Someone accesses your "site" directly on your computer via ipfs/dat whatever it is. This is static content I guess? And it's like your computer is just running a static webserver (kinda/sorta? if not, or something more dynamic please advise). So, are there security implications here? Is there an attack surface that a client/viewer of your content could leverage?
2) Sort of opposite question as in 1. You are the client/viewer of the "site," directly connected to some dude's computer via ipfs/dat or whatever. Obviously I should use standard security measures and not just go clickity on anything/everything I see, but beyond that, are there any other security implications for the client? I could download a virus I'm sure by clicking or downloading a malicious file, but beyond that, could I get hacked by the target in a more dynamic way than that?
3) How do I "navigate" in ipfs/dat-land? How do I know what URL to go to, and if this "URL" might be a safe site, or like what the hell is this URL if it's just some kind of hash? Is there a "google" of ipfs/dat sites/content?
Stuff like this. :) Any feedback would be muchly appreciated!
2) Again, as long as the IPFS daemon is secure, standard security practices apply. It's just static files.
3) It's all just static files. You get linked to from other files. Same as you navigate the rest of the web. It's not hard to wrap your head around it, really, just imagine that the web was all static files. That's it.
Content addressable... content is truly unchanging. Which, imo, is something the modern web is lacking.
It's like static content served by bittorrent. The content might be served directly by your computer, or by other nodes in the ipfs/dat network which are caching your content.
> So, are there security implications here? Is there an attack surface that a client/viewer of your content could leverage?
An attacker could try to exploit vulnerabilities in the IPFS/DAT network. This would be similar to exploiting vulnerabilities in bittorrent.
Or they could try to steal the private key used to publish your content, and publish malicious updates.
If access to your content is restricted (ie with a secret hash or url), an attacker might try to get access to it.
Overall the security model is quite straightforward, with a smaller attack surface than typical dynamic web stacks.
> 2) Sort of opposite question as in 1. You are the client/viewer of the "site," directly connected to some dude's computer via ipfs/dat or whatever. Obviously I should use standard security measures and not just go clickity on anything/everything I see, but beyond that, are there any other security implications for the client? I could download a virus I'm sure by clicking or downloading a malicious file, but beyond that, could I get hacked by the target in a more dynamic way than that?
It's pretty much the same security model as regular web content, as you mention. The only exception might be a vulnerability in the ipfs/dat code, since your computer might be running a local node.
> 3) How do I "navigate" in ipfs/dat-land? How do I know what URL to go to, and if this "URL" might be a safe site, or like what the hell is this URL if it's just some kind of hash? Is there a "google" of ipfs/dat sites/content?
IPFS has a naming system called IPNS, where addresses are based on a unique crypto keypair. Addresses look like `/ipns/<hash>`. You can use specialized ipfs software, like the `ipfs` cli, or navigate your regular browser to an http gateway. For example, with the official gateway you can navigate to `https://ipfs.io/ipns/<hash>`. Ipfs also has a facility to easily alias a DNS domain to an ipns key, so you can navigate to `/ipns/<mydomain.com>`.
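To spell out the addressing forms described above, here's a tiny sketch; the key hash is a made-up placeholder, not a real IPNS name:

```python
# Sketch of IPNS addressing forms. The key hash below is a placeholder.

GATEWAY = "https://ipfs.io"  # the official public HTTP gateway

def ipns_path(name):
    """Native path form: /ipns/<key-hash> or /ipns/<dns-domain>."""
    return f"/ipns/{name}"

def gateway_url(name):
    """HTTP gateway form, reachable from any ordinary browser."""
    return GATEWAY + ipns_path(name)

print(ipns_path("QmPlaceholderKeyHash"))  # /ipns/QmPlaceholderKeyHash
print(gateway_url("mydomain.com"))        # https://ipfs.io/ipns/mydomain.com
```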
Dat I believe has a similar system. The main difference is that, thanks to Beaker, you can bypass the gateway system - just navigate to `mydomain.com` and, if it's dat-enabled, the browser will lookup the corresponding key and fetch the content via dat instead of http.
There's no reason IPFS couldn't work the same way - you just need a browser to implement ipfs support directly in the same way. Personally I hope Beaker will find the time to support both!
> Is there a "google" of ipfs/dat sites/content?
Not that I know of. But I think it's inevitable!
I can't say much about IPFS, but this is not quite true of Dat. The Beaker team is working on a complementary set of APIs to read and write to Dat archives, so you could have, say, a TiddlyWiki-like site that can update itself.
It's always been a big problem that the security of file:// URIs is unstandardized. I can publish a repo, for example, that includes some tests/runner.html. You could open it Firefox to run the automated tests, but Chrome will choke on it because Chrome doesn't allow XHR for file:// URIs. The only way around this is to either put it online or run a local webserver (e.g., `python -m SimpleHTTPServer`) and access it through localhost. This is perverse. Dat is solving this in a way that's very similar to what would happen if file:// got first-class attention from the browser makers. If it's in a Dat, it doesn't actually matter if it's on your computer or hosted somewhere. All that matters is that you have the content.
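For reference, the local-webserver workaround in Python 3 terms (`SimpleHTTPServer` was the Python 2 module name; `http.server` is the current one):

```python
# The localhost workaround mentioned above: `python -m SimpleHTTPServer`
# is the Python 2 spelling of `python3 -m http.server`. Serving over
# http://localhost lets XHR work where file:// would be blocked.
import http.server
import socketserver
import threading
import urllib.request

# Bind port 0 so the OS picks a free port for this demo.
httpd = socketserver.TCPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Files like tests/runner.html are now reachable over http:// not file://
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = resp.getcode()  # 200
httpd.shutdown()
```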
This might not be exciting to HN types who are comfortable and willing to go rent some DigitalOcean instance and put up with administering it, but this will have a huge impact on the long tail of businesses—both small and large—that are quietly chugging along on little more than Excel and email (maybe the odd SharePoint installation).
On the other hand, developers should be excited about this, too, because it means they can start building apps that are _actually_ serverless (compared to Amazon and Google's definition). We're back to where we were in the 90s: you can go write some neat client software and share it with me, without having to think about the headache of needing to run a server in perpetuity just so that it can continue being useful.
From a security point of view, it does slightly expand the attack surface, since there could be a vulnerability in that code which isn't present in other browsers. But given the simplicity of the APIs, I think it's a very acceptable risk given the benefits.
I think it's better to think of this more as a data store than web sites/apps.
It's true that there are some types of web apps that exist on the traditional (HTTP) web but that cannot be made as a Dat app, but it's also true that there are a class of apps that could be built with Dat but that cannot be built on the traditional web.
When you update your site, the hash must change, and thus the link changes as well.
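This follows directly from content addressing: the address IS a hash of the content. A sketch with plain sha256 (IPFS actually hashes a DAG of chunks and encodes it as a multihash, but the principle is the same):

```python
# Content addressing in miniature: any change to the bytes produces a
# new address, which is why an updated site gets a new link.
import hashlib

def address(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

v1 = address(b"<html>my site, version 1</html>")
v2 = address(b"<html>my site, version 2</html>")

assert v1 != v2  # any edit yields a brand-new address
assert v1 == address(b"<html>my site, version 1</html>")  # same bytes, same address
```

This is also why a mutable pointer layer like IPNS (or Hearth's username redirects) exists: something stable has to point at the latest hash.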
Security stuff... yeah, it can be used maliciously just like bittorrent can, but ipfs hashes can also be accessed via ipfs.io/ipfs/<hash>, and I haven't heard of any problems with malware or anything of the sort; things seem to get filtered out pretty quickly.
Ipfs is awesome. Read the white paper ... It's such a nice middle layer for projects wanting to use a distributed file network without building out their own torrent network.
I hope Hearth or someone like them expands beyond websites to self-publishing of store front ends, crypto payment, social feeds, real time video, and all the other services we expect AmaGooSoftBook to provide today. And to integrate Tor as necessary.
Agreed, but I hope we take super small baby steps and get what the above accomplishes right the first time, without complicating the use cases just yet.
On this subject—
I'm going to be that guy this time because I haven't yet seen a good solution to the problem, and I ask with sincerity:
What happens when somebody decides they're going to start an illicit website/content of some kind? (let's avoid semantics because I'd rather not go down that path)
Everybody then has a forced part in hosting it, and there are just some subjects that do not have any redeeming aspects.
Is there a resolution built in for those scenarios?
The mistake I'd made was thinking it worked more or less like Bitcoin, where every node holds the whole chain, when it really works more like torrents, in that you download the set of files and then reseed. Thanks for pointing out the difference.
Because of my error I guess the question changes. It sounds like it may be even more damning for those who regularly view illicit content, but at the same time making it harder to shut down caches of the same illicit content. I recognize this is already a problem with torrents, but then my question becomes how do we not repeat the mistake?
Is there a mode of resolution outside chasing down every single bearer of the node to try and remove the content?
Just because something CAN be used to commit a crime (say, the expectation of privacy in your own home), we don’t eliminate it to avoid that edge case.
Not all technology needs a law enforcement backdoor, in my view.
The first analog that is at all comparable is hypertext and the web interface to the internet. This is much the same except it's harder to clean up.
Who decides what is illicit? Like I said, I'm not trying to start a semantics discussion. I'm not talking about morally debatable subjects. There are some blatantly disgusting things out there that have ruinous effects on people who've made no decision to take part. A platform like IPFS allows that to gain a new root, and deeper this time.
Hence my posed question.
Without determining who decides what is illicit and what isn't, talking about moderation is meaningless.
My expectation is that moderation will be done by law enforcement, as is the case with other protocols like bittorrent.
The mistake to avoid here would be trying to design censorship into an otherwise open and decentralized content distribution system.
Don't try to shut down the content. If someone was harmed in the making of the content, then use it as evidence to deal with the root of the issue; otherwise it's none of your concern.
I think you're missing the larger picture, though. It's never isolated to singular instances. If the content is decentralized, then so can the problem be.
You wouldn't say smallpox shouldn't be completely eradicated because "it never hurt me, so it's none of my concern."
Your solution is reactive. The problem with reactive solutions is that they don't solve the problem, they can only continually try to keep up with it.
Your comment inspires another question: what is everybody so concerned about having censored? It's not exactly a problem [in North America or online].
Sure, but the content is not the problem, and eliminating the content does not eliminate the problem. The content is merely evidence of a problem. It may even help you locate the actual parties responsible. Dealing with the root cause would be harder if the content were more difficult to find.
What I said was that "it's none of your concern" unless "someone was harmed in the making of the content". Smallpox actually hurts people. Protecting people from smallpox is a worthwhile concern. We do that mainly by making people resistant to smallpox so that it doesn't get a chance to infect people and spread. Censorship is not like vaccination, much less eradicating the smallpox virus; it's more like eliminating all knowledge of how smallpox works. It doesn't address the problem, makes actual solutions harder, and has far-ranging side effects.
> Your solution is reactive.
Taking down content you've identified subjectively as "undesirable" is a reactive response. Worse, it's reacting to and addressing only the surface symptoms, which means it isn't a solution to the real problem, that someone was harmed to create the content. You aren't going to stop people from being harmed in the real world by redacting the evidence.
On the other hand, designing a system to be resistant to censorship from the outset is a proactive approach to dealing with the threat of censorship.
> what is everybody so concerned about having censored? It's not exactly a problem [in North America or online].
You have cause and effect reversed. It is (mostly) not a problem in North America or online because everybody is so concerned about it. It is a problem in places where censorship is treated as a normal and routine practice—for example, China.
I get the feeling we're discussing this on differing levels of severity, here.
I'm not talking about dissenting political opinions or pseudoscience or conspiracy theories or counter-culture or drugs or anything silly like that, but things more impactful and corrosive to humanity.
The persistence of certain content that isn't proliferated as a study has [arguably] the effect of normalizing itself, and whatever it resulted from.
I've been avoiding semantics, but let's go with a relatively specific example:
A person was raped. It was recorded. The gruesome story was detailed as entertainment for a sick (yeah, letting my bias out here) group of fans and interested or curious parties. The guilty party is found and apprehended, sentenced, and punished according to local law. Justice was served according to society's mandate. We consider this good.
Now for the persistent content. It's now in the hands of hundreds to tens or hundreds of thousands of people globally. Curious parties become interested parties, interested parties become fans, fans become culprits.
And all the while the victims get to live, not only with the experience, but with the knowledge that there are swathes of people enjoying the repeat viewing of the situation and the knowledge that it will never go away.
It's a big 'what if'. I could say the same about your position that if we don't gain total and open ability to publish whatever information whenever and have it persist forever that we will succumb to evil, oppressive forces. It's speculative.
I don't want to come across as if I'm coming down on the IPFS platform, or the interface Stavros ingeniously developed for it. Quite the opposite. I think that not enough thought goes into how powerful the platform really has the potential of being.
Software and network developers have a lot of inherent power to move large ideas with relatively few resources, and it should be recognized more often. And it should be discussed, honestly, whether one implementation or another is the best we can do before releasing it on society.
I'm of the opinion that ethical questions should be asked of engineers, and that empathy needs a larger role in the process of [not necessarily research, invention and development, but] implementation.
"We must always take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented."
Feynman on a Buddhist saying:
“To every man is given the key to the gates of heaven. The same key opens the gates of hell.
And so it is with science.”
If you don't want to see bad things on IPFS, don't access them.
If you wish to force other people to not see them either, the process is similar to anything else on the Internet.
There's no need for moral panic.
I was posing the question as to how the community could deal with something horrific. At this point in time the power is in the hands of the community developing the technology. Surely there were lessons to learn from the implementation of the web.
I posed it again because each time there is no discussion of potential improvements, only responses crying censorship with pseudo-Orwellian lingo.
There's a line someplace. Nobody is preemptively stopped from producing snuff, but we don't support it by allowing it to be stocked at the local library in the name of anti-censorship.
What am I missing?
In the typical deployment where nodes refer to their own local IPFS services, it's often the case that the primary IPFS pin serves the majority of traffic.
If we're going to call that "distributed" then so are normal webpages with random caching options.
Which I guess makes you right.
Let me revise my opinion to say that if there was native browser support for IPFS then we could call it distributed like bittorrent.
When the most charitable critique of your ICO is, "We don't think this is a scam because the people involved aren't famous scammers but it certainly looks scammy" you've failed to solve your problem.
Going back to the IPFS/bittorrent analogy, I don't understand how the bittorrent hosting problem is any more solved than the IPFS hosting problem.
I bring this up because of my original point that I think IPFS counts as distributed, because bittorrent counts as distributed. So far you haven't really convinced me that they're all that different.
In short, IPFS is basically an alternative content addressed routing system that tends to have some slight endpoint caching.
Bittorrent at least heavily penalizes nodes that don't play ball rather quickly. So it rewards nodes that disseminate info as they acquire it, and make it trivial for storage to participate.
I don't see IPFS as solving distributed storage problems at all. Neither do the creators, which is why they started a related project called FileCoin to help with that. Too bad about that.
If you think they're not "all that different" then I refer you to the white papers. I'm disinclined to play a longer adversarial lecture game.
With regards to Bittorrent... I guess where you and I differ is that I think Bittorrent is just as broke as IPFS when it comes down to it. Or maybe the same type of broke, but less so than IPFS. While the seeding incentives certainly help bittorrent, I don't think they fundamentally make it better than IPFS. I know that when I've torrented things, I've always seeded less than I leeched. Long-lived torrents are either a) pretty popular or b) have some people intentionally keeping them alive... which is what I would expect in IPFS too.
I will have to go read those white-papers. If I'm reading your comment right, it sounds like you consider the broken-ness of IPFS to be so much worse than that of Bittorrent that you think it deserves its own category.
Or perhaps that the expectations of IPFS compared to its own broken-ness put it in its own category. I think of Bittorrent and IPFS as distributed distribution, not storage. And while Bittorrent people see Bittorrent as distributed distribution (caching), IPFS markets itself as distributed storage. Which makes sense given the use cases IPFS seems to want to fill: it wants to replace static http stuff, whereas bittorrent seems to serve smaller, more canonical files, stuff that's already always identical independent of the source, like ISOs or other things that people were generally sending each other directly anyways.
It took me until your last comment for all of the above thoughts to congeal. And I'm sorry it came off as adversarial. Thanks for taking the time to respond.
With all due respect...
> I will have to go read those white-papers.
In practice, it does mean you need an origin available all the time, and it's either you providing it, or some other service you pay to do it. I don't think there's any magic.
Otherwise, you can use https://www.eternum.io/ (which is a service we also made) to basically seed your site for a few cents a month.
If you want to put DNS in front of it to mask the long IPFS content hashes, you can do that too; there's a good overview of the process in their examples:
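For what it's worth, the DNS side of this is just a TXT record following IPFS's DNSLink convention; the domain, TTL, and hash below are placeholders:

```
;; DNSLink: point a domain at IPFS content via a TXT record.
;; Replace the hash whenever the published content changes.
_dnslink.mydomain.com.  300  IN  TXT  "dnslink=/ipfs/<your-content-hash>"
```

With that record in place, gateways and ipfs-aware clients can resolve `/ipns/mydomain.com` to the current content hash.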
(xmpp might be a special case here - I'm uncertain of the underlying transport for jabber; could be that js+http(s) is enough to simulate an actual jabber client?)
But since most browsers don't support IPFS natively yet, there are IPFS gateways that allow you to access IPFS sites through them.
I'm building a small web app and want to use IPFS for image storage. In testing I was able to create the files but after a few months of not running the IPFS daemon the images are lost as they are no longer pinned.
Edit: Just saw https://www.eternum.io/ in other comments. Perfect!
I've added some files to my Hearth folder, the menu bar icon changed to the "syncing" state and then seems to have got stuck like that. The finder integration doesn't work either, so I can't copy the Hearth link.
How many files have you tried to add and how big were they?
If that doesn't stop after a while, consider restarting the Hearth app.
Regarding the Finder integration: if neither the context menu item nor the Hearth toolbar button is shown, make sure the Hearth Finder extension is enabled (visit System Preferences > Extensions > Finder and enable Hearth), then restart Finder by option+right-clicking the Finder icon in your Dock and selecting 'Relaunch'.
Hope that helps!
ETA later this year
Your link redirects to twitter, and I can't find anything about it following links to the https://oswg.oftn.org/ or on your own website. So it looks completely non-existent, which makes me skeptical of "soon"