Show HN: Ipfs-dropzone, a subclass of Dropzone.js that publishes to IPFS (github.com)
109 points by fiatjaf 6 months ago | 80 comments



> I don't know how to publish this package in a way all JS transpilers and bundlers out there can understand. Please help me.

Don't do this:

    module.exports = IPFSDropzone
Do this (and change the documentation accordingly):

    module.exports = {IPFSDropzone}
or, to go full ES6, do this:

    export {IPFSDropzone};
Then add babel-cli and babel-preset-env as devDependencies and add this script to package.json:

    "scripts": {
      "prepublish": "node node_modules/babel-cli/bin/babel *.js -d build/"
    },
and change main to "build/index.js".
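
Taken together, the relevant parts of package.json might look something like this (a sketch; the exact versions and the "babel" preset configuration are my assumptions):

    {
      "main": "build/index.js",
      "scripts": {
        "prepublish": "node node_modules/babel-cli/bin/babel *.js -d build/"
      },
      "devDependencies": {
        "babel-cli": "^6.26.0",
        "babel-preset-env": "^1.6.0"
      },
      "babel": {
        "presets": ["env"]
      }
    }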

Run "npm run prepublish" to test.

Edit: Now that I think of it, compiling with babel is probably unnecessary. But it doesn't hurt and can help old setups.


> Don't do this:

> module.exports = IPFSDropzone

> Do this (and change documentation accordingly)

> module.exports = {IPFSDropzone}

Why? Exporting a single class for this kind of lib is preferable. I'd do:

    export default IPFSDropzone;
See react-dropzone, which does this [1]. (For that matter, the author's questions around project setup would probably be answered by looking at react-dropzone's setup, which looks pretty good.)

Also with npm scripts, local deps that expose executables are added to your PATH so you can call them directly, e.g.

      "prepublish": "babel *.js -d build/"
[1] https://github.com/react-dropzone/react-dropzone/blob/master...


Because having a mix of libraries/modules, some of which export defaults and some of which don't, makes it cumbersome to require them here and there: I have to remember which was exported as a default and which had the class inside an object. It also makes it more visually obvious which variables are modules and which are classes/functions. For my projects I avoid default exports even when a module only has one class (which happens frequently).

Additionally, ES6 default exports don't mix well with CommonJS; sometimes I have to add require('foo').default for no apparent reason.
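
For example, a package written with export default and compiled by Babel ends up exposing the class on exports.default, so a CommonJS consumer has to write something like this (module and class names are hypothetical):

    // 'some-esm-lib' does `export default SomeClass`; after Babel compiles it,
    // the class lives on `exports.default` rather than on `module.exports` itself.
    const SomeClass = require('some-esm-lib').default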

> Also with npm scripts, local deps that expose executables are added to your PATH

I thought it didn't work reliably on Windows or Cygwin or something, but I just tried it and it works everywhere. Good to know! The only gotcha is that ";" and "&&" must have spaces around them.


Thank you. But here are my concerns:

* If I go ES6 I'll require everyone importing it to also use an ES6 module bundler -- which means Babel only, or Rollup, which never works.

* If I precompile and expect non-Babel people to use the precompiled version, then these people will have duplicated dependencies everywhere.

Not complaining, but all this compatibility mess seems hard to me. I use Browserify!


I see. Then don't go "full ES6", but I still prefer exporting an object containing the class, for the reasons I explained in a comment above.

But I do see ES6 modules in the wild, some of which I'm using with other languages. So I guess it's fine to leave it as-is. Or, if you want to ensure maximum compatibility, use Babel as I suggested but add the sources to .npmignore so they're not duplicated in the published package.


Why not send a PR?


For other people in the comments that may have a similar problem.


"The problem with doing something right the first time is nobody appreciates how hard it was."


teach a man to fish


Give a man a fire, he's warm for a day. Set a man on fire, and he's warm for the rest of his life.


This is brilliant.


Terry Pratchett, RIP.


... the PR would do that as well


Not necessarily. A PR on GitHub can be merged with the push of a button, so it's more likely to be looked at only cursorily before getting merged and then forgotten, whereas when the author needs to retype it they will have a stronger memory of it and will remember where to go back and look next time.


One thing I've failed to find out about IPFS: who pays for hosting? The user? Or is it donated by some peers?


This is the #1 thing I see people confused about with IPFS. Basically, there are three ways for content to become available:

1. You add the files to your own node. This is how content gets added, but obviously it only lasts as long as your node is connected to the Internet, just like an ordinary HTTP server.

2. Someone views your content using their node. This causes their node to cache the content temporarily (IIRC for 30 minutes by default) and publish it to other nodes. In theory, if your content got at least one view every half hour it could live on in the users' caches forever.

3. Someone tells their node to pin your content. The node will then keep it permanently (until they unpin it) and serve it whenever it's connected to the Internet. Generally they would do this if they believe it's valuable--either because they want to keep it themselves, or as a public service to make it available to others (for example, pinning the Turkish Wikipedia to help evade censorship).

There are also several pinning services, which you can pay to pin your content on their nodes, in much the same way as you'd pay a hosting provider to serve your content.
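
As a rough sketch, (1) and (3) look like this with the go-ipfs command line (the file name and hash are placeholders):

    # 1. Add a file to your own node; this prints its content hash
    ipfs add myfile.txt

    # 3. Pin someone else's content so your node keeps it and serves it
    ipfs pin add <hash>

    # See what your node is currently pinning
    ipfs pin ls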


In this case, it appears the node is in your browser, right? Does it go away when you close your tab? In which case, will your link survive for 30 minutes, or is it just gone?


It looks like ipfs-dropzone is running an in-browser node, which would go away as soon as you close the tab (since it's just a web page like any other). Content on IPFS stays available for as long as there's at least one node which has it and is connected to the network, so how long the link survives depends on whether other nodes have it and how long they will keep it. If nothing else has downloaded the content then it will go away; on the other hand, if someone has pinned it then it will stay forever (or until they unpin it or go away).

The 30 minute interval would apply to the IPFS gateway that someone else would use to download your content (generally a standalone process though I believe there are efforts to make a browser with an embedded gateway). IPFS nodes automatically publish their cache, effectively making an autoscaling distributed CDN. However they will only cache content that they have gotten on behalf of their user; they won't go out and pull random content from the network to host it.


Man, I typed a whole reply saying basically the same thing but you beat me to it! To expand a bit further, in this demo app, the "Save" button calls an API of a pinning service (Eternum), which pins the file on an external node so it will stay there as long as it's being paid for. Right now most pinning services require that you pay them to host your file, but the plan for IPFS is to use something like Filecoin to incentivize pinning. With Filecoin, instead of paying a single service to host your content, you would put out an order to the Filecoin network which says "anyone who hosts my file gets $x per GB per second", along with some settings (e.g. I want minimum 5x copies of my file on the network). Then, anyone hosting a Filecoin node will automatically fulfill requests by hosting content in exchange for a cryptocurrency payout. So, instead of a single hosting service, you are paying the entire network to automatically host your file in many places at once.


Wait, I thought IPFS was P2P. What's this with the payment stuff?


To compare it with torrenting: the only incentive to "seed" a torrent is usually just goodwill, or building a reputation to get into invite-only sites.

We lose the latter incentive with IPFS, so Filecoin is proposed as the new incentive to give up storage space for the P2P network.

You can still put your content up on IPFS for free; it's just that only your IPFS node will be hosting the content, so it won't be truly decentralized. You're the only "seeder", so to speak.


Thanks. I went to the Filecoin site to see more, but it said the sale has ended and not to fall for scam sites. How do I obtain Filecoin? When will I be able to earn Filecoin from hosting content for others?

Is hosting content the way that it is mined? Or is mining it something else?


That, I'm not 100% sure about. Filecoin had its ICO; now it just needs to be implemented with IPFS.

Once implemented, you should be able to buy Filecoin on market exchanges. To earn it, Filecoin will use two "proofs of storage": "proof-of-replication", wherein your IPFS node proves it is replicating data, and "proof-of-spacetime", wherein your IPFS node proves it has stored said data for a certain amount of time.

You can read more about it in their whitepaper: https://filecoin.io/filecoin.pdf


Here’s the latest Filecoin update: https://filecoin.io/blog/update-2017-q4/


It depends on the setup. If you're running go-ipfs, the node is a program that runs on your computer and becomes available in your browser, either through its IP address (yours, optionally public with proper network setup) or through the ipfs.io domain, which accesses the entire network and can pull from any node that's connected, including your own.

That's just the Go implementation, though; there's also an implementation written entirely in JavaScript, and it would be more than plausible to bake the service into a browser bundle so it works like you describe. Personally, however, I believe there's strength in separation: the acts of hosting an IPFS gateway and accessing IPFS content through a gateway are distinct.
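
For reference, a minimal sketch of running the Go implementation locally (8080 is go-ipfs's default gateway port; the hash is a placeholder):

    # start a local node; it exposes an HTTP gateway on localhost
    ipfs daemon

    # fetch content through your own gateway instead of ipfs.io
    curl http://localhost:8080/ipfs/<hash>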


Is there an nginx/Apache equivalent that can be used for hosting one's own content?

From what I've read I could just run the IPFS daemon on my VPS/hosting server, but is there a user-friendly way where I don't have to manually add every file every time? How easy is it to host a web page with it?

How well does IPFS handle NAT? Could I host my own pinning server on a Raspberry Pi on my home network?


I wish there was an easy way to just point it at a folder, but at the moment it seems that you have to manually handle adding the content. If anyone knows of such a solution I'd love to hear about it.

The IPFS daemon handles NAT in much the same way that BitTorrent does: NATed peers will dial out to un-NATed ones to establish a connection. I'm not sure if it also handles UPnP and the like to automatically forward ports if possible.


You could probably write a script to add content from a folder, maybe as part of the deploy process?

Either way, that was the most glaring shortcoming I saw in my brief foray into IPFS. There's lots of talk about pinning services and paying, but not much about hosting your own pinning service, which was surprising to me since the idea of IPFS is to be decentralized, especially when most of the traditional bandwidth concerns of hosting your own content are mitigated by the P2P aspects.


You can add a folder.


But you still have to do so, either by hand or script.


The `ipfs add` command has a -r recursive flag for adding folders.


Yes, and this is the command that you'd use - in fact, running it from cron every minute is probably a decent solution - but my point is that you have to run something to do so; there isn't an "ipfs add --watch" or similar. This isn't even bad; I prefer separated concerns.
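
A minimal sketch of that cron approach, assuming a running go-ipfs daemon (the folder path is a placeholder); since content is addressed by hash, re-adding unchanged files simply yields the same hashes:

    # crontab entry: re-add the folder every minute and discard the output
    * * * * * ipfs add -r -q /var/www/site > /dev/null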


Any recommended pinning services? It'd be swank if all CDNs supported this functionality, but in the interim...


I can recommend https://www.eternum.io, because it's the best (also I built it, so I'd know).


Win win, as I'm a huge fan of your work.


<3


Is there a group pinning option for individual files, in case $0.15 per GB per month is too expensive...


How do you mean? The price is for your whole account, not individual files. Plus, too expensive compared to what? :P


Say ten people wanted to pin the same gigabyte file; they could each pay $0.015 instead of $0.15.


Oh, right. I don't see how this could realistically work for a service like Eternum, since there's no way to know how much data the IPFS node is actually using for a hash at any time.

Can these ten people not just share a single Eternum account?


How do others here feel about IPFS compared to systems that charge tokens for similar functionality?

Personally I believe IPFS to be far superior. BitTorrent has already shown that P2P works great. We don't need tokens for this.


I think it makes the most sense to put tokens as a separate layer on top of the raw storage/communication layer. (If I understand correctly, this is what Filecoin is intended to do.) The same content may be stored and served by both donated and charged-for nodes cooperating.

For example, imagine someone has a blog, for which they've paid the network some number of tokens to host. I find one of their articles to be particularly useful, so I pin it on my own node (perhaps with a bandwidth limit if I don't want to get flooded should it become popular) and thus donate a small amount of hosting to the owner of the blog.


What prevents someone from changing that 30 minutes into 0 minutes? In other words, what do they gain by not doing this?


They only lose the caching effect.


How can ipfs thrive then, without an incentive to seed?


Well, even if nobody else seeds you're still no worse off than you would be with just an HTTP server. At present the main incentive is generosity, which works pretty well for a lot of things; for example a lot of people started pinning the Turkish Wikipedia mirror when it was announced purely as a stand against censorship. In addition, you can also pay a pinning service. In the longer run the IPFS developers are also working on Filecoin, which is a cryptocurrency that adds a payments layer on top of IPFS so you can pay to have someone else serve your content.


Ethical/moral interest in the content, for one (wolfgang42 gave an example of that). That's a pretty small market though.

Other content could be academic, or an alternative channel for information akin to the old Usenet (pre-binaries) and things like Gopher and BBSes.

The major demand for a system like this, I'd wager, is pirated content. That is, if it's better than the current models. The two big models currently being used for piracy are:

1) Usenet/NZB

* Centralized. Only a handful of large networks with long retention; the rest are resellers.

* Usenet servers adhere to DMCA; indexers generally don't

* Pseudonymous, potentially anonymous

* Doesn't require upload; provides quick download speeds

* Based on open source and open standards

* Pretty much requires payment for payserver & indexer

* A little rougher on resources: PAR2 repair and quick downloading/unpacking of large files have an I/O impact, though nothing huge on recent desktop systems.

* Requires a bit more software for everything to work, though nowadays there are all-in-one packages doing the dirty work for you.

2) BitTorrent

* Decentralized.

* Not anonymous. Requires at least a VPN, whose providers likely log even though they claim they don't.

* Free as in beer

* Requires upload

* Based on open source and open standards

* Self-hosted. Some indexers adhere to DMCA; generally they don't

* Private indexers ("trackers") have a reputation system requiring better than a 1:1 download/upload ratio.

* Doesn't work at all with DS-Lite (native IPv6 + IPv4 behind NAT).

* BitTorrent, like Usenet, also has abstraction software (full stack) to make life easier.

BitTorrent is far better known among the general public, but I regard it as a poor man's Usenet. If it ain't on Highwinds, it ain't anywhere (of course that's not completely true given DMCA takedowns, but still, they have 3,300 days of retention).

With Usenet you pay for a subscription to a payserver and indexer; with BitTorrent you pay by uploading and with a VPN.

Now, my question is, where does IPFS belong in this list? Can it compete with these 2 technologies?


This hasn't launched yet, but IPFS is also planning to introduce its cryptocurrency, Filecoin, to offer cheap hosting. People will then host your site (i.e. pin it on their IPFS nodes) for Filecoin, which should be cheaper than traditional file storage services.


You host anything you want to host. If nobody wants to host something it disappears.


When you put it like that, it doesn't sound much different than how website hosting works today.


The difference is that if you decide to stop hosting, it doesn't just disappear. If there is anyone else in the world who wants it, it stays.

It also means you don't need to trust the server to send you the correct content, since everything is automatically checked against its content hash.

It also means it's automatically a CDN - you don't need to be able to serve every single person who wants to look at your content, because they'll all be serving it to each other.


I'm having a hard time imagining this at scale.

I have a few thousand web pages on my site, blog posts, etc. Most of it won't be seen in any given day. So I have to pin it as it's realistically unlikely anyone else will. Not much different from today. This probably describes 90% of web content out there, where a single node is all that is keeping that content alive.

Now imagine something slightly more popular than my website. Maybe a particular Medium article. If it's two years old, it's unlikely to be getting any hits at this point, though it might have been popular at one time. So again, Medium has to pin it (or maybe the author would). Popular stuff would stay floating around while it's being viewed.

It's almost a dynamic CDN/torrent which could help with bandwidth, but I don't see any advantage for persistence in most cases.


I think the advantage is that it's easier for _anyone_ to be able to keep a site/file alive without having to implement an archive.org style of rewriting all the URLs/links. If you want, you could pin that Medium article you talk about, and now even if the Medium node(s) decide it's no longer worth pinning, you've still got it pinned, so it's accessible to anyone at the expected "location".

Similarly, this makes take-downs harder. It's no longer one node/machine hosting a file, but as many nodes as find it valuable.

If users never pinned anything, you'd be correct, it would be basically the same as the current internet, with one organization maintaining everything. But IPFS gives you the ability to change that.


Thank you for that perspective - that makes sense. I hope there is an easy way to pin an entire site rather than page by page so that, for example, a full documentation set could be kept.


Yes, content on IPFS is represented as a Merkle tree, so if you pin the top of the tree it will traverse down and recursively pin all of the subtrees as well. (You can, of course, pin a subtree explicitly if there's only a certain part you want.)
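
With go-ipfs that looks like this (the hashes are placeholders):

    # pinning the root hash recursively pins the whole tree (the default)
    ipfs pin add <site-root-hash>

    # or pin only the subtree you care about
    ipfs pin add <docs-subtree-hash>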


Maybe you'll find an article interesting and want to pin it yourself. Maybe you yourself will link to it and want to pin it for that purpose. Maybe someone else will link to a very influential article you wrote and that will make it stay.

Overall, it's a superset of the availability of a single server, plus the page always stays at the same "place" and never moves.


>Maybe you yourself will link to it and want to pin it for that purpose.

This is an important point. If you pin all the content you link to, the links on your site will never break so long as your node stays up.


Which essentially means your node is now a web server with all the responsibilities that entails (keeping backups, staying connected, etc). It feels fragile - I can't imagine pinning everything I ever read (or think might possibly be important some day).

What about all the .EXEs (games, etc) and SDKs I download? I'll bet over the past 10 years I've downloaded and viewed a few hundred GBs (or more) of content.

If I ever want to see anything again, I have to pin (cache it locally). I could do _that_ (cache locally) with today's web.

I like the idea. But just don't see it working well at scale.


You need to review the technology if you think IPFS is more fragile and needs data nodes to always be up.

You need only one data node up, and you can have as many as you want.

If nobody wants to keep the old EXEs, they'll be lost forever. IPFS was never meant to keep everything forever. It's meant to guarantee that the file you're viewing (if it still exists) is the same one that was published under this hash, with no regard to the source of the file: it could be shared with you by your neighbor or by the original uploader. And, if you see value in the content, you can also share it with others by pinning it.

It's essentially BitTorrent as a filesystem: as long as nodes exist, the files will be available.


We built a little tool for organizations to share their "pin lists", so hackerspaces can back up each other's content: https://github.com/c-base/ipfs-ringpin


The difference is that if you pay someone to host this site for you, you are paying for bandwidth, DNS registration, and disk space. One DDoS attack could mean the end of your project.

With this kind of system the idea is that the cost difference between unpopular content and popular content isn't so pronounced. You might have to pay a little more up front, but there are fewer surprises in your bill.


> It also means you don't need to trust the server to send you the correct content, since everything is automatically checked against its content hash.

This is the confusing part to me. How does updating content work? Let's say I want to visit the jstanley blog. Does this mean you can only ever publish once, because adding an entry would change the content hash? Does everyone have to contact you for the new info hash? Does IPFS work with existing DNS?


On top of content-addressed IPFS is an IPNS layer, which maps addresses to content hashes. The address of a piece of content is a public key, which is used to sign the content hash; this means only nodes in possession of the private key can change where the IPNS address points. In addition, IPFS gateways also support a special TXT record in DNS (a "dnslink"), which can point either directly to a hash or to an IPNS address which points to a hash. (This is how you can associate a domain name with content served by IPFS. If you also point the A record of the domain to a public gateway, the gateway will automatically look up the dnslink record and return the content when someone retrieves it over HTTP.)

At the moment, go-ipfs (the reference implementation) only supports pinning content hashes, so if you have someone else pinning your content (like a pinning service) you have to notify them of the change. There are plans to add support for mutable pins in the future.

Put in the vocabulary of git: any given commit is referenced by hash[1] and is therefore immutable; IPNS addresses are like references (branches, etc) which can be updated at any time to point to a different commit.

[1]: Since the content is a Merkle tree, if they've only changed a few files, most of the pinned content will retain the same hash.
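
A sketch of those two mechanisms with go-ipfs (the hashes, peer ID, and domain are placeholders):

    # publish a content hash under your node's IPNS name (signed with its key)
    ipfs name publish /ipfs/<content-hash>

    # resolve an IPNS name back to the content hash it currently points to
    ipfs name resolve /ipns/<peer-id>

    # DNSLink TXT record associating a domain name with IPFS content
    _dnslink.example.com.  TXT  "dnslink=/ipfs/<content-hash>"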


That's pretty neat. Do you know if there is protection against rollback?


IPNS records have a fairly short expiry, so you have to keep republishing them. This also keeps the DHT from getting overly full of records that nobody is using anymore, but does mean that a node with the private key must be kept online. They're working on improving this situation; here's a discussion on the topic: https://discuss.ipfs.io/t/confusion-about-ipns/1414/4


I am still learning about IPFS but I participated as a minor party in the design of one of its intellectual predecessors, Freenet.

IPFS is not just going to act as an ad hoc CDN; it's also going to work as a regional CDN.

If someone does a report on your funny video on WGN-TV and ten thousand Chicagoans hit your tiny little website in the next ten minutes, your server will get a few hits because nobody else has it yet. After that the network effect kicks in, and eventually everyone is loading it from an IPFS server somewhere in Chicago. When they start 'retweeting' it and everyone is looking for it, it'll end up scattered all over the country.

Until people lose interest, and then the copies will begin to disappear as the next fad automatically pushes it out.


The content addressing and peer-to-peer architecture are the main differences.


Similar to HTTP: the ones hosting the content pay. You'll have to access the content or explicitly pin it in order to share it with other peers.


So in the case of the demo, is their server hosting the content for 4 days?


They seem to be using https://www.eternum.io to "pin" the file (keep a copy of it alive). Since Eternum is a paid service, they might unpin it after a while, but it's up to them. If they didn't do that, the file might only be available as long as your browser is open.


In this case the file is served by a JavaScript implementation of IPFS running inside the browser, over WebSockets. I'm not sure if the server is pinning the files as well.


Yes.


It's just like BitTorrent.


You can't have dug very deep into IPFS without knowing that, because it's one of the fundamental aspects of the design.

Try reading their documentation. https://ipfs.io/docs/


There was someone (victorbjelkholm) talking about the zoom-out limits of https://filemap.xyz/ on IRC and I couldn't reply there:

I've arbitrarily imposed these limits because the purpose of the app is not for casual visitors to wander around the world browsing everybody's files, they're supposed to go to specific addresses and browse only their files there.

This measure will not protect anyone absolutely, since an "attacker" can easily read the entire database and figure out where all the files are, but it protects users from 99% of casual visitors.


I have to say that Dropzone is super cool and this project makes a lot of sense. We also use Dropzone.js to store files and their hash values at VisiFile. A demo is here:

http://139.162.228.5/

The code is here: https://github.com/zubairq/visifile/blob/master/public/index...


Really cool demo. One suggestion: make the dropzone more obvious; I didn't realize you had to scroll down to find it.


Thank you. I'll try to think of a better layout.


Wondering if anybody would be up for adding this to uppy.io


The code is amazingly simple: https://github.com/fiatjaf/ipfs-dropzone/blob/8d7d8808ba578a...

If someone knows where to plug into Uppy's upload function then it should be really easy.



