It sounds like your marketing will need to explain:
1. How updates work, and how you guarantee staying universes away from NSFW/illegal content -- answered separately, because nobody wants to find out what IPFS is infamous for as part of discovering your service. (Both explanations need to work at two levels: one for techies and one for the bosses controlling the purse strings.) This may be possible to pull in quickly from existing CC-licensed documentation.
2. How users can link to content through multiple redundant WWW gateways and other services, and how to pay (you and others) for more reliability. Reliable reachability is absolutely key here, and should be built on multiple existing third parties that are already used as status pages. For example: pay extra for a white-labeled auto-retweeting Twitter bot.
There's an opportunity to be the "enterprise IPFS contact" here: explain, soup to nuts, how to push status updates from anywhere (set up and simplified through your service, but without requiring it to be reachable) with any of multiple YubiKey/hardware auth tokens.
Consider a completely separate marketing push/landing page that doesn't even mention IPFS until potential customers ask how it works. IPFS is a buzzword here but commercial customers probably count it as a negative (analogy: ICO on HN). Focus on the unique features IPFS offers and write up how those features solve the status page problem!
My perception was that it's still much too easy to trace the origin of content on IPFS, and that it's therefore not suitable for illegal things.
I will have to do a bit more research to see how the default clients handle caching popular, unrequested content.
I think there must be something wrong with the way IPFS presents itself, because I see this misconception (that just running IPFS causes you to host anything) often.
> you only host [...] things that you have recently requested
Thank you for taking the time to correct my misunderstanding, but is this not restating what I wrote?
Or are you rejecting the other portion of the comment discussing caching popular content (never requested/received)?
I think the automatic re-sharing of recently requested content is only for a short time period (elsewhere someone mentions 30 minutes). Probably not anything anyone should rely much on; it just sounds like a bonus to maybe soften the blow a little bit if you get a lot of activity suddenly.
For anonymous content, you'd probably have to handle the encryption/decryption yourself and use IPFS purely as the distribution layer.
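A minimal sketch of that split, assuming GPG for the crypto side and the stock ipfs CLI (QmHASH is a placeholder for whatever hash `ipfs add` prints):

    # encrypt locally, so IPFS only ever carries ciphertext
    gpg --symmetric --cipher-algo AES256 --output page.html.gpg page.html
    ipfs add page.html.gpg

    # recipients fetch by hash and decrypt with the out-of-band passphrase
    ipfs cat QmHASH > page.html.gpg
    gpg --decrypt page.html.gpg > page.html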
Changing a single byte changes the entire hash, so blacklists aren't a great solution if censorship is what you're after.
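This is easy to see with two files that differ by a single byte; the hashes in the comments are placeholders, not real output, but the point is only that they'd be completely unrelated:

    printf 'all systems up'  > a.txt
    printf 'all systems up!' > b.txt
    ipfs add a.txt    # added Qm...aaaa a.txt
    ipfs add b.txt    # added Qm...bbbb b.txt -- an entirely different hash

So a blacklist of hashes only ever catches exact copies.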
However, I am hard pressed to find any sizable HN discussion that doesn't mention it. It's basically a matter of worrying about what shows up as downsides when Googling, since most potential customers are starting from zero.
I think another contrast is Heroku vs AWS.
If the graph contained at least one link between the outer nodes other than through the center one, the difference would be clearer.
I think this famous graph (from Paul Baran https://www.rand.org/about/history/baran.html) does a good job of showing the difference: https://ipfs.io/ipfs/QmdYtMUTnz6vaQNRUAzhh8YiSSGzBCyQrCXE5Ag...
That has been my biggest drawback with IPFS so far: there isn't really an easy way to get other nodes to pin the correct content without passing around big nasty hashes (e.g., QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG) by hand.
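For reference, the by-hand flow in question, using the hash above; these are stock go-ipfs commands:

    # on each node that should keep the content alive
    ipfs pin add QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
    ipfs pin ls --type=recursive    # confirm the pin stuck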
Beaker is a web browser for Dat sites. The 0.8 release is supposed to happen sometime in the next month, I think. The same team set up hashbase.io to make it trivial to create a short name for a Dat site, and so that you can have Hashbase act as a fallback superpeer/permaseed for your content.
Additionally, this project utilizes IPNS, which clears its DHT of entries that haven't been refreshed within the past 24 hours. So you DEFINITELY need at least one node online, pretty much always, for this content to load.
What about not depending on centralized infrastructure to deploy your status page and keep it alive at all times?
This project deploys your status page on decentralized infrastructure: IPFS (see ref-2). After installing it, you will be running a status page service on top of a local IPFS node, so you'll be able to publish your status pages to IPFS while being part of the network.
I thought this use case fits perfectly in a decentralized environment.
You can deploy this service on a VPS for a quarter of the price you pay your current status page service provider.
See an example of a status page deployed using D StatusPage:
What are your thoughts?
This software is still in an alpha state with basic status page functionality; feel free to request a feature or report any issue you have:
- ref-1: https://blog.statuspage.io/a-birds-eye-view-of-the-amazon-s3...
- ref-2: https://ipfs.io
The software will be distributed for free and open source under MIT license.
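For anyone wondering what the underlying flow looks like on a fresh VPS, a rough sketch using plain go-ipfs commands (not D StatusPage's own tooling, which may wrap these differently; QmROOT is a placeholder for the hash `ipfs add` prints):

    ipfs init
    ipfs daemon &                    # keep the node online
    ipfs add -r status-site/         # prints the root hash of the page
    ipfs name publish /ipfs/QmROOT   # point your IPNS name at it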
IPFS nodes don't really rehost content for any substantial period of time (especially the gateway) so you're still stuck with some major problems:
1. You're still hosting off your IPFS node. This isn't worse, but it isn't better. You need to have a node and it needs to have connectivity.
2. IPNS resolution is glacial, and it's a known issue with no current resolution. So anyone trying to resolve the current version of your IPFS-hosted status page through a gateway using IPNS can often end up waiting seconds (sometimes even tens of seconds) for name resolution, giving the impression of a downed status page.
Sadly, IPFS is more of a decentralized presentation and perhaps caching framework. It doesn't really achieve the goal of decentralized storage until there is some reliable way to persist the data on the network beyond immediate use. Pinning services exist, but most seem quite expensive to me.
2. In the meantime, you can set up a script to update your DNS TXT record to point to the most recent IPFS hash. I've got a static site generator that does this upon the completion of a build.
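Mine boils down to something like this; `update-dns` is a hypothetical stand-in for whatever your DNS provider's API client is:

    #!/bin/sh
    # run after the static site build finishes
    HASH=$(ipfs add -r -Q public/)               # -Q prints only the root hash
    update-dns TXT example.com "dnslink=$HASH"   # hypothetical provider-specific helper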
But it's a bad coin.
As for your #2, I don't use that solution because of propagation times. Instead, I use an nginx proxy that rewrites incoming requests on a specific path to a root hash on my IPFS node. When I rebuild, I regenerate that site config.
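A sketch of that proxy, assuming the default local gateway on port 8080 and QmROOT as a placeholder for the site's current root hash (the part the rebuild rewrites):

    # nginx site config: serve / from the local IPFS gateway
    location / {
        proxy_pass http://127.0.0.1:8080/ipfs/QmROOT/;
    }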
But tbh, I'm going to undo that. I get absolutely nothing for being part of IPFS and there's effectively no reason to host content there. It's a DHT and while that's cool, it's actually substantially less efficient than alternatives.
I've been enthusiastic about IPFS because it's a neat white paper, but after using it for months I've concluded it's a tech demo with no real direction to go other than a deeply flawed cryptocurrency.
Make it as reliable as IPFS claims it can be (this has to work in the real world!), and as easy to use as a Twitter account, then market that combo.
Look at the competition for pricing ideas, and charge more since your offering delivers.
Second would be for paulogr to include js-ipfs in the webpage, so that when users visit the page, they also reshare the website (if there are enough resources / they're not on battery / $other_criteria). Users would send the site's data between themselves and just verify the data's signature.
Disclaimer: I work for Protocol Labs on IPFS
Not exactly, as far as I understand it; you still need to manually install an addon, it's just that the addon can now handle ipfs:// links when you click on them.
For the extension, the IPFS community has been developing one which will use this new feature: https://github.com/ipfs-shipyard/ipfs-companion
PR for tracking the protocol handler is here: https://github.com/ipfs-shipyard/ipfs-companion/pull/359
I was surprised to see that the page served via IPFS supports HTTPS, do you happen to know how the secret key is securely shared among nodes in the decentralized environment?
Yes, but critically they can't modify it thanks to content addressing.
> and serve them in your behalf.
Yes, over IPFS. So anyone with an IPFS client will have a robust way to view your status page. If people want to view it via HTTPS, they hit ipfs.io, which is an IPFS/HTTP gateway. While it's possible for other people to run gateways, I believe ipfs.io is the main one. It could theoretically be a bottleneck.
If the IPNS record wasn't signed, it would indeed be a huge flaw as it wouldn't be tied to a key from a peer. That would defeat the entire purpose of IPNS. Luckily, we don't have that flaw in IPNS :)
False information - no. Outdated information - why not? What you've described in this comment doesn't solve it. If I signed that the name N points at hash H1 yesterday, and then signed that the name N points at hash H2 today, why can a malicious node not simply keep telling people asking for N that it points at H1?
Do IPNS signatures expire in a similar way to DNSSEC signatures? (Some poking around github says "maybe".) If so, does the owner of the IPNS name have to regularly connect to the network to refresh them? This would suggest that IPNS records can very easily disappear with no way to reinstate them, even if other nodes are keeping the data they point to up. Is this documented somewhere? Can I set a much shorter expiration time (e.g. 5 minutes for quickly-updating information)?
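For what it's worth, the go-ipfs CLI does appear to expose a lifetime option on publish, though I haven't verified how other nodes treat it (QmHASH is a placeholder):

    ipfs name publish --lifetime 5m /ipfs/QmHASH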
So unless an attacker can completely disconnect you from everybody else who's interested in a particular IPNS address (and in that case you're lost anyway), they can't hoodwink you into going back to an old version.
The traditional internet solves the problem of not being able to trust your internet connection (say, in a coffee shop) with public key infrastructure so that the most a rogue internet provider can do is DoS you (they can't get a certificate for google.com and TLS is protected against replay attacks), so this sounds like a downgrade in actual security.
The corresponding attack against IPNS would be if the attacker could make your perspective of the world go backwards, and that is prevented by the sequence number.
And you have the math proof of that, right?
What's more, calling a node fault in a distributed quorum a "replay attack" suggests that application logic is hosted on IPFS. Since it is not and cannot be and redundancy is ultimately the responsibility of the storing agent, this seems like at best a misapplication of the term and at worst a disingenuous scare attempt.
In either event, IPNS is still considered a second tier, less complete than other "beta" parts of the protocol. It's not as experimental as pubflood, but less reliable than pinning.
It's all a moot point anyways, since IPNS is so slow as to be unusable in all but the least interesting cases.
While replay attack might not be exactly the correct terminology (although I think it is), the result is that you cannot trust any information pointed to by an IPNS record to be up to date. There are fairly trivial attacks I can think of that revolve around this: for example, if a git repository is hosted on IPFS with an IPNS record linking to it, you might actually get an older version of the code with known security flaws. This just isn't something you'd think about with a git repository hosted more traditionally on a trusted developer's server (or someone they trust, etc.).
You don't until github accidentally rolls back your content, which they have done.
Unlike the github scenario, particularly popular content will have more than one node relaying it, so you can form a consensus. It's also the case that only one value can be at consensus in the DHT at any given time, so the correct content is verifiable against many sources.
Now, do the clients DO this? No. They don't.
But in general this is so far down the list of IPNS concerns as to read odd. They have bigger fixes to make besides concerns about highly visible attacks like this.
What's obnoxious about that is that existing IPFS daemons aren't really good at managing multiple identities, so if you have multiple trees to maintain you're left writing custom software or using Docker containers.
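Newer go-ipfs builds do seem to be growing named keys, which in principle covers the multiple-trees case without containers; a sketch, with site-two and QmHASH as placeholders:

    ipfs key gen --type=rsa --size=2048 site-two    # a second IPNS identity
    ipfs name publish --key=site-two /ipfs/QmHASH   # publish the second tree under it

I haven't leaned on this in production, so the complaint may still stand in practice.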
Is there a way to link to the head so that it always references the latest version?
If you put an IPFS hash in the TXT record then you need to update that every time. I personally do this (domain name jes.xxx) because it means you don't need to leave your IPFS node running constantly in order for your IPNS name to be resolvable.
The record is:
jes.xxx. 300 IN TXT "dnslink=Qme12vJPtMpeUwmG2NLG11Q47jy2unSonegNJxQb9QgYax"
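Which any node can sanity-check directly; the output should be whatever hash the TXT record currently carries:

    ipfs dns jes.xxx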
Edit: currently watching this - https://www.youtube.com/watch?v=BA2rHlbB5i0 (10 minutes)
This is why it can support billions of people daily in every corner of the planet, and support services like https://archive.org/.
Time to move on I guess.
I wonder why we need an HTTP-to-IPFS gateway for that video he just uploaded, though.
In a centralized system, like HTTP, a site has a single address: something.example.com. Any new entry (e.g. blog post) I create gets its own entry address, but is also referenced from the main site address.
As far as I understand, IPFS contains static copies of documents, so any document that would be the "front" page would have to be copied before updating with new entries, and would receive a new address.
How to let readers know that a new entry is available then? Would there need to be a centralized place referencing all existing entries?
IPNS is a DNS-like system, though much slower than DNS due to propagation delays. It points to the latest version (hash) of a document, much like a git branch.
So, depends on your use case in the end.
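Concretely, the update loop looks something like this (QmNEW stands in for whatever root hash the new build gets):

    ipfs add -r blog/               # new front page + entry => new root hash
    ipfs name publish /ipfs/QmNEW   # repoint your IPNS name at it
    # readers keep using the stable /ipns/<your-peer-id>/ address

Only the pointer changes; every old version stays addressable by its own hash.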
(I ask this as someone very unfamiliar with IPFS, so please forgive if this is a stupid question.)
I've been trying to get into IPFS recently. One thing I can't wrap my head around is that IPNS only seems to persist for about 12 hours and then stops working.
It seems to defeat the purpose of a truly distributed name system, as my computer, which needs to keep republishing the name, becomes a critical part of the system?
Am I missing something or are there plans to resolve this in the future?
Yes, I know that is supposed to be its main feature, but (again, naïvely speaking) it seems to induce a huge, possibly even intractable overhead... Think of, say, programming in C without being allowed to ever rewrite memory contents, or change what a pointer points to!
Again, not an expert; and I'm sure with all the money they raised, people with actual technical expertise in network protocols are hard at work on this and can vouch for the design... But I'm curious anyway how this is not an obvious dealbreaker, given the extreme latency requirements in networking.
Really great ones show that all services are up, and often have timing information, graphs, or metrics, etc. An example of this would be https://status.bitbucket.org/
More basic status-sites generally only show useful detail(s) if something is currently broken, and perhaps will show you a summary of recent problems over the past few days. An example of that would be https://status.github.com/messages
(I wrote a simple status-page for my own site, but I elected to go the simple route. I do monitor availability and response-time(s) of various parts of the service, but I only update the site when there are problems, manually. This works for me because problems are rare, and my site is small.)
So... This still seems problematic though - if you manually update it, then the hash of the page is going to change, and then you'd need to retrieve a different IPFS item than the original status page. And so on... No? This just seems like an odd loop.
I don't personally use IPFS, but I imagine if I did then I'd have a script to change DNS, or update the hash IPFS uses. I see from other replies that updating the "most recent" version of a site is simple enough that it shouldn't be a problem in practice.
I am a happy IPFS user, and it seems that it has more traction than DAT. But on the other hand, the Beaker Browser project (which is based on DAT) is very interesting, and doesn't seem to have an equivalent in IPFS. I'm also worried that the IPFS team might get distracted by Filecoin and not invest seriously in the IPFS ecosystem beyond what Filecoin needs.
I'm interested to hear what others think.
Probably the same ones used against P2P, lawsuits and pressure on national governments to crack down against the offenders :)
I love the idea behind IPFS but it just doesn't feel mature enough yet. The documentation, examples, consistent APIs, etc, don't seem very solidified.
If you have anything concrete to suggest or help out with, please open up an issue in the relevant repository; this would be the entry point to find your way around our GitHub organization: https://github.com/ipfs/ipfs
If you have some Golang experience, https://github.com/ipfs/go-ipfs/issues?q=is%3Aopen+is%3Aissu... should work to find you some beginner issues.
That's configurable. Check the format for the config here: https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#a...
Keyword being `Announce` and `NoAnnounce`
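i.e., if I'm reading the linked docs right, something along these lines in the Addresses section of ~/.ipfs/config (the addresses themselves are placeholders):

    "Addresses": {
      "Announce":   ["/ip4/203.0.113.7/tcp/4001"],
      "NoAnnounce": ["/ip4/192.168.0.0/ipcidr/16"]
    }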
Though it really should be the default behavior.
Now Filecoin, that's a different issue...
With plain IPFS that cannot happen, because you're only downloading what you're interested in. That's what I meant when I wrote that IPFS is like BitTorrent.
This can easily be addressed by choosing hosts which have gigabit upload links if you are hosting high demand files. Personal computer backups may opt for hosts with 10mbit upload that offer much cheaper storage. It may not be possible to do so atm, but eventually the platforms will get there. It's still very early and most distributed storage technologies are nowhere near complete.
The URL in the footer of the status page points here: https://www.statuspage.co - but it doesn't appear to load. Is this an artifact from development?