
IPFS 0.5 - willscott
https://blog.ipfs.io/2020-04-28-go-ipfs-0-5-0/
======
carapace
I really _really_ want to use this, I have used it in the past. It works great
as far as I could tell except for one thing: bandwidth limits[1].

Now _I_ can do this myself, because I'm all "1337" and whatnot. (I used
trickle as described in the comment[2]. Seemed to work fine. Nice and stable.)
But I can't recommend Joe and Jane Consumer to install IPFS _and_ some other
thing with a straight face, because they'll say, "Well _bittorrent_ can do
it!?" and I don't have a good answer.

Maybe there's an opportunity there to rent IPFS VMs to normal people? I dunno.

[1] https://github.com/ipfs/go-ipfs/issues/3065

[2] https://github.com/ipfs/go-ipfs/issues/3065#issuecomment-415907797

~~~
tribler
Plus, IPFS wants to enforce copyright worldwide [1][2]. Business principles
plus monthly bandwidth usage matter for real people.

[1] https://github.com/ipfs/community/blob/master/code-of-conduct.md#copyright-violations

[2] [https://discuss.ipfs.io/tos#8](https://discuss.ipfs.io/tos#8)

~~~
londons_explore
For IPFS to work, there needs to be no central authority who can block files.

Today's IPFS is a long way from that - any Joe Random can DoS any particular
hash by getting their node into the right place in the DHT and blackholing
requests.

~~~
diggan
> can DoS any particular hash

Can you explain more how this is possible?

So we have one evil user Karen who wants to block access to content ABC.

She will spam the DHT with requests for content ABC. After a while, nodes will
stop responding as she hits the rate limit. Now her DHT requests go into the
void.

Now Joey wants to request content ABC too. He requests the content, and
because only Karen's requests are being ignored, the nodes respond to Joey's
request. Now he can fetch the content.

~~~
londons_explore
There's a smarter attack...

Every node in the network 'owns' some of the keyspace.

Karen can simply keep reconnecting to the network (brute forcing her PeerID,
which determines which bit of the keyspace her node will be responsible for)
till she gets assigned that bit of keyspace. Then she can black hole requests
to it.

You can defend against that by having multiple owners for a given bit of
keyspace (known as quorum in the IPFS design), but evil Karen can simply
pretend to be all of the machines hosting that bit of keyspace.

The brute forcing sounds hard, but in a million-node IPFS network you only
need to do about 1 million sha256 hashes on average, which takes under a
second on modern hardware.

------
jillesvangurp
Good progress. I looked at IPFS a few months ago to see whether it could serve
as a decentralized file store. The decentralization requirement came primarily
from wanting to fit this into a broader product that is all about
decentralization, for various reasons.

Key problems in this space:

\- decentralization is something techies obsess over but that has as of yet no
business value whatsoever. Customers don't ask for it. Centralized
alternatives are generally available and far more mature/easy to manage. The
business ecosystem around IPFS is basically not there.

\- this space is dominated by hobbyists running stuff like this on their
personal hardware, mostly for idealistic or otherwise non-incentivized (i.e.,
not for money) reasons. Nothing wrong with this, but using it for something
real brings a few requirements that are basically hard to address currently.

\- Filecoin has been 'coined' as the solution for this for years but seems
nowhere near delivering on its published roadmap. Last time I checked, it had
already missed past milestones. This looks increasingly like something that is
a bit dead in the water / a big distraction from coming up with
better/alternate solutions.

\- Uptime guarantees for content are currently basically DIY. Nobody but you
cares about your data. If you want your content to stay there, you basically
need ... proper file hosting. As incentives and mechanisms for others to agree
to host your content (aka pinning in IPFS) are not there, this is hard to fix.

\- integration with existing centralized solutions is kind of meh/barely
supported. We actually looked at using S3 as a store for IPFS just so we could
onboard customers and give them a decent SLA (bandwidth would be another issue
for that). There are some fringe projects on GitHub implementing this, but
when we looked at it, the smallish block sizes in IPFS were kind of a
non-starter for using this at scale (think insane numbers of requests to S3).
This stuff is not exactly mainstream. Obviously this wouldn't be needed if we
could incentivize others to 'pin' content. But we can't currently.

~~~
jonathanstrange
> _decentralized is something techies obsess about but that has as of yet no
> business value whatsoever._

I don't really understand that point. If it works, then it has the business
value of saving you all running server and maintenance costs. For most larger
businesses these costs may be insignificant and easily recovered, but for
small businesses with a lot of customers they can make a huge difference. For
example, I'm looking into P2P options for implementing a decentralized message
forum in a game-like emulator and it would make no sense to even implement
this feature with constant running costs for server space.

Now, getting the decentralized data management to work reliably out of the box
from behind various firewalls and across different platforms, that's the big
problem. So far, none of the libraries I've seen are very easy to use; some
require a difficult installation and configuration, or you need to run your
own STUN server or gateway, which kind of defeats the purpose.

~~~
ghayes
Many service providers, e.g. CloudFlare or AWS, among others, will offer
sufficient resources for smaller sites for free, in hopes of capturing larger
returns if that company scales. This largely negates the specific benefit
(cost for small orgs) that you're referring to.

------
xur17
> The Inter-Planetary Name System (IPNS), our system for creating mutable
> links for content addresses, now provides faster naming lookups and has a
> new experimental pubsub transport to speed up record distribution. Providing
> an IPNS record is now 30-40x faster in 1K node network simulations!

IPNS performance was abysmal last time I tried to use ipfs (took > 30 seconds
a fair amount of the time). Curious to see what it's like now.

~~~
jarfil
The ping from Earth to Mars varies between 360 and 2600 seconds, so 30 might
still be fast enough for an "interplanetary" thing.

~~~
espadrine
Doesn’t IPFS still rely on Kademlia?

Since node IDs are random, node lookup may require multiple interplanetary
hops, no?

For instance, from N1 on Earth to the nearest XORwise node in its routing
table which might be N2 on Mars, whose routing table finds the target node N3
on Earth.

~~~
momack2
We do use Kademlia - but note this release actually runs 2 DHTs - one for LAN
connections and one for WAN connections. You could easily imagine a small Mars
outpost as the LAN DHT, where you can do fast retrieval for all content
already available locally without hitting interplanetary lookup times.

https://docs-beta.ipfs.io/recent-releases/go-ipfs-0-5/features/#improved-dht

~~~
omginternets
What are your plans WRT Coral? I noticed there's a work-in-progress Go
implementation. Is it going to supersede Kademlia eventually?

~~~
whyrusleeping
Yeah, that's roughly the idea. We can evolve the current DHT into something
Coral-like, or if something better comes along, we will use that.

------
api
This uses DHTs. How resilient is it against Sybil attacks? Also how does it
work under global netsplit conditions, like if someone borks or attacks BGP in
such a way that 1/3 of the world is not reachable?

My impression is that DHTs fall down pretty hard under the latter scenario and
are also pretty vulnerable to the Sybil scenario if the attacker has enough
resources to mount a really serious attack. They're okay for low-value simple
stuff that doesn't have much of an intrinsic bounty attached to it (like
BitTorrent magnets), but trying to put a "decentralized web" on top of a DHT
seems like a scenario where the instant it becomes popular it will get
completely shredded for profit (spam, stealing Bitcoin, etc.).

My rule of thumb is that anything designed for serious or large scale use (in
other words that might get popular) needs to be built to withstand either a
"nation state level attacker" threat model or a "Zerg rush of hundreds of
thousands of profit motivated black hats" threat model. The Internet today is
a war zone because today you can make money and gain power (e.g. by
influencing elections) by messing with it.

~~~
willscott
For BGP partitions / significant network outage conditions: you may not find
content on the other side of a netsplit from you, but the degraded condition
is that you can still query and find content present in the same part of the
network as you. This is better than what centralized solutions face at present
- e.g. Google not being available in China - and isn't too far off from what
you might hope for.

For Sybils: You've left the attack you're worried about pretty vague. IPFS
itself doesn't need to tackle many of the sybil-related issues by being
content addressable (so only worrying about availability - not integrity) and
not being a discovery platform - so not worrying about spam / influence. For
the remaining degradation attacks - someone overwhelming the DHT with
misbehaving nodes - there's been a bunch of work in this release looking at
how to score peers and figure out which ones aren't worth keeping in the DHT.

~~~
jude-
> You may not find content partitioned on the other side of a netsplit from
> you, but the degraded condition is that you can still query and find content
> present in the same part of the network as you.

This becomes hugely problematic the minute you start using IPNS. On one side
of the split, the name `foo` can resolve to `bar`, but on the other side, it
resolves to `baz`. If you're trying to impersonate someone, then a netsplit
would make it easy for you to do so (barring an out-of-band content
authentication protocol, of course).

> You've left the attack you're worried about pretty vague. IPFS itself
> doesn't need to tackle many of the sybil-related issues by being content
> addressable (so only worrying about availability - not integrity) and not
> being a discovery platform - so not worrying about spam / influence.

On the contrary, a Sybil node operator can censor arbitrary content in a DHT
by inserting their own nodes into the network that are all "closer" to the
targeted key range in the key space than the honest nodes. This can be done by
crawling the DHT, identifying the honest nodes that route to the targeted key
range, and generating node IDs that correspond to key ranges closer than them.
Honest nodes will (correctly) proceed to direct lookup requests to the
attacker nodes, thereby leading to content censorship.

Honest nodes can employ countermeasures to probe the network in order to try
and see if/when this is happening, but an attacker node can be adapted to
behave like an honest node when another honest node is talking to it.

> For the remaining degradation attacks - someone overwhelming the DHT with
> misbehaving nodes - there's been a bunch of work in this release looking at
> how to score peers and figure out which ones aren't worth keeping in the
> DHT.

Sure, and while this is a good thing, it's ultimately an arms race between the
IPFS developers and network attackers who can fool the automated
countermeasures. I'm not confident in its long-term ability to fend off
attacks on the routing system.

~~~
willscott
Censorship / availability is the issue. As you lay out, Sybils can be used to
target a specific address / piece of content in an attempt to prevent it from
being found. The good news is that there are some pretty mechanical things -
like maintaining a consensus of known-trusted nodes that can be used to
validate a node/piece of content isn't under attack - that should be
sufficient until IPFS is quite a bit larger than it is today.

I don't think I see the impersonation / bad resolution problem though. IPNS
records are content addressed to the key. Having control of a portion of the
network isn't sufficient to compromise that (you can prevent availability
though).

~~~
jude-
> The good news is that there are some pretty mechanical things - like
> maintaining a consensus of known-trusted nodes that can be used to validate
> a node/piece of content isn't under attack - that should be sufficient until
> IPFS is quite a bit larger than it is today

What would I be trusting the nodes for? If I'm trusting them to just keep my
data available, then why not just put it into S3? What role is IPFS playing at
all, then, if I find myself having to pick trusted nodes to defend against
low-cost route-censorship attacks?

Also, the size of the network doesn't really seem to make large DHTs resilient
to Sybils. BitTorrent in 2010 had over 2 million peers [1], but north of
300,000 of them were Sybils [2]. That's pretty bad.

> I don't think I see the impersonation / bad resolution problem though. IPNS
> records are content addressed to the key. Having control of a portion of the
> network isn't sufficient to compromise that (you can prevent availability
> though).

Correct me if I'm wrong, but IPNS resolves a human-readable name to a content
hash, right? If all I'm going off of is the name and the DHT (no DNS), then
having a network that can return two (or more!) different content addresses
for that name can lead to problems for users, no? If IPNS/IPFS is supposed to
be a hypermedia protocol bent on replacing HTTP/DNS, then its inability to
handle the case where `google.com` can resolve to either the legitimate Google
or a phishing website sounds like a showstopping design flaw that is just
begging to be abused.

[1]
[https://www.cs.helsinki.fi/u/jakangas/MLDHT/](https://www.cs.helsinki.fi/u/jakangas/MLDHT/)

[2]
[https://nymity.ch/sybilhunting/pdf/Wang2012a.pdf](https://nymity.ch/sybilhunting/pdf/Wang2012a.pdf)

~~~
AgentME
IPNS names are based on public keys. A sybil attack can't make an IPNS name
resolve to an arbitrary attacker-chosen value because the attacker can't make
signatures for the public key.

A sybil attack could be used to cause part of the network to see a stale (but
previously valid) value for an IPNS name, but clients keeping recent values
cached for a while is already something that happens naturally, so it just
seems like a minor instance of the general problem that a sybil attack can
cause a denial of service.

------
StavrosK
I run [https://www.eternum.io/](https://www.eternum.io/) (an IPFS pinning
service) and created Hearth
([https://hearth.eternum.io/](https://hearth.eternum.io/), a Dropbox-like way
to publish files on IPFS), and this is a very welcome release.

IPFS has been a real pain to work with in the past (the node would just
consume all RAM and CPU and had to be restarted a lot), but it's been getting
better, which is great to see.

I really hope it gets good enough to run on everyone's desktop machine, since
that's the way IPFS is meant to be deployed (rather than just on gateways). It
seems that it doesn't take up too much RAM or CPU now, but it looks like it
might be a problem bandwidth-wise, if you host some popular content.

Still, great news overall.

------
mad44
Here is a summary of the IPFS white paper:
http://muratbuffalo.blogspot.com/2018/02/paper-review-ipfs-content-addressed.html

The improvements announced are substantial.

The improvements announced are substantial.

~~~
gunshai
You mean TRON /s

Edit: making jokes about plagiarism, people.

------
fg6hr
IPFS is really just a mix of a torrent tracker with a torrent client, but once
IPFS VMs start paying for themselves, people will sign up in masses. I guess
that's what Filecoin is about.

Edit: The way I see Filecoin working is that anyone can post a reward for a
file, and once the file is provided, the reward is paid. In other words, it's
a bit like a brokerage that connects downloaders with uploaders. The
difficulty is that this brokerage needs to be distributed and resilient.

~~~
rglullis
I have yet to write more about this, but one thing bothers me about Filecoin:
the economics don't add up. The price per GB stored will tend to reach an
equilibrium around the commodity price - i.e., the cost of
disks+computers+electricity+internet. Unless I am missing something, if the
price gets higher, more people will buy disks and put them online, bringing
the price back down.

If that is true, it means that a node can only be profitable if you are
freeloading. And if you are freeloading, any price you get will be good which
means that it tends to go even further down, perhaps even below the commodity
cost. I may be missing something, but I really don't see this going beyond
techies with spare disks playing around and definitely no way to run a
Filecoin node on a VPS profitably.

~~~
AgentME
If there's enough enthusiasts using spare disks at home to offer enough disk
capacity for everyone, then that means Filecoin hosting will be available
cheaper than cloud hosts. If all that capacity gets used up, then supply and
demand will cause the price to go up until more people add capacity. If the
price hits the price of cloud hosts, then people will just run tons of
Filecoin nodes on cloud hosts, so the price of cloud hosting will be the upper
bound on Filecoin hosting prices. Then anyone that can undercut the cloud
hosts will be free to run their own Filecoin nodes, and the price will be
pushed down as people do this.

~~~
Nursie
> If there's enough enthusiasts using spare disks at home to offer enough disk
> capacity for everyone, then that means Filecoin hosting will be available
> cheaper than cloud hosts.

Only if these enthusiasts are willing to take a loss on power.

> If the price hits the price of cloud hosts, then people will just run tons
> of Filecoin nodes on cloud hosts

Or the cloud hosts themselves will enter the market. The upper bound on the
market will be their costs, which will always be lower than the costs of
enthusiasts using spare disks at home.

~~~
ric2b
> Only if these enthusiasts are willing to take a loss on power.

An 8TB HDD doesn't use more power than a 100GB one.

~~~
Nursie
> An 8TB HDD doesn't use more power than a 100GB one.

But access to it by third parties does; it changes the usage model and
increases your electricity costs. This may not be a lot, _but it doesn't have
to be a lot_ to make your costs more than those of a datacentre operator, not
to mention you also need to be online all the time.

The dream of edge-node storage utilisation has never survived in the face of
economic analysis.

------
imglorp
TL;DR to skip the glossy page.

> The primary focus of this release was on improving content routing. That is,
> advertising and finding content. To that end, this release heavily focuses
> on improving the DHT.

> A secondary focus in this release was improving content transfer, our data
> exchange protocols.

Other improvements to data store, libp2p, etc.

source: https://github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

------
truth_seeker
>> Opera became the first major web browser to offer default IPFS support on
Android, shortly after Brave started directly embedding the IPFS Companion
extension (complete with a built-in js-ipfs node). This means millions of
people around the world now have access to the decentralized web built
directly into their browsers.

Wow. Thank you IPFS community.

~~~
StavrosK
I believe Opera only redirects to a gateway and doesn't implement IPFS itself,
though, which is rather less exciting.

------
skyde
IPFS needs a way for a block to earn payment over a period of time. It doesn't
need to be real money, just a token you can use later to pay for your own file
to be seeded.

Basically, I host your file and you host my file, with some algorithm to make
sure the number of seeders for each block never goes to zero.

~~~
momack2
Check out Filecoin ([https://filecoin.io](https://filecoin.io)) - token for
buying/selling file storage for IPFS data. You can earn tokens for renting out
your disk space and use them to purchase backup/hosting from others.

------
aiyodev
Can someone provide an example of a service that ipfs enables that can’t be
provided on the regular web?

~~~
StavrosK
You can publish static content that doesn't go down when the server goes down
and that scales on its own as the content becomes more popular. It also has
hashing built in, so you can be sure the content you got is what you wanted.

Things like distribution of apt packages become much more exciting when each
computer can choose to redistribute the packages it got to others on the LAN
or in the area, even offline.

It's also very interesting to me how all the visitors to your website become
servers as well, so content can never be "hugged to death" or links can never
go stale, as long as at least one person has the content somewhere on their
node.

That, to me, is huge, as links now go stale with some regularity. Think of all
the Geocities sites (and all versions of them) just existing forever,
regardless of whether Geocities decided to shut down.

For example, here's my site on IPFS:

[https://ipfs.eternum.io/ipfs/QmVW6JejQkjLnBJacR8qcZi88WNTMwi...](https://ipfs.eternum.io/ipfs/QmVW6JejQkjLnBJacR8qcZi88WNTMwiBwNDMCFDzijtGAw/)

That can now "never" be lost, as long as someone cares enough about it to
visit.

~~~
aiyodev
That's "how it works", not "what it can be used for". Regular websites can be
designed to handle high traffic. Regular websites can make backups.

What new thing does ipfs make possible that would make users install the
software to use it?

~~~
StavrosK
Nothing has any advantage except the advantages it has. I don't want to repeat
everything I said in the last post, but what you mentioned isn't a
counterargument to any of what I detailed.

~~~
aiyodev
If you're not going to answer a question, don't bother replying.

~~~
StavrosK
> Regular websites can be designed to handle high traffic

IPFS lets them not have to.

> Regular websites can make backups

Backups don't help when the company shuts down and takes the site down.

> What new thing does ipfs make possible that would make users install the
> software to use it?

"Think of all the Geocities sites (and all versions of them) just existing for
ever, regardless if Geocities decided to shut down."

I don't think you even read my post.

~~~
aiyodev
Again. If you're not going to bother to answer the specific question I asked,
do not bother replying.

END OF CONVERSATION

~~~
StavrosK
It looks like you aren't even reading my replies. I feel lonely.

OVER.

------
kevincox
I think IPFS is a great technology, and the protocol appears to be fairly well
designed but to be frank, I tried to use go-ipfs and it is a very junky
implementation. There are numerous issues that show an overall lack of good
quality standards in the codebase.

\- Pin management is far too simple for any real use case.

\- GC is an all-or-nothing sweep, triggered at an arbitrary threshold.

\- The API is based on the CLI and really weird (e.g. GET
/api/v0/cp?arg={src}&arg={dest} - why don't the parameters have names? Why is
it a GET?)

\- They don't provide a way to upload a directory with references to existing
IPFS objects, so you need to manually encode the data.

\- The manual encoding uses Go-style names; they seem to have let their
language choice leak into the protocol.

\- Lots of minor issues, such as the API implementing block size limits by
simply cutting off the input data at the size limit - before even encoding it
into the format against which the limit should be calculated - which can cause
data corruption in some cases.

\- Their one abstract API for creating directories (MFS) has a number of
issues:

    
      - It is incredibly slow for some reason.
    
      - It kind of (but not really) automatically pins all recursively
        referenced content.
    

There are other weird choices, such as their directory sharding using hashing.
It isn't clear to me which use case this improves over a B-tree, but someone
probably thought it sounded cooler. Additionally, the sharding appears to be a
single layer, which means directories still have a size limit (just a larger
one).
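For context, a toy Go sketch of what single-layer sharding means (hypothetical constants and bucketing, not go-ipfs's actual sharding code): entries are split into a fixed set of buckets by hashing the name, so each bucket's limit still caps the total.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const buckets = 256       // one level of fan-out
const maxPerBucket = 1000 // stand-in for the per-block entry limit

// shard picks a bucket from the hash of the entry name.
func shard(name string) int {
	h := sha256.Sum256([]byte(name))
	return int(h[0]) // single layer: only the first hash byte is used
}

func main() {
	counts := make([]int, buckets)
	for i := 0; ; i++ {
		b := shard(fmt.Sprintf("file-%d.txt", i))
		counts[b]++
		if counts[b] > maxPerBucket {
			// One level of sharding only multiplies the limit by the
			// fan-out; it doesn't remove it.
			fmt.Printf("bucket %d overflowed after %d entries total\n", b, i+1)
			return
		}
	}
}
```

A recursive scheme (shard the overflowing bucket again, as a HAMT does) would remove the hard cap; a single layer just raises it.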

I ended up sinking a ton of time into
https://gitlab.com/kevincox/archlinux-ipfs-mirror/ and in the end the number
of small quirks was incredibly frustrating. I might go back and implement
their hash-sharding logic to get the mirror working again, but at this point I
don't really want to interact with go-ipfs again.

I thought it would be an interesting project to get involved in, and I have a
lot of expertise that would be valuable; however, it appears to all be a bit
of a mess, which is a real shame.

Maybe with more success it can be rewritten in a cleaner fashion; as I said,
most of the issues are with the implementation, not the protocol, so there is
definitely hope. I honestly wish the project all the success.

------
lachlan-sneff
I wish they hadn't picked go. Go isn't a bad language really, but it's a huge
pain to integrate a go library into an app written in another language.

~~~
echelon
C or Rust would be perfect. No GC runtime, no name mangling, fast, great ffi
story.

Rust needs to start edging out Go for new tools. Kubernetes, Envoy, IPFS, etc.
would have benefitted from it.

It might not have been time four years ago, but it's time now.

~~~
stryan
Rust will start edging out Go once Rust becomes easier than Go to
prototype/start projects.

Rust is a great language, but right now I can start a project in Go and have
someone else start a project in Go and have something up and working in a
week. Rust still takes significantly longer to learn and/or find skilled
developers for. Go is up there with Python in terms of "get something out
_now_".

~~~
rosywoozlechan
Rust isn't that difficult to prototype or start projects with? What trouble do
you run into? I think the hardest parts to understand are the concepts of
borrowing, lifetimes, and ownership, but you don't have to know all the
subtleties of that to prototype. A basic understanding is fine to get started
with. Just like a developer doesn't have to understand all the subtleties of
generics to get started with them - lots of devs don't know what covariant or
contravariant mean and still use generics.

~~~
plerpin
Understanding the concepts is one thing, applying them is another thing
entirely. The Rust compiler is an incredibly difficult beast to placate.

~~~
hobofan
Especially in prototyping you can mostly .clone() everything, and stop
worrying about any lifetime problems.

~~~
plerpin
Lots of deferred pain for when you productionize.

~~~
hobofan
Yes, but this was on the topic of "is Rust suited for prototyping?", and I
would say that cloning makes it much more suited, as it sidesteps a lot of the
lifetime pains. You will have similar deferred pain if you e.g. prototype
something in JS and then when it's time to move to production gradually move
it to Typescript.

From personal experience of Rust projects in production, you can get by with
cloning for a long time, and optimizing later is not too big of a pain in most
cases. Performance problems due to excessive cloning are also very obvious in
code and easy to profile for.
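As a tiny illustration of the clone-first prototyping style (the `Record` type and `tag_names` function are hypothetical, nothing IPFS-specific):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Record {
    name: String,
    tags: Vec<String>,
}

// Returning owned Strings (cloned) instead of &str borrows means no
// lifetime annotations anywhere - fine for a prototype, optimize later.
fn tag_names(records: &[Record]) -> Vec<String> {
    records.iter().map(|r| r.name.clone()).collect()
}

fn main() {
    let records = vec![Record { name: "a".into(), tags: vec![] }];
    // Clone the whole Vec rather than threading a borrow through:
    let snapshot = records.clone();
    println!("{:?}", tag_names(&snapshot));
}
```

Swapping the clones for borrows later is a localized, mechanical change, which is why this style defers rather than compounds the pain.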

------
LockAndLol
Hold up... You can pull docker images from ipfs? How?

And do IPNS entries finally last more than 30 seconds? It would be nice to not
have to constantly keep a node up just to have an IPNS entry.

~~~
momack2
IPNS records still have a default 24hr lifetime, but republishing your IPNS
records is now much MUCH faster, so it should be much cheaper to run a process
to republish these regularly. I assume you can also bump up your lifetime to
>24hr if you want - but note that will mean nodes not using pubsub (aka
getting updates pushed to them proactively) might be slow to get new updates.

Here's what's new for IPNS in the release:
https://github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md#ipns

------
fit2rule
I really wish there was a C/C++ implementation of IPFS that could be dropped
into a Linux, Windows, macOS, or iOS cross-platform app.

I already ported the Go code to iOS once, and it wasn't that painful, but
functionally it would be a lot more useful as a C++ code base. I don't build
browser apps - I feel there is still a need for native apps especially in this
particular space.

------
woodandsteel
I was impressed to read that the IPFS public network scaled 30X in 2019. It
sounds like they have gotten over a big hump for usability and utility.

------
dzonga
anyone know good python ipfs resources + decentralized sync-able embedded
databases ?

~~~
momack2
OrbitDB is a distributed database built on IPFS - looks like they have a
Python HTTP client: https://github.com/orbitdb/py-orbit-db-http-client

The py-ipfs-http-client library is also actively maintained (but I think needs
some small changes to work with IPFS 0.5):
https://github.com/ipfs-shipyard/py-ipfs-http-client/

------
omginternets
I have a Go project that imports IPFS and libp2p libraries. What's the
recommended way of upgrading those dependencies?

~~~
willscott
The release notes contain a pretty extensive list of what APIs have changed.

The path to upgrading the dependencies is probably to run `go get -u <dep>`
for the direct dependencies you're including, and then fix the errors that pop
up from doing so.

~~~
hecturchi
Depends on what exactly you are importing and for what. I suggest you ask in
the forums and post a link to your project. `go get -u` is dangerous, as it
updates all subdependencies to their latest versions.

Usually, take go-ipfs's own go.mod as a guide for which versions to use.

~~~
omginternets
>go get -u is dangerous as it updates all subdependencies used to latest

Yes, this is the problem I initially had; I would end up with a host of
incompatible deps (leading me to wonder what the point of go.mod even was ...
but I digress...).

Sounds like this is going to be one of those super fun dependency hack-jobs :C

Ah well, this is the price to pay for playing with beta software!

