During setup, you git-clone the application code and put it into a dat site you create. The app code publishes by modifying files within its own dat. When you visit somebody's profile, you're actually visiting a new site, so there's no uniform software layer on the social network. A lot like indie Web software.
There's a new 'application' pattern we're working on for the upcoming 0.8 release. In that pattern, the app dat and the data dat will be separated. This won't replace self-mutating sites, but it has some benefits as an alternative: apps will be able to update independently of the data, the browser will provide install and sign-in flows, and apps can provide persistent UIs.
There are a lot of other ideas we're kicking around (intents) and some somewhat out-there experiments (the app scheme), so we'll need lots of feedback, especially from folks like @neauoire. A pre-release of 0.8 should happen by the new year.
EDIT: updated for clarity EDIT EDIT: and brevity
Why not make profiles something modular that can be easily published and shared online, but also in a 'mesh network' setting where you publish and receive from devices within your general vicinity?
The profile should be a bundle of data about the individual, published by the individual. More of a standard than a service. There could be 3rd-party services to scrape and store what people publish. There could be 3rd-party services to produce nice cookie-cutter profiles, like resume templates. Services for hosting, collating, searching, viewing profiles. And people could have loads of different profiles for different parts of their lives, and separate them as much or as little as they feel like. But at the core of things, the profile would belong to the user.
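To make the "bundle of data, not a service" idea concrete, here is a minimal sketch of what such a self-published profile could look like. The field names and the `example-profile/1` schema identifier are purely illustrative, not part of any real standard:

```javascript
// Hypothetical profile-as-data: the user publishes this object themselves
// (e.g. as a file in a dat site); services only read it, they don't own it.
function makeProfile({ name, bio, feeds = [], follows = [] }) {
  return {
    schema: 'example-profile/1', // made-up schema identifier
    name,
    bio,
    feeds,   // URLs (e.g. dat://...) where this person publishes
    follows, // pointers to other profiles
    updatedAt: new Date().toISOString(),
  };
}

// A third-party "cookie-cutter" service could render or index the data
// without ever controlling it:
function summarize(profile) {
  return `${profile.name} - ${profile.bio} (${profile.follows.length} follows)`;
}
```

Because the profile is just data under the user's key, a scraper, a template renderer, and a search index can all consume the same bundle independently.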
I was able to build a 'Slack clone' as a one-day hack, using Hypercore + Electron: https://github.com/lachenmayer/p2p-slack-clone-poc
I think we'll be seeing a lot of these kinds of projects soon :)
The reason I ask is that Mafintosh, Juan at IPFS, Dominic with Scuttlebutt, Feross at WebTorrent, Substack, others, and I all met back in 2014 for our different P2P projects. All of us had slightly different approaches. Juan and others seemed mostly interested in hash addressing, which I think is great but doesn't solve the end problem of data sync. Seems like Dat deals with that fairly well, but not for highly mutable data (versus large scientific files). Meanwhile we ( https://github.com/amark/gun ) tackled that problem first, because it seems like CRDTs are the most relevant for killing traditional centralized Facebook/Twitter/Reddit/gDocs-like apps, and hashing is more applicable for killing centralized YouTube/imgur-like apps. Both are necessary, but each certainly seems harder or easier depending on which underlying P2P tools you use.
You are one of the few people actually jumping in and building end apps (thanks for sharing your chat app!), and we need more people doing that (not just all of us who are trying to rebuild the underlying architecture). So I would be curious to hear your experience comparing the different use cases behind dat/IPFS/gun/WebTorrent/etc. it would make for an interesting and informative comparison article. Thoughts?
Is random access something you guys have added since then? Or, could you clarify how it reduces the overhead on apps that would have shared mutable state (even if it is composed of immutable streams)? Don't you still have to back-scan through the log to recompose the current view? Which then wouldn't scale for read performance. Or is there an alternative approach?
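The read-performance concern raised here can be made concrete with a toy sketch: recomposing current state from an append-only log is O(n) in log length on every read, unless you checkpoint a snapshot and replay only the suffix. This is a generic illustration of the problem, not how Dat or any specific project implements it:

```javascript
// Replay a key/value event log into a materialized view.
// `from` is an optional snapshot: prior state plus the log index it covers.
function replay(log, from = { state: {}, index: 0 }) {
  const state = { ...from.state };
  for (let i = from.index; i < log.length; i++) {
    const { key, value } = log[i];
    state[key] = value; // last write wins, for this toy example
  }
  return { state, index: log.length };
}

const log = [
  { key: 'status', value: 'hello' },
  { key: 'name', value: 'alice' },
  { key: 'status', value: 'goodbye' },
];
const snapshot = replay(log);          // naive read: full back-scan
log.push({ key: 'status', value: 'back again' });
const current = replay(log, snapshot); // replays only the one new entry
```

Without the snapshot, every read repeats the full scan; with it, reads cost only the entries appended since the checkpoint, which is the usual compaction trade-off in event-sourced systems.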
This was one of the main things I was discussing with Dominic Tarr about Scuttlebutt back then. We had done a lot of event-sourcing (immutable logs) at my previous startup, but had problems with compaction and with generating views/state from those streams, which is what led me to the CRDT approach as the base, not the thing on top. I know Tim Caswell/Creationix is using DAT for one of his products he is building with @coolaj86.
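For readers unfamiliar with the "CRDT as the base" idea mentioned above, here is about the smallest possible example: a last-writer-wins register. Merge is commutative, associative, and idempotent, so replicas converge no matter what order updates arrive in. Real systems (gun, Scuttlebutt's structures, hyperdb) use much richer designs; this only shows the shape:

```javascript
// Last-writer-wins register: each replica keeps a (value, ts) pair.
// Assumes timestamps are comparable (a real system would use logical
// clocks and a deterministic tiebreak).
function lwwSet(reg, value, ts) {
  return ts > reg.ts ? { value, ts } : reg;
}

function lwwMerge(a, b) {
  return a.ts >= b.ts ? a : b;
}

// Two replicas diverge, then exchange state in either order:
let r1 = { value: null, ts: 0 };
let r2 = { value: null, ts: 0 };
r1 = lwwSet(r1, 'draft', 1);
r2 = lwwSet(r2, 'final', 2);
const merged12 = lwwMerge(r1, r2);
const merged21 = lwwMerge(r2, r1);
```

Because merge order doesn't matter, there is no log to back-scan: the current value *is* the state, which is the contrast with the event-sourcing approach discussed above.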
But that was 4 years ago now, that Dominic and I were talking about those problems. Does random access solve that? Would love to read about it, shoot me some links! Also, Dominic and I are starting to have Web Crypto meetings with some folks at MIT (anvil/trust, ex-Riak/Basho) and they are in town (the Bay Area) this week. If you are around, we should all get together again to discuss identity, security, crypto, and P2P!
Also is there any chance of switching to Janea Systems' NodeJS builds so that we get 32-bit and x86 Android support?
In time, I think that a lot of backend plumbing around DAT and decentralized web stuff will not necessarily be JS, and if there are native mobile apps (if Google and Apple allow for them), those will be written in those native languages. But the front-end is almost certainly going to be JS and the modern web stack.
Just make sure the protocol doesn't depend on weirdness from JS land to allow other clients in and you're good to go.
The developers do plan to support Windows, but there are some missing dependencies that need to be built first.
Also, is it really necessary to git clone the rotonde repository? Why not go to a starter Dat url and use Beaker to fork your own copy?
Any POC project is going to start out with a highly manual install process.
Also, how secure is beaker? Are there any security audits?
There is even a third element which is missed by this false dichotomy, which is that "blanket anonymity" is not the same as "managing how much of my identity I want to reveal".
Without the latter, you actually end up with a digital commons whose social dynamics asymptotically approach the chans or YouTube comments.
* Everything is encrypted at rest
* We have fine grained capability-based access control
* If I share a file with two people, they can't see that I've shared it with the other person
* Access revocation with key rotation (on the same fine grained level)
* Peergos is resistant to quantum computer based attacks (unshared files are already safe, and shared ones will be)
* Peergos doesn't rely on out of band sharing of keys (we use TOFU on a publicly auditable append only PKI)
If so, which one is winning?
I don't think you can decide "who is winning" right now. Both are still in development, and both have strengths and weaknesses resulting from their different development focuses, which do not have to remain as they are.
Nobody follows me yet though...
PS: I think this would pick up more steam if you changed the title to "Decentralized Twitter clone with Beaker and Dat Project"
Here is my feed: dat://73bf68c7e480d53f231d0f077e2865afa098d0b6b1bd3eb65364b9b7cb149d0c
edit, a quote from @neauoire:
>>> Good morning, I will fix up the client now. Make sure it catches the timeouts, and also make @ names clickeable by everyone.
How is Dat different than IPFS?
IPFS and Dat share a number of underlying similarities but address different problems. Both deduplicate content-addressed pieces of data and have a mechanism for searching for peers who have a specific piece of data. Both have implementations which work in modern Web browsers, as well as command line tools.
The two systems also have a number of differences. Dat keeps a secure version log of changes to a dataset over time which allows Dat to act as a version control tool. The type of Merkle tree used by Dat lets peers compare which pieces of a specific version of a dataset they each have and efficiently exchange the deltas to complete a full sync. It is not possible to synchronize or version a dataset in this way in IPFS without implementing such functionality yourself, as IPFS provides a CDN and/or filesystem interface but not a synchronization mechanism.
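The delta-exchange idea described above can be reduced to a toy sketch: each peer advertises the set of chunk hashes it holds for a given version, and only the missing chunks cross the wire. Here a flat hash set stands in for Dat's Merkle-tree comparison, which reaches the same answer with far fewer hashes exchanged:

```javascript
// Given the local peer's chunk hashes and a remote peer's advertised
// hashes for the target version, return what still needs to be fetched.
function missingFrom(localHashes, remoteHashes) {
  return [...remoteHashes].filter(h => !localHashes.has(h));
}

// Peer A holds an older copy; peer B advertises the newest version.
const a = new Set(['h1', 'h2']);
const b = new Set(['h1', 'h2', 'h3', 'h4']);
const wanted = missingFrom(a, b); // A requests only the new chunks
```

The point of the Merkle tree is that A and B don't actually send full hash lists: matching subtree roots prove whole subranges are identical, so the comparison cost scales with the size of the delta rather than the dataset.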
Dat has also prioritized efficiency and speed for the most basic use cases, especially when sharing large datasets. Dat does not make a duplicate of the data on the filesystem, unlike IPFS in which storage is duplicated upon import. Dat's pieces can also be easily decoupled for implementing lower-level object stores. See hypercore and hyperdb for more information.
In order for IPFS to provide guarantees about interoperability, IPFS applications must use only the IPFS network stack. In contrast, Dat is only an application protocol and is agnostic to which network protocols (transports and naming systems) are used.
But why not build version control on top of IPFS?
It has been discussed here: . It seems to be a matter of working correctly with immutable data structures (which IPFS provides storage for), and it seems a bit silly to rebuild that storage layer from scratch.
DAT is fundamentally a portable, self-contained data repository. Replicating DAT archives across a broad network is definitely a problem that needs to be solved, but IMO that should be solved at a different layer, without rolling in all sorts of complecting concerns such as network ports, routing, and payments for storage.
Someone could build a storage backend that uses IPFS, similar to our dat-http storage.
You could probably swap this question for "what is the difference between DAT and BitTorrent", since ZeroNet also uses BitTorrent and DAT is similar tech... but this has been answered already.
Outside of this, DAT is agnostic and modular while ZeroNet is focused on p2p websites. So a comparison of ZeroNet and Beaker (rather than DAT) would make more sense, since both have the same focus. Beaker's approach is to use a web browser wrapper (Electron/Chrome) while ZeroNet is headless and lets you use your existing web browsers. Beaker is able to offer more/different features because of its tight relationship with a browser app, and this affords a powerful path forward beyond just serving static files as web pages. However, ZeroNet also lets you leverage the browser database and the ability to make simple webapps.
It's really just different approaches to accomplishing more or less the same goal of a p2p web. You could say Beaker is more cutting-edge because it uses the newer DAT and is open to integrating IPFS etc., versus being bound to BitTorrent (older tech).
The nice thing is, you can run ZeroNet and Beaker at same time and enjoy both p2p web networks ;-)
I think that things like p2p transport via Tor, BitTorrent, etc. are all red herrings. The robust computing infrastructure for the next-gen, distributed information system that the world needs should not be tied to transport-layer concerns like that. It should work as reasonably well via flash-drive sneakernet as it does over fiber and LTE.
> No, it’s a fork of the Chrome browser.
What you see here is actually the “old” version of Beaker - a lot has changed in the upcoming release. Here are a few tweets with screenshots if you want to see a preview:
- Beaker is an app framework built on Electron/Chrome because without gorgeous apps you don't get users. But all of the data is stored within DAT, and there's nothing that prevents alternative apps (even textual ones) being built on that data.
- Three sentences into your incoherent rant, I lost the antecedent, but if you think involvement with something like 18F invalidates someone's interest in this kind of tech, then I think you're a Russian troll bot unless you can prove otherwise. QED.
- The authors of Beaker are very well aware of the history and original purposes of the project.
- DAT is not geared for centralized anything. You have no idea what you're talking about. Content-based addressing is the way past the centralized transport networks of today, which masquerade as information networks, and they are the single biggest threat to existing powers that gate/throttle/control the internet. Full stop.
If I had to guess, I'd say you're referring to Google as "they" at first, then the Dat project, with some GitHub "they" mixed up in the middle. It's honestly hard to tell.
I'm not sure why you think Dat is "centralized decentralization" and how using it would lead to "big brother" getting your research data. What part of Dat is centralized? This seemingly poorly informed hot-take on Dat leads me to question the other assertions in your screed as well.
I see freedom and privacy as something which cannot be combined with this concept as the project currently stands, due to reasons which are not immediately apparent but which I believe have at least enough substance to raise an eyebrow and question things.
I am left with the following questions after examining SEC documents, SM accounts, financial relationships, and company activities of parties involved and technologies used:
1) Do I want to build on a platform which can never be truly safe, because the stakeholders have a compelling interest in undermining its anonymous usage? (See explanation below)
2) Why do things smell fishy...
2c) Realizing I personally equate P2P with privacy, free speech, etc., I wonder: why Chrome? Then I think of all of my compatriots. How many of them would like using hacked-Chrome to access sites? Why not mainline it in Chrome? Google doesn't do privacy <flag> hmm.
2d) Where the heck is Firefox in this... or anything free/open...? WHAT KIND OF PEOPLE ARE THESE?!!! ZOPMG?!
[Exhibit A] The guy who designed the protocol this depends on says in his paper on the subject that he offers an alternative to GitHub, then they build this derivative project on Electron and host on GitHub lol. o.O Okay, not by itself suspicious but weird and it stuck in my head, spurring more curiosity about individuals/projects/affiliations/home planets.
[Exhibit B] An ex-Mozillan building on a Chrome fork. Huh? Okay. It's a free world, but odd nonetheless. This makes me imagine where the project will go in the future. Will this get mainlined and become a feature in Chrome? What might prevent that? What if I don't wanna... Where's the alternatives? I don't want a Chrome-fork of ill repute on my systems to create more security vulnerabilities. Who reviews their changes? How quick do they roll out patches from upstream? Ack... Hang on a minute.. Google wouldn't want a P2P distributed web.
[Exhibit C] A handful of logos, a little namedropping... That makes me question who/why. Okay, let's see what their actual affiliation is. Code for Science turns out to be legit, and cool, but a tiny group so funding is... personal donations? The others seem to be foundations granting them some cash. Let's see who they are...
[Exhibit D] Upon looking up the Knight Foundation's recent dealings, I find they're now owned by a media company making its money from advertising, according to their SEC filings. Woah now, not friends of privacy, or P2P. What gives? Maybe the company has nothing to do with the foundation's activities, so I dig. Well, they're not in a position to spend money on bleeding edge tech, holy cow they're hemorrhaging money and have been for a while. Let's Google em and see why... Googling turns up fiascoes with the NSA, undermining counter-terrorism activities at a level the Inspector General's office deemed greater than all of the leaks by Edward Snowden. Wow that's a lot of heat, it can change a place - and who runs it. $1,000,000,000 USD/yr is a big fucking crowbar to leverage a company with. Susceptible to control? Yes. Motives to control? Yes. Opportunity to infiltrate? That reminds me that I haven't Googled the rest of the staff. This yields information that an adviser on the project is a GSA employee, in 18F - data. By itself that means little, but...
[Exhibit E] Giving their Fed (lol can't resist, sorry Jay-quith, it's meant in good fun) the benefit of the doubt, I Google him and find his anti-Trump tweetfest. Lol, ok, but you're a fed right? So why the Hillarsque feed? When I was in service, I wouldn't have undermined POTUS publicly, but kids these days are different, still seems like a weird fed. So I look up the 18F department handbook, hiring policies, and what kinds of people work there. He wouldn't fit in for a second by the sound of it, and... what is this? Don't they need clearances? Yes... For Open Data, we need an SF85a/SF86 do we? Huh, okay. Wtf? Moving on... Secretly Open Data?
One adviser is employed by the US government in an agency concerned with these matters, which seems fine, but I don't like single government anything really <tin foil hat>. Where is everyone else at the party? Curiouser still: When does gov+P2P anything mix? Who is accountable when I serve pirated media content I am unknowingly hosting via P2P using beaker? In some places using such software is illegal for that reason. Who takes down the page when I serve up bomb plans? There's one strong reason privacy may be intentionally broken, or at least cast aside. Deniability for people hosting the mirrored content is there, but it leaves nobody accountable for a DMCA notice or law enforcement action right? Unless they can come kick my door, then it's fine. See why they might not wanna have any kind of anonymity on such a network? Call it paranoia if you wish - whatever. It demonstrates a conflict between the design, and the objectives of involved parties. There are dozens of reasons why gov+p2p typically have nothing to do with one another, which would give some compelling reasons for a gov to want to put some boots on the ground, maybe manipulate the playing field a little. At least, they're solid grounds for gov to be anti-(beaker+privacy) combos.
One company which owns a foundation supporting the project makes its money primarily in an industry which is infamous for tracking, privacy invasions, selling and mishandling of user data, and exploiting user browsing behavior, yet they are asking me to trust their modified browser and server (you need to run a modded httpd to serve "legacy browser" users with normal DNS, etc.). I was under the impression that the contemporary cybersecurity concerns of users and governments were focused on improving privacy, not creating monetary partnerships with media companies.
So, wondering what the biz model is, where the money flows and why, and why government (read: THATS _YOU_ FED! lol) _may_ be interested and might present challenges to using it in the way I would like, for anonymous and open exchange of data. If you've been involved in research, defense, or fedgov the reasons are apparent. Well, doesn't mean they _are_ involved, or even _care about it_, but they may at some point care a lot, if history is an indicator. GitHub stands to lose a little here, maybe, so I doubt they'll jump to the front with their credit card in hand to help. Google sure won't benefit, and that sure is a lot of work for such a small team to tackle, so how are they gonna maintain this? Is this gonna be a forever-separated fork of Chrome? Will Google get shitty and try to break compatibility or prevent usage of Beaker or its features to protect their investments? Doubt they'll help at any rate.
It seems like they're a project working for open data and an open web alongside the very people who want to prevent this at any cost, and who are in a position to force them to alter their behavior. The software this is built on is not privacy-focused or even privacy-aware, the project itself in no way ensures privacy or anonymity, and it is controlled by parties who have interests counter to the goals of the project. So why would I invest my time-money in helping something which is at best naive, and at worst doomed to fail? I love the concept, but WTF, how is _this_ the way to accomplish the goals of Dat, Beaker, or the pro-P2P community? By building in anti-privacy technologies and stakeholders?
I hope this makes more sense. Thanks!
Beaker uses Electron. We chose that because we're from the nodejs world and it allows us to move really quickly.
The Knight Foundation has given Dat money to pay attention to a specific use case (sharing of large data sets).
You have to be clearer if you want to insist that either of those things compromises dat (or beaker, although beaker is only a client for dat) as a project and makes it unsuitable for its other goals.
edit: my bad, there's also markdown support.
It also needs to have a wasm-backed canvas.