ownCloud Infinite Scale: Go instead of PHP, microservices instead of LAMP (heise.de)
104 points by veddox on Jan 22, 2021 | 119 comments



Here's a more detailed (German) discussion of what the rewrite entails: https://www.heise.de//news/ownCloud-Infinite-Scale-Go-statt-...

I wonder what this will mean for the competition with Nextcloud? So far, NC seemed to be more active and slightly better than OC, but this move may change the playing field quite drastically.


I have so many problems with basic syncing on my Nextcloud here - I wish I had used something else. But I guess the "competition" is over who has more features.


What issues have you encountered?

I can't imagine using anything else, it works so flawlessly for me on multiple devices across different platforms.


First: how many people are working on those files? The more people, the more problems I have.

The main problem is multi-file formats where many files are written together, e.g. Scrivener, Git, ... - I get sync conflicts which are hard to resolve.


I'm not sure what your use case is, but syncing source code repositories to a shared folder doesn't sound like a great idea to me.

Collaboration on individual files is also best solved using online office suites and similar tools.

The remaining scenarios indeed benefit from file locking and may need conflict resolution from time to time.


"Collaboration on individual files is also best solved using online office suites and similar tools."

Yes, as I've said, I'd use something else now; I fell for the "Share and collaborate on documents" marketing slogan on Nextcloud's homepage.


Why would that change the competition? I don't think the technical underpinnings have been a problem in rolling out one or the other. Honestly curious.


I run Nextcloud on a Pi 4 and it's a struggle to get acceptable performance out of it. The ecosystem is nice, but I would absolutely trade a few plugins for a smoother file management experience.

I will definitely be checking out OCIS, assuming I find no reason for concern from a license / privacy / security standpoint.

Golang isn't an automatic guarantee of performance, and "microservices" gives me pause in the context of a personal server, but it so happens that most of the Golang-based applications I host perform extremely well. This coming from someone who strongly dislikes the "simple is better than correct" design of Golang.


The problem is probably the pi4.

I run Nextcloud 20 on an i5-3470 CPU @ 3.20GHz (2c/4t) and for a small use case it works very well. The storage is a rotational (magnetic) hard drive (2TB, Seagate Barracuda, with ZFS 0.8.something).


FWIW I always had performance problems running nextcloud on cheap VPSes while owncloud behaves more smoothly. Anecdata though.


Maybe I hadn't been sufficiently clear: the hardware I mentioned is physical hardware, running at home.

No virtual OS or sharing the physical cpu with other tenants.

Cheap VPS providers often oversell, betting that most customers won't actually be using the CPUs in user space 100% of the time (and that's often true). If your provider isn't doing anything nasty, you should be able to see "steal time" (often indicated with 'st') in top/htop: that's time when your VM wasn't actually scheduled on a real CPU.


I am inclined to believe you. I am giving nextcloud another try this weekend on my pi (I just found out about nextcloudpi).

edit: Aaaaand nextcloudpi is stuck in an endless loop after first login. What was I expecting.

https://help.nextcloud.com/t/docker-install-says-initializin... And I am not the only one. I'll try again in a year or two.

edit2: that doesn't explain why ownCloud runs smoother than Nextcloud though. They must be doing something right.


I just wanted to add some additional things:

- from what I see ownCloud is now written in Go whereas Nextcloud is still PHP. That might give it a performance boost on a Raspberry Pi (it basically avoids all the overhead of reading and byte-compiling all the PHP files on every request)

There are things that you can do to speed up nextcloud though:

- look into php byte-code caching (opcache and stuff like that)

- make sure your database indexes are set up correctly: I can't remember where in the admin ui (I'm on the phone rn) but there's some sort of self-test/diagnostics page that will tell you about database indexes to create (and the SQL commands to run too, iirc)

- you can tell Nextcloud to use Redis for caching too, that could give an additional performance boost

But yeah, again, the Raspberry Pi isn't really that good for this kind of thing. For pretty much the same (total!) price as an RPi and related accessories I'd get something like a Lenovo m93p Tiny off eBay and slap an SSD into it. You'd get proper performance, albeit with a higher TDP.

I hope this helps :)


Thanks, I'll apply those tweaks next time I try nextcloud.

I am mainly working with Docker on a VPS/remote server now, so if the image isn't tuned I'd have to dig into it and maintain it myself.

ownCloud 10.6 still uses PHP; I think so far only the enterprise-something versions are using Go.


On a whim I tried the VPS Docker setup on the Pi. Runs much smoother (seems like the DB was the bottleneck).

edit: Aaaaaaand now I can't log in because I foolishly tried to set up WebDAV, and Nextcloud is adding 30 seconds to my login and it never actually cools down. Why do I even bother.


> The problem is probably the pi4.

Jellyfin (.NET Core backend with an SPA client) runs quite well off my Pi 4, even without hardware decoding.

If the device can stream 1080p video, it should be able to serve an online file explorer.


Keep in mind that streaming video likely translates to a read-mostly and sequential-read workload for the underlying storage, and that's really an ideal use-case for storage (you can do stuff like file-system caching, stream buffering and file read-ahead).

Things like nextcloud/owncloud are a lot more random-io-bound, if you think about it.


Other than the performance possibly gained from Go vs PHP, as well as lower memory consumption, probably not much. I have no idea how large-scale Next/Own-cloud installations behave, but I'd assume you just throw more hardware at it.

What will change, though, is that they lose the relatively well-tested and hardened code that is ownCloud today. Switching to Go doesn't just magically make your webapp secure.


That sounds scary. Going from a monolith to a microservices architecture doesn't seem like a great move for software like ownCloud. I think rewriting this as a Go application with a modern JS-based frontend might make sense; I just don't get why you need to throw microservices into the mix. But let's see if they can pull this off; historically such rewrites don't have a great track record.


The problem is not refactor vs rewrite. What most people overlook is the domain knowledge.

If the domain knowledge is in the code, you can only refactor. If the domain knowledge is in the team, then a rewrite tends to be faster.

Most rewrites fail because the team rewriting is not the team that did the initial development.


> Most rewrites fail because the team rewriting is not the team that did the initial development.

Looking at https://github.com/owncloud/core/graphs/contributors, most of the initial core contributors contribute now to Nextcloud instead.

I am interested in seeing how the rewrite affects end-users. Currently, it seems to be mainly focused on File Sync and I don't see things such as calendar or contacts management. (they totally could appear in the future though)

I'd assume that limiting the scope of components also makes a rewrite easier. Nextcloud for example has a ton of hooks that allow you to write apps to customize the behaviour (want users to sign the ToS before downloading a share? Should be doable, etc.). When you leave these out, implementing new things just got a whole lot easier :) (so I guess I am envious of the fact that they got rid of a ton of backwards-compatible APIs to maintain :-) )

Disclaimer: Contributed to ownCloud for a few years, then to Nextcloud.


Lukas! Yep, for now, we focus on file sync and share.

Personally, I am looking into Kopano or EteSync to replace calendar and contacts. We already embed Kopano konnectd as the OpenID provider, so there are synergies.

We do maintain backwards compatibility for the WebDAV, OCS and OCM APIs, and the clients are all working. The inner hook system of oc 10 is obviously gone, as are the internal PHP APIs. Instead we now rely on the CS3 APIs, which describe a set of services in a microservice architecture, to extend or customize functionality. The old approach had always been fragile, because a bug in an oc/nc app could affect all other services.

Furthermore, we did not have to start from scratch but contributed to reva, which has been powering CERNBox for a few years. It is the reference implementation for the CS3 API.

So we are very much protocol-driven, which makes changing the implementation language possible if needed. And everyone is free to implement their own service replacement if they need to. The extensive acceptance test suite will tell you what is not working compared to oc 10 or ocis. For now, we are pretty happy with Go and the initial benchmarks.

One more thing regarding core developers having moved from oc to nc. The server is just one piece of the puzzle. None, literally none, of the desktop, iOS or Android developers paid by ownCloud moved to the fork. It feels as if they were not even considered important. The sync protocol and the end user experience on their daily driver really is what matters. I am very happy that we as ownCloud finally grabbed the chance to really tackle the file sync and share part with the right tools for the job, without being pulled in a gazillion directions. I cannot express my gratitude for the rest of the oc sales and support team that keeps oc 10 running well and bought us the time to actually make this step. Truly exciting times.

Keep in mind this is still tech preview. There is still a lot of work to do. Helping hands welcome. And yes we need to work on our communication skills...

Full disclosure: oc employee no. 7 and ocis tech lead here. Originally hired to work on full-text search in ownCloud 10. Good old times. Cheers to all the oc and nc devs. One thought for them: it should be possible to implement the internal PHP API of oc/nc using RoadRunner. That might allow existing PHP apps to be wrapped in a dedicated microservice. But that is not something we are investing time in. Feel free to ping me on https://talk.owncloud.com


Are you looking into leveraging Syncthing for file sync? I always had the experience that, even recently, it was more reliable and faster. It's also written in Go.


No, but thanks for the pointer. AFAICT the difference in the sync protocol is block-based (Syncthing) vs file-based (ownCloud). There are several reasons why we prefer the file-based protocol.

Currently, all our clients sync using WebDAV. Changing the protocol would require rewriting that part of every client.

Another reason is that ownCloud as well as ocis are used to access files that reside in other existing storages, e.g. S3 or Ceph. Translating those protocols to a file-based sync is a lot easier than adapting Syncthing (just my gut feeling).

Another aspect is that the file-based sync is also state-based. We can use the etag to detect changes and immediately start syncing in a breadth-first approach. We are not doing that yet, but the Windows cloud VFS we implemented for the desktop clients makes this one of the next steps.
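To sketch the idea (an illustrative Go toy only, not reva/ocis code; the lister interface and entry type are invented for the example): walk the tree breadth-first and descend only where the cached etag no longer matches, since etag changes propagate up to the root.

```
package syncer

// entry is one item returned by a (hypothetical) WebDAV PROPFIND
// with Depth: 1, reduced to the pieces the sync logic needs.
type entry struct {
	path  string
	etag  string
	isDir bool
}

// lister abstracts the server; a real client would issue a PROPFIND
// per folder and parse the multistatus response into []entry.
type lister interface {
	list(folder string) ([]entry, error)
}

// changedFiles walks the tree breadth-first, descending only into
// folders whose etag differs from the locally cached one. Because
// etag changes propagate up the tree, an unchanged folder etag means
// the whole subtree can be skipped without listing it.
func changedFiles(srv lister, cache map[string]string, root string) ([]string, error) {
	var changed []string
	queue := []string{root}
	for len(queue) > 0 {
		folder := queue[0]
		queue = queue[1:]
		entries, err := srv.list(folder)
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			if cache[e.path] == e.etag {
				continue // unchanged: skip the file, or prune the whole subtree
			}
			cache[e.path] = e.etag
			if e.isDir {
				queue = append(queue, e.path)
			} else {
				changed = append(changed, e.path)
			}
		}
	}
	return changed, nil
}
```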

The sync in ownCloud feels slow because the clients are polling. There is currently no persistent connection that the server could use to push changes. But to be honest, we have been bitten by firewalls between server and client so often that I doubt we can do real push notifications, which is why I personally am aiming for long polling. For mobile devices we have to rely on the existing notification infrastructure from Google and Apple anyway. Anyway, there are ways to speed up sync, which switching to Go made a lot easier to implement. But one step after another.
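Long polling is also cheap to express with Go's standard library. A minimal sketch, assuming some events channel fed by the storage backend (all names made up for the example):

```
package notify

import (
	"net/http"
	"time"
)

// longPoll blocks each request until a change event arrives or a
// timeout fires, so clients learn about changes quickly without the
// server needing a firewall-unfriendly push channel.
func longPoll(events <-chan string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		select {
		case ev := <-events:
			w.Write([]byte(ev)) // something changed: client starts a sync run
		case <-time.After(30 * time.Second):
			w.WriteHeader(http.StatusNoContent) // nothing changed: client polls again
		case <-r.Context().Done():
			// client disconnected; nothing to do
		}
	}
}
```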

We already implemented TUS for uploading files in ocis and all clients. It is a well-designed, extensible upload protocol that covers a lot of corner cases we have experienced first-hand with all our clients. We contributed our experiences and are planning a batch extension that we would use to group lots of small files into a single upload. Delta sync should be a TUS extension as well, IMO. But that is mostly in my head.
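For readers who don't know TUS: it is plain HTTP with a handful of headers, which is part of why it is easy to extend. A hedged sketch of the two core requests in Go (the endpoint is a placeholder; a real client would also resume interrupted uploads via a HEAD request for the current Upload-Offset):

```
package upload

import (
	"bytes"
	"fmt"
	"net/http"
	"strconv"
)

// tusUpload sketches the two core requests of the TUS protocol: a
// creation POST announcing the total size, then a PATCH sending the
// bytes from a declared offset.
func tusUpload(endpoint string, data []byte) error {
	// 1. Creation: announce the upload and receive its URL.
	req, err := http.NewRequest(http.MethodPost, endpoint, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Tus-Resumable", "1.0.0")
	req.Header.Set("Upload-Length", strconv.Itoa(len(data)))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("creation failed: %s", resp.Status)
	}
	uploadURL := resp.Header.Get("Location")

	// 2. Transfer: PATCH the bytes, declaring where we start.
	req, err = http.NewRequest(http.MethodPatch, uploadURL, bytes.NewReader(data))
	if err != nil {
		return err
	}
	req.Header.Set("Tus-Resumable", "1.0.0")
	req.Header.Set("Upload-Offset", "0")
	req.Header.Set("Content-Type", "application/offset+octet-stream")
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("patch failed: %s", resp.Status)
	}
	return nil
}
```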

First, we want to get the basic file sync and share features fully implemented.

So ... Yeah ... Changing the language and the architecture kind of made a knot disappear and we are moving forward.

In that regard, I personally have an eye on the Microsoft Graph API, which we could use for file management. Input welcome.


You’re describing the CADT (Cascade of Attention Deficit Teenagers) development model.

The longer I work in this industry, the more I’m convinced it is the dominant software development paradigm.

Search for “jwz cadt” to read more (can’t link it here).


That's very insightful. It's also interesting that the tendency is to do the opposite: if it's your code you may not want to trash it completely and instead work on an incremental refactor, while if the codebase is foreign it's tempting to just get rid of it and do it your way.

Anecdotally, I've been rewriting one of my applications at work from scratch (converting from Python to Rust), and while it's a lot of work and it put new features on hold in the meantime, I'm fairly optimistic that it's going to be a big win in the long term. Not only will the Rust code perform better and be more maintainable, the rewrite lets me address architectural flaws in the previous solution, some caused by lack of foresight and others by tacked-on features that weren't expected when the code was originally designed.


Really well said, this is also my own experience in many projects.


This is one of those statements that seems very obvious in hindsight! Is this common knowledge? This is the first time I've heard the argument put so precisely.


You suggest a 'modern' JS frontend... Microservices were a 'modern' architecture a few years ago. Whatever reasons lie behind choosing a modern JS frontend would similarly support (or not support) the microservice backend.


I'm curious about past examples of failures, can you provide some? Thanks!


The thing with Owncloud/Nextcloud is the complexity of the system. When many users are working with the files at the same time, there's a lot of messaging going on in the backbone. If people are working in shared folders, that traffic goes visibly up.

Messaging between systems is hard. When things are async, it's harder. When there's async concurrency, it becomes spicy.

I'm managing a Nextcloud instance at the office. Even in this relatively mature state, it can bork itself pretty thoroughly. Also, it has separation of responsibilities, and a system admin is not an admin in the traditional sense, so a lot of things are hidden between logs and other layers.

Complicating this further by adding microservices is a brave idea, to say the least. We'll see what happens.

I hope they succeed but it's not a piece of cake.


There are many examples where complete rewrites have caused problems and killed or hampered companies; Netscape and Firefox come to mind.

Regarding microservices, I don't know of any open-source projects that have failed because of that (because there aren't many open-source products that use a microservices architecture), but personally I've witnessed many corporate software projects burn and fail when trying to rewrite an "ugly" but functional monolith as a collection of microservices.

The thing is that the energy which you invest into rewriting your entire software in a new stack could also be invested into improving your existing stack. LAMP is not dead and actually evolving quite well, so there are definitely ways to improve the stability and speed of a LAMP application without rewriting it in Golang (even Facebook still uses PHP, albeit with some modifications). Having written extensive applications in both scripting languages and Golang I can say that the latter has its advantages but is also slower to develop in and in many respects not as agile as a scripting language like PHP or Python.

Doing a rewrite of a popular project with thousands of deploys in the wild will also force you to split your attention between keeping the old system running and building the new one. Since most of your users will want to migrate from one system to the other without losing their existing data and configuration you'll have to ensure that there is a clear migration path that works, which can be quite hard. Also, all external systems that are interacting with existing installations (e.g. via APIs) will also need to be supported by the new system.

In the end it might be possible to make it, but in my humble experience I think a complete rewrite is almost never a good idea and the energy you put into it is often better invested in making gradual improvements to your existing codebase. But that's just my 2c of course.


I'd say that in your examples there were pre-existing problems at those companies, which both killed or hampered them and caused the rewrites.


Probably! From my experience I think the main problem is that developers often don't like to work with "legacy" code, where legacy means anything they didn't write themselves. Also, most developers tend to drastically underestimate the cost of a full rewrite as they don't know all the intricacies of the old system. From the outside most systems seem easy and straightforward, so people think they should be easy to rewrite. During the rewrite people then discover all the little edge cases that the legacy system handled and that they didn't think about.

There are legitimate reasons to rewrite a system in a new language; it's not a decision that should be taken lightly though. From a business perspective it's also very expensive, as having a team of 5-10 developers rewrite a system that already exists and works can cost millions of dollars, so you should have a really good reason and a clear ROI objective when deciding to do this.


Hm, let me give a short history of ownCloud, reva and ocis.

Many years ago CERN chose ownCloud over other solutions because of the state-based sync. They could use that to let researchers sync petabytes of data residing on EOS, their custom-built storage solution. That ownCloud supports custom storage implementations made this a lot easier.

They did suffer some database bottlenecks and decided to extend EOS with features that the sync clients need: tree modification time propagation (so that the etag of the root changes when anything in the tree changes) and tree size accounting (so you can see how many bytes are hidden in a folder, including all children).

They basically maintained a fork that was half PHP, half C++. Together, we added APIs and interfaces to the codebase to make the file cache implementation exchangeable.

They went ahead and implemented a Golang service that could serve the API requests, while the web UI was rendered in PHP. It should be possible to dig into the details by looking at the public CERNBox repo. Code archeology ;)

Anyway, all that was before the Nextcloud fork.

To be honest, CERN has tried to convince ownCloud to switch to a different architecture for years. With some long held opinions leaving the company we were free to reassess our options.

ownCloud has long had problems with long-running operations being killed by e.g. php-fpm timeouts. So I was evaluating and comparing different PHP frameworks like ReactPHP, Swoole and amphp. I wanted to be able to offload workflows that are triggered after an upload has finished to a proper background job. And I really wanted to stay in PHP land because of all the already written code and existing apps.

But I noticed that they all had one thing in common: they all reimplemented a Redis and a MySQL library, which made me wonder why. The existing drivers would block on network IO, killing any concurrency gains you could get from a reactive framework or the Go-like coroutines of Swoole.

It finally dawned on me that PHP may not be the best language to implement a service that has to deal with file IO. A systems language is more fitting.

Go or Rust? Two years ago that was way easier to answer. Furthermore, CERN had a working server-side implementation of the ownCloud WebDAV and OCS endpoints written in Golang.

We sat down and discussed how a file sync and share solution would have to look at the protocol level. What services are necessary, and how could we make the existing code more modular to support storage backends other than EOS? How can we get rid of the centralised database?

The result was changes to the CS3 API, making reva the reference implementation, and using ocis to tie it together with user management and OpenID Connect, while thinking about migration strategies. There is still the possibility to wrap PHP in a sandbox-like service using RoadRunner.

I would not have dared to start from scratch. But with an existing codebase that was used in production the decision became easier. The story is not over, yet.

Yes, we are leaving things behind. But we can embrace new things as well. And I am happy to be able to work with an awesome team to see this through. Every helping hand is welcome.



It is telling when a single example is used for 20+ years.


And in hindsight it doesn't seem a terribly good example.

Gecko was a technical success.

On the non-technical side, it isn't at all clear that Netscape-the-company came out worse than it would have done if it had tried to stick with the buggy Netscape 4 rendering engine.


I'd argue that their attempted rewrite of Firefox in Rust also caused them to actually fall behind in terms of features and speed, which is reflected in the diminishing market share of Firefox. Technologically Rust is an awesome language and a boon to secure software development, I'm just not sure it was such a great idea for Mozilla to put so much energy into this, which could've gone into improving their main product instead.


I'd argue not, because I switched from Firefox to Chrome at v1.

https://ubuntuforums.org/showthread.php?t=1398220&page=11&p=...

Chrome was absolutely faster at web rendering than Firefox, more stable, and the UX was cleaner; it made Firefox, with its slow, buggy performance and awful themes, look like MySpace to Chrome's Facebook.

I also remember being very frustrated with Firefox on Ubuntu ~10.04 at the time and when Chrome came along it was exactly what I'd been waiting for.

The Quantum/Servo/Rust/UI refreshes seem like Firefox catching up to Chrome v1, and honestly in 2020 it feels like Firefox has finally caught up significantly.

It's still not as stable as Chrome but it's getting there, webrender is a massive leap forward in performance:

https://testdrive-archive.azurewebsites.net/Performance/Chal...

That test is now showing 6.65 seconds, down from 45 seconds without it.


I know 3 examples from personal experience. One that I worked on myself was scrapped before completion. The other two were finished, but they severely impacted the business. They are not so useful to share because you would not know these products. My assumption is that we all know this kind of project, but the Netscape story is a great reference because it explains why it is a bad idea.


Incremental rewrites are just always better. The issue with full rewrites is that you have to rewrite everything - which I suppose sounds good to some engineers, but I dread being forced to spend weeks on parts of the product that are not an issue. Usually you do the rewrite to make things faster, more maintainable, and easier to add features to. The issue is, the larger the rewrite, the worse you make all those things in the short term.


There is nothing stopping them from offering a fat binary. Not everything has to run on Kubernetes.


I just shut down my Nextcloud instance yesterday. Not that there was anything wrong with it, but all I really need is a file synchronization "platform", and there are others that don't open quite as large an attack surface as a complete Nextcloud instance does.

Not saying Nextcloud is insecure, and I've never had any problems with it.

Currently trying to decide if Seafile is the way for me, though I dislike its on-disk file format (IIRC it's some adaptation of Git).

For now I use Resilio Sync.


We've been using Seafile for two years at my workplace. So far it's been very reliable and much faster than Nextcloud for file transfers. The current drive clients work well, but aren't as polished as Google Drive File Stream, for example. On macOS, I hope they will transition from macFUSE to the new FileProvider framework soon.

Seafile's file storage format has the advantage that it's easy to revert a file or folder to some earlier revision, for example after accidentally deleting files.


How do you justify the risk model of a Chinese developer?


Store things encrypted. (For example I sync my KeePass DB via Seafile.)

Plus as far as I know neither ownCloud nor nextCloud went through a security audit and they are big piles of PHP with a lot more complexity than Seafile. So it's very likely that there are more bugs in phpCloud than in XiFile.

If you want some real security buy a DropBox/GoogleDrive/MSOneDrive subscription, hm?


> Plus as far as I know neither ownCloud nor nextCloud went through a security audit

This is inaccurate. Nextcloud does receive security audits and is in fact also used by quite a few security-conscious organizations (to name a few: the German government, Siemens, ...)

There's also a bug bounty program that pays pretty decently considering the company size: https://hackerone.com/nextcloud. (Remote Code Execution = 10k, Auth Bypass = 4k; compare that to the rewards that FAANG pays and you'll see it's not that bad)

> and they are big piles of PHP with a lot more complexity than Seafile

I did a small audit of Seafile years ago and I don't think that argument flies.

For example, they copied https://github.com/django/django/blob/23c612199a8aaef52c3c7e... to https://github.com/haiwen/seahub/blob/b6f8935c0f355cc70145f9... and removed some security-critical checks. They removed the check for the password hash there. (https://github.com/django/django/blob/23c612199a8aaef52c3c7e...)

Furthermore, the Django secret key was generated as shown at https://github.com/haiwen/seahub/blob/b6f8935c0f355cc70145f9....

```
def random_string():
    """ Generate a random string (currently a random number as a string) """
    return str(random.randint(0,100000))
```

That's not really secure (random.randint(0,100000) yields only 100,001 possible values, from a non-cryptographic PRNG at that), and copy-pasting Django core code and then removing security checks ... is shady at best.
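For contrast, generating a secret from a CSPRNG takes only a few lines. A sketch in Go (purely illustrative; Seafile/Django would of course do the equivalent in Python):

```
package secret

import (
	"crypto/rand"
	"encoding/base64"
)

// newSecret returns a 32-byte (256-bit) random key, base64-encoded.
// Compare with the quoted snippet above, whose output space is a
// five-to-six digit number from a non-cryptographic PRNG.
func newSecret() (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(buf), nil
}
```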

Disclaimer: I wrote a significant part of the ownCloud code (https://github.com/owncloud/core/graphs/contributors), then forked it into Nextcloud. After some years I moved to Facebook to do application security there :-)


That the German government (specifically, ITZBund) chose to use Nextcloud is one of the most reassuring things I ever heard about it :D

Thanks for this comment, and your work on {own/Next}Cloud!


Oh wow, thanks for the quick reply. I searched for "nextcloud audit" but haven't found the reports, just docs about the "monitoring and audit" and the "security scan" features. (I still can't find them, but maybe that's because these audits/reports are not public; I don't doubt your word.)

The bug bounty is very reassuring!


Thanks for these code deeplinks. That's scary considering Seahub. I'm a happy Nextcloud user, thanks for your work!


Thanks Lukas! It's fun to see people whose code I rely on daily. Best of luck with Gatekeeper!


Curious, since much hardware (including CPUs) is fab'd in China, how do you model this risk?


In my experience I've not found software developed by engineers based in China to be developed with any particular care. It is very common to see trivial backdoors, massive amounts of data collection, and plaintext protocols. For situations where the developer is being security conscious, the language barrier often means that reports of concerns are either ignored or misinterpreted.

It is often the case that software developed outside of China for devices produced in China is alright, but on the other hand many companies like Honeywell simply contract all of their software development there as well, and it painfully shows. I shouldn't be able to buy a product in 2020 that has a Linux kernel from 2012 and multiple remote code execution vulnerabilities just from public CVEs, but the Honeywell Tuxedo security system managed it with ease.


I have experienced similar software quality issues; however, none of this is unique to China: you see it in North America, Europe and India alike. My question was specifically related to the hardware most people run, which is often fabbed in China, given most people take it for granted that this is "safe".


The hardware is obviously suspect as well, but I can only speak for the number of actual backdoors I've been able to find in my own devices. Root shells on random sockets, "accidental" eval() in web UI elements, hardcoded passwords, actual processes just called `backdoor`. I especially liked being able to remove the IPMI password from a SuperMicro board I bought from eBay by making a HTTP request to a "buggy" endpoint that printed the root password back in plaintext.


Wouldn't the relevant parts of most users' hardware far more likely be manufactured in Taiwan, the USA, Singapore or South Korea?


It's always about trade-offs. For our use-case, I trust it well enough to prefer it over something cloud-hosted in another country. But I don't doubt that Seafile contains security holes and I wouldn't be surprised if there were backdoors. But I assume that for quite a lot of the gear I manage, so... :)


I am curious if you have tried https://syncthing.net/ ?


I have, but Syncthing doesn't have an iOS client, or at least it didn't last time I checked.

There was a client on the App Store called "f-sync", but that is gone now.

Also, Resilio (with a paid license) supports features that Syncthing didn't, like selective sync and encrypted folders, which allow you to share a folder with someone for redundancy and have things stored encrypted on their disk.


There's a newly released third-party iOS client: https://www.mobiussync.com. Unfortunately it's closed source, which is what has stopped me from purchasing it (the free version is limited to 20 MB of sync).


I'm not too worried about the client being closed source, especially not when the server is open sourced.

For Syncthing there is of course the potential problem of the client leaking the secrets to the author, giving them unauthenticated access to the server.

Thanks for the pointer, I'll check it out, though I've had a "lifetime" license for Resilio for years and it scratches my itch, so there's no pressure to switch.

I have used Syncthing in the past for server-to-server synchronization, a task it performs extremely well, but previous attempts at creating a "road warrior" setup from iOS (with f:sync) all ended in clients taking minutes to connect to the backend, where Resilio would do it in seconds. I'll give it another try.


> I'm not too worried about the client being closed source, especially not when the server is open sourced.

> For Syncthing there is of course the potential problem of the client leaking the secrets to the author, giving them unauthenticated access to the server.

Yes, that is the risk. It is significant because the credentials entered into the closed source Mobiussync app (that wraps the open source Syncthing node) would allow the author (if malicious, which I have no reason to believe they are) to access all of your files (even if your other nodes are behind firewalls, by design).

Now, I’d like to believe Mobiussync is doing the right thing. It aligns with their economic interests not to steal credentials, since nothing would kill their app sales faster than being found out. I imagine it would also be easy to detect if the app were exfiltrating credentials by monitoring its communications. I’ve also read the announcement post: https://forum.syncthing.net/t/isyncthing-ios-client-for-sync... and appreciated the way the author engaged with the Syncthing community here: https://forum.syncthing.net/t/mobius-sync-ios-client-now-in-... . Based on my assessment of their conduct and the factors above, I feel almost certain Mobiussync does the right thing by its users.

But economic incentives change, authors change, bugs in code happen, and a good feeling is not the same as verifiability. The risk may be small but at stake is all your data.

I’d certainly pay more than the (very reasonable) price the authors ask for, for the additional peace of mind given by open source.


I've just purchased it, and it looks good.

It's nice how, when you click "Purchase" a second time, they tell you to donate to Syncthing instead.


I just purchased it as well. It's less than a cup of coffee, and most importantly not subscription based.

I may or may not use it (gonna evaluate Syncthing to replace Resilio), but at least I can support the developer for making a thing that was VERY much needed.


> encrypted folders, which allows you to share a folder with someone for redundancy and have things stored encrypted on disk

FWIW Syncthing is working on this feature; it's not fully baked yet. I'm on mobile so I can't link, but it's in the nightly builds.



Aren’t you worried about Resilio being closed source?


No more than I worry about my operating system or office suite being closed source.

But then again, I don't put sensitive information like passwords, SSH/PGP keys, tax returns and stuff like that in Resilio. I very rarely need those documents "on the go". Instead I have working documents, books, notes and more that I need access to, and while I'd rather not share them with the rest of the world, it would probably not make much difference if they were shared.

Furthermore, I can completely "wall off" Resilio Sync. It runs in a container on my public server, and the files I need access to are mounted as NFSv4 shares "outside" the container. Access to the shares is managed through Kerberos.

So even if you make it inside the container, you can (probably) wreak havoc with the files on the NFS shares, but those are backed up, and unless you can find a way out of the container, or a bug in NFS, that's pretty much it.

The container has only the absolute minimum of binaries to allow Resilio to work, so your toolkit is kinda limited, at least when compared to Nextcloud, which requires a lot of binaries/libraries to work, along with a PHP interpreter.


I made the transition from Resilio to Syncthing and I am very happy.


What were the noteworthy features that make Syncthing a better solution?


Yeah, NextCloud is overkill for that. I see it as the back-end of my family's phones though: calendar/contacts sync, auto picture uploading (which you can even view on a map), sharing files within the family, off-site backups for everyone. I'm just waiting for NextCloud Talk to support federation so I can talk between servers (I say family, but it means different families: in my language there is a word for just parents and kids and another for when you include more people ("gezin" vs "familie"); not sure which to use, but we have multiple domains and servers (all in my basement though ;))


US usage: parents and kids = "nuclear family"

Everyone who lives together = "household"

Family not living together = "extended family"


Interesting – for basically the same reason we made the inverse switch from Resilio to Nextcloud with a small startup team with <10 devs.

Nextcloud provides us custom shares, user groups, public share URLs and better local clients, compared to Resilio's performance. Resilio was especially bad on Linux (but worked remarkably well on my Android with a large SD card). With Nextcloud I can even choose to use WebDAV only, if I don't want to mess with clients.


I guess it depends on your workload. My Nextcloud was only for myself and my family, and we only used it for "files on the go".

Calendar/contacts is handled by iCloud (Apple household, it's a Danish thing...)

Notes are handled by whatever each person finds the easiest. My wife defaults to the iOS notes app, i switch between various clear text editors.

File synchronization on desktops/laptops is handled by Synology Drive, which syncs beautifully whenever the machine is connected to our LAN, either directly or through VPN.

The only problem I needed to solve was ad-hoc access to files on mobile devices, preferably without opening ports, and since VPN doesn't always work from other private networks (IP scope clashes, usually), I chose not to use Synology tools for this. Besides, Synology Drive doesn't support selective sync, and while documents probably wouldn't be a problem, synchronizing gigabytes of books to my phone isn't really an option :)

Resilio on Linux does have a nasty habit of doing disk IO all the time, a habit that Syncthing doesn't have. When I look at running processes, Resilio on Linux is constantly using 2-5% CPU.


I run nextcloud internally in a container.

The thing I don't like is that it expects to be hooked up to the internet, and it nags you about apps you should install, but you have to install them from their cloud app store.

Now there is a way to install the app store on your server, but it would be nicer to get started without having to buy into all that. So I just run the default apps.


I briefly considered putting up separate VMs for front and backend containers, but ultimately decided against it. Instead I have a “web” docker network where my nginx reverse proxy runs, and a “services” docker network where I run the stuff I proxy.

Databases and other containers needed for the backend services run on another network as well, so if you make it inside nginx you will (of course) be able to access my already exposed backend services, but no direct access to databases and other services.

Before shutting down Nextcloud I used to have a resilio container running, and I would mount the data from that inside Nextcloud, so no direct contact between the two.

But then again, I just need access to files through a browser, and don’t need any of the advanced features of Nextcloud, so I’m still trying to find a better match. Looking at seafile if I ever find the time.


Not ideal, but you could give it internet access, install all the apps you want, and then isolate it again (remove it from the docker non-internal network or whatever).

As long as the apps you install don't need to make web requests on their own, they'll work fine.


If you're using Resilio I would like to recommend Syncthing. Developed out of neutral Sweden, files stored as-is.


There is a lot of hate for PHP and monoliths. The great thing about WordPress, MediaWiki, Drupal, Nextcloud, and many other open-source web solutions is that you can get up and running for ~$2 a month with little computer knowledge.

A lot of cheap consumer hosting services offer a managed PHP/MySQL stack very cheaply. A lot of them have variations of one-click installs of various kinds.

You pay the monthly fee and do not have to know about Linux, PHP or MySQL.

If you are familiar with PHP and MySQL, finding a place to host it is still really cheap and easy.

A common argument would be that you can get a VPS for $5, maybe even less. But then you have to know what a database is, which version of PHP you run, and when to upgrade all of them. Managed VPS solutions are usually fairly pricey.

With the new version of ownCloud, based on Go and microservices, the barrier to entry is much higher.

The best bet will be to buy managed Nextcloud hosting from someone.

While PHP is often scoffed at, the ecosystem that is ready and waiting to host it cheaply is amazing. It enables a lot more people to get involved, get inspired, maybe learn more and start contributing.

If you are writing software that needs hosting but want as many people as possible to be able to use it, then PHP is still the best bet, even if not the sexy bet.


IMO there is not a lot of hate for monoliths. Service-oriented architectures and their successor, microservices, have experienced a lot of pushback from teams that have tried them; it doesn't work out for all applications.

PHP - I mean, it gets the job done. I would never start a new company and choose it, but if you're already on it, it's good enough for most things. Go can be far superior depending on your needs.


The big issue I have with deploying PHP is the sprawling, historically low-quality ecosystem.

Maybe it’s gotten better, but last I checked, running RATS (which might not even exist any more) against random, popular PHP code bases would find piles of zero day vulnerabilities.

It’s been over a decade, so I might be being unfair.


This reminds me of Cozy Cloud's rewrite from Node.js to Go (https://github.com/cozy/cozy-stack); this seems to be a rewrite from PHP to Go, as well as a re-architecting:

https://github.com/owncloud/ocis

Pydio also rewrote itself from PHP to Go with Pydio Cells: https://github.com/pydio/cells


Forget ownCloud; Nextcloud is orders of magnitude better both as a company and as software: https://nextcloud.com/

It is my favorite self-hosted software of all time. It's really a dream to use!

The AGPL license makes it impossible for it to disappear or become closed source, which is a huge plus!


I also like and run NextCloud; it is the "cloud" back-end for my entire family. That said, it seems to me that this means that OwnCloud and NextCloud are now truly diverging and OwnCloud is not just a less feature-rich version of NextCloud. I think it is very interesting to see what happens to OwnCloud now. I mean, imagine OpenOffice being rewritten with different technologies from LibreOffice... I'd be interested once again!


No, thanks. I decided to stop using it after Nextcloud got confused by a few of my Git repos, overwriting my local updates with the older version from the server. Not to mention the "don't sync hidden files" default setting, which is simply not a good idea, to put it mildly. This new ownCloud product promises to handle that better ("with millions of small files or thousands with hundreds of GB each, ownCloud Infinite Scale runs smoothly"), but I'll believe it when I see it.

So far I haven't found a sync tool that I feel like I can trust. I don't care about user group management or online office suites, but I absolutely want ~100% reliable file sync that doesn't randomly do nothing for 15 minutes without me knowing why (I care about my files, and if I don't know what your software does with them I won't use it), or, like Nextcloud, falsely detect merge conflicts when 100% of the changes came from the same client machine.


You have naked git repos in synced cloud folders, without using git remote functionalities?

That would be careless and you are actively begging for things to break that way, so I'm not surprised. In fact I'd bet any other service, Dropbox/GDrive/..., would also suffer from such issues.


I'm not a very advanced Git user, so I wonder why that would be? Note that I was NOT using it as a remote repository but purely for backup purposes, meaning all changes were always mirrored to the server and that's it. Nextcloud still messed that up. So I don't know how that's a bad idea, considering sync of a large number of files is the single purpose of file sync.


> So I don't know how that's a bad idea, considering sync of a large number of files is the single purpose of file sync.

Yeah it is, but things aren't all rosy and are still very flaky here and there (again, I'd wager it's similar for most other cloud sync clients). So we still have to work around these limitations.

I'm not very familiar with Git internals, but Git writes many files, most of them tiny. When I physically copy a directory with a Git repo, the working directory (the currently checked-out files) always copies over fast, but the potentially tens of thousands of tiny files take forever (they're not contiguous, and each one has to be sought on disk; each file also has some overhead. The overhead isn't noticeable for large files, but it adds up for numerous tiny ones).

Next to self-hosting Nextcloud, I also self-host GitLab for that purpose. That is way overkill though. There are lightweight Git servers like Gitea that I've heard people self-host. If you don't rely on dumb file-based sync but only sync to your remote via the Git protocol, things won't go wrong anymore.


What does Git have to do with Nextcloud at all?


I very much like Nextcloud too, and have set up/administrated three instances in various contexts (family, friends, work).

It has a lot of great features and good multi-platform support. I also like that the company is based in Germany, and I had good experiences contributing to their code.

However, I do have to say that the software is somewhat clunky and slow, and the clients can be a little rough around the edges. (The desktop client has a particularly nasty bug that likes to clobber git repos.)

So if ownCloud comes out of the rewrite as sleek and reliable as they claim, they would be worth taking another look at. Until then, I'll stick with my Nextcloud...


Does anyone have a link to the actual technical architecture?

This talks about "three-tier architecture" and "no database needed", which is obviously pure marketing spiel.

Are they reinventing the wheel by implementing a distributed database directly into ownCloud? I don't get it.


Seems it has a modular storage back-end [0]; the default uses a built-in storage engine on local files with the oCIS storage driver [1].

[0] https://owncloud.github.io/ocis/storage-backends/eos/

[1] "The default oCIS storage driver deconstructs a filesystem to be able to efficiently look up files by fileid as well as path. It stores all folders and files by a uuid and persists share and other metadata using extended attributes. This allows using the linux VFS cache using stat syscalls instead of a database or key/value store. The driver implements trash, versions and sharing. It not only serves as the current default storage driver, but also as a blueprint for future storage driver implementations."


With this design we aim for the storage to become the single point of truth and to scale out, as the metadata is spatially close to the files.


> ownCloud Infinite Scale neither requires Apache nor any PHP infrastructure

I don't know much about production web services in Go, but is it true that they wouldn't need Apache?

I work on a site using Python/Gunicorn and Node behind Nginx, and Nginx is really doing a lot of critical stuff for us, normalising requests, preventing certain classes of security issues, DoS attacks, etc. I wouldn't expect that sort of protection to be built into Go's standard HTTP server?


Go has good support for TLS and HTTP serving, so you can directly expose a Golang service to the Internet without a web server in front.

Many people still choose to put nginx or Caddy in front for load balancing and TLS termination though.
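For illustration, a minimal TLS-terminating Go server, including the timeouts that give basic slow-client protection (cert.pem/key.pem are placeholder paths; golang.org/x/crypto/acme/autocert is another option for certificates):

```
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over TLS\n"))
	})
	srv := &http.Server{
		Addr:              ":443",
		Handler:           mux,
		ReadHeaderTimeout: 5 * time.Second,  // mitigates slowloris-style clients
		ReadTimeout:       30 * time.Second, // caps how long a request body may dribble in
		WriteTimeout:      30 * time.Second,
	}
	// Serve HTTPS directly, with no Apache/nginx in front.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```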


As with most languages' built-in HTTP servers, it is advisable to run Nginx/Apache/Caddy in front of it.

I think what they mean by "PHP infrastructure" is the specific configuration required to make Apache work with PHP; in the old days this was done with an Apache plugin, or more recently with the php-fpm project.


Good move. I'd never consider owncloud on LAMP.


They mention the involvement of research institutions - the Swiss universities use a common OwnCloud system (Switch Drive) for researchers to store and share files within and between institutions; which generally works very well. So seems likely they are involved since they must be one of the biggest OwnCloud users in the world.


Huge respect for this direction: I’d put ownCloud on the “too old to seriously consider” pile, but they’ve potentially not just started fresh, but brought forward everything they’ve learned in all the time since they started.

Done right, this means they are a serious, safe, and secure option.


I'm pretty sure this is an attempt to regain some of the market share they lost against Nextcloud. IIRC, the latter's popularity has been growing continuously, while OC has been stagnating/shrinking.

Let's see if they can pull it off.


Good and bold choice, but count on at least a year before visible results.


This is terrible news. I hope they keep the PHP version up to date, or someone forks it. I've used Owncloud over Nextcloud, because it seems more focused on just file syncing, which is all I want. Maybe it's time to migrate.


Did I miss something, or was all of that in German?


It is obviously a good idea to use a safe language that compiles to machine code for something performance-sensitive. But I do not quite understand why someone should choose Go over Rust for any large-scale project.


It has more to do with maturity and reliability.

Go has mature and fast standard libraries included in the language for web-related things like HTTP, gateways/APIs, proxies and, above all, encryption.

For Rust, you'll need quite a lot of crates, with a lot of unknown variables to consider.

Why do large businesses choose Webpack when there are lots of faster, more feature-rich alternatives? Because it's reliable and maintained.


Because Go is more mature than Rust for web development.

I get that Rust is the current trendy language and people on HN love to argue that $TRENDY should be used instead of $BORING, but this is exactly the kind of domain Go was built for, and it actually does a pretty fine job of it too.


Rust is not just trendy. It is inherently more powerful when it comes to expressing and checking invariants. I had thought that after decades of buggy dynamic code, people would have come around to valuing that. Comparing Go and Rust, I see two statically compiled languages with many different and interesting features. But one has a strong type system and the other has a trivial one. I'd choose the strong one for any new project that I expect to maintain for years to come.


> Rust is not just trendy. It is inherently more powerful when it comes to express and check invariants.

It's also massively complicated and the language is still in flux as it hasn't yet matured. Which is fine for hobby web dev projects but NOT what you want to base the long term future of any business on.

> I had thought that after decades of buggy dynamic code, people have come around to value that.

Get a grip. Dynamic code doesn't have to be any more buggy if you have the right developers and the right toolchains. Rust currently benefits from being a relatively niche language, but once it's mainstream you can expect similarly buggy code as less competent developers are forced into using it. Rust might have a fantastic compiler for catching specific bug types, but let's remember that it's not going to save you from each and every bug. Claiming otherwise is, at best, massively ignorant.

But even assuming your biased opinion were fact, it's still not as if Rust is the only language out there that covers the non-dynamic market space. In fact Rust is one of the least mature languages in that area, particularly when it comes to web development. This isn't a subjective point either: try spending a few years building and supporting applications in other languages and compare that to Rust. I have done just that, and while Rust will get there, it isn't there yet.

> Comparing go and rust, I see two statically compiled languages with many different and interesting features

Yes, one has been built with web development in mind and has been mature in that space for a decade. The other is Rust.

> But one has a strong type system and the other has a trivial one. I'd choose the strong one for any new project that I expect to maintain for years to come.

Well then you'd be choosing Rust for a stupid reason. If you're worried about maintaining something for years to come then you want API stability above all else. Rust cannot yet offer you that. C#, Java and Go are all better choices. Heck, even LISP, Haskell, etc would be better choices than current Rust.

Don't get me wrong, I have a lot of respect for Rust's designers and the language itself. But some of the fanboyism demonstrated in your comments is ridiculous. Rust isn't ready for the kind of web development you're advocating it for. It also isn't going to magically save you from every known bug (and actually Go's compiler warnings are pretty damn good too, by the way). I'm sure one day Rust will be a solid option, but not today.


Another thing to consider is that Go is much easier for beginners to contribute to: I've contributed to a couple of Go projects without ever having written a Go program myself. Rust is a different story, and larger projects often tend to abstract things behind type-level "magic", which is a big ask for a beginner or drive-by contributor to understand.


If your primary use case is setting up a web server then there is no reason to choose Rust over Go.


Why Rust over Go? Imagine I'm the CTO of an ownCloud-like company who is showing a strong preference for Go.


I'd also go with Go, just because of the amazing standard library. The ability to reuse part of Google's production code for HTTP / JSON is a huge win in my book.

Also, Go has been around for long enough that you'll find libraries and tutorials even for rare issues. When I evaluated Rust, I found that missing. Their ecosystem, while highly motivated, isn't mature enough yet for infrastructure-level code.
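As a small illustration of that standard-library breadth, a JSON endpoint with zero third-party dependencies (all names here are made up for the example):

```
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type status struct {
	Service string `json:"service"`
	Healthy bool   `json:"healthy"`
}

func main() {
	// HTTP routing, JSON encoding and the server itself all come
	// from the standard library; nothing external is imported.
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status{Service: "demo", Healthy: true})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```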


You should choose Rust for the long term. The type system will pay off any initial investment. Trust me, I am maintaining a large Python codebase. The number of problems that appear "organically" and would have been caught by a good type system is staggering.


I don't like Python personally but even I can see the problem here is you're doing Python wrong. Look up "type annotations" and include them as part of your tox testing.

Also, as I've commented elsewhere, if what you want is a static type system then there is still a plethora of better languages for web development than Rust. Rust might be a hugely impressive language but it's not yet ready for prime time web development.


Let's reverse that: why write in a language without a GC when you can write in a language with one? Rust's advantages shine when it's time to go low-level, but lacking a GC is a disadvantage otherwise.

This application isn't merely "performance-sensitive"; it needs a very particular form of performance: performance over the network (I assume that's the main bottleneck here). Better to write in an ecosystem all-but-officially optimized for networked applications than to lose developer productivity dealing with low-level stuff.


Have you ever done web development with Rust?

I have. Rust is a great language and I really enjoy working with it, but it is far from mature as a web development language.

Go is the correct choice here if those are your only two options.


I would argue that Go has a stricter policy for language change proposals too. This can be good or bad depending on the situation.

I would have liked some more detail in the OP about the process of rewriting; it would be more interesting than all the buzzword bizspeak.


All major components communicate via gRPC, so Rust is not out of the picture. GC-less languages might make sense for storage drivers.



