A currently maintained fork of SSHFS (github.com/deadbeefsociety)
329 points by feldrim on Sept 5, 2023 | 139 comments



If you want something in user land and you don't mind emacs there is TRAMP “Transparent Remote (file) Access, Multiple Protocol”. https://www.gnu.org/software/tramp/

I use it a lot when I am accessing files from my server on my MacBook Pro.


My favorite thing about using TRAMP is being able to cd to a directory on a remote system, and then cp a file either from my working directory to my local machine (or another remote!), or from my local machine to the current working directory.

Before I started using TRAMP, my flow for this was: SSH to a remote system, locate where I want to copy a file to with cd + ls, kill my SSH session, and then scp or rsync the file over, and then usually SSH back into the system.
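For anyone who hasn't seen it, TRAMP collapses that whole flow into its path syntax — the host name "devbox" here is a placeholder:

```shell
# Open a remote file directly; TRAMP manages the ssh session behind the scenes.
emacs /ssh:devbox:/var/log/syslog
# Multi-hop works too, e.g. ssh in and then become root via sudo
# (quoted so the shell doesn't treat "|" as a pipe):
emacs '/ssh:devbox|sudo:devbox:/etc/shadow'
```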


> kill my SSH session, and then scp or rsync the file over, and then usually SSH back into the system.

Naive question: why the need to kill the ssh session? Can't you just open another terminal window or tab (or tmux/screen tab if that's part of your workflow)


I usually type ~ followed by ^Z to suspend the session for a quick scp operation, then resume with "fg".
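For reference, the relevant OpenSSH escape sequences (typed at the start of a line in an active session):

```
~^Z   suspend the ssh session (resume with `fg`)
~.    terminate the connection
~?    list all supported escape sequences
```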


Or just scp from the server, inside the same SSH session. scp works both ways.


That's also a way to do it. I prefer TRAMP over both ways though.


Not the GP, but I assume it's just because the ssh session is no longer needed -- it was only required to locate the file.


> and then usually SSH back into the system.


TRAMP is amazing. Whenever I need to edit configuration files or code on a remote server, I get to keep the exact same Emacs setup as if I were doing things locally. There are noticeable delays in opening and saving files on slow connections, but it's good enough that I can't remember the last time I used nano or vi on a remote server.


Couldn't you use zmodem or something?


I am not familiar with zmodem, but maybe.


Couldn't you just use sftp?


Yes. I've never directly used sftp before, but checking the man page it does seem like it would fit the flow I described well.


Idk when you stopped using scp, but scp has been deprecated and, at least with OpenSSH, has been using sftp under the hood for years now. You've probably used it if you've scp'd since 2020.


My point above was that I was making and killing multiple SSH connections to find where I want to copy a file, and then do the copy. Using SCP with an SFTP backend would still do that.


FWIW a lot (most?, all?) of Emacs features work transparently with Tramp, including dired, eshell, and magit.


Interesting, I always assumed sshfs was part of OpenSSH; learn something new every day.

Also, looks like sshfs used in Slackware is abandoned.

https://github.com/libfuse/sshfs

A quote from the link, I wonder if this project will be the 'one':

>If you would like to take over this project, you are welcome to do so. Please fork it and develop the fork for a while. Once there has been 6 months of reasonable activity, please contact Nikolaus@rath.org and I'll be happy to give you ownership of this repository or replace with a pointer to the fork.

I also wonder if it was abandoned due to the RHEL re-orgs like what happened to bluetooth.


It does seem like a good fit for the maintainers of openssh. I too had thought it was key linux ssh infrastructure.


> I too had thought it [OpenSSH] was key linux ssh infrastructure.

As a side note, OpenSSH is quite independent of the Linux ecosystem. It is developed as part of OpenBSD.

openssh-portable is provided as a standalone software package that is broadly POSIX-compliant.


>openssh-portable is provided as a standalone software package that is broadly POSIX-compliant.

Yes, glad the OpenBSD folks do this. Linux people could learn a lot from OpenBSD Developers (looking directly at Wayland).


I think Wayland is a different beast because unlike SSH, it involves the kernel.


No it’s a horrible fit. OpenSSH is focused on a high quality, secure implementation of a rather complex protocol. And OpenSSH is not a core piece of Linux infrastructure any more than say gcc is. These projects serve other, greater ends. A FUSE driver is a bunch of baggage that is poorly suited to be maintained with it.


Given that fusefs provides FUSE support on BSDs, and given how useful it is, it seems like more than "a bunch of baggage".


I don't think the person was intending to say it's not useful, only out of scope.


I kinda doubt it. SSH already has SFTP, and SSHFS is... quite honestly a tool that has no business existing.

It's a neat trick, but it's always going to be something that someone bolted onto a solution because they didn't want to deal with a proper file server. Part of the appeal might be that NFS (3 or 4, take your pick) is still the best we've been able to come up with, and neither version is really that great for a basic installation. Still, SSHFS is a mess to deal with in a production setup. Some of the messiest production systems I've de-tangled over the years have been relying on SSH and SSHFS to do stuff that assumes a permanent connection.


SSHFS provides enormous value precisely because it doesn't rely on configuring a file server or elevated privileges. It's convenient, secure and performs well. The ability to create ad hoc mounts for any remote location I can access via SSH is awesome (though I prefer rsync or scp for file transfers).

But relying on SSHFS in production... Yeah, that's insane.


I use sshfs at work to mount some AIX Dirs on my Linux Workstation, very convenient.


NFS has too many shortcomings for me to take your answer seriously. SSHFS is a very useful tool in the right places; if it's a hammer and you have a screw, don't use it.


sftp doesn’t make it easy to copy files from one place to another on an ad hoc basis while developing, using the shell operations you’ve got to hand.


A note …

I have transitioned from years of macfuse + sshfs on Mac to just installing the excellent “mountain duck” tool which gives you finder and mount point access to an sftp endpoint.

Very nice software and indispensable for me.


As a recently returned rsync.net customer, I was a bit surprised to peruse the docs and see that Fuse/sshfs is still up as a recommended option for Mac integration (see https://www.rsync.net/resources/howto/mac_sshfs.html). A few months ago this led me down the path of trying ExpanDrive/StrongSync, but this is the first I've heard of Mountain Duck (love Cyberduck though). I'm looking forward to giving it a spin now but if it has been working well for you please consider updating this page for others. Happy customer otherwise!


Just for the record: for macOS there’s now https://www.fuse-t.org/ which works without a kext (by spawning a local NFS server in the background and having macOS connect to it) and also has a sshfs implementation listed on https://github.com/macos-fuse-t/fuse-t/wiki#sample-filesyste... .


    brew tap macos-fuse-t/homebrew-cask; brew install fuse-t-sshfs


Mountain Duck has been my goto as well. That said, for VSCode the Remote SSH extension from Microsoft helps greatly in working with repositories on VMs and other remote machines.

Libfuse / SSHFS for macOS started becoming a real burden to try and use some years back, and it led me to Mountain Duck as well.


sshfs is no longer maintained? That's sad, I used sshfs in school to be able to use my zillion vim plugins to edit code without having to install them all on the remote server we used for compute. I was surprised with how smooth of a system it was.


I think SFTP is a good but underrated protocol for when mirroring a file tree bidirectionally makes more sense than cloning one to another. Having forked and studied SSHFS's code, I am currently maintaining a list of resources and some personal thoughts at https://hackmd.io/@q/sftp-over-ws.


SFTP is a major step up from FTP, but there's a lot of unrealized potential on the server side, so you can't just work on better clients. Both OpenSSH and the GNU lsh server only offer an old version of the protocol -- v2, I think. That oldness is intentional: https://marc.info/?l=openssh-unix-dev&m=168488976013498&w=2


It's obviously a slightly different combination of technologies, but I've been using NFS over wireguard pretty happily for a while...


NFS is more graceful in reconnecting when the TCP channel is reset, which is a great benefit.

It also implements more filesystem functionality, as a "df" report will correctly reflect the remote filesystem's usage.

EDIT: NFSv4 also offers "delegations," which give complete control of a file to a client in an expiring lease; the latest NFS clients also have "polite delegations," which tacitly extend the lease period.

SSHFS is very handy for a "quick and dirty" mount, though, with minimal configuration.


I myself went the other way around. While my VPN infra is very stable, I went the repo route: I use my very simple DVFS repo utility to sync files and never looked back. I like to have multiple copies of stuff here and there.


+1

Switched from sshfs to NFSv4+wireguard few years back. Works great!


rclone mount is my go-to now for sshfs functionality - better performance, stability and caching options
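A minimal sketch of that kind of setup (the remote name "myserver" and the paths are assumptions, not the poster's exact config — the remote would already be configured via `rclone config`):

```shell
# --vfs-cache-mode writes buffers writes in a local cache, which among other
# things enables random-access writes inside files; --daemon backgrounds it.
rclone mount myserver:/srv/data ~/mnt/data --vfs-cache-mode writes --daemon
```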


rclone cannot write to the middle of a file though, without transferring the whole file. That is my main issue with it. Maybe sshfs has the same limitation though.


Unfortunately, this fork does not look very vivid. Last commit in March, almost no activity in terms of PRs and issues. I would not bet on it.


What sort of activity level are you expecting from a stable project?


People have a hard time orienting themselves if coming from JS, where a 'stable' project means refactoring the entire code base to keep up with the dependencies refactoring their APIs every 2 months.


plus, they are investigating old edge-case errors in many issues, rescuing them (already closed) from the old project.


"Vivid" is the javascript definition of touching shit when it works and breaking it


Vivid?


Definition 3: “Full of life, strikingly alive.”

Etymology: “Borrowed from Latin vividus (“animated, spirited”), from vivere (“to live”), akin to vita (“life”), Ancient Greek βίος (bíos, “life”).”

I like it!

https://en.m.wiktionary.org/wiki/vivid


A fairly unidiomatic rendering of "lively", I think.


Nautilus (Ubuntu's file explorer) allows you to mount SFTP folders. Supposedly it uses `gvfs` under the hood.

Note that SFTP uses an SSH connection for its file transfers, so I have not seen a UI difference from SSHFS.


It allows you to browse sftp endpoints in the nautilus GUI but does it simultaneously create a mount point in the file system that you could use in the terminal?

I don’t remember…


You can use the `gio mount` command to mount a given remote. It will appear as a folder in `/run/user/$UID/gvfs`.

As an example, my phone has some systemd services which run `gio mount` for an SMB share and an SFTP share, when I'm connected to my home WiFi. I've got symlinks in my home folder to their associated gvfs directories, which become dangling when I'm not at home.
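A sketch of that symlink trick (the server name "example.com" and the link path are assumptions; the directory name follows gvfs's "sftp:host=..." convention):

```shell
# Per-user gvfs-fuse mount root:
GVFS="/run/user/$(id -u)/gvfs"
# Create (or replace) a symlink to the SFTP share's gvfs directory:
ln -sfn "$GVFS/sftp:host=example.com" /tmp/example-sftp
# The link dangles until `gio mount sftp://example.com/` has been run.
ls -l /tmp/example-sftp
```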


gvfs has a bridge to fuse, cf. https://manpages.ubuntu.com/manpages/trusty/man1/gvfsd-fuse.... -- that means: Yes, you can use gvfs mounts natively in all other non-GNOME/GTK-applications running on your computer.


> does it simultaneously create a mount point in the file system that you could use in the terminal?

Yes it does. You can find it in /run/user/$(id -u)/gvfs/


Anyone try rclone or sshfs on Mac OS X with macfuse/osxfuse?


I use Macfuse to localfolder all of my servers. Works great. Have a little script for each server: mount_fooserver.sh:

    umount -f ~/mounts/fooserver
    sshfs -o kill_on_unmount,reconnect,allow_other,defer_permissions,direct_io username@server:/ ~/mounts/fooserver -ovolname=foo


Interesting. I use

    -o reconnect,cache=no,defer_permissions

So, I add "cache=no", and omit "kill_on_unmount", "allow_other" and "direct_io". Looks like "kill_on_unmount" is a cleanup option; "allow_other" lets users other than the mounting user access the mount (I don't need that); and "direct_io" is similar to "cache=no". FYI...


I use sshfs with macfuse on my M1, and mount a drive to a linux box in my office. It's mostly OK, but has its quirks. Getting it setup was a bit of an adventure, and I questioned myself a couple times - but I powered through and it worked out just fine.


See my top level comment elsewhere… the answer is “mountain duck”.


sshfs works (I use Monterey, M1), though it's slowish.


The main file is a C file which is nearly 5,000 lines long. Impressive.

https://github.com/deadbeefsociety/sshfs/blob/main/sshfs.c


To me, 5000 lines of C is not impressive nor scary. Functions are pretty small, there are a lot of comments - everything seems like a normal unit with some complexity.


Well... check out QuickJS: the bulk of the code is in a single 55k LOC C file: https://github.com/bellard/quickjs/blob/master/quickjs.c


The plan 9front version is only 1431 lines long.

https://github.com/9front/9front/blob/front/sys/src/cmd/sshf...


Both impressive and scary


This is nothing extraordinary. I don't know why this is a tradition, but this is a very typical situation for C projects.


Compilation used to be slower, and one 5K line file would be noticeably faster than 10 500 line files, not to mention possibly having to build extra header files to connect them together. That would encourage larger files.


I'm not sure that's true. Computers used to be smaller, and had a hard time with very large files. Swapping out of limited RAM and so forth. Not fast.

I think long files are solely caused by somebody incapable of software design at any level. They just keep typing and never think about structure or separation of duties or whatever.

I recall the WindowsCE DHCP service was one large file. An enormous busted-ass straightline pile of garbage code that didn't handle most errors. Written by some intern. I re-wrote it for our platform and removed all the issues.

Microsoft of course didn't want my code because, arrogance.


As a n00b, I enjoyed libs with everything in one file cause I didn't know how to drop the lib into my codebase and build otherwise. Like how was I supposed to merge their makefile into mine, I dunno. And my code was in one file cause I was too lazy to mess with .h files.


I could buy a similar argument for directories: you will almost never see a C project with sources in subdirectories of the top-level source directory -- this is because of the recursive Makefiles, which earned quite a bit of somewhat justifiable hate.

But I don't think compilation times explain the size of the source files. This hasn't been a problem for such a long time that I cannot even remember when it could have possibly been a problem.

I had seen the reverse problem, but not with C... rather with Python source files. The older parser used to be very bad and would start using too much memory if the source file was in the thousands of LOC. I had to witness this firsthand with SWIG-generated Python bindings. I don't remember this kind of problem with C compilers / other utilities though.


> The current maintainer continues to apply pull requests and makes regular releases, but unfortunately has no capacity to do any development beyond addressing high-impact issues.

Assuming this is true -- and I think it is fair to trust the author of the statement when judging the same author -- this doesn't sound like a project that needs a fork, as it apparently does have an active maintainer. If you want to help contribute to sshfs, you can do that without forking it, without causing a mess for everyone who has to decide which one to use/ship, and without the bad blood inherent in resorting to the four-letter F-word of open source project management.


That's the old status before being orphaned. The latest note at the top of the readme in the original repo reads:

> This project is no longer maintained or developed. Github issue tracking and pull requests have therefore been disabled. The mailing list (see below) is still available for use.

> If you would like to take over this project, you are welcome to do so. Please fork it and develop the fork for a while. Once there has been 6 months of reasonable activity, please contact Nikolaus@rath.org and I'll be happy to give you ownership of this repository or replace with a pointer to the fork.


Is there a more security-oriented alternative to SSHFS, where the connecting client won't be given shell access on the server? I just want to share a directory from server to an untrusted VM over network, and have VM mount it, not give the VM shell access to the server.

FTP is close to such a thing, but it is somewhat archaic and slow, and I'm not sure about its security.


If the user account is only supposed to have file transfer capabilities/no shell access, add it to a specific group, e.g. `sftponly`, and only allow this group to use the `internal-sftp` command in `/etc/ssh/sshd_config`:

    Match Group sftponly
    ForceCommand internal-sftp -l INFO -f LOCAL6
    AllowTcpForwarding no
    AllowAgentForwarding no
    GatewayPorts no
    X11Forwarding no
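If the goal is pure file access, such setups often also chroot those users into a directory tree; a sketch (the `/srv/sftp/%u` path is an assumption, and sshd requires the chroot target to be root-owned and not writable by the user):

```
Match Group sftponly
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    AllowAgentForwarding no
    X11Forwarding no
```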


Thank you, this seems to prevent the client from getting a shell on the server, while allowing it to make SSHFS mounts. I've put /usr/sbin/nologin into the client's row in /etc/passwd on the server as well (/dev/null would break everything, including SSHFS).

Is there some way to specify that nothing except internal-sftp is allowed, as opposed to setting each option explicitly to "no"? The latter way seems error-prone; one is bound to miss some obscure option there.

And I wonder why you suggest using the LOCAL6 log facility? In sftp-server, the default is AUTH...


I remember using SSHFS way back in the day on Mac, also back then thinking "SSHFS" meant "SSH + HFS." It was always confusing to grab the right tools for it, and it never worked very well. With remote codebases, I just SSH in and edit in Vim.


How are the different packages for the different *nix distros maintained? I see the link to repology, but that service only tracks the packages - who created the packages and where are they generated in the repo?


Is it google summer of code or some similar event right now?

Looks like the most recent issues and PRs are just junk typo / grammar fixes


Good, it seems people can't stand it if software just exists and does its job and doesn't get new commits each month.


I mostly care if projects don't accept bugfixes. For example, "the inotify feature no longer works with the latest glibc release due to a subtle API change" might be an easy 10-line pull request to fix.

But if the maintainer doesn't take the pull request and make a release, then the effort of fixing it is wasted, and every single user has to work around/suffer from that bug into the future.

There are loads of projects in that state -- unmerged PRs from years ago with sensible fixes, no new release, and no forks that are distributed to users.


Not to mention this same scenario when it's a security patch that just needs to be applied and released.


I wish github and other code hosters made it easier to "just make a release".

Next to the "Download zip" button on github, they should add a "Download built .deb" and "Download built .exe" - and those buttons should work on any fork, branch, PR, etc. And they should add all the necessary build infrastructure to achieve that.

It turns out that at scale, build infrastructure is pretty cheap to run, since caching is so incredibly effective and there is only a need to rebuild a file once per (human) edit to a file or its dependencies.


Making a .deb isn't as simple as it might seem -- especially if you want to use shared libraries. Which version of Ubuntu/Debian/Mint/... whatever are you targeting? You can use tools like FPM[1] (which is awesome btw, used it for some great hacks in the past), but that won't necessarily make you .debs done to the Debian guidelines, though they're usable.

There are a load of SaaS companies that allow you to build for multiple targets though, so perhaps some integrations there would work.

[1]https://github.com/jordansissel/fpm
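For the curious, fpm's basic invocation really is one line — the package name, version, and staging path here are made up for illustration:

```shell
# Wrap a staged install tree under ./pkgroot into a .deb,
# installing its contents under /usr (no debian/ metadata needed).
fpm -s dir -t deb -n myapp -v 1.0.0 -C pkgroot --prefix /usr .
```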


Presumably some kind of Makefile or similar build automation config file would be in the repo to define which versions of which tools to use to do the build.



The Open Build Service (by SUSE) does that. Are there others?


I'm certain there are more, but I can only think of Artifactory now, and I'm not sure that even makes the debs (it does the repo stuff). Maybe there's a business plan somewhere in there :)


When making a github release, you can attach whatever file you want, including .exe and .deb. The problem is that building packages is kind of a nightmare, as there's at least one (or, as is the case far too often, many) build system for each programming language. Linux package management is also hard as each distro has its own way of packaging things. And you need a mac to produce macOS Applications...

My point is, there is no way github could add a fully automatic "build and package this release" button. It would require tons of configuration (and trial and error...) from the user.

But good news! If you _are_ willing to figure out how, you can make a github pipeline that compiles and packages your code (producing an "artifact"). Several projects I follow do exactly this. A complex problem like this essentially requires a bespoke solution, and to be fair github does give you tools to automate said solution. The problem is not the infrastructure but the complexity of build systems.

I do agree there should be more ready-made pipelines to aid this process. When I tried to release a python program to work on linux, windows, and macOS, I quickly realized I wasn't interested in figuring out how to make a working pipeline (after spending a weekend getting it to build on each OS in the first place). But surely that's because python is particularly bad at package management... Well, most languages are particularly bad at it


Building packages is easy for most programming languages, it's only very old ones and Python that are bad. While I wouldn't expect GitHub to support every language, and many projects will need customization, them offering a basic pipeline for "standard x" for each of e.g. the top 20 languages (other than the awkward ones) would save a lot of effort over every project reinventing its own pipeline.


That sounds more like papering over the cracks of legacy systems, introducing even more centralisation and power to a handful of unaccountable "hosters", and placing even more responsibilities on maintainers (who may be AWOL).

- Cryptographically-verified, content-addressed storage (e.g. IPFS) is preferable to downloading random EXEs from "github and other code hosters". Indeed, for sources too! (I learned this lesson when Microsoft bought GitHub, and many projects jumped ship; that caused an outbreak of 404s for anything that was hard-coding github.com URLs!)

- Rather than relying on someone else having produced opaque blobs for us, it's better for everyone to be capable of building things, if needed. Nix (and Guix) are good for this, since they're source-based, ensuring that the full build instructions are available (they will automatically download binaries, if available and signed by a trusted key; but the option of building ourselves is always there). This is also crucial if we want to validate those binaries for ourselves (I recall the "trustix" project is trying to crowd-source such validation too)

- Another advantage of the Nix/Guix approach is that build instructions can be parameterised, e.g. by the source. This allows anyone to plug in any version of the code they like (whether a git commit, or a local folder, or an IPFS URL, etc.). Again, if someone else has already built that combination (and someone we trust has signed it) then their existing binary will be fetched.

This sort of approach doesn't require any buy-in from hosting platforms, maintainers, DNS authorities, etc.


I'd like software that connects to machines with write privileges, across network boundaries, to have some degree of maintenance, if that's ok.

It may not be just security, either: as this integrates FUSE and SSH, there will be bitrot and API drift etc. over the years.


The FUSE situation on Mac OS requires maintenance on the software or it will stop working.


> This repository has been archived by the owner on May 26, 2022. It is now read-only.

> This project is no longer maintained or developed. Github issue tracking and pull requests have therefore been disabled. The mailing list (see below) is still available for use.

> If you would like to take over this project, you are welcome to do so. Please fork it and develop the fork for a while. Once there has been 6 months of reasonable activity, please contact Nikolaus@rath.org and I'll be happy to give you ownership of this repository or replace with a pointer to the fork.

I saw that there are some semi-active forks focusing on different aspects: a rust rewrite, a persistent cache support version, or a bug fixing only version.

The issue is that most software has bugs and vulnerabilities that have not been discovered yet, and while the software is unmaintained, those problems will exist without a solution. Open source software maintainers have been a significant part of our overall IT environment [0], but voluntary contributions are subject to human resource limits. SSHFS is one of those projects relying on a single maintainer, and it has ended up being archived. The packages in many distributions' repositories are stuck as-is. The several semi-active forks are also owned by a single person without a proper community. I'm not sure if any of the distro communities would pick one of those and package it as the next version.

So, the users of this software are on their own, left with the single, cross-platform, ultimately portable packaging solution: the source code.

0. https://xkcd.com/2347/


I have nothing but respect for Nikolaus, and SSHFS is absolutely a cornerstone of IT, but if you think the original project was protecting you from vulnerabilities.. I have a bridge to sell you.

I also don't really understand what your last sentence is getting at--I may be daft.


It may be my bad wording. All I am saying is that an archived and unmaintained project rules out the option of fixing bugs and vulnerabilities. With no effort toward fixing, the hope is lost. And this is inevitable with current open source models.


Yeah, I get what you mean. I'm doubtful, but.. hopefully this turns out OK.


Related -- last month I wrote an implementation of something very similar to sshfs, but in Typescript over a WebSocket: https://github.com/sagemathinc/websocketfs


Discovered that you can replace sshfs with rclone. And the project appears to be way more active: https://github.com/rclone/rclone

Edit: cf: https://rclone.org/commands/rclone_mount/


Unfortunately `rclone mount` doesn't support symlinks yet: it always dereferences their contents. In contrast, SSHFS has options to either dereference them, use them as-is, or transform absolute links into relative links (which may be more likely to resolve on the client).


I love rclone, I'm currently using `rclone mount` to mount a Backblaze B2 bucket to use as backing storage for Jellyfin and it works a treat. There are plenty of dials to tune things like cache size and duration to minimize unnecessary downloads from B2, and with my usage patterns it ends up astoundingly cost effective (although at some point I might move to Hetzner storage boxes for even cheaper storage).


this seems really expensive compared to a local nas, at least over the lifetime of your service. is there a reason you chose to go this route instead of local storage for media?


A local nas requires much higher upfront costs, I would need space to put it, and it requires much more ongoing maintenance and whatnot. If you factor the cost of my time it probably doesn't really work out as cheaper.


Wasabi might have been a good choice for you on that front.


Backblaze B2 is $5/TB/month, Hetzner is $4.08/TB/month for 1TB, Wasabi is $6.99/TB/month

Wasabi seems like the most expensive here, with the caveat that Hetzner requires ordering discrete steps of storage rather than the 'pay-as-you-go' model of the other two
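For a concrete sense of scale, here's what those posted per-TB rates come to at, say, 4 TB stored (storage only, ignoring egress and API charges):

```shell
# Monthly storage cost for 4 TB at each provider's posted per-TB rate.
for entry in "B2 5.00" "Hetzner 4.08" "Wasabi 6.99"; do
  set -- $entry
  awk -v name="$1" -v rate="$2" \
    'BEGIN { printf "%s: $%.2f/month for 4 TB\n", name, rate * 4 }'
done
```

which prints $20.00, $16.32, and $27.96 per month respectively.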


They are all OK to use, and each provider has pros and cons, depending on the use case.

Hetzner is the cheapest; however, your data is stored in Germany or Finland. They have free bandwidth, but you are limited to 10 connections at a time.

Backblaze B2 has 4 regions across the globe and storage is $5/TB/mo. There is no minimum retention time, but it does have a cost for API calls (transactions) and in addition charges for egress data (downloads), so your $5/TB is a variable factor; if you use your data, you may not achieve $5/TB -- the cost will grow depending on the use case (there are free tiers of transactions and egress).

Wasabi is $7/TB and has 13 regions across the globe, with free egress and no API charges. It does have a 90-day minimum storage charge, which means you are billed for every object for 90 days regardless of whether you delete it earlier. In addition, the free egress has limits to prevent system abuse. A 30-day deleted-storage charge is available if you purchase in bulk with their RCS (reserved capacity) storage plan. It's good if you want to store a lot of data that does not need deletion.

I have accounts with all 3 of these for different use cases.


Regarding Hetzner Storage Box's 10 connection limit, Hetzner also has Storage Share (Nextcloud hosting) at basically the same price point (~3 EUR/TB at the 10TB+ size), only it comes with 200 connections. Rclone supports WebDAV for Nextcloud just as well.


Note that in October B2 is removing egress costs and increasing storage prices to $6/TB.


* removing egress costs up to 3x your amount of stored data

I'm pretty disappointed overall by the price increase for storage. Compared to when they launched B2, they now need 1/4 as many servers with 1/2 the upfront cost to store each petabyte.


Doesn't Hetzner storage boxes also have lower redundancy guarantees? It's been a while since I've looked, so not sure - it's not obvious from their ordering page. They're great either way, though, especially given the "unlimited" egress.


I'm trying to use rclone, but it seems like it's kind of hard to set up compared to sshfs. Do I really need to go through their 20 question setup script?


You can use connection strings [1] instead of a named remote that you set up ahead of time with the wizard.

    rclone mount :sftp,host=example.com:path/to/dir /tmp/mountpoint
[1]: https://rclone.org/docs/#connection-strings


You could just edit rclone.conf directly.
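Indeed — for an sftp remote the file is only a few lines. A sketch (remote name, host, user, and key path are all placeholders):

```
# ~/.config/rclone/rclone.conf
[myserver]
type = sftp
host = example.com
user = me
key_file = ~/.ssh/id_ed25519
```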


rclone looks promising, but last time I tried it, it was very slow compared with sshfs.


Interesting. I haven't tried a speed comparison but I know we sped up the sftp backend recently.


Does the FUSE mount work on OSX?


VFS on MacOS is a minefield. You either need to use a kext (bad option, for many reasons), a network file system (NFS or SMB) and pretend your VFS is a remote server, or create a FileProvider system extension (which cannot actually function as a VFS).

If your workflow relies on a VFS that isn't NFS/SMB then don't use MacOS. fuse-t is kind of clever in that it spins up a TCP server that transpiles NFS requests into FUSE requests, but it comes with a bit of a cost and eats a TCP port. The one benefit is that you can actually mount and use a file system entirely in userspace this way, which you can't do on Linux without sandboxing (fusermount3 is SUID to get around this).


>but it comes with a bit of a cost and eats a TCP port

I have never heard of someone running out of TCP ports on a personal computer since, well, the invention of TCP on personal computers.


No, but port conflicts do happen.


> fuse-t is kind of clever in that it spins up a TCP server that transpiles NFS requests into FUSE requests

TIL, thank you!

I'm really sad about losing native SSHFS capabilities on macOS (via FUSE, due to the kernel extension deprecation/ban).

I could even get behind the idea of banning all network file systems, but the fact that I can now use SMB and WebDAV(!), but not the one that I actually use all the time, is quite frustrating.


Wow, I gave this a try after work on my personal machine and it's amazing – it seems more stable than the original at first glance! Thanks again.


MacFUSE has always seemed unbearably slow to me, especially in the Finder. Has it improved?

FUSE-T seems more future-proof (no kext) and probably less likely to completely hang your Mac, but it could potentially be even slower, since it's another abstraction layer in between.

It’s odd that there aren’t any great open source SFTP solutions for the Mac. CyberDuck and FileZilla are barely passable.


> less likely to completely hang your Mac

I think what hangs the Mac isn't the remote file system as such, but rather some local app or (more likely) low-level OS service assuming that all mounted filesystems are local (or at least low-latency and highly available).


That too, but I remember having straight kernel panics. Granted, that was more than a decade ago.


It does. You can use it with macfuse or with fuse-t.

See: https://rclone.org/commands/rclone_mount/#mounting-on-macos


It does work, I use it regularly. However, it can hang the whole system in certain cases, such as a mount that went wrong, or a connection timeout between the servers (without the -o reconnect option on sshfs, but sometimes even with it).
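For reference, the options I mean look roughly like this (hostname and paths are placeholders; reconnect is an sshfs option, while the ServerAlive* ones are plain ssh options passed through):

```shell
# Remount automatically after a dropped connection, and give up on a
# dead link after ~45s instead of hanging (interval values are examples)
sshfs -o reconnect \
      -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
      user@example.com:/srv/data /mnt/remote
```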


rclone kind of drives me nuts not using standard ssh_config


Not just rclone; it seems that so many things are moving away from using ssh_config. I have things set up to "just work" for so many hosts with OpenSSH, and having to individually configure a half-dozen programs that build on top of SSH is a pain. Sometimes it's because they use paramiko, other times it's because they want to pass their own configuration options to ssh that mess things up.

Sometimes you can manage to get things to use ssh_config again (e.g. with xpra you can do "--ssh=ssh"), but other times I've not yet figured out a workaround.


Can you name these programs? I want to add them to my 'painful' list.


I only use programs for which I've found a workaround due to the pain, so I can't list the ones that I don't have a workaround for. TBF two of the programs (emacs tramp and git-annex) I use regularly actually mostly respect ssh_config by default, but they override the ControlMaster by default (to do their own connection muxing) which requires me to do an extra authentication step.
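For context, the ControlMaster setup those tools override is the usual connection-multiplexing block in ~/.ssh/config, something like (host and paths are placeholders):

```
Host myserver
    HostName example.com
    User alice
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this, the first ssh to the host authenticates once and later connections reuse the socket; tools that do their own muxing bypass it and prompt again.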


jetbrains products. I think they've improved their situation, but it's still a little quirky


How so? To me, rclone is more like scp only.



Mix up rclone's mount with the Crypt and maybe even the Union backend to cloud storage for a very interesting proposition. I was using a similar setup to get unlimited storage on VPSes when google was offering unlimited gsuite storage. You just need to play around caching values and options on the Crypt side if you plan on streaming data from it.
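A sketch of that kind of stack, assuming a crypt remote named "secret" already wraps your cloud backend in rclone.conf (the mountpoint and cache values are just placeholders to tune):

```shell
# Mount the encrypted (crypt) remote with a local VFS cache,
# which is what makes streaming from it bearable
rclone mount secret: /mnt/media \
    --vfs-cache-mode full \
    --vfs-read-chunk-size 32M \
    --dir-cache-time 1h
```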


Just edited a link to the doc ^^


rclone has a "mount" command.


Sorry if this is documented (I did take a quick look). Does rclone mount do whole-file compression like rsync? I don't see how the file API would support this, e.g. fopen, write, close, etc.


In this scenario, rclone mount uses sftp in the background, which can compress the entire SSH stream (you can control that by passing args to the ssh command spawned by rclone), but it doesn't do it per-file. In practice I don't know if there is a difference.
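To make that concrete: this is SSH-level stream compression, not per-file compression. With sshfs you can toggle it directly (a sketch; host and paths are placeholders):

```shell
# Enable SSH stream compression for the whole session; everything
# going over the wire is compressed, regardless of file boundaries
sshfs -o compression=yes user@example.com:/srv/data /mnt/remote
```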



