Nautilus gets confused after an SFTP transfer from a remote hotspot

Ubuntu 24.04.2 LTS.

I frequently connect to a remote device that has its own WiFi hotspot. I do this for a couple of purposes:

  1. to transfer files from the remote to this computer
  2. to run something on the remote device via VNC

To do #1, I have been using Nautilus → Other Locations → Enter Server Address, then clicking the Connect button. Of course, at that point the computer must be connected to the remote hotspot, not to my home network.

Once the connection is made, if I then open my home directory in Nautilus to copy the files to a subdirectory of home, Nautilus is sometimes, but not always, unable to display the contents of my home directory after the copy. This is very annoying and I’d like to understand it. Even if my computer is connected to the remote hotspot, that should not prevent the home directory from being displayed.

What you are seeing is a quirk of Nautilus, not a problem with your /home
folder.

When you click Other Locations → Connect, Nautilus mounts the remote
device with the gvfs SSH backend.
While that mount is active, Nautilus keeps a single worker thread busy
talking to the remote machine. If the hotspot is slow, or drops for a
second, every other tab that still belongs to that Nautilus window can
stall while it waits for the remote I/O to finish. It looks as if
Home is missing, but it’s just the UI waiting on a network timeout.
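
You can see that gvfs mount from a terminal as well; it appears in gio’s mount list and, through gvfs-fuse, under your runtime directory (typically /run/user/<uid>/gvfs on current Ubuntu):

gio mount --list                 # lists the active gvfs mounts, including the sftp/ssh one
ls /run/user/$(id -u)/gvfs/      # the same mounts exposed as FUSE directories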

Three quick ways around it

Unmount the hotspot as soon as the copy is done
Click the little ⏏ (eject) icon next to the remote entry in the sidebar, or run

gio mount -u "ssh://user@192.168.x.x/"

The moment the mount is gone Nautilus snaps back to normal.

Open a second Nautilus window for /home before you connect.
The hang is per-window; a fresh window stays responsive.
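
If it’s handier, that extra window can also be opened straight from a terminal or a launcher (assuming the stock nautilus binary):

nautilus --new-window "$HOME" &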

Use sshfs instead of Connect to server

mkdir -p ~/remote
sshfs user@192.168.4.1:/ ~/remote

Copy files to ~/remote in Nautilus (or the terminal), then

fusermount -u ~/remote

sshfs runs in its own process, so Nautilus never blocks.
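
You can even see that separate process while the mount is up:

pgrep -a sshfs    # lists the sshfs process that owns the ~/remote mount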

Nothing is happening to your home directory; Nautilus is just waiting on the
slow remote mount. Unmount, use a new window, or switch to sshfs and the
problem disappears.

Thank you @thingizkhan. I’ve tried that unmounting of the hotspot. Sometimes it worked, sometimes it didn’t. I don’t think I did this right after the copy was done, so that may be why.

The second nautilus window idea sounds good.

Your third suggestion is unclear to me. Do the sshfs and fusermount commands have to be entered every time I want to transfer files? I don’t suppose it can be made permanent since the computer is only occasionally connected to the hotspot.

Yes, the hotspot connection can be slow.

Another idea would be to use FileZilla. This ancient software did the job but was less convenient than Nautilus so I stopped using it.

Is this considered a “bug” in Nautilus, or is it inherent in the system with no good fix?

Glad the second-window idea helps.
Here’s a bit more detail on sshfs:


How sshfs works

sshfs user@host:/ ~/remote mounts the remote machine at ~/remote.
It’s just like plugging in a USB stick: the folder appears instantly.
fusermount -u ~/remote (or click the ⏏ (eject) icon in Files) unmounts it.

Nothing persists after you unmount, so you run those two commands only when you actually need the connection.
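
If you’re ever unsure whether the share is still mounted, a quick terminal check (using the ~/remote mount point from above):

mountpoint -q ~/remote && echo "mounted" || echo "not mounted"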


Making it quicker to type

Create the mount point once

mkdir -p ~/remote

Make a tiny helper script

echo 'sshfs user@192.168.4.1:/ ~/remote' > ~/mount-remote.sh
echo 'fusermount -u ~/remote'            > ~/unmount-remote.sh
chmod +x ~/mount-remote.sh ~/unmount-remote.sh
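
If you prefer mount-remote.sh as a proper script file with a shebang, a sketch using the same example address (swap in your real user and IP) could be:

#!/bin/bash
# ~/mount-remote.sh – mount the remote device over sshfs
mkdir -p ~/remote                   # create the mount point if it is missing
sshfs user@192.168.4.1:/ ~/remote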

Now you just:

Connect to the hotspot Wi-Fi
~/mount-remote.sh → copy your files in Nautilus under Home → remote
~/unmount-remote.sh when you’re done

Optional fstab entry (mounts only when you ask):

Add to /etc/fstab:

user@192.168.4.1:/  /home/youruser/remote  fuse.sshfs  noauto,user,_netdev,IdentityFile=/home/youruser/.ssh/id_rsa  0  0

Then you can mount with mount ~/remote and unmount with umount ~/remote.


Because sshfs runs in its own process, Nautilus windows never freeze even if the hotspot is slow or drops out; at worst the mounted folder shows an I/O error while the rest of Files stays responsive.

It’s not really a bug, just a limitation of Nautilus’ design. One remote mount = one worker thread; if that thread stalls, the whole window stalls. External tools like FileZilla or sshfs run in their own processes, so they stay responsive (and leave Nautilus alone). So:

Nautilus: convenient, but can freeze while the hotspot is busy.
FileZilla / sshfs: a bit more setup, but no freezes.

Pick whichever trade-off feels better; there’s no hidden fix inside Nautilus yet.

Thanks. I think I’m going to try the sshfs way. My remote device does not use port 22 for the sftp. What would be the syntax in sshfs to indicate this, and what would be the syntax in fstab?

One-off command

# example: SFTP runs on port 2222 instead of 22
sshfs -o port=2222 user@192.168.4.1:/ ~/remote

-o port=2222 (or -p 2222 on some builds) passes the port to the underlying ssh command.
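
If you want to confirm the non-standard port is reachable before mounting, a plain SFTP login is a quick test (assuming port 2222 as in the example):

sftp -P 2222 user@192.168.4.1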

Update your helper scripts:

echo 'sshfs -o port=2222 user@192.168.4.1:/ ~/remote' > ~/mount-remote.sh

fstab line

user@192.168.4.1:/  /home/youruser/remote  fuse.sshfs  noauto,user,_netdev,port=2222  0  0

Mount: mount ~/remote
Unmount: umount ~/remote

(You can add other options—IdentityFile, reconnect, etc.—comma-separated in the same field.)

That’s all: add port=your_port (or -p your_port) and sshfs will connect to the non-default SFTP port every time.

+1 for fstab. You can then access the files as if they were local. SSHFS is very cool, secure too. If you use SSH keys then you won’t be pestered for credentials

okay, I’ve created the scripts as you indicated above. I’ve also created an identity file and copied its public key onto the server. This all works. Running my script mounts the server where I want it. I can use this to copy files in nautilus. It works. Thanks!

I have a couple of questions still:

  1. After running the mount script, I see the remote directory in nautilus. However, if I click on it, it takes me to the root directory on the server. I then have to drill down to the directory I want to start in, i.e. the home directory of the user on the remote machine. If I just use the nautilus connector, I get connected there. How can I change the directory I land in after running the script?

  2. Is pressing the “unmount” button in Nautilus for the connection the same as running the unmount script? It seems to be.

  3. re fstab

what would be the syntax for specifying the identity file in fstab?
and what would “reconnect” do?

Thanks!

Start in your home dir instead of “/”

Just add the path you want after the colon:

sshfs -o port=2222 user@192.168.4.1:/home/user ~/remote

(or whatever directory on the server you want to land in).
Put the same change in the helper script and/or fstab line.


Nautilus “⏏ unmount” vs. your unmount script

They do the same thing.
The button runs fusermount -u ~/remote under the hood, which is exactly
what your unmount-remote.sh (or umount ~/remote) does.
Use whichever is more convenient.


fstab options

user@192.168.4.1:/home/user  /home/youruser/remote  fuse.sshfs  noauto,user,_netdev,port=2222,IdentityFile=/home/youruser/.ssh/remote_key,reconnect  0  0

(The whole entry must stay on one line in /etc/fstab.)

IdentityFile=/path/to/key – tells sshfs which private key to use.
(Same spelling and capitalisation as the regular SSH option.)

reconnect – if the SSH link drops, sshfs will try to re-establish it
quietly in the background instead of unmounting the filesystem.
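
For reference, the same options also work on the command line as one comma-separated -o list (a sketch reusing the example names above):

sshfs -o port=2222,IdentityFile=/home/youruser/.ssh/remote_key,reconnect \
    user@192.168.4.1:/home/user ~/remote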

Now you can:

mount ~/remote   # uses port 2222, your key, and starts in /home/user
umount ~/remote  # or click the ⏏︎ button in Nautilus

…and the share behaves just like the Nautilus-connector version, only
without the freeze-ups you had before.


In the above syntax is “user” to be inserted literally or is the name of the user to be inserted?

Duh! Of course. It’s just “user”, not the username.

OK, this works like a charm.

Thanks so much.

Replace the placeholders with real names:

remote-username@192.168.4.1:/home/remote-username  /home/local-username/remote  fuse.sshfs  noauto,user,_netdev,port=2222,IdentityFile=/home/local-username/.ssh/remote_key,reconnect  0  0

remote-username – the account on the server you’re logging in as.
local-username – your account on the PC that’s mounting the share.

So if your login on the server is pi and your Ubuntu user is steve,
the line would be:

pi@192.168.4.1:/home/pi  /home/steve/remote  fuse.sshfs  noauto,user,_netdev,port=2222,IdentityFile=/home/steve/.ssh/remote_key,reconnect  0  0

Everything else stays the same.

You are welcome! Take care

Okay. I didn’t like the fstab way, so I eliminated that and went back to the script. It got too confusing, plus I wanted the script to do a check and only try to make the sshfs connection if the computer was connected to the remote’s hotspot, ringing a bell otherwise. So I modified the script:

#! /bin/bash
set -- `ip route|grep '^default'`
gw=$3

# want script to connect to remote only if this computer is connected to the remote's hotspot 
# else ring bell
if [ "$gw" == "{hotspot ip}" ]
then
    sshfs -o port=9999 {remote username at remote server}@{remote server address}:{home directory of remote user on remote} ~/{local mount of remote};
else
    tput bel
fi

This works really nicely when run from the command line. But not when the script is run from Nautilus. The connection doesn’t happen. A terminal window pops open for an instant and closes. What would I have to do to make it run correctly from Nautilus?

sshfs itself is fine – the script just isn’t finding the
programs it needs when you launch it from Files.
A Nautilus-launched script runs in a very minimal environment
(PATH contains only the “normal” user directories), so commands that
live in /sbin or /usr/sbin are invisible.

ip is one of those commands – its full path is /usr/bin/ip on
newer Ubuntu, but /usr/sbin/ip on older installs – and sshfs
lives in /usr/bin.
When ip is missing the assignment gw=$3 ends up empty, the equality
test fails, and the script exits before it ever calls sshfs.
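
One way to confirm what the script actually sees when Files launches it is a throw-away debug script (hypothetical name and log file) that records its PATH:

#!/usr/bin/env bash
# debug-env.sh – run it from Nautilus, then read ~/nautilus-env.log
{
    echo "PATH=$PATH"
    command -v ip sshfs || echo "ip or sshfs not found in this PATH"
} > "$HOME/nautilus-env.log" 2>&1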

Two quick ways to fix it


Call the programs by their full path

#!/usr/bin/env bash

# find the default gateway
set -- $(/usr/bin/ip route | grep '^default')
gw=$3

if [ "$gw" = "192.168.4.1" ]; then
    /usr/bin/sshfs  -o port=9999 \
        pi@192.168.4.1:/home/pi   "$HOME/remote"
else
    tput bel      # beep
fi

Replace the examples with your real usernames and mount directory, but keep
the absolute paths for ip and sshfs.


Export a PATH that includes the system directories

#!/usr/bin/env bash
export PATH="/usr/sbin:/usr/bin:/sbin:/bin:$PATH"

set -- $(ip route | grep '^default')
gw=$3

if [ "$gw" = "192.168.4.1" ]; then
    sshfs -o port=9999  pi@192.168.4.1:/home/pi  "$HOME/remote"
else
    tput bel
fi

Either variant makes the script work the same way whether you double-click
it in Files, right-click → Run, or invoke it from a terminal.

A couple of extra tips

Mount point – if $HOME/remote doesn’t already exist, add
mkdir -p "$HOME/remote" before the sshfs line.
Unmount helper – a matching fusermount -u "$HOME/remote" script in the
same folder lets you drop the connection from Nautilus, too (see the sketch
after this list).
One script per hotspot – if you roam between several Raspberry Pi hotspots,
just duplicate the script and change the gateway / port.
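
A minimal sketch of that unmount helper, using the same mount point and names as above:

#!/usr/bin/env bash
# unmount-remote.sh – drop the sshfs mount if it is currently active
if mountpoint -q "$HOME/remote"; then
    fusermount -u "$HOME/remote"
else
    tput bel      # nothing was mounted – just beep
fi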

Now the script should mount your Pi’s home directory every time you’re
on its Wi-Fi and give a polite beep when you aren’t – no terminal
required.

Hmmm. That isn’t it. I tried it both ways: exporting the path, and specifying the full path for each executable the script calls. It doesn’t work from nautilus in either case. Also, even without those changes, when it failed it wasn’t that the equality test failed, or else I’d have been hearing the beep, which I wasn’t.

I think it’s because Nautilus is popping open its own terminal window, visible for a fraction of a second, to run the script, and it is killed as soon as Nautilus is done executing it.

sshfs is working – it just dies the moment Nautilus closes the helper
terminal.

When you choose “Run in Terminal” Nautilus opens a throw-away
gnome-terminal, executes your script, then sends SIGHUP to every
child process as soon as the script ends.
In an interactive shell the HUP is ignored, but from Nautilus it is not,
so the background sshfs you just started receives the signal and
unmounts before you can click the icon.

Detach sshfs from the controlling terminal and it survives:

#!/usr/bin/env bash
set -e        # stop on first error

gw=$(ip -4 route show default | awk '{print $3}')

if [[ $gw == "192.168.4.1" ]]; then
    mountpoint -q "$HOME/remote" && exit 0     # already mounted

    mkdir -p "$HOME/remote"

    # run sshfs immune to SIGHUP
    nohup sshfs -o port=9999,IdentityFile="$HOME/.ssh/remote_key",reconnect \
         remoteuser@192.168.4.1:/home/remoteuser  "$HOME/remote" \
         >/dev/null 2>&1 &

else
    tput bel    # not on the hotspot – just beep
fi

nohup … & starts sshfs in the background and detaches it from the
terminal, so the HUP that Nautilus sends no longer reaches it.
mountpoint -q prevents a second click from mounting twice.
mkdir -p makes sure the mount-point exists.

Make the script executable (chmod +x mount-remote.sh), then double-click it
and choose Run in Terminal – the window flashes for a second, then the mount
appears in Files and stays up until you unmount it (Nautilus eject icon
or fusermount -u "$HOME/remote").

No changes to fstab required – the whole job is handled by the script
and one line of nohup keeps it alive after Nautilus closes.

Can’t get it to work from Nautilus. Still works from terminal. Maybe different versions. For me, in 24.04.2 LTS, there is no “Run In Terminal”. There is “Run As Program”, which is available by right-clicking on the shell script.