xk3 4 days ago

If you have systemd, you could do this:

    [Unit]
    Description=look ma, no autossh
    After=network.target
    
    [Service]
    Type=exec
    ExecStart=/usr/bin/ssh -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes -Nn -R 7070:localhost:22 pc 'sleep 20m'
    Restart=always
    RestartSec=20
    RuntimeMaxSec=30m
    
    [Install]
    WantedBy=default.target
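
    Since the unit uses WantedBy=default.target, it is presumably a per-user unit. A sketch of installing it (the filename ssh-tunnel.service is made up):

        mkdir -p ~/.config/systemd/user
        cp ssh-tunnel.service ~/.config/systemd/user/
        systemctl --user daemon-reload
        systemctl --user enable --now ssh-tunnel.service
        # keep the user manager (and the tunnel) alive after logout
        loginctl enable-linger "$USER"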
  • johnklos 4 days ago

    This is no better than ssh in a loop, which is trivially done by a shell script - no systemd needed.

    However, when you have shitty NAT routers (SonicWall, any AT&T fiber device, for instance), the connections will be timed out or will die and there'll be long periods where you're waiting for the next iteration of the loop, and/or sometimes it'll get stuck and never try again.

    autossh deals with this by actually passing traffic and taking action if traffic doesn't move.

    • oasisaimlessly 4 days ago

      > autossh deals with this by actually passing traffic and taking action if traffic doesn't move.

      The `ServerAliveInterval` option above achieves this.

      • johnklos 4 days ago

        No, it actually doesn't, or at least not properly. It's not hard to get ssh sessions that are wedged.

    • jauntywundrkind 4 days ago

      If you read what the person wrote, you'll see a ServerAliveInterval.

      If ServerAliveCountMax (defaults to 3) consecutive keepalive probes fail, the ssh connection will drop. And systemd will restart it.

      Today you learned. Nice. I dropped autossh years ago and you can too, even on flaky connections.

      • johnklos 4 days ago

        Today I learned that some people make mistakes, but I already knew that ;) ServerAliveInterval doesn't do this properly and consistently.

        I've used my own autossh type script for two decades now. It's mostly used to give access to machines behind shitty NAT, and/or that have addresses that constantly change, and/or for systems on CGNAT, like Starlink.

        If ServerAliveInterval works so well and negates the need for something like autossh to exist, then why do sessions created by my script, which sets ServerAliveInterval (and ServerAliveCountMax), still get hung up now and then, so that the script needs to kill the old ssh connection and create a new one? My script logs each timeout, each session hang, and each new connection, and depending on the network, it can happen often.

        Please read the bit where it's explained how autossh sends test data back and forth. Do you think you just magically and cleverly discovered ServerAliveCountMax and that the autossh people have no idea that it exists?

        Or perhaps they know it exists, they know it's not perfect, and they used another mechanism to make up for the shortcomings of what ssh offers out of the box?

        • reycharles 4 days ago

          The README has this text:

          > For example, if you are using a recent version of OpenSSH, you may wish to explore using the ServerAliveInterval and ServerAliveCountMax options to have the SSH client exit if it finds itself no longer connected to the server. In many ways this may be a better solution than the monitoring port.

        • nothrabannosir 3 days ago

          Just to clarify that we're talking about the same thing in case I misunderstood something: autossh (style) scripts do these things:

          1. fake data to keep a connection "fresh" for shitty middleware

          2. detect connections which are stuck (state = open, but no data can actually round trip) and kill them

          3. restart ssh when that happens

          Is that what we're talking about here? I think people are saying that points 1 and 2, but not 3, are covered by SSH's ServerAlive* options. And that's also how OpenSSH advertises and documents those options, and apparently even how autossh talks about it in their own readme.

          You're saying that those options don't actually solve points 1 and 2, while (your/their/etc) autossh does properly detect it.

          Correct so far?

          If so that seems like a bug in OpenSSH (or whatever implementation) which should get appropriate attention upstream. Has anyone reported this upstream? Is there a ticket to follow?

          PS: I think we're all in agreement that option 3 is out of scope for stock OpenSSH (regardless of what other tools do)

          • svnt 3 days ago

            I haven’t revisited this issue in years but on a project for thousands of similar devices we found autossh much more reliable.

            I believe the issue is that the connections often fail or get wedged in other network layers; the only way to be sure that your ssh tunnel isn't a) lossy enough to “keep alive” but too lossy to send data, b) just always waiting on TCP retry backoff, c) etc., is to use the tunnel to transmit actual data at the application level.

            • pritambaral 3 days ago

              > is to use the tunnel to transmit actual data at the application level.

              Isn't that exactly what ServerAliveInterval does? The man page says: "ssh(1) will send a message through the encrypted channel". A plain TCP keepalive wouldn't count as being "through the encrypted channel".

              • svnt a day ago

                Honestly, at this point I'm out of date, but autossh also takes care of bugs or connection issues within the ssh link itself.

          • johnklos 3 days ago

            You summarized things well. #2 is the primary reason that ssh in a loop doesn't work as well or as reliably as autossh (the program discussed here; it's just coincidental that my own automatic ssh script is also called autossh).

  • 8xeh 4 days ago

    This approach works very well. I've had dozens of extremely remote systems hooked up this way for about 8 years. The only problem I've seen is that occasionally the server ssh process will get stuck, so you have to log in to the server and kill it. It seems to happen when a remote goes offline and reconnects without closing the old connection first.

    If I were doing it now, I'd probably use wireguard. This is simpler to set up and works great.

    • elashri 4 days ago

      Can't you just add something like ServerAliveCountMax to help with solving stale connections?

      So something like this would solve that:

          [Unit]
          Description=look ma, no autossh
          After=network.target

          [Service]
          Type=exec
          ExecStart=/usr/bin/ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o ExitOnForwardFailure=yes -Nn -R 7070:localhost:22 pc 'sleep 20m'
          Restart=always
          RestartSec=20
          RuntimeMaxSec=30m

          [Install]
          WantedBy=default.target

      • xk3 3 days ago

        The default of ServerAliveCountMax is already 3

    • boris 3 days ago

      > The only problem I've seen is that occasionally the server ssh process will get stuck, so you have to log in to the server and kill it.

      You also need ClientAliveInterval on the server side (in addition to ServerAliveInterval on the client). In other words, both the client and the server need to be configured to monitor the connection. With this setup I had no issues with reconnections.
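
      A sketch of that server-side half in sshd_config (the values are illustrative):

          # /etc/ssh/sshd_config
          ClientAliveInterval 60   # probe the client through the encrypted channel every 60s
          ClientAliveCountMax 3    # drop the connection after 3 unanswered probes,
                                   # freeing any stuck remote forwards on the server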

    • xk3 3 days ago

      > ssh process stuck

      systemd's RuntimeMaxSec should help in this case but I've never had trouble with sshd personally

      To add more context, I use the above service to ssh from my phone to my laptop via my desktop PC. The service runs on my laptop and binds port 22 of my laptop to port 7070 of my PC, but wireguard would probably work similarly.

      • lloeki 3 days ago

        RuntimeMaxSec would have systemd kill a live forwarded connection though?

        • xk3 3 days ago

          closing ssh doesn't close the ports if they are still in use, at least with ControlMaster. You need to run something like this to force the ssh master process to close the port:

              ssh -O cancel -L 4102:localhost:4000 pc
          
          but if ControlMaster is stuck maybe autossh is better in that case, or use this:

              Host *
                  ServerAliveInterval 11
  • dietr1ch 4 days ago

    No passphrase for the key? What about a spotty connection? Doesn't WantedBy block startup on this starting properly? (I'm pretty sure I've been soft-locked out of my computer when Comcast decides to do Comcast things.)

    • j33zusjuice 4 days ago

      No. WantedBy will have no impact on startup. Before= or After= would, but not WantedBy.

  • mikrotikker 2 days ago

    OK, now how do you tell from the systemd status output whether the connection is actually up? Because this will show as active even when the connection is down or trying to reconnect.

  • botto 4 days ago

    This is quite clean and tidy

  • denimnerd42 4 days ago

    Been doing this since 2012... even back then autossh wasn't the solution.

    You want ServerAliveCountMax too, but the default is 3.

  • rs_rs_rs_rs_rs 3 days ago

    What is the reason to run 'sleep 20m'?

    • lloeki 3 days ago

      It exits (and so restarts) every 20 minutes, ensuring there's no hung sshd on the other side for longer than that.

      IIRC if there's an active connection on the forwarding thingy, that ssh command won't exit until the forwarded connection is closed, so this won't interrupt an active forwarded connection every 20min.

  • polalavik 4 days ago

    I think this is actually superior to autossh. Doesn’t autossh not restart after crash/reboot?

    • pferde 4 days ago

      You could run autossh as a systemd service that starts on boot. :-)

      • svnt 3 days ago

        I think you meant this as a joke, but this is what we landed on about a decade ago and it was the most reliable setup we found.

    • LaputanMachine 4 days ago

      It doesn't by default, but you can set the AUTOSSH_GATETIME environment variable to 0 so that autossh retries even if the first connection attempt fails.

  • ChoHag 4 days ago

    That's so much better than bourne/bash, which requires this monstrous wart of a code blob:

        autossh() {
            # Tiny delay after failure in case of connection errors
            while ! ssh "$@"; do echo Restarting ssh "$@"...; sleep 1; done
        }
  • nine_k 4 days ago

    [flagged]

    • sevg 4 days ago

      Do we really still have to turn every conversation into systemd friction?

      • nine_k 4 days ago

        No. Some people use ssh while not running Linux, and not by running something exotic; macOS is widely popular.

beagle3 4 days ago

14 years ago, I was using autossh to keep SSH tunnels up; but at some point (quite far back - perhaps 2016?) ssh gained everything needed to do this internally except the restart.

At this point I configure all of the keep alive and retry options in ssh_config and sshd_config, and use

    while true; do ssh user@host; sleep 10; done

to get the same effect, but with much more flexibility - e.g. alternating connection addresses on a multihomed host, adding logging, or running it from daemontools or a systemd unit instead of a loop and letting them track the process and restart it.
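
The keep alive and retry options referred to can be sketched in ssh_config like this (the host name and values are illustrative):

    Host tunnelhost
        # send a keepalive through the encrypted channel every 30s,
        # and give up after 3 unanswered probes so the outer loop restarts ssh
        ServerAliveInterval 30
        ServerAliveCountMax 3
        # fail (and so restart) if a requested forwarding cannot be set up
        ExitOnForwardFailure yes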
vincentpants 4 days ago

Curious what advantages this has over mosh?

https://mosh.org/

  • mjw1007 4 days ago

    mosh is for interactive sessions, to keep them working when the connection is flaky.

    autossh is for keeping unattended ssh tunnels alive, if the connection is flaky or one end is only intermittently available. So for using tunnels for the sort of thing you might otherwise use a VPN for.

  • st380752143 3 days ago

    AFAIK, to use mosh you need to install mosh on the target host as well. autossh doesn't need this step.

  • leni536 4 days ago

    I have used autossh + tmux before to enable X forwarding (just for clipboard sharing). Couldn't do that in mosh.

cperciva 3 days ago

If your concern is to have secure tunnels between hosts, you should probably use spiped rather than SSH, since it uses a separate TCP connection for each pipe -- this avoids the "connection dropped" problem and also the "multiplexing many connections over one TCP connection" performance hit.

Also, spiped is way simpler and more secure than SSH. (On my servers, I tunnel SSH over spiped, to protect the sshd from attacks.)
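
A sketch of the "ssh over spiped" arrangement (the key path, port, and host name are placeholders):

    # on the server: decrypt traffic arriving on 8022 and hand it to the local sshd
    spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/keyfile

    # on the client: encrypt local connections to 8022 and send them to the server
    spiped -e -s '[127.0.0.1]:8022' -t '[server.example]:8022' -k keyfile

    # then connect through the pipe
    ssh -p 8022 localhost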

paulfharrison 3 days ago

For web-servers on remote machines, I have found this useful:

  socat TCP4-LISTEN:1234,fork,bind=127.0.0.1 EXEC:'ssh my.remote.server nc 127.0.0.1 1234'
1234 = local/remote port. Can be adapted to use unix sockets at the remote end. my.remote.server = your remote server address.

This will set up a tunnel only when needed, and seems to play nicely with my browser.

botto 4 days ago

I've used autossh to keep a reverse tunnel open back to my work desktop. IT never found it and I had that in place for years.

hi-v-rocknroll 4 days ago

The last time I used autossh it was on a client site to keep 2 layers of ssh tunnels open to jump through their network isolation hoops.

In general, when flexibility is possible, such a use-case nowadays would often be better served by deploying WireGuard. Grouchy, out-of-touch corporate net admins probably don't even know what it is and insist on their antiquated Cisco VPNs.

bashkiddie 4 days ago

I used to be a happy user of `autossh` until 2023. I used it on Cygwin on Windows and was quite happy with how reliably it set up my tunnels (upon tunnels) in a flaky corporate network. `autossh` worked reliably compared to `ssh`'s many timeout options.

I would still recommend it.

mifydev 4 days ago

I’d recommend https://eternalterminal.dev/ - compared to mosh (poor color support), this is the only thing that manages to consistently keep up my ssh sessions.

  • goode 3 days ago

    I love ET. Some discussion here of its advantages over mosh: https://news.ycombinator.com/item?id=21640200. Beware that ET does phone home: depending on how it's packaged for your system, telemetry is enabled by default in /etc/et.cfg.

aborsy 4 days ago

Wouldn’t ssh with systemd, or autossh, be a more secure means of remote access to apps (like http/https apps) than the zero trust network access solutions (like Cloudflare Tunnels, which terminates the TLS) or even Tailscale (which has to be trusted as a third party)?

You set up public key authentication with SSH to a reverse proxy, a persistent tunnel, and a SOCKS proxy. In a Firefox profile, you set the SOCKS proxy to localhost:port. Done! All your services are available in that browser all the time.
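
A sketch of that as a single command (the host, port, and options are placeholders):

    # dynamic (SOCKS) forward on localhost:1080 through the VPS;
    # point the Firefox profile's SOCKS proxy at localhost:1080
    ssh -N -D 1080 \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
        -o ExitOnForwardFailure=yes \
        user@vps.example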

Autossh with a reverse ssh tunnel can also be used to expose an internal service to the Internet through a VPS.

SSH has been very secure over the decades. A good feature of SSH is that it can jump from host to host, unlike VPN.

  • curben 3 days ago

    SSH protocol does not protect against weak configuration, e.g. password authentication without brute force mitigation. Zero-trust can be misconfigured too, so it depends how well either of them is configured.

isoprophlex 4 days ago

Not 100% the same use case as autossh was built for maybe, but I'm now simply throwing tailscale on every box i need to interact with. Does away with all the port forwarding stuff, it's absolutely delightful.

  • amlib 4 days ago

    How much reliance on a third party am I subjecting myself to by using Tailscale? What happens if I make a local connection to a machine/service running Tailscale - does it still go out of the local network? If so, is the bulk of the payload transferred locally? Is there any advantage to using it if the machine/service is easily accessible over ipv6?

    • snailmailman 4 days ago

      It will route directly over the local network when possible.

      It will be encrypted through the VPN, so there will be some overhead. But it will be as direct as it can be. It only routes through Tailscale's servers as a last resort, when it can’t find a direct route at all (usually because NAT holepunching fails somehow). Their “DERP” relay servers just relay the encrypted connection. I think you can use your own relay servers, but I don’t know if that feature can be disabled entirely.

      Headscale can be entirely self-hosted. It still uses the tailscale client applications- but is compatible.

    • botto 4 days ago

      That's what Tailscale is built for: when it can, it sets up a P2P connection, and it only sends data through Tailscale's servers if you are in a restrictive network environment (i.e. a corp network that controls all inbound and outbound traffic).

    • isoprophlex 4 days ago

      good questions, pretty well answered by other commenters. if you are happy with the level of encryption you have on your 'plain' ipv6 connection, sure, use that.

      additionally the acl/auth system, their dns and service discovery thing is nice, though not essential.

    • umbra07 3 days ago

      if that's your major concern, look into just using plain wireguard. It's what I run on my server/desktop/laptop and it works great.

  • zmmmmm 3 days ago

    How well is tailscale accepted in orthodox enterprise security circles?

    I like the idea of it but can't even imagine trying to get it past the cyber security folks.

    • curben 3 days ago

      If "orthodox" means "vpn traffic cannot be established through third-party cloud infrastructure", then tailscale and any other cloud-hosted ZTNA solutions wouldn't be accepted in that kind of enterprise.

dheera 4 days ago

autossh is nice but the default options suck. I have to do something like this for it to work well

    autossh -f -N -o ServerAliveCountMax=2 -o ServerAliveInterval=5 -o ConnectTimeout=5 -o BatchMode=yes [...]
dingi 4 days ago

Some time back, I had a Raspberry Pi connected to the wired network of a coworking space. I remember using autossh to keep a tunnel open to one of my VPSes. I mainly used it as a torrent box: I added magnet links through the qBittorrent web UI installed on the Raspberry Pi. qBittorrent was configured to only run at night so as not to cause issues for business work. I downloaded all sorts of things, easily reaching thousands of GBs throughout my time there. They never found out. Or they didn't care to look. Good times.

sgt 4 days ago

Rather than using AutoSSH for port forwarding and such, I just create a systemd unit with a restart policy. Then you don't need autossh at all, just use ssh.

ndreas 4 days ago

I used to use autossh to set up a SOCKS proxy to tunnel my web browser traffic via my home network and it worked really well. Also had a ControlMaster on the tunnel which made SSH connections to my server instantaneous.

Nowadays I use wireguard and a dedicated SOCKS proxy. The upside is that I can access everything on my home network directly without having to tunnel.

amelius 4 days ago

Nice tool, but I'm getting tired of using port numbers for everything instead of more descriptive strings. My system has more than 10 tunnels and servers running, and since I only do sysadmin work once every half year or so, the port numbers are very cumbersome to deal with.

  • jclulow 4 days ago

    I believe these days SSH is willing to forward a UNIX domain socket to a remote TCP port, or a local TCP port to a remote UNIX domain socket, or any combination of the two families really. You could use names locally, if your client tools are willing to do AF_UNIX!
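
    Sketched with made-up names (both directions use OpenSSH's ordinary -L syntax):

        # descriptive local unix socket -> remote TCP port
        ssh -N -L /tmp/myapp-db.sock:localhost:5432 user@remote

        # local TCP port -> remote unix socket
        ssh -N -L 8080:/run/myapp/web.sock user@remote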

    • aflukasz 3 days ago

      And if you are wondering whether you can just point your browser at a local unix socket (without setting up a proxy - which would listen on... a local tcp port), then no, but maybe some day?

      Anyway:

      - https://bugzilla.mozilla.org/show_bug.cgi?id=1688774 - [open] "Support HTTP over unix domain sockets" - 4 years old, last activity 7 months ago,

      - https://issues.chromium.org/issues/40402523 - [closed; won't fix] "[ENH] Support HTTP over Unix Sockets via http://localhost:[/tmp/socket]/foo convention " - 9 years old, last activity 11 months ago.

    • mjw1007 4 days ago

      The nice thing about this is that, with filesystem permissions on one end and a check for SCM_CREDENTIALS or SO_PEERCRED on the other, you can effectively get user-based access control working between two machines.

      I think this is the one remaining advantage of ssh tunnels over using a VPN.

      NB if you're doing this sort of thing, you probably want to add `StreamLocalBindUnlink yes` to the ssh options.

  • sjf 4 days ago

    Agreed, I have so many services that all want to run their own webserver, db, elasticsearch, etc. I have to start using non-standard port numbers and it’s a burden to have to keep track of them.

qwertox 4 days ago

I use this to set up reverse tunnels, for example to set up MongoDB replica sets which sync through SSH. It kind of simplifies the security aspect of replica sets a bit, since then MongoDB does not need to be exposed to the internet and no VPN setup is needed.
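
One such reverse tunnel might look like this (the hosts and ports are placeholders):

    # on replica member A: publish its local mongod (27017) as port 27018 on member B,
    # so B reaches A at localhost:27018 without MongoDB being exposed to the internet
    ssh -N -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 \
        -R 27018:localhost:27017 mongo-b.example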

frizlab 4 days ago

How is this different from this

    ssha () {
     while true
     do
      ssh "$@"
      sleep 1
     done
     true
    }
EDIT: Oh I think I know, autossh must detect when the connection is closed but ssh does not automatically…
  • beagle3 4 days ago

    ssh does, with the right settings, and has for about a decade - see the systemd example someone posted above.

chasil 4 days ago

Use stunnel for non-interactive tunneling over TLS.

It is much more straightforward than ssh for this purpose, and works well with socket activation under systemd.

I use it with the systemd automounter to encrypt NFSv4, and I have found it to be quite reliable.
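
A client-side stunnel fragment for that kind of setup might look like this (the service name, ports, and server address are illustrative):

    ; /etc/stunnel/stunnel.conf
    client = yes

    [nfs4]
    ; accept plaintext locally, forward it over TLS to the server's stunnel
    accept = 127.0.0.1:2049
    connect = nfs-server.example:2050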

leetrout 4 days ago

I used autossh to do terrible things securing redis back in 2013. Fantastic tool.

  • r0n22 4 days ago

    Ohh tell me more?

    • leetrout 4 days ago

      Way back, redis didn't have passwords at all. That got added, but there was no secure transport support.

      So I ran redis in a higher memory box at rackspace separate from my db and my app server. I used autossh to forward 6379 from localhost on the app server(s) to the redis server. Worked like a charm and never caused any issues.

      Other commenters are right in that wireguard is a great modern solution to this!

jbverschoor 3 days ago

Can’t recommend… just loop ssh.

I’ve run autossh for quite some time but it was not reliable enough under my conditions

pawelduda 4 days ago

I used autossh to access hundreds of on prem client machines via a reverse SSH tunnel. Never failed me!

whatever1 3 days ago

Why doesn't SSH do this by default? Why would the average Joe want his SSH session to time out?

  • oxygen_crisis 3 days ago

    There's no timeout on SSH sessions by default.

    In good conditions you can go months without sending a single byte of traffic between an SSH server and client and both will pick up the connection just fine when it's time to communicate again.

    You could cut off traffic between them for any amount of time and they would be none the wiser as long as the network connection is back to normal when they finally try to send traffic again.

    (I had SSH sessions in a QA lab persist as if nothing had happened after the connection between the endpoints was down for almost a week while we replaced the aggregation layer routers. They never saw a link state change since the access layer switches were up the whole time. They never attempted to communicate while the connections between those were down, so there was never any problem as far as they were concerned.)

    The keepalives and connection checks and so forth are mostly to account for things like stateful network gear (firewalls, NAT routers, etc) between the endpoints that will cease relaying traffic between them if they are quiet for too long.

89nn 4 days ago

Is there anything like this but for `kubectl port-forward`?