Hey everyone, just a quick question.

I’ve been self-hosting a number of different web applications over the years. For most of them I would use Cloudflare Tunnels to expose them to the internet. I usually had one tunnel set up for my root domain and either a wildcard or multiple specific CNAME records pointing to the same tunnel. The tunnel terminates in a Docker container that shares a network with a Traefik instance, which then routes the traffic through a separate network to the individual application containers.
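
For context, here is roughly what that looks like as a docker-compose sketch. Everything in it is illustrative (service names, the whoami placeholder app, the token variable); the actual public hostname in the Cloudflare dashboard would point at http://traefik:80 over the shared network.

```yaml
# Sketch of a single tunnel terminating next to Traefik (names are placeholders)
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run --token ${CLOUDFLARE_TUNNEL_TOKEN}   # token from the Zero Trust dashboard
    networks: [edge]            # only shares a network with Traefik

  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks: [edge, apps]      # bridges the tunnel network and the app network

  whoami:                       # stand-in for any application container
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=web
    networks: [apps]

networks:
  edge:
  apps:
```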

I was just wondering what your opinions on this are: is there a reason to prefer a separate tunnel for every application over this single-tunnel approach? Separate tunnels would eliminate the need for the shared Traefik network, although I don’t consider that much of an issue.
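
Another variant I’ve been considering (purely a sketch — the UUID, hostnames and ports below are made up): keep a single tunnel but let cloudflared’s own ingress rules do the per-hostname routing straight to the app containers. That would also remove the shared Traefik network without needing a tunnel per application, at the cost of losing Traefik’s middlewares.

```yaml
# /etc/cloudflared/config.yml for a locally managed tunnel (sketch; UUID, hostnames and targets are placeholders)
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: wiki.example.com
    service: http://wiki:3000      # cloudflared just needs to share a network with each app
  - hostname: photos.example.com
    service: http://photos:2342
  - service: http_status:404       # catch-all for anything else
```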

Any opinion, input or recommendation is welcome! I’d love to hear about your setups, if you’re running something similar.

  • boring_bohr@feddit.deOP · 1 year ago

    I thought about something like that as well, but never tried it out (yet). Do you use WireGuard tunnels for that? Or something else?

    Ideally I’d not expose most of the services to the public internet at all, since only a few relatives and I need access to most of them. I have briefly looked into Tailscale or similar services for that, but again, haven’t tried that out yet, as it would (presumably) require changing quite a few things on both the server(s) and all of the clients…
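
    If I ever go that route, I imagine it would be something like a plain WireGuard interface on the server with one peer per device (the keys, addresses and port below are obviously placeholders, not anything I’m actually running):

    ```ini
    # /etc/wireguard/wg0.conf on the home server (sketch)
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    # one [Peer] block per client device (my laptop, a relative's phone, ...)
    [Peer]
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32
    ```

    The services would then only listen on the LAN/VPN addresses instead of being published through a tunnel at all.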

    After all, I’m just cosplaying as a sysadmin for the most part, so what do I know ;)

    • CAPSLOCKFTW@feddit.de · edited · 1 year ago

      I use reverse SSH tunnels, technically running on my home server. For each service I want to expose to the internet, I have a systemd unit which maintains said reverse tunnel to the VPS. Basically, the port the service runs on locally gets tunneled to a port on the VPS. That happens via SSH, so it’s reasonably secure (login as root disabled, login with password disabled, and a dedicated user with little to no rights running the systemd service locally and logging in remotely via SSH).

      On the remote VPS there is a reverse proxy running, nginx, which works as if the services were running on the VPS itself, really. Some services actually are running there, a mail server for example. The config files aren’t really different; everything nginx handles gets passed to a localhost port. An nginx instance is also running on the local home server to serve all the local services and the global ones locally, and the DNS on my main router resolves the addresses of the global services to the local ones. SSL certificates are acquired by the remote VPS and copied to the local home server, so the end users don’t see any difference in their UX regardless of whether they are on the local network or somewhere outside.
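
      To give you an idea (names, ports and paths are placeholders, not my actual config), one of those units is essentially this:

      ```ini
      # /etc/systemd/system/tunnel-myapp.service (sketch)
      [Unit]
      Description=Reverse SSH tunnel for myapp
      After=network-online.target
      Wants=network-online.target

      [Service]
      User=tunnel
      # expose local port 3000 as loopback-only port 8080 on the VPS, key-based auth only
      ExecStart=/usr/bin/ssh -N -T \
          -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
          -o ExitOnForwardFailure=yes \
          -R 127.0.0.1:8080:localhost:3000 tunnel@vps.example.com
      Restart=always
      RestartSec=10

      [Install]
      WantedBy=multi-user.target
      ```

      And the matching nginx site on the VPS is just an ordinary reverse proxy onto that loopback port, something like:

      ```nginx
      # VPS-side nginx (sketch; certificate paths depend on how you obtain your certs)
      server {
          listen 443 ssl;
          server_name myapp.example.com;
          ssl_certificate     /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;

          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }
      ```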

      Edit: I mostly use this approach because my ISP uses Dual-Stack Lite and I could not access anything local from outside with any other technique. But I like it, it’s really basic.