This page documents how to use the acme-dns DNS challenge with various ACME clients to obtain HTTPS certificates for private networks. Caddy, Traefik, cert-manager, acme.sh, LEGO, and Certify The Web are listed as clients that support acme-dns. For each client, configuration examples show how to set API credentials and other settings so the client uses the acme-dns service at https://api.getlocalcert.net/api/v1/acme-dns-compat to obtain certificates. It's notable that so many ACME clients support acme-dns, which makes it an easy way to obtain HTTPS certificates for private networks.
HN discussion: https://news.ycombinator.com/item?id=36674224
seiferteric: Proposes an idea for automatically creating trusted certificates for new devices on a private network.
hartmel: Mentions SCEP which allows automatic certificate enrollment for network devices.
mananaysiempre: Thinks using EJBCA for this, as hartmel suggested, adds unnecessary complexity.
8organicbits: Describes a solution using getlocalcert which issues certificates for anonymous domain names.
austin-cheney: Has a solution using TypeScript that checks for existing certificates and creates them if needed, installing them in the OS and browser.
bruce511: Says automating the process is possible.
lolinder: Mentions Caddy will automatically create and manage certificates for local domains.
frfl: Uses Lego to get a Let’s Encrypt certificate for a local network website using the DNS challenge.
donselaar: Recommends DANE, which works well for private networks without a public CA but lacks browser support.
IMHO all these approaches are convoluted and introduce way too many components (SPOFs) to solve the problem. They're "free," but they come at the cost of maintaining all this extra infrastructure. And don't forget that certificate transparency logs mean every internal DNS name you request a Let's Encrypt certificate for is published publicly. (!)
An alternative approach is to set up your own internal certificate authority (CA), which you can do in a couple minutes with step-ca. You then just deploy your CA root cert to all the machines on your network and can get certs whenever you need. If you want to go the extra mile and set up automatic renewal, you can do that too, but it’s overkill for internal use IMHO.
Using your own CA introduces only a single new software component and it doesn’t require high availability to be useful…
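As a concrete sketch of that workflow, standing up step-ca and issuing an internal cert looks roughly like the commands below. The hostnames, CA name, and fingerprint placeholder are illustrative, not from the comment; check the smallstep docs for the exact flags.

```shell
# Initialize a new CA (interactive; generates root + intermediate and a config file)
step ca init --name "Internal CA" --dns ca.internal \
    --address :8443 --provisioner admin@internal

# Run the CA daemon
step-ca "$(step path)/config/ca.json"

# On each client machine: trust the root and fetch a certificate
step ca bootstrap --ca-url https://ca.internal:8443 --fingerprint <root-fingerprint>
step ca certificate app.internal app.crt app.key
```

The `bootstrap` step is the "deploy your CA root cert to all the machines" part; after that, any host on the network can request certs against the internal CA.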
Unfortunately these days internal CAs aren’t always trusted. We have one where I work, and hundreds of times a day people have to click through “I understand the risks, proceed anyway” alert prompts.
Which makes me really uncomfortable - I fear one day someone will blindly click past a warning about an actual malicious certificate.
It kills me that companies seem to willingly train their users to ignore warnings and signs that something is amiss.
“Yeah, all our emails from that vendor come with the external email warning, just ignore it”
Personally I use dnsrobocert with my own domains. I’ve got a few subdomains that point to a Wireguard subnet IP for private network apps (so it resolves to nothing if you’re not on VPN). Having a real valid SSL cert is really nice vs self signing, and it keeps my browser with HTTPS-Everywhere happy.
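For reference, a dnsrobocert setup along those lines is driven by a single YAML config; the sketch below is illustrative (the provider, token, email, and domain names are placeholders, and the exact schema should be checked against the dnsrobocert docs):

```yaml
# dnsrobocert config sketch: wildcard cert via DNS-01 for a VPN-only subdomain
acme:
  email_account: admin@example.com
profiles:
  - name: cloudflare
    provider: cloudflare
    provider_options:
      auth_token: "<api-token>"
certificates:
  - domains:
      - "*.vpn.example.com"
    profile: cloudflare
```

Because the challenge is DNS-based, the hosts serving these names never need to be reachable from the internet, which is what makes this work for Wireguard-only addresses.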
But why
Because you might want to use HTTPS on a server that’s not accessible externally. Some browser features only work over HTTPS.
Sounds like a bad browser.
Plenty of non-browser related reasons to want HTTPS in your own network.
Whether you need it or should use it depends on your system architecture and level of paranoia.
For instance we’re running all our stuff in a virtualized Linux environment on-premise on our own hardware. There’s a firewall zone from the outside and in, several zones for different applications.
We terminate SSL at the edge and use port 80 for anything internal that’s HTTP.
While that opens us up to internal eavesdropping my argument is that anyone that deep in our system will have compromised everything anyways.
On the other hand it allows our firewall to do application filtering, including killing bad-faith incoming requests.
The only caveat is that some of our external pen-testers think they've found a DoS scenario in our application when all that happens is that the firewall drops the connection.
If I was routing traffic over a shared network or multiple sites I’d definitely employ HTTPS.
All this said, I'm sure someone smarter than me has written better opinions on the topic.
Good browsers don’t let random unauthenticated content do whatever it wants on either the local machine or the network.
HTTPS is also the only way to use client-side certificates for strong two-way authentication and zero-trust setups.
So, lynx?
lynx, NoScript… it’s all fine until some site requires JavaScript no matter what, which nowadays seems to be most of them; then it’s a game of whom to trust.
Private networks are usually an oxymoron: they’re only as private as the Wi-Fi router and the first person who clicks a malicious link. Zero-trust mitigates that, instead of blindly relying on perimeter defenses and trusting anyone who manages to bypass them.
This is your brain on webshit.
You may want to rephrase that?
Every browser implements these limitations, as they’re part of the web platform. Some examples are service workers, web crypto, HTTP/2, webcam, microphone, geolocation, and more. There’s a list here: https://developer.mozilla.org/en-US/docs/Web/Security/Secure_Contexts/features_restricted_to_secure_contexts
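In page code, the usual way to respect those restrictions is to check `isSecureContext`, a standard window property that is true on HTTPS origins and on localhost. The helper below is illustrative (the feature names are examples drawn from the MDN list, not an exhaustive registry):

```typescript
// `isSecureContext` is part of the web platform: true over HTTPS and on
// localhost, false on plain-HTTP origins, where the gated APIs are absent.
interface ContextLike {
  isSecureContext: boolean;
}

// Illustrative helper: which secure-context-only features are usable here.
function availableSecureFeatures(ctx: ContextLike): string[] {
  const gated = ["serviceWorker", "crypto.subtle", "geolocation", "getUserMedia"];
  return ctx.isSecureContext ? gated : [];
}

// In a real page you would pass `window` instead of a literal.
console.log(availableSecureFeatures({ isSecureContext: false })); // → []
```

This is why a valid certificate on an internal network is more than cosmetic: without a secure context, the browser simply never exposes these APIs.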
Sounds like a bad browser.
Every browser does this. It’s intentional to push people towards using encrypted connections, especially for PII like geolocation.
Sounds dystopian. I still won’t feel bad for normies.
So Chrome, Firefox, Edge, Safari, Opera, and every other browser I’ve ever heard of are all “bad browsers” in your opinion?
For example, my browser won’t auto-fill a credit card without a valid HTTPS connection. And as someone who does QA on payment pages, I find myself typing out the standard VISA test card number
4200 0000 0000 0000[tab]12/34[tab]123
about a thousand times a day. Every ten minutes or so I type the wrong number of zeros and have to go back and try again. With a working HTTPS connection, the browser will fill it out for me. So much better.
Big fan of letsencrypt’s certbot with the nginx and cloudflare (or other dns providers) plugins.
Is there any reason to use caddy or traefik over nginx?
Caddy takes almost all of the nginx boilerplate and handles it for you.
If you’re doing something simple in nginx, it’s far simpler with Caddy.
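As an illustration of how much boilerplate Caddy absorbs, a complete HTTPS reverse proxy is just the following (the hostname and upstream port are placeholders):

```
app.internal.example.com {
    reverse_proxy localhost:8080
}
```

Caddy provisions and renews the certificate automatically; the equivalent nginx setup needs explicit `listen 443 ssl` and `ssl_certificate` directives plus a separate certbot (or similar) renewal job.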
What if I’m using NGINX Proxy Manager which gives me a GUI for my dumbness?
Stick with it; it sounds like you’ve got a system that works for you.