Disclaimer: I am running my personal website in the cloud, since it feels iffy to expose my local IP to the internet. Sorry for posting this on selfhosting; I don’t know anywhere else to ask.
I am planning to multiplex forgejo, nextcloud and other services on port 80 using caddy.
This is not working, and I am having issues diagnosing which side is preventing access.
One thing I know: it’s not DNS, since dig <my domain> resolves fine.
I would like some pointers for what to do in these circumstances. Thanks in advance!
What I have looked into:
- curling localhost from the server works well; caddy returns a simple result.
- curl <my domain> times out. I am currently trying to inspect packets; it seems like the server receives TCP but no HTTP.
- curl <my domain>:3000 displays the forgejo page, as forgejo exposes port 3000 in its container, which podman maps to host port 3000.
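A verbose curl helps tell these failure modes apart. As a sketch, here is a hypothetical local demo against a closed port (port 9 on localhost is assumed to have nothing listening): a quick "connection refused" means the packet arrived and was actively rejected, while a timeout, as in this thread, means packets are being silently dropped, which usually points at a firewall rather than at caddy.

```shell
# Demo: what a *refused* connection looks like, vs. the timeout above.
# -s silences progress, -o discards the body, -w prints the HTTP status.
curl -s -o /dev/null -w '%{http_code}\n' --max-time 2 http://127.0.0.1:9/
# prints 000: no HTTP status was ever received
echo "curl exit code: $?"
# exit code 7 = could not connect (refused); a firewalled drop would
# instead hang until --max-time and exit with code 28 (timeout)
```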
EDIT: my Caddyfile is as follows.
:80 {
	respond "Hello World!"
}

http://<my domain> {
	respond "This should respond"
}

http://<my domain 2> {
	reverse_proxy localhost:3000
}
EDIT2: I just tested with a netcat web server, and it responds fine. This narrows it down to caddy itself!
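One way to check the caddy side directly is to turn on access logging for a site block: requests that never show up in the log never reached caddy at all. A sketch against the Caddyfile above (the log path is an assumption; adjust to taste):

```caddyfile
http://<my domain> {
	# record every request that actually reaches this site block
	log {
		output file /var/log/caddy/access.log
	}
	respond "This should respond"
}
```

If curl times out and the log stays empty, the problem is in front of caddy (firewall, routing), not in the Caddyfile.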
EDIT3: (Partially) solved: it was a firewall routing issue. I should have checked the ufw logs. It turns out podman needs to be allowed to route traffic. Now to figure out how to reverse-proxy properly.
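For anyone hitting the same wall, the firewall configuration involved looks roughly like this sketch (interface names are assumptions taken from later in this thread; ufw route rules control forwarded traffic, which is what container bridges need):

```shell
# Sketch, not meant to be run blindly: podman0 is the podman bridge,
# ens3 is the host's uplink interface.
# Allow forwarding between the uplink and the container bridge:
sudo ufw route allow in on ens3 out on podman0
sudo ufw route allow in on podman0 out on ens3
# See what ufw has been dropping (log location varies by distro;
# /var/log/ufw.log assumes rsyslog is splitting it out):
sudo grep 'UFW BLOCK' /var/log/ufw.log
```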
EDIT4: Solved. I created my own internal network between the containers, besides the usual one connecting to the internet, and set up the reverse proxy to connect to the container correctly. My only remaining concern is whether I made the firewall too permissive in the process. Current settings:
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
3000/tcp                   ALLOW       Anywhere
222/tcp                    ALLOW       Anywhere
8080/tcp                   ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
8443/tcp                   ALLOW       Anywhere
Anywhere on podman1        ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
3000/tcp (v6)              ALLOW       Anywhere (v6)
222/tcp (v6)               ALLOW       Anywhere (v6)
8080/tcp (v6)              ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
8443/tcp (v6)              ALLOW       Anywhere (v6)
Anywhere (v6) on podman1   ALLOW       Anywhere (v6)
Anywhere on podman1        ALLOW FWD   Anywhere on ens3
Anywhere on podman0        ALLOW FWD   Anywhere on ens3
Anywhere (v6) on podman1   ALLOW FWD   Anywhere (v6) on ens3
Anywhere (v6) on podman0   ALLOW FWD   Anywhere (v6) on ens3
podman0 is the default podman network, and podman1 is the internal network.
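The internal-network setup described above can be sketched roughly as follows (all names, images, and tags here are illustrative assumptions, not the actual commands used):

```shell
# Sketch: one user-defined network shared by the proxy and the apps,
# so only the proxy container publishes ports on the host.
podman network create proxynet

# App container: joins proxynet, publishes nothing to the host (no -p!)
podman run -d --name forgejo --network proxynet codeberg.org/forgejo/forgejo:9

# Proxy container: joins proxynet too, and is the only one exposed
podman run -d --name caddy --network proxynet -p 80:80 -p 443:443 docker.io/library/caddy:2
```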
You do not want port 80: port 80 is HTTP, which is totally unencrypted and unauthenticated.
What you want instead is 443, or better yet, 443 behind a VPN.
For Let’s Encrypt to work, though, you will need port 80 open for Caddy.
It is good you have solved your initial issue. However, as you say, your rules are too permissive. You should not publish ports from containers to the host; your container ports should only be accessible over the reverse-proxy network. In other words, <my domain>:3000 should not resolve to anything.
This can be achieved simply by not publishing any ports on your service containers.
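Once the ports are no longer published, you can confirm from the host that nothing listens on the app port any more (3000 is the forgejo example from this thread):

```shell
# Check whether anything on the host is bound to port 3000.
# grep finds nothing when the port is not published, so the fallback
# message is printed instead.
ss -ntl | grep ':3000' || echo 'nothing listening on 3000'
```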
Here is an example of my VPS:
Exposed ports:
$ ss -ntlp
State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
LISTEN 0      128    0.0.0.0:22          0.0.0.0:*         users:(("sshd",pid=4084094,fd=3))
LISTEN 0      4096   0.0.0.0:443         0.0.0.0:*         users:(("conmon",pid=3436659,fd=6))
LISTEN 0      4096   0.0.0.0:5355        0.0.0.0:*         users:(("systemd-resolve",pid=723,fd=11))
LISTEN 0      4096   0.0.0.0:80          0.0.0.0:*         users:(("conmon",pid=3436659,fd=5))
LISTEN 0      4096   127.0.0.54:53       0.0.0.0:*         users:(("systemd-resolve",pid=723,fd=19))
LISTEN 0      4096   127.0.0.53%lo:53    0.0.0.0:*         users:(("systemd-resolve",pid=723,fd=17))
Redacted list of containers:
$ podman container ls
CONTAINER ID  IMAGE                                        COMMAND               CREATED       STATUS                 PORTS                                     NAMES
[...]
              docker.io/tootsuite/mastodon-streaming:v4.3  node ./streaming      2 months ago  Up 2 months (healthy)                                           social_streaming
              docker.io/eqalpha/keydb:alpine               keydb-server /etc...  2 months ago  Up 2 months (healthy)                                           cloud_cache
              localhost/podman-pause:4.4.1-1111111111                            2 months ago  Up 2 months            0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  1111111111-infra
              docker.io/library/traefik:3.2                traefik               2 months ago  Up 2 months            0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  traefik
              docker.io/library/nginx:1.27-alpine          nginx -g daemon o...  3 weeks ago   Up 3 weeks                                                      cloud_web
              docker.io/library/nginx:1.27-alpine          nginx -g daemon o...  3 weeks ago   Up 3 weeks                                                      social_front
[...]
Modern web services are served on port 443 over HTTPS with secure certificates, not on port 80 over HTTP.
Make sure you have a cert issued and installed for your server, that port 443 is not blocked by any firewall, and that curl is explicitly connecting to https.
Caddy automatically generates certs
Are these running on the same server? You haven’t given a lot of information here. Communication between containers is different.
Yes, they are running on the same server. I am hoping to communicate through the host network; maybe that’s not working well.
Inter-container communication is different, at least with docker, which I have more experience with, but they’re similar. Try using the name of your container in your proxy config rather than the external host name.
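As a sketch of that suggestion, assuming caddy and forgejo share a user-defined podman network (on such a network, the container name resolves via the network’s DNS; the name "forgejo" here is an assumption):

```caddyfile
http://<my domain 2> {
	# "forgejo" is the container name on the shared network,
	# not the external host name and not localhost.
	reverse_proxy forgejo:3000
}
```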
Install a reverse proxy like caddy, but on your server bare metal, not in a container.
Also, expose port 443, not 80, and put an SSL certificate on it.
Can you at least ping <my domain> from the server and from home?
“bare metal” does not mean “outside of a container”. Just say “outside of a container”.
It’s a losing battle, but I’ll fight it anyway.
What do you mean? I have only heard that phrase meaning not in a container or VM. But I am not a native speaker.
“Not in a VM” is better usage; “metal” refers to the hardware. Traditionally it’s used for embedded devices with no OS. But containers run on the hardware/OS in exactly the same way that non-containerized processes do; they even share the kernel of the same OS. There is no sense in which non-containerized processes run on “metal” any more than containers do.
The distinction is between bare metal and virtual machine. Most cloud deployments will be hosted in a virtual machine, inside which you host your containers.
So the nested dolls go:
- bare metal (directly on hardware)
- virtual machine (inside a hypervisor)
- container (inside Docker, podman, etc.)
- runtime (JVM, V8, CLR, etc.) (unless your code is in C, Rust, or another such language)
- your code
Thanks for the clarification. So mine runs on bare metal, but that probably wasn’t the case for OP.
I have a real server at home, and I rent a real server (which I often incorrectly call a VPS).
There’s no indication that running caddy in a container was a problem here.