Disclaimer: I am running my personal website on a cloud server, since it feels iffy to expose my local IP to the internet. Sorry for posting this on selfhosting, I don’t know anywhere else to ask.

I am planning to multiplex forgejo, nextcloud and other services on port 80 using caddy. This is not working, and I am having trouble diagnosing which side is preventing access. One thing I know: it’s not DNS, since dig <my domain> resolves fine. I would like some pointers on what to do in these circumstances. Thanks in advance!

What I have looked into:

  • curling localhost from the server works: caddy returns a simple response.
  • curl <my domain> times out. I am currently trying to inspect packets; it looks like the server receives the TCP connection but no HTTP exchange follows (see the tcpdump sketch below this list).
  • curl <my domain>:3000 displays the forgejo page, since forgejo exposes port 3000 in its container, which podman publishes to port 3000 on the host.
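
This is roughly how I am watching the traffic (ens3 is the public interface on this server):

$ sudo tcpdump -i ens3 -nn 'tcp port 80'            # on the server: incoming SYNs show up, but no HTTP follows
$ curl -v --connect-timeout 10 http://<my domain>   # from an outside machine, while the capture is running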

EDIT: my Caddyfile is as follows.

:80 {
    respond "Hello World!"
}

http://<my domain> {
    respond "This should respond"
}

http://<my domain 2> {
    reverse_proxy localhost:3000
}
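
Since caddy picks the site block by the Host header, the matching can be checked from the server itself, without the network in between:

$ curl -v -H "Host: <my domain>" http://localhost/      # should return "This should respond"
$ curl -v -H "Host: <my domain 2>" http://localhost/    # should be proxied to forgejo on :3000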

EDIT2: I just tested with a netcat webserver, and it responds fine. This narrows it down to caddy itself!
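
(For reference, the netcat test was basically this, with caddy stopped so port 80 is free; nc flags differ between netcat variants:)

$ printf 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok' | sudo nc -l 80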

EDIT3: (Partially) solved, it was a firewall routing issue. I should have checked the ufw logs. Turns out podman needs to be allowed to route traffic. Now to figure out how to reverse-proxy properly.
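
If anyone hits the same thing: the forwarding rule I added was something like this (it shows up as the ALLOW FWD lines in the output below):

$ sudo ufw route allow in on ens3 out on podman0    # let ufw forward traffic from the public interface to the podman bridge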

EDIT4: Solved. I created my own internal network between the containers, besides the usual one connecting to the internet, and set up the reverse proxy to reach the container over it (rough sketch after the firewall output below). My only remaining concern is whether I made the firewall way too permissive in the process. Current settings:

Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere                  
3000/tcp                   ALLOW       Anywhere                  
222/tcp                    ALLOW       Anywhere                  
8080/tcp                   ALLOW       Anywhere                  
80/tcp                     ALLOW       Anywhere                  
8443/tcp                   ALLOW       Anywhere                  
Anywhere on podman1        ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)             
3000/tcp (v6)              ALLOW       Anywhere (v6)             
222/tcp (v6)               ALLOW       Anywhere (v6)             
8080/tcp (v6)              ALLOW       Anywhere (v6)             
80/tcp (v6)                ALLOW       Anywhere (v6)             
8443/tcp (v6)              ALLOW       Anywhere (v6)             
Anywhere (v6) on podman1   ALLOW       Anywhere (v6)             

Anywhere on podman1        ALLOW FWD   Anywhere on ens3          
Anywhere on podman0        ALLOW FWD   Anywhere on ens3          
Anywhere (v6) on podman1   ALLOW FWD   Anywhere (v6) on ens3     
Anywhere (v6) on podman0   ALLOW FWD   Anywhere (v6) on ens3

podman0 is the default podman network, and podman1 is the internal network.
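
Roughly what the wiring looks like now, assuming the proxy also runs as a container (names are illustrative; the extra network is what shows up as podman1):

$ podman network create internal-net           # becomes podman1 on the host
$ podman network connect internal-net forgejo  # attach the existing containers to it
$ podman network connect internal-net caddy

With both containers on that network, the reverse_proxy directive can target the container by name, e.g. reverse_proxy forgejo:3000, instead of going through the published host port.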

  • dragnucs@lemmy.ml · 7 hours ago

    It is good that you have solved your initial issue. However, as you say, your rules are too permissive. You should not publish ports from containers to the host. Your container ports should only be accessible over the reverse-proxy network. Said otherwise, <my domain>:3000 should not respond to anything.

    This can simply be achieved by not publishing any ports on your service containers.
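
    A minimal sketch of the idea (the network name is a placeholder; traefik still needs its usual configuration on top of this):

    $ podman network create reverse-proxy-net
    $ podman run -d --name cloud_web --network reverse-proxy-net docker.io/library/nginx:1.27-alpine   # no --publish, so nothing listens on the host
    $ podman run -d --name traefik --network reverse-proxy-net -p 80:80 -p 443:443 docker.io/library/traefik:3.2

    Only the proxy publishes host ports; everything else is reached over the shared network, so the extra ufw ALLOW rules for ports like 3000, 8080 and 8443 become unnecessary.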

    Here is an example from my VPS:

    Exposed ports:

    $ ss -ntlp
    State                Recv-Q               Send-Q                             Local Address:Port                             Peer Address:Port              Process                                                  
    LISTEN               0                    128                                      0.0.0.0:22                                    0.0.0.0:*                  users:(("sshd",pid=4084094,fd=3))                       
    LISTEN               0                    4096                                     0.0.0.0:443                                   0.0.0.0:*                  users:(("conmon",pid=3436659,fd=6))                     
    LISTEN               0                    4096                                     0.0.0.0:5355                                  0.0.0.0:*                  users:(("systemd-resolve",pid=723,fd=11))               
    LISTEN               0                    4096                                     0.0.0.0:80                                    0.0.0.0:*                  users:(("conmon",pid=3436659,fd=5))                     
    LISTEN               0                    4096                                  127.0.0.54:53                                    0.0.0.0:*                  users:(("systemd-resolve",pid=723,fd=19))               
    LISTEN               0                    4096                               127.0.0.53%lo:53                                    0.0.0.0:*                  users:(("systemd-resolve",pid=723,fd=17))  
    

    Redacted list of containers:

    $ podman container ls
    CONTAINER ID  IMAGE                                        COMMAND               CREATED        STATUS                 PORTS                                     NAMES
    [...]
    docker.io/tootsuite/mastodon-streaming:v4.3  node ./streaming      2 months ago   Up 2 months (healthy)                                            social_streaming
    docker.io/eqalpha/keydb:alpine               keydb-server /etc...  2 months ago   Up 2 months (healthy)                                            cloud_cache
    localhost/podman-pause:4.4.1-1111111111                            2 months ago   Up 2 months            0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  1111111111-infra
    docker.io/library/traefik:3.2                traefik               2 months ago   Up 2 months            0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  traefik
    docker.io/library/nginx:1.27-alpine          nginx -g daemon o...  3 weeks ago    Up 3 weeks                                                       cloud_web
    docker.io/library/nginx:1.27-alpine          nginx -g daemon o...  3 weeks ago    Up 3 weeks                                                       social_front
    [...]