After two hours of refactoring and looking around for solutions, I asked Claude and came up with this:
```diff
--- https://github.com/2Tiny2Scale/tailscale-docker-sidecar-configs/blob/14eac89007ae0e65e6662f77fd701c665465bc00/services/uptime-kuma/docker-compose.yml
+++ Expose local port
@@ -3,6 +3,7 @@
   # Tailscale Sidecar Configuration
   tailscale-uptime-kuma:
     image: tailscale/tailscale:latest # Image to be used
+    network_mode: service:uptime-kuma # Sidecar configuration to route uptime-kuma through Tailscale. Service name!
     container_name: tailscale-uptime-kuma # Name for local container management
     hostname: uptime # Name used within your Tailscale environment
     environment:
@@ -14,6 +15,8 @@
       - ${PWD}/uptime-kuma/config:/config # Config folder used to store Tailscale files - you may need to change the path
       - ${PWD}/uptime-kuma/tailscale-uptime-kuma/state:/var/lib/tailscale # Tailscale requirement - you may need to change the path
       - /dev/net/tun:/dev/net/tun # Network configuration for Tailscale to work
+    depends_on:
+      - uptime-kuma
     cap_add:
       - net_admin
       - sys_module
@@ -31,11 +34,16 @@
   # uptime-kuma
   uptime-kuma:
     image: louislam/uptime-kuma:latest # Image to be used
-    network_mode: service:tailscale-uptime-kuma # Sidecar configuration to route uptime-kuma through Tailscale
     container_name: uptime-kuma # Name for local container management
     volumes:
       - ${PWD}/uptime-kuma/uptime-kuma-data:/app/data # uptime-kuma data/configuration folder
       - /var/run/docker.sock:/var/run/docker.sock:ro # Read-only access to the docker.sock
-    depends_on:
-      - tailscale-uptime-kuma
+    ports:
+      - 3001:3001
+    networks:
+      - internal
     restart: always
+
+networks:
+  internal:
+    driver: bridge
```
It basically moves `network_mode` and `depends_on` to the Tailscale container, then adds a bridge network and exposes the port locally. This is not ideal, since it also exposes the port on `${TS_CERT_DOMAIN}`¹, but it accomplishes what I had in mind.
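For reference, the resulting layout looks roughly like this. This is a trimmed sketch: environment, volumes, cap_add, and the healthcheck are unchanged from the repo's file and omitted here.

```yaml
services:
  tailscale-uptime-kuma:
    image: tailscale/tailscale:latest
    network_mode: service:uptime-kuma # reversed: the sidecar joins the app's network namespace
    depends_on:
      - uptime-kuma # the app's namespace must exist before the sidecar starts
    # ...environment, volumes, cap_add, healthcheck as in the original file...

  uptime-kuma:
    image: louislam/uptime-kuma:latest
    ports:
      - 3001:3001 # web UI published on the Docker host
    networks:
      - internal
    # ...volumes as in the original file...
    restart: always

networks:
  internal:
    driver: bridge
```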
I thought I'd put it up here for discussion; maybe someone will find a better solution eventually.
1: I tried to configure `serve.json` to also serve HTTPS on this port, to disable this port, or to redirect it to port 443, but my attempts always resulted in a non-functional configuration.
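For context, the repo's `serve.json` pattern looks roughly like the sketch below (not a verified config; `${TS_CERT_DOMAIN}` is expanded by Tailscale at runtime), and my failed attempts were variations on it:

```json
{
  "TCP": {
    "443": {
      "HTTPS": true
    }
  },
  "Web": {
    "${TS_CERT_DOMAIN}:443": {
      "Handlers": {
        "/": {
          "Proxy": "http://127.0.0.1:3001"
        }
      }
    }
  }
}
```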
I just found an easier solution after looking at the Jellyfin example.
```diff
--- https://github.com/2Tiny2Scale/tailscale-docker-sidecar-configs/blob/14eac89007ae0e65e6662f77fd701c665465bc00/services/uptime-kuma/docker-compose.yml
+++ Expose local port
@@ -26,6 +26,8 @@
       timeout: 10s # Time to wait for the check to succeed
       retries: 3 # Number of retries before marking as unhealthy
       start_period: 10s # Time to wait before starting health checks
+    ports:
+      - 0.0.0.0:3001:3001 # <Host Port>:<Container Port>
     restart: always

   # uptime-kuma
```
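This works because uptime-kuma keeps `network_mode: service:tailscale-uptime-kuma` and therefore shares the sidecar's network namespace, so the port has to be published on the Tailscale container, which owns that namespace. The relevant section ends up roughly like this (a sketch; all other keys unchanged):

```yaml
  tailscale-uptime-kuma:
    image: tailscale/tailscale:latest
    # ...environment, volumes, cap_add, healthcheck as in the repo's file...
    ports:
      - 0.0.0.0:3001:3001 # published on the sidecar, which owns the shared network namespace
    restart: always

  # uptime-kuma
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    network_mode: service:tailscale-uptime-kuma # unchanged: the app shares the sidecar's namespace
    # ...
```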