kubectl port-forward is a debugging shortcut: it forwards a service to a local port only while the command runs, needs a terminal window per service, and works only on the machine running it. Real Kubernetes services need real URLs.
This is the companion article to Episode 5 of the Kubernetes on Raspberry Pi series. By the end, Gitea is at http://gitea.spatacoli.xyz and Grafana is at http://grafana.spatacoli.xyz, accessible from any machine on the network with no port numbers required.
All configs are in the kubernetes-series GitHub repo under video-05-ingress-metallb-traefik/.
## The Network Layout
| Thing | Value |
|---|---|
| MetalLB pool | 10.51.50.210–10.51.50.219 |
| Traefik IP | 10.51.50.210 |
| Domain | spatacoli.xyz |
## Two Pieces Working Together
On cloud providers like GKE, EKS, or AKS, creating a Kubernetes LoadBalancer service automatically provisions a public IP. On bare metal, nothing provides this. MetalLB fills that gap by using your existing LAN. It watches for LoadBalancer services and assigns IPs from a pool you configure.
Traefik is an Ingress controller: a pod that watches for Ingress resources and routes incoming HTTP traffic to the right service based on hostname. MetalLB gives it one IP; Traefik decides where every request goes from there.
The full flow: Client -> MetalLB IP -> Traefik -> Service -> Pod
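The key trick in that last hop is that one shared IP serves many backends, with the HTTP Host header selecting which. A toy sketch of the dispatch Traefik performs (the service names and ports here are hypothetical; the real mappings come from the Ingress resources created later):

```shell
# Miniature host-based routing: one entry point, Host header picks the backend.
# These name/port pairs are illustrative, not the cluster's actual services.
route() {
  case "$1" in
    gitea.spatacoli.xyz)   echo "gitea.gitea.svc:3000" ;;
    grafana.spatacoli.xyz) echo "grafana.monitoring.svc:80" ;;
    *)                     echo "404: no Ingress rule for $1" ;;
  esac
}

route gitea.spatacoli.xyz    # -> gitea.gitea.svc:3000
route grafana.spatacoli.xyz  # -> grafana.monitoring.svc:80
```

Every Ingress resource you apply effectively adds one more branch to this table.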
## Installing MetalLB
```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb \
  --namespace metallb-system \
  --create-namespace
```
MetalLB's speaker DaemonSet needs the NET_RAW capability to send ARP packets, which Talos's default Pod Security Standards (PSS) enforcement blocks. Add metallb-system to the PSS exemptions in the machine config:
```bash
export EDITOR=nano
talosctl edit machineconfig --nodes <control-plane-ip>
```

```yaml
# in cluster.apiServer.admissionControl, inside the PodSecurity configuration
exemptions:
  namespaces:
    - kube-system
    - monitoring
    - metallb-system
```
Reboot the control plane and verify speaker pods are running:
```bash
kubectl get pods -n metallb-system
```
Configure the IP pool:
```yaml
# metallb-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.51.50.210-10.51.50.219
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
```

```bash
kubectl apply -f metallb-pool.yaml
```
Before touching Traefik, verify MetalLB is actually assigning IPs. This step saves debugging time later:
```bash
kubectl apply -f test-lb.yaml   # a simple LoadBalancer service
kubectl get svc test-lb
# EXTERNAL-IP should show 10.51.50.210, the first free address in the pool
kubectl delete -f test-lb.yaml
```
If EXTERNAL-IP stays <pending>, fix MetalLB before continuing.
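The contents of test-lb.yaml aren't shown above; a minimal version might look like this (the nginx image and the pod/service names are illustrative — the repo's actual file may differ):

```yaml
# test-lb.yaml — a throwaway LoadBalancer service for verifying MetalLB.
# Names and image are placeholders, not the repo's exact manifest.
apiVersion: v1
kind: Pod
metadata:
  name: test-lb
  labels:
    app: test-lb
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-lb
spec:
  type: LoadBalancer    # this is what MetalLB watches for
  selector:
    app: test-lb
  ports:
    - port: 80
      targetPort: 80
```

Deleting it afterwards releases the IP back to the pool, which is why Traefik can claim 10.51.50.210 in the next step.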
## Installing Traefik
```yaml
# traefik-values.yaml
service:
  type: LoadBalancer
ingressClass:
  enabled: true
api:
  dashboard: true
  insecure: true
providers:
  kubernetesIngress:
    publishedService:
      enabled: true
```
```bash
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm install traefik traefik/traefik \
  --namespace traefik \
  --create-namespace \
  --values traefik-values.yaml
```
Verify Traefik got an IP from the MetalLB pool:
```bash
kubectl get svc -n traefik
# EXTERNAL-IP should show 10.51.50.210
```
## Split-Horizon DNS with BIND9
We're using spatacoli.xyz, a real domain, but routing it internally to the Traefik IP. This is split-horizon DNS: the same domain resolves to different answers depending on where the query originates. Internal queries go to Traefik; external queries reach the public internet as normal.
In BIND9, add an internal zone in named.conf:
zone "spatacoli.xyz" {
type master;
file "/etc/bind/zones/spatacoli.xyz.internal";
};
Create /etc/bind/zones/spatacoli.xyz.internal:
```
$TTL 300
@    IN  SOA  ns1.spatacoli.xyz. admin.spatacoli.xyz. (
         2024010101 3600 900 604800 300 )
@    IN  NS   ns1.spatacoli.xyz.   ; BIND refuses to load a zone without an NS record
ns1  IN  A    <bind-server-ip>
@    IN  A    10.51.50.210
*    IN  A    10.51.50.210
```
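Before reloading, it's worth validating the config and zone file; the bind9utils package ships checkers for exactly this (paths match those created above):

```shell
# Validate the server config and the new zone before asking BIND to load them.
named-checkconf /etc/bind/named.conf
named-checkzone spatacoli.xyz /etc/bind/zones/spatacoli.xyz.internal
```

named-checkzone reports the serial and OK when the zone loads cleanly, and prints the exact failing line otherwise, which is far easier to act on than a silent rndc reload failure.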
```bash
sudo rndc reload
dig gitea.spatacoli.xyz
# should return 10.51.50.210
```
The wildcard `*` record means every subdomain resolves to Traefik's IP. Adding a new service requires only an Ingress resource; no DNS changes are needed.
## Exposing Gitea
Update the Gitea deployment so it knows it sits behind a reverse proxy, then create an Ingress resource telling Traefik to route gitea.spatacoli.xyz to the Gitea service:
```yaml
env:
  - name: GITEA__server__DOMAIN
    value: gitea.spatacoli.xyz
  - name: GITEA__server__ROOT_URL
    value: http://gitea.spatacoli.xyz/
  - name: GITEA__security__REVERSE_PROXY_TRUSTED_PROXIES
    value: "10.0.0.0/8"
```
```yaml
# gitea-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea-ingress
  namespace: gitea
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  ingressClassName: traefik
  rules:
    - host: gitea.spatacoli.xyz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea
                port:
                  number: 3000
```

```bash
kubectl apply -f gitea-ingress.yaml
```
Open http://gitea.spatacoli.xyz. No port number, no terminal window left open.
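Because of the wildcard DNS record, any service exposed later follows the same one-step pattern. As a sketch, a hypothetical whoami service (its name, namespace, and port are assumptions, not something deployed in this series) would need only one more manifest:

```shell
# Generate an Ingress for a hypothetical "whoami" service; only the host,
# service name, and port change from the Gitea example above.
cat > whoami-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: default
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  ingressClassName: traefik
  rules:
    - host: whoami.spatacoli.xyz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
EOF
grep "host:" whoami-ingress.yaml
```

After `kubectl apply -f whoami-ingress.yaml`, http://whoami.spatacoli.xyz would resolve and route immediately — the wildcard record already covers it.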
## Exposing Grafana
Grafana is a Helm-managed release, so we expose it by adding ingress config to the values and upgrading:
```yaml
grafana:
  grafana.ini:
    server:
      domain: grafana.spatacoli.xyz
      root_url: http://grafana.spatacoli.xyz/
  ingress:
    enabled: true
    ingressClassName: traefik
    hosts:
      - grafana.spatacoli.xyz
    path: /
```
```bash
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --values monitoring-values.yaml
```
## What's Next
Both services are running on HTTP. In Episode 6 we add free TLS certificates via cert-manager and Let's Encrypt: one wildcard cert that covers everything in the cluster, auto-renewing forever.