I have been experimenting with the Kubernetes Gateway API using kind in Docker Desktop for macOS. Theoretically I should be able to change implementations without having to modify my resources - almost. I started with the awesome Hands-On with the Kubernetes Gateway API: A 30-Minute Tutorial, which uses Gloo Gateway v2 (beta). Under the hood, the edge proxying is done with Envoy, the same proxy Istio uses. I managed to work through the Gloo tutorial, getting by with forwarding my curl requests through a kubectl port-forward, but when I switched over to Istio I hit a new problem.
The problem
Istio requires a load-balancer. Without one, all of your port-forwarded requests will return a 404 as the Istio control plane doesn’t know where to send your traffic.
❯ kubectl get gtw -A
NAMESPACE NAME CLASS ADDRESS PROGRAMMED AGE
istio-ingress gateway istio <pending> False 2d1h
Note the Programmed=False condition, which indicates the Gateway control plane has not been able to make a decision on its status:
❯ kubectl get gtw gateway -n istio-ingress -o yaml | yq .status.conditions\[1]
lastTransitionTime: "2024-06-02T06:09:41Z"
message: 'Assigned to service(s) gateway-istio.istio-ingress.svc.cluster.local:80,
but failed to assign to all requested addresses: address pending for hostname
"gateway-istio.istio-ingress.svc.cluster.local"'
observedGeneration: 1
reason: AddressNotAssigned
status: "False"
type: Programmed
Any attempt to change the Service resource to type: ClusterIP would just get reverted by istiod - it really wants to use type: LoadBalancer!
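For reference, the kind of change istiod kept undoing looks roughly like this; the Service name comes from the Gateway status above, and the commands are a sketch rather than a transcript of my session:
❯ kubectl -n istio-ingress patch svc gateway-istio --type merge -p '{"spec":{"type":"ClusterIP"}}'
❯ kubectl -n istio-ingress get svc gateway-istio -o jsonpath='{.spec.type}'
# istiod reconciles the Service and the type is back to LoadBalancer moments later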
NOTE: Docker Desktop for Mac runs its containers inside a lightweight Linux VM (HyperKit or, on newer releases, Apple's Virtualization framework) and does not expose the VM's network interfaces to the macOS host, so container IPs are not directly reachable from the Mac.
The setup
kind allows you to create a two-node cluster with this configuration when you initialise the cluster:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
❯ kind create cluster --config=cluster.yaml
❯ kind get clusters
kind
❯ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:59903
CoreDNS is running at https://127.0.0.1:59903/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
❯ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 2d21h v1.30.0
kind-worker Ready <none> 2d21h v1.30.0
The rest of the setup can be found in the Gloo tutorial mentioned above, but it's basically a Gateway with an HTTP listener on port 80 and an HTTPRoute routing requests to a httpbin Service on port 8000.
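For context, the resources look roughly like the following. The Gateway's name, namespace and class come from the kubectl output below; the HTTPRoute's name, namespace and the /api/httpbin prefix rewrite are assumptions reconstructed from the curl requests later in this post, so treat it as a sketch rather than the tutorial's exact manifests.
# Sketch of the Gateway and HTTPRoute; route names and rewrite are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin            # assumed name
  namespace: httpbin       # assumed namespace
spec:
  parentRefs:
  - name: gateway
    namespace: istio-ingress
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/httpbin
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: httpbin
      port: 8000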
The solution
I found a Reddit post that referred to Docker Mac Net Connect - "Connect directly to Docker-for-Mac containers via IP address". It's a pretty awesome idea - you create a utun virtual network interface on the macOS host which routes traffic over a lightweight WireGuard tunnel to the kernel running in the VM.
As per their instructions, installation was simple:
# Install via Homebrew
$ brew install chipmk/tap/docker-mac-net-connect
# Run the service and register it to launch at boot
$ sudo brew services start chipmk/tap/docker-mac-net-connect
EDIT: I found that after a Docker Desktop upgrade, the docker.sock file seems to have moved to a user-specific socket at ~/.docker/run/docker.sock, but the docker-mac-net-connect tool is still trying to use unix://var/run/docker.sock. You can enable the system-wide socket under Settings -> Advanced.
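A quick way to check which of the two sockets actually exists on your machine (one or the other may be missing depending on that setting):
❯ ls -l /var/run/docker.sock ~/.docker/run/docker.sock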
Once installed, we can see it is running the daemon process on the host:
❯ sudo brew services list
Name Status User File
docker-mac-net-connect started root /Library/LaunchDaemons/homebrew.mxcl.docker-mac-net-connect.plist
❯ sudo brew services info docker-mac-net-connect
docker-mac-net-connect (homebrew.mxcl.docker-mac-net-connect)
Running: ✔
Loaded: ✔
Schedulable: ✘
User: root
PID: 3145
❯ ps aux | grep 3145
root 3145 0.0 0.1 35504240 9048 ?? Ss 3:39pm 0:00.09 /usr/local/opt/docker-mac-net-connect/bin/docker-mac-net-connect
Currently the project does not have a config file, but it's easy to see that it created a new utun5 network interface and that the Docker networks are now routed to it automatically:
❯ netstat -rnf inet | grep 10.33.33.1
10.33.33.2 10.33.33.1 UH utun5
❯ netstat -rnf inet | grep utun5
10.33.33.2 10.33.33.1 UH utun5
172.17 utun5 USc utun5
172.18 utun5 USc utun5
All that remains is to install MetalLB and configure it to act as a load balancer.
❯ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/servicel2statuses.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/metallb-webhook-cert created
service/metallb-webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
We can get the worker node’s IP address with a simple docker command:
❯ docker exec -ti kind-worker ip addr show dev eth0 scope global
31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fc00:f853:ccd:e793::2/64 scope global nodad
valid_lft forever preferred_lft forever
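Alternatively, the same address can be read straight from the container metadata - the Docker network that kind creates is simply called kind:
❯ docker inspect kind-worker --format '{{.NetworkSettings.Networks.kind.IPAddress}}'
172.18.0.2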
Using that, we configure an IPAddressPool resource for MetalLB so it knows to start listening on that address:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-worker
  namespace: metallb-system
spec:
  addresses:
  - 172.18.0.2/32
  autoAssign: true
  avoidBuggyIPs: false
Copy and paste it to kubectl on stdin to apply it:
❯ pbpaste | kubectl apply -f -
ipaddresspool.metallb.io/kind-worker created
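With the pool in place, MetalLB should assign that address to the gateway's LoadBalancer Service almost immediately. A quick way to confirm (the Service name comes from the Gateway status shown earlier):
❯ kubectl -n istio-ingress get svc gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.18.0.2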
Now we can see the Gateway resource has an address assigned and its Programmed condition is True:
❯ kubectl get gtw -A
NAMESPACE NAME CLASS ADDRESS PROGRAMMED AGE
istio-ingress gateway istio 172.18.0.2 True 2d1h
❯ kubectl get gtw gateway -n istio-ingress -o yaml | yq .status.addresses,.status.conditions\[1]
- type: IPAddress
  value: 172.18.0.2
lastTransitionTime: "2024-06-04T06:13:13Z"
message: Resource programmed, assigned to service(s) gateway-istio.istio-ingress.svc.cluster.local:80
observedGeneration: 1
reason: Programmed
status: "True"
type: Programmed
Finally we can ping the worker node’s IP address and make HTTP requests to our pods via the Gateway without needing to port-forward anything.
❯ ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=63 time=3.833 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=63 time=0.885 ms
^C
--- 172.18.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.885/2.359/3.833/1.474 ms
❯ curl -vs -H "Host: api.example.com" http://172.18.0.2/api/httpbin/get
* Trying 172.18.0.2:80...
* Connected to 172.18.0.2 (172.18.0.2) port 80
> GET /api/httpbin/get HTTP/1.1
> Host: api.example.com
> User-Agent: curl/8.4.0
> Accept: application/json, */*
>
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Tue, 04 Jun 2024 06:15:52 GMT
< content-type: application/json
< content-length: 1378
< access-control-allow-origin: *
< access-control-allow-credentials: true
< x-envoy-upstream-service-time: 38
<
* Connection #0 to host 172.18.0.2 left intact
{
"args": {
},
"headers": {
"Accept": "application/json, */*",
"Authorization": "Bearer my-api-key",
"Host": "api.example.com",
"User-Agent": "curl/8.4.0",
"X-Envoy-Attempt-Count": "1",
"X-Envoy-Decorator-Operation": "httpbin.httpbin.svc.cluster.local:8000/*",
"X-Envoy-Internal": "true",
"X-Envoy-Original-Path": "/api/httpbin/get",
"X-Envoy-Peer-Metadata": "ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKHQoMSU5TVEFOQ0VfSVBTEg0aCzEwLjI0NC4xLjE1ChkKDUlTVElPX1ZFUlNJT04SCBoGMS4yMi4wCvABCgZMQUJFTFMS5QEq4gEKMwomZ2F0ZXdheS5uZXR3b3JraW5nLms4cy5pby9nYXRld2F5LW5hbWUSCRoHZ2F0ZXdheQoiChVpc3Rpby5pby9nYXRld2F5LW5hbWUSCRoHZ2F0ZXdheQoyCh9zZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1uYW1lEg8aDWdhdGV3YXktaXN0aW8KLwojc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtcmV2aXNpb24SCBoGbGF0ZXN0CiIKF3NpZGVjYXIuaXN0aW8uaW8vaW5qZWN0EgcaBWZhbHNlChoKB01FU0hfSUQSDxoNY2x1c3Rlci5sb2NhbAonCgROQU1FEh8aHWdhdGV3YXktaXN0aW8tNTdmOGI0NGRmLXNzMm5iChwKCU5BTUVTUEFDRRIPGg1pc3Rpby1pbmdyZXNzClcKBU9XTkVSEk4aTGt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9pc3Rpby1pbmdyZXNzL2RlcGxveW1lbnRzL2dhdGV3YXktaXN0aW8KIAoNV09SS0xPQURfTkFNRRIPGg1nYXRld2F5LWlzdGlv",
"X-Envoy-Peer-Metadata-Id": "router~10.244.1.15~gateway-istio-57f8b44df-ss2nb.istio-ingress~istio-ingress.svc.cluster.local"
},
"origin": "10.244.1.1",
"url": "http://api.example.com/get"
}
UPDATE - 2024-06-18: After a reboot, I found my kind clusters came back up on different IP addresses, so the routing was not working any more. A little bit of googling led me to a Stack Overflow post explaining how to detach and re-attach a container to a Docker "network" - in this case the "network" is called kind.
❯ docker network disconnect kind kind-control-plane
❯ docker network connect --ip=172.18.0.2 kind kind-control-plane
❯ docker inspect kind-control-plane --format '{{.NetworkSettings.Networks.kind.IPAddress}}'
172.18.0.2
This causes the Kubernetes API server pod to fail, but a restart of the container fixes that.
❯ docker restart kind-control-plane
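After the restart, it's worth confirming the control plane is reachable again:
❯ kubectl get nodes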
I could also have simply updated MetalLB's IPAddressPool to use the container's new IP, as below, but that would mean changing the address in the commands saved in my shell history too.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-worker
  namespace: metallb-system
spec:
  addresses:
  - 172.18.0.5/32 # updated to container's new IP
  autoAssign: true
  avoidBuggyIPs: false
Unfortunately the method they describe to verify the container's IP address does not show it correctly - the container is attached to the user-defined kind network, so the top-level .NetworkSettings.IPAddress field comes back empty and you need the network-scoped expression used above instead - but at least I fixed the routing problem.
❯ docker inspect kind-control-plane --format '{{.NetworkSettings.IPAddress}}'