With Kubernetes (K8s) there's no need to `ssh user@host` anymore, since everything runs as containers. Still, there are occasions when I need shell access to a container to do some troubleshooting.
With Docker I can do:

```shell
docker exec -ti <container_id> /bin/bash
```
It's quite similar in K8s, except that `kubectl exec` targets a pod rather than a container:

```shell
kubectl exec -ti <pod_name> -- /bin/bash
```

However, in K8s pod names carry random suffixes, so I need to look up the pod name first:
```shell
kubectl get pods
```
Then I can grab the pod name and run the `kubectl exec` command. This is hard to automate, because picking out the expected pod name with `grep` and `awk` can fail if the matching condition is too strict.
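For illustration, this is the kind of fragile pipeline I mean (the pod names here are hypothetical):

```shell
# Scrape the human-readable output of `kubectl get pods` with grep/awk.
# This breaks if another pod's name also contains "my-app", or if the
# column layout of the output ever changes between kubectl versions.
kubectl get pods | grep my-app | awk '{print $1}' | head -n 1
```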
Given a deployment like this:

```yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app
    ...
```
the K8s way to query for the pod is:

```shell
kubectl get pods --selector=app=my-app -o jsonpath='{.items[0].metadata.name}'
```
A chained one-liner could be:

```shell
kubectl exec -ti $(kubectl get pods --selector=app=my-app -o jsonpath='{.items[0].metadata.name}') -- /bin/bash
```
This doesn't check for errors, e.g. when no pod matching `app=my-app` is found, but a better script can easily be crafted from here.
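One possible sketch of such a script, assuming bash; the function names and error handling here are my own, not from the post:

```shell
#!/usr/bin/env bash

# Print the name of the first pod matching a label selector,
# or nothing at all when no pod matches.
pod_for_selector() {
  kubectl get pods --selector="$1" \
    -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true
}

# Open an interactive shell in the first matching pod,
# failing loudly when no pod is found.
shell_into() {
  local pod
  pod="$(pod_for_selector "$1")"
  if [ -z "$pod" ]; then
    echo "no pod found for selector '$1'" >&2
    return 1
  fi
  kubectl exec -ti "$pod" -- /bin/bash
}

# usage: shell_into app=my-app
```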
🙂