The other day, I was playing with ArgoCD application sets and by mistake renamed an application owned by its application set. What's worse, the application was istio. By default, an application set deletes the old application and creates a new one with the new name, so the istio application was being deleted.
I thought it would be fine: even though losing istio meant I would shortly lose access to the ArgoCD UI, ArgoCD itself was still running, so it should bring istio back quickly. That didn't happen.
I took a closer look and noticed that the new istio application couldn't be synced by ArgoCD because the istio-system namespace was still stuck in a pending-deletion (Terminating) state. In my experience, when a namespace in a Kubernetes cluster gets stuck and can't be deleted, it means some resource within the namespace can't be deleted. I just needed to find out which one.
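If you want to confirm that the namespace really is stuck and see why (not a step I describe above, just a quick sanity check), kubectl can show the Terminating status and the namespace's conditions. A minimal sketch using istio-system:

kubectl get namespace istio-system   # STATUS should show Terminating
# the conditions usually name the resources or finalizers still holding the namespace
kubectl get namespace istio-system \
  -o jsonpath='{range .status.conditions[*]}{.type}: {.message}{"\n"}{end}'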
I found the one-liner below, which lists all namespaced resources in a namespace, very helpful:
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>

# --verbs=list will filter out resources that you can't `get`
# --namespaced is the default now so it can be omitted, but good to know
# xargs -n 1 will run the `kubectl get` command once for each resource type found
# --ignore-not-found will skip any `resource not found` error message
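That prints every surviving object in the namespace. If the namespace is busy, a variant like the one below (my own tweak, assuming jq is installed) narrows the output to objects that are actually stuck mid-deletion, i.e. those with metadata.deletionTimestamp set:

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -I{} kubectl get {} -n <namespace> -o json --ignore-not-found \
  | jq -r '.items[]? | select(.metadata.deletionTimestamp != null) | "\(.kind)/\(.metadata.name)"'
# each `kubectl get ... -o json` emits a List; jq keeps only the items that carry
# a deletionTimestamp, i.e. objects the API server is still waiting to delete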
With that one-liner I found a custom resource stuck in the namespace, deleted it, and the crisis was averted 🙂
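In my case a plain delete was enough. If the stuck object itself refuses to go away, it is usually being held by a finalizer; a rough sketch of inspecting and clearing one (the <kind> and <name> placeholders below are hypothetical) looks like this:

# see which finalizers are blocking deletion
kubectl get <kind> <name> -n istio-system -o jsonpath='{.metadata.finalizers}'

# last resort: remove the finalizers so the object (and the namespace) can finish deleting
kubectl patch <kind> <name> -n istio-system --type=merge -p '{"metadata":{"finalizers":null}}'

Clearing finalizers skips whatever cleanup the owning controller was supposed to perform, so treat it as a last resort rather than a routine fix.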