How to Use Pod Anti-Affinity in Kubernetes


By default, the Kubernetes scheduler tries to spread the pods of a ReplicaSet evenly across all nodes (assuming no taints get in the way, of course). So why, or when, do we need pod anti-affinity? One scenario I can think of is this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
      domain: example.com
...

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
      domain: hello.com
...

Without any advanced tuning such as pod anti-affinity, the replicas may end up scheduled like this:

node1:
  - deploy1-replica1
  - deploy2-replica1
node2:
  - deploy1-replica2
  - deploy2-replica2

To be fair, this is usually OK. But if we don't want deploy1 and deploy2 to share the same node, it's time to use pod anti-affinity:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
      domain: example.com
  template:
    metadata:
      labels:
        app: wordpress
        domain: example.com
    spec:
      affinity:
        podAntiAffinity:
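          # "preferred" is a soft rule: the scheduler tries to honor it,
          # but may still co-locate pods if no conflict-free node exists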
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - wordpress
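                # one topology domain per node; topology.kubernetes.io/zone
                # would spread across zones instead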
                topologyKey: kubernetes.io/hostname
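              # weight (1-100): how strongly this preference counts when
              # the scheduler scores candidate nodes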
              weight: 100
...
# same goes for deploy2

Now all replicas carrying the app: wordpress label will be spread across different nodes (given enough nodes), for example:

node1:
  - deploy1-replica1
node2:
  - deploy2-replica1
node3:
  - deploy1-replica2
node4:
  - deploy2-replica2
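
You can check where the pods actually landed with kubectl get pods -o wide, which prints the node each pod runs on.

Keep in mind that preferredDuringSchedulingIgnoredDuringExecution is only a soft rule: with fewer nodes than replicas, the scheduler will still co-locate pods rather than leave them unschedulable. If sharing a node must never happen, there is a hard variant. A minimal sketch using the same labels as above (note there is no weight here, and the podAffinityTerm fields move up one level):

      affinity:
        podAntiAffinity:
          # hard rule: a pod that cannot be placed stays Pending
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - wordpress
              topologyKey: kubernetes.io/hostname

With the hard rule, a pod that cannot find a conflict-free node stays Pending instead of being scheduled anyway, so make sure your cluster has enough nodes before relying on it.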

🙂