In a Kubernetes (K8s) cluster, one or more containers form a pod, and every container in a pod can reach the other containers' ports just like apps running on the same local host. For example:
- pod1
  - nginx1
  - gunicorn1, port: 8000
- pod2
  - nginx2
  - gunicorn2, port: 8000
So nginx1 can reach gunicorn1 at localhost:8000, nginx2 can reach gunicorn2 the same way, and so on. However, nginx1 can't see gunicorn2 via localhost.
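A minimal sketch of one such pod (image names are assumptions; the point is that both containers share the pod's network namespace, so they see each other on localhost):

```yaml
# Illustrative only: two containers in one pod share a network namespace,
# so nginx1 can reach gunicorn1 on localhost:8000.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: nginx1
      image: nginx:1.15
      ports:
        - containerPort: 80
    - name: gunicorn1
      image: my-django-app:latest  # hypothetical app image
      ports:
        - containerPort: 8000
```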
When it comes to a cache like Redis, I would like a shared Redis container (or a cluster later) so that less memory is wasted and the cache doesn't need to be warmed separately for each pod. The structure will look like:
- pod1
  - nginx1
  - gunicorn1
- pod2
  - nginx2
  - gunicorn2
- pod3
  - redis
To let both gunicorns access Redis, Redis needs to be exposed as a service:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis
    role: cache
spec:
  type: NodePort
  ports:
    - port: 6379
  selector:
    app: redis
    role: cache
```
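K8s injects environment variables for each service into pods started after the service exists: the service name is uppercased, dashes become underscores, and `_SERVICE_HOST`/`_SERVICE_PORT` suffixes are appended. A small sketch of that naming convention:

```python
# Sketch of how Kubernetes derives a service's env var names from its name:
# uppercase it, replace dashes with underscores, append the suffixes.
def service_env_names(service_name):
    base = service_name.upper().replace("-", "_")
    return base + "_SERVICE_HOST", base + "_SERVICE_PORT"

host_var, port_var = service_env_names("redis-svc")
print(host_var)  # REDIS_SVC_SERVICE_HOST
print(port_var)  # REDIS_SVC_SERVICE_PORT
```

This is why the Django settings below look up `REDIS_SVC_SERVICE_HOST` for a service named `redis-svc`.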
And the deployment to support the service looks like:
```yaml
---
apiVersion: apps/v1  # extensions/v1beta1 has been removed in current K8s
kind: Deployment
metadata:
  name: redis-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: cache
  template:
    metadata:
      labels:
        app: redis
        role: cache
    spec:
      containers:
        - name: redis
          image: redis:4
          resources:
            requests:
              memory: 200Mi
          ports:
            - containerPort: 6379
```
Then, in the settings.py of the Django app running in the gunicorn container, Redis can be reached via environment variables that K8s sets up:
```python
import os

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://{0}:{1}/1".format(
            os.environ["REDIS_SVC_SERVICE_HOST"],
            os.environ["REDIS_SVC_SERVICE_PORT"],
        ),
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}
```
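The `LOCATION` string can also be built a little more defensively. A sketch, where the fallback to the `redis-svc` DNS name is an assumption that cluster DNS is enabled (the env vars only exist in pods created after the service):

```python
import os

def redis_location(db=1):
    # Prefer the env vars K8s injects for the redis-svc service; fall back
    # to the service DNS name, which resolves when cluster DNS is enabled.
    host = os.environ.get("REDIS_SVC_SERVICE_HOST", "redis-svc")
    port = os.environ.get("REDIS_SVC_SERVICE_PORT", "6379")
    return "redis://{0}:{1}/{2}".format(host, port, db)
```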
🙂