Some Field Tests with Google Cloud Run


Recently I got a chance to migrate some on-premises applications to GCP (Google Cloud Platform) and run them in containers via Cloud Run. Here are some pros and cons of the fully managed Cloud Run, as I see them.

Pros:

  • Very easy to get started. As long as the app can run in a container, it can be deployed to CR (Cloud Run) quickly, as shown in the sketch after this list. A CR service can also have its endpoint exposed to the public (maybe not something you'd do for production), so it can start handling traffic almost immediately.
  • The UI for CR is sufficient for daily operations, such as setting/editing configurations
  • The instance count can scale down to 0 when idle, which saves money
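
To make the "easy to get started" point concrete, here is a minimal sketch (just an illustration, not the actual app from this migration) of the only contract a containerized app really has to honor: listen on the PORT environment variable that Cloud Run injects. Build it into an image, push it to Artifact Registry, and gcloud run deploy does the rest.

    // main.go: a minimal HTTP server that satisfies Cloud Run's container contract.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from Cloud Run")
        })

        // Cloud Run injects PORT; fall back to 8080 for local runs.
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080"
        }
        log.Printf("listening on :%s", port)
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }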

Cons:

  • Not quite mature yet
  • There isn't much to customize (which is a pro for some use cases)
  • Mounting an NFS file system is a fully manual affair, i.e. running the mount command yourself in the container (see the sketch after this list)
  • Only a few container registries are allowed, e.g. Google Artifact Registry; public ones can't be used
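
Regarding the NFS point above: on fully managed CR you end up doing something like the following at container startup, before the app begins serving. This is only a sketch; the NFS server address, export path, and mount point are made-up placeholders, and whether the mount is even permitted depends on the execution environment.

    // mountShare runs the mount command by hand, which is what "full manual mode"
    // amounts to here. The server, export path and mount point are hypothetical.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func mountShare() error {
        const mountPoint = "/mnt/shared" // hypothetical mount point
        if err := os.MkdirAll(mountPoint, 0o755); err != nil {
            return err
        }
        // Equivalent to running: mount -t nfs 10.0.0.5:/export /mnt/shared
        cmd := exec.Command("mount", "-t", "nfs", "10.0.0.5:/export", mountPoint)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := mountShare(); err != nil {
            log.Fatalf("mount failed: %v", err)
        }
        log.Println("share mounted, starting the app...")
    }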

Some gotchas I ran into:

  • There’s only a TCP readiness probe. Cloud Run for Anthos has probe options similar to Kubernetes.
  • Autoscaling didn’t work as expected. In some cases the instance count exceeded the configured maximum, while in other cases it was capped before reaching that maximum
  • Containers seem to be run as a special user. I noticed this when a container of mine that doesn’t run as root failed to start in CR. Probably a pre-built /etc/passwd is mounted into every container (see the sketch below)
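
For that last gotcha, here is a small debugging sketch (not something from the original migration) that can be baked into an image to see which user the container actually runs as, and whether the /etc/passwd that ends up in place knows about that UID.

    // passwdcheck.go: prints the UID/GID the container really runs as and checks
    // whether /etc/passwd (whatever ends up mounted there) has an entry for it.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        uid := fmt.Sprintf("%d", os.Getuid())
        fmt.Printf("running as uid=%s gid=%d\n", uid, os.Getgid())

        f, err := os.Open("/etc/passwd")
        if err != nil {
            fmt.Fprintf(os.Stderr, "cannot read /etc/passwd: %v\n", err)
            os.Exit(1)
        }
        defer f.Close()

        // passwd format: name:password:uid:gid:gecos:home:shell
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Split(scanner.Text(), ":")
            if len(fields) >= 3 && fields[2] == uid {
                fmt.Printf("found entry: %s\n", scanner.Text())
                return
            }
        }
        fmt.Printf("no /etc/passwd entry for uid %s\n", uid)
    }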

Verdict: Cloud Run is good for running simple APIs, but for someone already comfortable with Kubernetes (EKS, GKE, etc.), CR is probably less attractive. 🙂