Continuing with the Introduction to Serverless topic, we will now analyse two services that allow containers to be deployed in serverless environments, starting with a brief description and background of each before focusing on some of their most important features.

AWS Fargate

AWS Fargate was launched in 2017 to provide a more managed experience on top of AWS ECS as it existed at the time. AWS ECS deploys containers, defined as tasks, into a previously created cluster. Its original launch type ran tasks only on EC2 instances, so it could not be considered a serverless service per se.

To use the Fargate launch type in ECS, a cluster to operate on must first be created. In this case, the “Networking only” cluster template, managed by Fargate, can be used.

From there, it is only necessary to define the Docker image to be deployed in a task definition, along with the network configuration, the task execution role and a few other settings (some of them optional).
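
As a minimal sketch (the family name, CPU/memory values, role ARN and container image below are placeholders, not values from this post), such a task definition could be registered through the AWS CLI:

# Register a Fargate-compatible task definition (all values are placeholders)
$ aws ecs register-task-definition \
    --family my-task \
    --requires-compatibilities FARGATE \
    --network-mode awsvpc \
    --cpu 256 --memory 512 \
    --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
    --container-definitions '[{"name":"web","image":"nginx:latest","portMappings":[{"containerPort":80}]}]'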

AWS ECS services complement the cluster and task concepts: a service defines how many copies or replicas of a task we want to keep running in a cluster. Optionally, traffic can be distributed across the containers of the service through a load balancer. AWS Fargate manages all the task scheduling and placement, as well as the integration with the load balancer.
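
A sketch of what that looks like with the CLI (cluster, service, task definition, subnet, security group and target group values are all placeholders, and the target group is assumed to exist already):

# Create a service with two replicas of the task behind an existing ALB target group
$ aws ecs create-service \
    --cluster MyCluster \
    --service-name my-service \
    --task-definition my-task \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123abcd],securityGroups=[sg-0123abcd],assignPublicIp=ENABLED}" \
    --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-tg/abcdef1234567890,containerName=web,containerPort=80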

Finally, and also optionally, auto-scaling can be configured for the tasks of our service; a single AWS ECS service can scale from 1 to 1,000 tasks.
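
This relies on the Application Auto Scaling API. A hedged sketch of a target-tracking configuration (names and limits are placeholders):

# Register the service's desired count as a scalable target (here 1 to 10 tasks)
$ aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/MyCluster/my-service \
    --min-capacity 1 --max-capacity 10

# Add a target-tracking policy that keeps average CPU around 50%
$ aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/MyCluster/my-service \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue":50.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'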

The minimum necessary resources can be created through the AWS CLI (the cluster, task definition and subnet values below are placeholders):

# Create a Fargate-managed cluster and launch a task in it
$ aws ecs create-cluster --cluster-name MyCluster
$ aws ecs run-task --launch-type FARGATE --cluster MyCluster \
    --task-definition my-task \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123abcd],assignPublicIp=ENABLED}"

Google Cloud Run

Google Cloud Run is a compute service that appeared as a closed beta in mid-2018; its beta was opened to the public at Google Cloud Next 2019 in San Francisco.

This service supports two deployment modes:

  • A fully managed Google Cloud environment, which is the serverless option.
  • Deployment onto Google Kubernetes Engine clusters (Cloud Run on GKE).

With Cloud Run, deploying virtually any Docker container is extremely simple (the main requirements are that the container be stateless and listen for HTTP requests on the port provided in the PORT environment variable), with scalability, availability, networking and access control (if required) configured implicitly.

In this way, Cloud Run in a sense renews the App Engine “flexible” environment, in which any container defined through a Dockerfile could be deployed. But whereas Google App Engine Flexible was never a first-class service to which Google gave much prominence, Cloud Run is top tier.

Knative

One of the main reasons for Google’s strong commitment to this service is the technology on which it is based. Whereas in App Engine Flexible the way Google managed our services was largely opaque, with Cloud Run the opposite is true: the service is based entirely on Knative, an open-source platform built on Kubernetes and Istio for deploying and serving our containers.
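
To illustrate what this means in practice, the sketch below shows a minimal Knative Service manifest that could be applied to any Kubernetes cluster with Knative Serving installed (the service name and image are placeholders, and the API version depends on the Knative release):

# Apply a minimal Knative Service (placeholder names)
$ kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello
EOF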

With Cloud Run the focus moves once again to the needs of the business, since Google manages the infrastructure supporting our applications. Of course, with Cloud Run we only pay for what is used.

Google promises scaling from 0 to 1,000 container instances according to demand. As for resources, each instance gets one vCPU (virtual CPU) and a configurable amount of memory.
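
As an illustrative example (the service name and values are placeholders, and the exact flags may change while the product is in beta), memory and request concurrency can be adjusted per service from the CLI:

$ gcloud beta run services update my-service --memory 512Mi --concurrency 80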

Configuration is extremely simple: we do not even have to create a VPC in our project to start creating instances of a Cloud Run service. Likewise, a URL is provided automatically to access the service, with a fully managed SSL certificate.

In terms of security, all containers deployed in Cloud Run are strictly isolated using gVisor technology. In addition, access to Cloud Run services is very easily configured by indicating which identities may invoke the service, or simply by allowing open access, for example when exposing a public API.
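
For instance (service name, region and identities are placeholders), access can be granted with a single IAM binding on the service:

# Open access: anyone can invoke the service
$ gcloud beta run services add-iam-policy-binding my-service \
    --region us-central1 --member allUsers --role roles/run.invoker

# Restricted access: only the given service account can invoke it
$ gcloud beta run services add-iam-policy-binding my-service \
    --region us-central1 \
    --member serviceAccount:caller@my-project.iam.gserviceaccount.com \
    --role roles/run.invoker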

Google wants to be the reference in running and deploying containers, not only through technologies at other service levels, such as the renowned Kubernetes Engine, but also by going one step further and making Dockerised workloads easy to use and deploy across its portfolio. The experience with Cloud Run is almost perfect thanks to its integration with other services and its ease and immediacy of use.

The minimum necessary resources can be created through the gcloud CLI (the service name, image and region below are placeholders):

$ gcloud beta run deploy my-service --image gcr.io/my-project/hello \
    --region us-central1 --allow-unauthenticated

Benchmark

Here is a quick comparison of the services mentioned:

Comparison between AWS Fargate and Google Cloud Run.

Conclusions

The AWS Fargate service has a longer track record than Google Cloud Run and is therefore more robust and flexible. Furthermore, the two services differ greatly in how service resources can be configured, which is often a determining factor.

However, Cloud Run stands out for two major features: an almost unbeatable ease of use, not only in the basic configuration but also in everything the service sets up implicitly, such as load balancing and auto-scaling; and the fact that the open-source technology on which it is based is compatible with Kubernetes, making any Cloud Run workload deployable in any public cloud.

If you are looking for more flexibility in configuration and resources, then AWS Fargate is the service for you; but if you need the most serverless experience for containers, based on open-source technology and easily portable to another environment, then look no further: Cloud Run fits the bill.

If you are interested in serverless technologies and want to learn more about them, follow our blog because we will continue to write about these technologies.

Image: unsplash | zanilic

Author

  • Sergio Gordillo

    Cloud Architect at Keepler. "Lifelong learner and interested in cloud computing and public cloud technologies. Engineer with extensive experience in backend development and skills in machine learning techniques. Passionate about learning and solving real-world problems. I enjoy collaborative teamwork, sharing knowledge and creating amazing products."