
Azure vs AWS vs GCP Comparison: Which Works Best for Serverless Architecture?


You have probably heard about serverless computing or Functions-as-a-Service (FaaS) offerings from multiple cloud providers, such as AWS Lambda, Azure Functions, and Google Cloud Functions. While introduced not so long ago (AWS Lambda was released in 2014, Azure Functions launched in 2016, and Google Cloud Functions reached general availability in 2018), these services are steadily growing in popularity. The reason is simple: they remove a ton of overhead and cost when running your workloads in the cloud.

According to a GitLab survey on AWS Lambda usage from 2019, 90% of software engineers use FaaS in their production environments, and 53% have worked with Lambda for 1 to 3 years. More importantly, serverless computing works great for both developers and Ops engineers, and it is used by large corporations, startups, and SMEs alike.

What is serverless computing?

Serverless computing is a cloud resource allocation model where the cloud vendor provides backend services on an as-used basis, rather than as prepaid bandwidth or server capacity. In simple terms, you don’t pay for idling virtual servers. Instead, you upload the code that executes some function and configure various invocation scenarios (webhooks, API calls, event triggers), so that you pay per invocation, not per instance.
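To make this concrete, here is a minimal sketch of such a function, written against AWS Lambda's Node.js handler convention; the handler name and response shape are our own illustrative choices, not a reference implementation.

```typescript
// handler.ts - a minimal AWS Lambda-style handler (illustrative names only).
// The platform spins up an execution environment, calls this function once per
// invocation, and bills only for the time it actually runs.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```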

All the IaaS and PaaS components of the cloud vendor’s offering run under the hood, so there is no need to handle their scaling and configuration. This sounds like a great solution, allowing you to avoid hiring expensive DevOps engineers to manage your cloud infrastructure. Any developer can use the AWS Lambda, Azure Functions, or Google Cloud Functions web interface to configure the service and run your apps in FaaS, right?

Wrong.

What some software engineers and business executives fail to grasp is that FaaS is not a replacement for standard cloud computing instances. It is the glue that lets you build event-driven workflows, but it cannot replace standard cloud services: file storage, databases, Kubernetes clusters, monitoring and security features, etc.

The reason is simple. FaaS is not meant to replace common cloud services and is not built to run 24/7. Functions are executed as invocations: they run when they are needed and are shut down immediately afterwards to conserve resources. The key difference from standard cloud instances is that a single instance can handle multiple connections and perform multiple tasks simultaneously, while an invocation is a single function call, limited in both size and execution time.

Serverless computing use case scenarios

If serverless computing does not fit every use case, where does it fit in? It is a great choice for infrequent, stateless, asynchronous, highly dynamic, and concurrent workloads with sporadic demand and unpredictable resource usage. Think user registration or authentication, financial transactions, etc.

If your app runs as a bundle of microservices, having an instance (or a single container) running a registration module 24/7 is wasteful, because new registrations do not happen non-stop. It is much more economical to invoke the required function when a new user wants to register and pay only for the time the service was actually needed, not for 24/7 uptime.

Here are other notable examples of FaaS usage:

  • Business logic. When your app uses a variety of cloud services and features, they can be invoked upon request, via webhooks, API calls, and other methods. This is especially useful for running microservice-based applications.
  • CI/CD pipelines. When your software delivery process automatically provisions the required environments only when and where they are needed, you save considerably on the software development pipeline, as you don’t run idle servers.
  • Stream data processing. Leveraging FaaS in complex corporate ecosystems with infinite message queues helps to streamline data processing at any scale.
  • Batch jobs and scheduled tasks. When you have scheduled jobs and tasks that need to be executed in a batch, serverless computing is a godsend. Invoke infrastructure that enables intense parallel computation, network access, or I/O, perform the scheduled tasks, and shut everything down to conserve resources.
  • HTTP REST APIs for web apps. When your app needs to handle infrequent requests, such as the aforementioned user registrations and authentications (see the registration sketch after this list).
  • Chatbots. These should always be operational and meet a wildly fluctuating demand, so FaaS is the obvious choice.
  • Mobile backends. Why run a mobile app backend non-stop if you can execute all the required business logic when users demand it? Build the required workloads on REST APIs and invoke them upon request, rather than running servers 24/7 just in case.
  • IoT sensor input management. When an IoT sensor signals some disruption, the infrastructure required to handle the response is invoked, scaled to size, and then shut down.
  • Multimedia processing. A set of functions required to transform the multimedia object in the needed format starts once a new file is uploaded.
  • Database changes. Auditing the changes made to a database to ensure quality standards are met.
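To make the REST API and registration use cases above concrete, here is a hedged sketch of a registration endpoint as a function. The handler shape follows AWS Lambda's Node.js convention; the table name, payload fields, and the lack of real password handling are illustrative assumptions, not a reference implementation.

```typescript
// register.ts - an illustrative user-registration function (AWS Lambda-style handler).
// Invoked only when someone actually registers, instead of a 24/7 registration service.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const db = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const { email, password } = JSON.parse(event.body ?? "{}");
  if (!email || !password) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: "email and password are required" }),
    };
  }
  // "Users" is a hypothetical table name; hash the password properly in real code.
  await db.send(
    new PutCommand({ TableName: "Users", Item: { email, createdAt: Date.now() } })
  );
  return { statusCode: 201, body: JSON.stringify({ registered: email }) };
};
```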

In short, serverless computing is a perfect choice for irregular jobs with fluctuating resource consumption. But what about use cases when serverless doesn’t work?

When should you avoid using serverless architecture?

Quite contrary to the previous point, serverless computing is not viable for stateful, long-running, resource-intensive jobs, like Big Data analytics, Machine Learning model training, etc. For example, when a group of researchers at Berkeley used AWS Lambda for ML model training and making predictions, it cost them 57 times more than running the same workload on EC2. It is like wearing shoes on your hands instead of your feet.

The cases when you should avoid using serverless computing:

  • Long-running and high-volume tasks. Running your apps on FaaS might seem like a great idea until they grow rapidly in popularity. Once the user numbers kick in, paying per invocation becomes very expensive.
  • Resource-intensive jobs. Training a Machine Learning model is best done using conventional infrastructure, not FaaS.
  • When monitoring is essential. With FaaS you have very limited observability, which makes in-depth monitoring of your operations nearly impossible. Every FaaS platform provides some basic monitoring capabilities, but they are quite rudimentary and barely configurable. You will need tools like Dashbird or AWS X-Ray to gain at least some visibility.
  • When latency matters. FaaS usually means cold starts, so you need to account for them. The more complex the workflow, the bigger the code size, and the larger the functions, the longer they take to invoke. Python and Go functions launch the quickest, while C# and Java take the longest. If you need lightning-fast execution, you either need to keep your functions warm by invoking them at regular intervals (see the warm-up sketch after this list) or avoid serverless architecture in the first place.
  • Vendor lock-in. When you go for AWS Lambda, you have a very limited range of database options (namely, DynamoDB or Aurora) to work with. If you wish to use RDS or ElastiCache, you need to connect them via a VPC. Most importantly, should you decide to move to another cloud vendor in the future, you will need to build your functions from scratch using their components. Thus, FaaS equals vendor lock-in.
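Here is a minimal warm-up sketch, assuming a cron-style scheduled rule pings the function with a marker field we invented; the flag name and response codes are illustrative.

```typescript
// warmup.ts - keeping a function warm with scheduled pings (illustrative).
// A cron-style rule invokes the function every few minutes; the handler detects the
// ping via a made-up "warmup" flag and returns early, so cold starts stay rare and
// only real requests execute the business logic.
interface IncomingEvent {
  warmup?: boolean;   // set by the hypothetical scheduled ping
  payload?: unknown;  // real request data
}

export const handler = async (event: IncomingEvent) => {
  if (event.warmup) {
    return { statusCode: 204, body: "" }; // container stays warm, nothing else to do
  }
  // ... actual business logic for real invocations goes here ...
  return {
    statusCode: 200,
    body: JSON.stringify({ processed: event.payload ?? null }),
  };
};
```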

To sum it all up, serverless architecture is by no means a “one size fits all” solution. It works great as the glue between multiple components of your software delivery pipeline, handling requests that bounce between various corners of your cloud ecosystem, but FaaS will not replace standard cloud instances in a wide range of cases.

A brief overview of the major serverless market players: AWS Lambda, Google Cloud Functions, Azure Functions

Let’s take a look at what the big three cloud vendors have to offer in terms of FaaS.

AWS Lambda

There is a reason why we decided to talk about AWS Lambda first: it is gradually becoming one of the pivotal services within the AWS infrastructure. One reason developers have fallen in love with it is that it lets you create virtually unlimited scenarios to respond to a plethora of events happening all over your AWS infrastructure; events triggered from the outside can also be handled with AWS Lambda’s help. Basically, every event that occurs can invoke a Lambda function, which, in turn, calls a subsequent function, enabling genuinely complex workflows.
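As a hedged illustration of that chaining, one function can hand work off to the next one using the AWS SDK for JavaScript (v3); the function names and payload here are hypothetical.

```typescript
// chain.ts - one Lambda handing work off to the next step of a workflow (illustrative).
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

export const handler = async (event: { orderId: string }) => {
  // ... do this step's own work with the incoming event ...

  // Hand the result off to a hypothetical downstream function, asynchronously.
  await lambda.send(
    new InvokeCommand({
      FunctionName: "process-payment",  // hypothetical next function in the chain
      InvocationType: "Event",          // fire-and-forget; "RequestResponse" would wait
      Payload: Buffer.from(JSON.stringify({ orderId: event.orderId })),
    })
  );
  return { forwarded: true };
};
```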

Moving on, Lambda is quite diverse in terms of its compatibility with other technologies and frameworks, namely Node.js, Python, Java, C#, and Go. All these runtimes can embed code written in other languages, supplying you with wrappers to run Lisp, Haskell, or even legacy C++ on Lambda.

Another advantage of Lambda is its web interface; it is clean as a whistle, which simplifies the task of configuring any pipeline you might need. No more text files to write configurations in: Lambda gives you several web forms that let you harness the full potential of the Amazon cloud while writing no more than a few lines of code. Nonetheless, bear in mind that testing new functionality can be a tall mountain to climb. Get ready to configure API Gateway accordingly and open specific firewall rules for the new functions to be tested comprehensively.

Lambda lets you build workflows in the “event source to destination” direction while configuring the response scenarios.

User-friendliness is another virtue that AWS Lambda can boast of. As a Lambda user, get used to plenty of explanations and warnings popping up on your screen to keep you from hitting hidden pitfalls. For instance, when creating a function that requires open-source code from an external library, you get a corresponding message in your browser. Before Lambda, programmers were expected to know all of this themselves. Now that you have the information in advance, you save yourself a lot of time and effort.

Yet, let’s acknowledge that Lambda is not the only serverless offering from AWS, as there are plenty of products that can relieve you from the tedious task of managing your servers. Elastic Beanstalk is one of them: you upload your code, and it deploys it to a web server and automatically handles load balancing and scaling. However, the feature that yet again makes Lambda stand out from the crowd is AWS Step Functions, a tool that lets you glue stateful events to stateless Lambda functions.

This feature is crucial and incomparably advantageous for any business logic. Imagine fetching the client’s info from the database every time the page is reloaded. How easy would that be without AWS Step Functions? Plain Lambda would have you doing it every time the customer reloads a page, while Step Functions lets you do it just once, keep the result in the workflow state, and forget about it. Hence, AWS Lambda has quite a lot in store for those who want their business to run smoothly and efficiently.
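Below is a hedged sketch of that idea using the AWS CDK (assuming a recent aws-cdk-lib v2): two stateless Lambda steps glued into a stateful Step Functions workflow. The stack, construct, and asset names are illustrative assumptions.

```typescript
// customer-flow-stack.ts - gluing stateless Lambdas into a stateful Step Functions
// workflow with the AWS CDK (all names, runtimes, and asset paths are illustrative).
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";

export class CustomerFlowStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const fetchCustomer = new lambda.Function(this, "FetchCustomerFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "fetch.handler",
      code: lambda.Code.fromAsset("dist/fetch"),
    });
    const renderPage = new lambda.Function(this, "RenderPageFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "render.handler",
      code: lambda.Code.fromAsset("dist/render"),
    });

    // The workflow keeps the fetched customer data as state between steps,
    // so it does not have to be re-fetched on every page reload.
    const definition = new tasks.LambdaInvoke(this, "FetchCustomer", {
      lambdaFunction: fetchCustomer,
      outputPath: "$.Payload",
    }).next(new tasks.LambdaInvoke(this, "RenderPage", { lambdaFunction: renderPage }));

    new sfn.StateMachine(this, "CustomerFlow", {
      definitionBody: sfn.DefinitionBody.fromChainable(definition),
    });
  }
}
```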

Google Cloud Functions

Google was providing “serverless” capabilities long before FaaS became a thing. With the release of Google App Engine in 2008, Google enabled developers to concentrate on writing code instead of configuring and managing the environments to run it. With Google Cloud Pub/Sub you don’t need to handle message queuing yourself, only to write the code for the event producer and the consumer. Google Firebase is a database with superpowers, which allows you to mix the database layer with JavaScript functionality while delivering the data to your clients. Finally, Google Cloud Functions provides event-driven computing, which blends various GCP products into a powerful and flexible mix.

Firebase takes the idea of serverless to the extreme, handling all server-side business logic, including data storage and authentication. You can literally run an app with a bit of client-side HTML/CSS and Firebase. Firebase code is written in JavaScript, so it will also run locally on Node.js, and there are multiple Node libraries for handling almost any business logic this way. In addition, you will enjoy the benefits of working with isomorphic code, running the same code for the client, the server, and now the database. Your client apps are just other Firebase nodes, so you can write the information to Firebase once, and it will replicate the relevant data (and only the relevant data) to wherever it needs to be.
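As a hedged sketch of that write-once, replicate-everywhere model, here is what it might look like with the modular Firebase JS SDK (v9+); the config values and data paths are placeholders.

```typescript
// presence.ts - write once to Firebase, let it replicate to every subscribed client
// (modular Firebase JS SDK v9+; config values and data paths are placeholders).
import { initializeApp } from "firebase/app";
import { getDatabase, ref, set, onValue } from "firebase/database";

const app = initializeApp({
  apiKey: "placeholder-api-key",
  databaseURL: "https://example-project.firebaseio.com",
});
const db = getDatabase(app);

// Any client that subscribed to the same path receives the update automatically.
onValue(ref(db, "users/42/status"), (snapshot) => {
  console.log("replicated status:", snapshot.val());
});

// Somewhere else (server or another client), the record is written exactly once.
set(ref(db, "users/42/status"), { online: true, lastSeen: Date.now() }).catch(console.error);
```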

Google Cloud Functions allows embedding custom Node.js code throughout the whole GCP ecosystem. However, while GCP at large works with Go, Python, PHP, Java, and C#, Cloud Functions currently supports only Node.js and plain JavaScript; support for other languages has been announced but not implemented yet. In addition, Google Cloud Functions has to go through the REST API even to interact with Google Docs, your code has to be stateless, and every request can only run for a limited time.
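Here is a minimal sketch of such a function using the Functions Framework for Node.js (written in TypeScript, which compiles to the Node.js runtime the service expects); the function name and request payload are illustrative.

```typescript
// index.ts - an HTTP-triggered Google Cloud Function using the Functions Framework
// for Node.js (the function name and request payload are illustrative).
import * as functions from "@google-cloud/functions-framework";

functions.http("resizeRequest", (req, res) => {
  // Stateless by design: everything the invocation needs must arrive in the request,
  // and the work has to finish within the configured time limit.
  const file = (req.body && req.body.file) || "unknown";
  res.status(200).json({ queued: file });
});
```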

Microsoft Azure Functions

Azure Functions is Microsoft’s answer to FaaS offerings from AWS and GCP. One of its biggest applications is working with Office products, which are slowly but steadily moving from desktop apps to the cloud. This way, HTML and web interfaces are not the only way to the Azure cloud — much can be achieved using Excel spreadsheets and Word documents.

Azure Logic Apps provides you with the ability to create workflows using built-in connectors for Azure and third-party apps. Instead of worrying about the semantics and syntax of the code, you fill out the forms that enable you to link stateful code with stateless functions.

Logic Apps supports the same push and pull methods to exchange data between Salesforce, Office 365, Twitter, and other tools. This way, your enterprise IT teams can use Azure Logic Apps to build consistent workflows the way they used to with PowerShell scripts.

Another great addition to this roster is Durable Functions, which allows creating and managing stateful functions, greatly enhancing the range of capabilities for building your event pipelines and workflows in the Azure cloud.
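A hedged sketch of a Durable Functions orchestrator with the durable-functions npm package follows; the activity names and inputs are illustrative.

```typescript
// orchestrator.ts - a stateful workflow with Azure Durable Functions
// (durable-functions npm package; activity names and inputs are illustrative).
import * as df from "durable-functions";

export default df.orchestrator(function* (context) {
  // Local variables survive between activity calls, which is how Durable Functions
  // layers state on top of otherwise stateless Azure Functions.
  const customer = yield context.df.callActivity("FetchCustomer", { id: 42 });
  const page = yield context.df.callActivity("RenderPage", customer);
  return page;
});
```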

Another great feature from Azure is Cosmos DB, a multi-model SQL/NoSQL database. Azure provides Cassandra- and MongoDB-compatible APIs, so you can easily push and pull data to and from your various databases. Cosmos DB works as a central nexus and continuously builds indexes to keep things running smoothly. Should you need to write pure SQL, you can do that too. This way you can consolidate your existing database ecosystem and keep it open to implementing new approaches in the future.
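As a hedged sketch, here is how pushing and querying data through Cosmos DB's SQL API might look with the @azure/cosmos SDK; the endpoint, key, database, container, and item values are placeholders.

```typescript
// cosmos.ts - pushing and querying data through Cosmos DB's SQL API
// (@azure/cosmos SDK; endpoint, key, and all names are placeholders).
import { CosmosClient } from "@azure/cosmos";

async function main(): Promise<void> {
  const client = new CosmosClient({
    endpoint: "https://example-account.documents.azure.com:443/", // placeholder
    key: process.env.COSMOS_KEY ?? "",
  });

  const { database } = await client.databases.createIfNotExists({ id: "appdb" });
  const { container } = await database.containers.createIfNotExists({ id: "users" });

  await container.items.create({ id: "42", name: "Ada", plan: "pro" });

  // Plain SQL over JSON documents, as mentioned above.
  const { resources } = await container.items
    .query("SELECT c.id, c.name FROM c WHERE c.plan = 'pro'")
    .fetchAll();
  console.log(resources);
}

main().catch(console.error);
```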

Comparison: advantages and disadvantages of FaaS offerings from the big three cloud service providers

While each of the big three FaaS offerings has a different set of features, they all share a set of characteristics that can be easily compared: pricing per invocation, supported programming languages, trigger types, concurrency limits, maximum execution time, and so on.

Regarding trigger types, Lambda and Azure Functions work via APIs. With Lambda, you can use API Gateway, event-based triggering from DynamoDB streams, or file-based triggering from Amazon S3. For Azure Functions, you have web API triggering, scheduled invocations, and event-based triggers via Azure Event Hubs or Azure Storage. Google Cloud Functions offers a wide range of trigger types, described in its documentation; the key benefit here is that you can integrate your FaaS with any Google Cloud service and use Cloud Pub/Sub or API callbacks, for instance.
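For illustration, here is a hedged sketch of the file-based trigger mentioned above: a handler invoked by an S3 upload event. The bucket contents and the processing step are placeholders.

```typescript
// on-upload.ts - file-based triggering: a handler invoked by an S3 upload event
// (the bucket contents and the processing step are illustrative).
import { S3Event } from "aws-lambda";

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    // Hand the uploaded file off to whatever processing the workflow needs.
    console.log(`New object uploaded: s3://${bucket}/${key}`);
  }
};
```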

Concerns about using FaaS in real life

While this all looks fine and dandy, the real question is whether it actually works this way in real life. The answer is… not always.

AWS Lambda runs best, being the oldest, most popular, and most polished of them all. When you use Lambda for any of the scenarios it supports, it runs reliably. Building complex scenarios might require additional configuration, though (for example, adding extra connectors when your product uses a less popular database).

Google Cloud Functions presents another level of challenges due to its limited range of application scenarios. Besides, it sometimes experiences deployment and scaling issues, which are quite hard to debug and resolve.

As for Azure, you might find it difficult to believe, but it is Microsoft being Microsoft: Functions sometimes just freeze, not to mention that Azure supports the lowest number of runtimes of the three. Perhaps this was caused by project specifics, yet from our experience Azure’s FaaS felt a bit raw.

Conclusions: Which Cloud Platform Works Best for Serverless Architecture?

As you can see, all three leading cloud providers offer mature and robust serverless computing features. They differ slightly in various aspects, but their core benefit remains the same: they enable developers to build various CI/CD pipelines and leverage the full power of the respective cloud platforms from a convenient FaaS dashboard, without having to mess with provisioning and configuring various IaaS and PaaS modules.

So, which one should you choose? The point is that while you can configure Lambda to invoke Azure services or pull files from GCP storage, it is best to go with the FaaS offering from the cloud vendor you already use. Yes, you will have to rebuild everything from scratch should you ever decide to switch to another provider. However, while you are with AWS, Lambda will provide the widest range of possibilities, and the same goes for the Azure and Google Cloud tools.

We hope this article was informative and useful and helped you to gain a better understanding of the similarities and differences between the serverless computing offerings of Azure, AWS, and Google Cloud. Should you have any additional questions or inquiries, please feel free to contact us, we would be more than happy to provide assistance!
