What is Serverless Computing?

You might wonder about the term “serverless” in serverless computing.

Does that mean there is no server at the backend? Nope! There is a server at the backend in serverless computing, but it runs only on demand.

Let’s dig deeper…


In cloud computing, major service providers like AWS, Azure, and Google Cloud provide serverless offerings: you upload your code, and the cloud provider takes care of running your application infrastructure, which can scale up to millions of requests within seconds.

Serverless Offerings by major cloud providers:

Cloud Provider | Service Name
---------------|------------------------
AWS            | AWS Lambda
Azure          | Azure Functions
Google Cloud   | Google Cloud Functions

Benefits of Serverless environments:

  • Ease of deployment
  • No Infrastructure Management
  • Low Cost
  • Better Scalability
  • Improved Latency

So you might ask: if serverless is so wonderful, why doesn’t everyone migrate to, or build their services on, a serverless framework?

Serverless offerings come with limitations attached. Here are some of the main deployment limitations:

1) Deployment package size is limited.

    2) Run time is limited.

    3) Memory allocation is limited.

So applications need to be decoupled, divided, and redesigned into sets of microservices before they can be deployed on a serverless architecture.

Option 1: Migrating an application from a monolith to microservices

It is hard to migrate an application from a monolithic framework to a serverless architecture. It takes a lot of human effort, and managing, migrating, and decoupling the different services is a huge task.

Option 2: Building microservices from scratch

    It is easier to build a service by following serverless principles rather than migrating an Application to a serverless architecture.

So how lightweight does an application need to be before it can be migrated to serverless? The application needs to be optimized for the cloud provider you are targeting. For example, AWS Lambda has a max memory limitation of 10GB, and Azure Functions has a max memory limitation of 15GB.

    Points to Consider while designing microservices for serverless deployment:

    Decouple your microservices:

    Microservices should be loosely coupled, meaning that they should not depend on each other for their functionality. This will make your application more scalable and resilient.

    Use event-driven communication:

    Event-driven communication is a good way to decouple your microservices. When a microservice emits an event, other microservices can subscribe to that event and take action accordingly.
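As a rough sketch of this idea (plain Python with an in-process bus; in a real deployment the bus would be a managed service, and the event and handler names here are made up for illustration):

```python
from collections import defaultdict

# Maps an event type to the handlers subscribed to it.
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # The publisher knows nothing about who consumes the event:
    # that is what keeps the services decoupled.
    for handler in subscribers[event_type]:
        handler(payload)

# Two independent "services" react to the same event.
notifications = []
subscribe("order_created", lambda e: notifications.append(f"email to {e['user']}"))
subscribe("order_created", lambda e: notifications.append(f"invoice #{e['id']}"))

publish("order_created", {"id": 42, "user": "alice"})
```

New consumers can be added by subscribing to the event, with no change to the service that publishes it.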

    Use stateless functions:

    Serverless functions are stateless, meaning that they do not retain any state between invocations. This makes them easy to scale and deploy.
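A minimal sketch of the difference: nothing survives inside the function between invocations, so all state goes to an external store (the `store` dict below stands in for a database such as DynamoDB or Redis; the handler and event shape are illustrative):

```python
store = {}  # stands in for an external database, NOT function memory

def handler(event):
    # Stateless: the function holds nothing between calls; all state
    # lives in the external store, so any parallel copy of this
    # function behaves identically.
    user = event["user"]
    store[user] = store.get(user, 0) + 1
    return {"user": user, "visits": store[user]}

first = handler({"user": "alice"})
second = handler({"user": "alice"})
```

Because the function itself carries no state, the platform can freely start, stop, and duplicate instances of it.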

    Use a cloud-native programming language:

A number of languages are well supported by serverless runtimes and well suited for serverless microservices, including Node.js (JavaScript), Python, and Go.
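For example, an AWS Lambda handler in Python is just a function that receives the triggering event and a runtime context object (the event shape below is made up for illustration):

```python
def lambda_handler(event, context):
    # AWS Lambda invokes this entry point with the triggering event
    # (a dict for JSON payloads) and a runtime context object.
    name = event.get("name", "world")
    return {"greeting": f"hello, {name}"}

# Invoked locally with a stand-in event and no context:
result = lambda_handler({"name": "serverless"}, None)
```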

    Use a cloud-based event bus:

    A cloud-based event bus can help you decouple your microservices and make them more scalable. Some popular cloud-based event buses include Amazon SQS, Azure Event Hubs, and Google Pub/Sub.
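A hedged sketch of what publishing to such a bus involves: the producer serializes an event body and hands it to the bus client. The helper below only builds the JSON body (the event type and fields are made up); the actual send, e.g. boto3's `send_message` for Amazon SQS, needs cloud credentials and is shown only as a comment:

```python
import json

def build_event(event_type, **data):
    # Serialize the event exactly as it would travel over the bus.
    return json.dumps({"type": event_type, **data})

body = build_event("order_created", id=42, user="alice")

# With the AWS SDK for Python (boto3), this body would then be
# published to an SQS queue roughly like:
#   boto3.client("sqs").send_message(QueueUrl=queue_url, MessageBody=body)
```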

    Use a cloud-based API gateway:

    A cloud-based API gateway can help you manage your microservices’ APIs. This can make it easier to secure your APIs and control access to them.
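For instance, with AWS API Gateway's Lambda proxy integration, the function behind the gateway returns a response in a fixed shape: a status code, headers, and a string body. The handler below is an illustrative sketch (the query parameter name is made up):

```python
import json

def handler(event, context):
    # API Gateway's Lambda proxy integration expects this response
    # shape: statusCode, headers, and a string body.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a stand-in gateway event and no context:
response = handler({"queryStringParameters": {"name": "api"}}, None)
```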

    Monitor your microservices:

    It is important to monitor your microservices to ensure that they are performing as expected. There are a number of tools that can help you monitor your microservices, such as Amazon CloudWatch, Azure Monitor, and Google Cloud Monitoring.
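A hedged sketch of recording a custom metric: with boto3 the call would be CloudWatch's `put_metric_data(Namespace=..., MetricData=[...])`; below, a local collector stands in for the cloud client so the shape of a datapoint is visible (the metric names are made up):

```python
import time

metrics = []  # stands in for the CloudWatch client

def put_metric(name, value, unit="Count"):
    # Same per-datapoint fields CloudWatch's PutMetricData expects.
    metrics.append({
        "MetricName": name,
        "Value": value,
        "Unit": unit,
        "Timestamp": time.time(),
    })

put_metric("InvocationErrors", 1)
put_metric("DurationMs", 137.5, unit="Milliseconds")
```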

    Additional Readings:

Azure Functions limitations

    AWS Lambda Limitations

    Google Cloud Functions