The Matchless Power of Serverless / Cloud-Native for Front-end & Back-end Developers

Jay Dwayne
16 min read · Mar 1, 2021


http://www.jaydwayne.com

“Serverless” usually refers to an architectural pattern where the server-side logic runs in stateless compute containers that are event-triggered and ephemeral.

In my last article, I wrote about the Divergent Evolution of technology and the concept behind APIs (Application Programming Interfaces). That article was quite comprehensive, and I’m sure that if you read it, this article on serverless computing will be sweeter.

Here is the link to the previous article, just in case: https://www.linkedin.com/pulse/divergent-techvolution-how-apis-fulfill-original-promise-jay-dwayne/

Serverless computing is a cloud-based application architecture where the application’s infrastructure and support services layer is completely abstracted from the software layer.

When speaking of the latest leading-edge tech trends, we have to admit that cloud computing is definitely on the TOP 10 list. Thanks to the excellent processing power of cloud-based services and platforms, small and mid-size companies, as well as large-scale corporations, can enjoy competitive benefits. To begin with, here’s a quick glimpse at some of the advantages serverless provides:

  • Flexibility of services
  • Greater scalability
  • Faster deployment times
  • Accelerated workloads in the cloud
  • Reduced costs on infrastructure management and maintenance
  • Automated workflows, and so on

Serverless applications are internet-based applications or systems that do not use a dedicated server, instead relying on third-party services for authentication, cloud functionality, and data storage.

In a traditional application architecture, various functionalities are driven by or stored on dedicated servers, including user authentication, database access, and business logic.

When this architecture is made serverless, each of those responsibilities is handed off to a managed third-party service instead.

Types of serverless

Serverless applications can be made up of two types of components:

  • Serverless functions
  • Serverless backends

Of these, serverless functions are hosted and managed by a type of service known as “Function as a Service,” or FaaS. FaaS is the primary platform for running serverless program code. With FaaS, developers write independent code scripts known as “functions” and upload them to the FaaS platform. The code can then be triggered by some event or run on a schedule. Popular examples of FaaS are AWS Lambda, Azure Functions, and Google Cloud Functions.
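To make this concrete, here is a minimal sketch of what such a function might look like: a Python handler in the AWS Lambda style, wired (by assumption, for this example) to fire whenever a new object lands in an S3 bucket:

```python
import json

# A minimal AWS Lambda-style function. The platform invokes handler()
# whenever the configured trigger fires; here we assume an S3
# "object created" event for illustration.
def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```

There is no web server and no process to babysit: the function exists only for the milliseconds it takes to handle the event.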

Serverless backends, on the other hand, are managed services that serverless functions can make use of. These services are typically used for storage, databases, messaging, notifications, orchestration, or security. As with FaaS, users don’t need to provision or manage any infrastructure when using a serverless backend.

Another feature of serverless backends is that they are not coupled to FaaS alone; non-serverless applications can also make use of them.

An example of a serverless backend is Simple Queue Service (SQS), a managed message-queuing service from Amazon. Similarly, Amazon Aurora Serverless is a serverless database service. This is distinctly different from Amazon RDS or regular Aurora, which, although managed services, still require users to provision and manage database instances.
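As a quick illustration of application code talking to a serverless backend, here is a hedged boto3 sketch that pushes a message onto an SQS queue; the queue URL and message body are placeholders:

```python
import boto3

# SQS is fully managed, so the only infrastructure detail the
# application needs is the queue's URL (a placeholder here).
sqs = boto3.client("sqs", region_name="us-east-1")

response = sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MessageBody='{"orderId": 42}',
)
print("Sent message:", response["MessageId"])
```

Notice there is nothing to provision: the queue scales, persists, and replicates on Amazon’s side.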

In this article, I welcome you to the world of the serverless approach to development, taking a look at several different cloud service providers and platforms:

· AWS

· Azure

· Google Cloud

· OpenWhisk

Serverless?

No, serverless does not mean that you don’t use a server, and it should not be confused with pure front-end apps that don’t interact with a back-end (in-browser apps) or static web pages. Serverless is a bit of a misnomer: of course a server is used, but its provisioning and maintenance are fully automated by the cloud service provider. Serverless is also known as cloud-native, a name that better represents what is actually going on.

What Serverless Technology Is: A Definition and Its Main Providers

Since progress doesn’t stand still, we can count on new tech solutions coming out all the time. Given the advent of cloud computing, you’ve likely heard of the new buzzword in town: serverless computing. Despite the name, serverless architecture still requires servers to host applications and perform computing processes.

The key difference from traditional cloud-based infrastructure is that a third-party vendor provides the backend platform and manages the servers entirely on its own. Hence, developers can focus on other vital business goals and deliver improved services and products, ending up with a streamlined customer experience and optimized company resources.

In other words, serverless computing is a model that provides developers with multi-functional tools, allowing them to create top-notch apps more effectively, since a third-party vendor manages the entire underlying infrastructure. Many developers find this kind of architecture scalable and flexible, as it supports a fast time-to-market (TTM) business approach.

The serverless approach is one where we deploy code into a cloud and it is built, run, and executed automatically, without us worrying about the underlying infrastructure, renting or buying servers, scaling, monitoring, or capacity planning. The service provider takes care of all that.

The servers that host our applications sit idle most of the time, waiting for requests. This results in resources such as power, capacity, storage, and memory being wasted (not cool for our planet).

Servers also allow only a certain volume and load: the more load or traffic there is, the slower the processing becomes, or it may stop completely. To deal with higher traffic, we could buy high-end servers, but we would then be paying for the same resources whether we serve 100 requests a day or 100,000, and even high-end servers allow only a finite volume and load.

When using a server, we have to make sure we install the right software (and the right version of it), make sure it is secure enough, and constantly monitor whether the services are running.

The Serverless Framework (which I’ll shorten to “Serverless” for the rest of this article) is, in my own definition, essentially a CloudFormation generator. It was the first framework of its kind, developed solely to build applications on AWS Lambda functions. However, it is cloud-service agnostic and can work with virtually any cloud service you can think of (Azure, Google Cloud Platform, Alibaba Cloud, and Tencent, to name a few). It’s also runtime agnostic, although that’s more a feature of the cloud service: you can write your logic code in virtually any language, as long as the cloud service you’re using supports it. AWS Lambda, for instance, supports a whole array of runtimes, and all of them can be used within Serverless.

Many companies, such as AWS, Azure, and Google, started by renting out their servers. Nowadays, it’s not uncommon for developers or startups to rent a server and have the provider take care of the storage, memory, power, and basic setup. Fifteen years ago, it was very costly (tens of thousands of dollars) to rent servers, and you had to configure everything (hardware and software) yourself or have a dedicated team (even more cost), which was impossible for single developers. Times have changed, and paradigms have shifted.

When we rent a server in the cloud, we might not have to deal with power, storage, and memory, but we still need to install the required software, monitor the application service, upgrade the software version from time to time, and keep an eye on the application’s performance.

Most companies still lease servers and manage them through Platform as a Service (PaaS); they still manage application downtime, upgrade software versions, and monitor services.

But all of this can change by adopting a serverless approach. Serverless computing automatically provisions and configures the server and then executes the application code. As traffic rises, it scales automatically and applies the required resources; once traffic decreases, it scales back down.

I have been using the Serverless Framework for about six years now, and I wanted to share my experiences with it. Later on, I will go over some of the pros and cons of Serverless, along with my opinion on why people should adopt it if it makes sense for them.

Concepts Behind Serverless

The definition of serverless has evolved over time. In the introduction, I said that all my deployed projects are serverless apps. This is true, but when building them I was not thinking about serverless; I was just using convenient web services on AWS, Firebase / Google Cloud, or Azure, such as authentication or blockchain services. I was basically building front-end apps that took advantage of web services that I didn’t need to maintain.

In other words, I was building applications that depended on third-party services to manage the server-side logic. These were referred to as Backend as a Service (BaaS) offerings.

But serverless also means code that is developed to be event-triggered and executed in stateless compute containers. This architecture is popularly known as Function as a Service (FaaS). Let’s look at each type of service in a bit more detail.

So before serverless, we used to talk about BaaS, mostly referring to Auth0 and Google Firebase.

Backend as a Service (BaaS)

Traditionally, web and mobile engineers developed their own authentication features (login, registration, and password management), each with its own API that had to be incorporated into the application.

Unfortunately, this approach was complicated and time-consuming. BaaS providers made it easy by offering unified APIs and SDKs that integrate seamlessly with the front-end.

This allowed devs to focus more on the front-end and worry less about the back-end. I would argue that’s why we saw the front-end getting more and more complex, with design patterns arriving from the back-end world (state management, dynamic routing in single-page apps, pub/sub, etc.).

Say, for example, that we want to build a portal that requires authentication to consume our services. We would need login, signup, and authentication systems in place, and we would also want to make it easy for consumers to sign in with a single click using their existing Google, Facebook, or Twitter account. Developing these functionalities individually requires lots of time and effort.

But by using BaaS, we can easily integrate our portal with signup and authentication via a Google, Facebook, or Twitter account. Another BaaS offering is Firebase, provided by Google. Firebase is a database service used by mobile apps; it mitigates database-administration overhead and provides authorization for different types of users. In a nutshell, this is how BaaS works.
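To ground that, here is a minimal sketch of the BaaS pattern using the Firebase Admin SDK for Python. The service-account file path is a placeholder, and this is an illustration of the idea rather than a full integration:

```python
import firebase_admin
from firebase_admin import auth, credentials

# The BaaS (Firebase) owns the hard parts of authentication; our code
# just verifies the ID token the front-end obtained from Firebase.
# The service-account path below is a placeholder.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred)

def get_user_id(id_token: str) -> str:
    # verify_id_token checks the signature, expiry, and audience for us.
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]
```

All of the password storage, social sign-in, and token issuance happens on Google’s side; we never build or host any of it. Now let’s look at the FaaS side of the serverless approach.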

Function as a Service (FaaS)

FaaS is essentially a small program, or function, that performs a small task triggered by an event. Unlike in a monolithic app, which does lots of things, there is a clear separation of concerns.

So, in a FaaS architecture, the app is broken into smaller, self-contained programs or functions instead of the monolithic app that runs on PaaS and performs multiple functions.

For example, each API endpoint could be a separate function, run on demand rather than as part of an app that runs full time.
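As an illustration, here is a hedged sketch of a single GET /users/{id} endpoint written as its own Lambda-style function; the route shape and the in-memory user store are assumptions for the example:

```python
import json

# One endpoint, one function. Other endpoints would be separate
# functions that deploy and scale independently of this one.
FAKE_USERS = {"42": {"name": "Ada"}}  # stand-in for a real data store

def get_user(event, context):
    # API Gateway's proxy integration passes path parameters like this.
    user_id = (event.get("pathParameters") or {}).get("id")
    user = FAKE_USERS.get(user_id)
    if user is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(user)}
```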

The traditional approach to API coding would be a multi-layer architecture, such as a three-tier architecture, where the source code is broken into presentation, business, and data layers. This might work fine for a limited number of simultaneous users, but how do you manage the infrastructure when traffic grows exponentially?

To resolve this problem, the data layer would ideally be isolated on a separate server. But that does not solve everything, because the API routes and business logic still sit in one monolithic application, so scaling remains a problem.

The serverless approach can be the solution for reducing the pain of maintenance. Instead of having one server for all of the application’s API endpoints and business logic, each part of the application is broken down into independent, auto-scalable functions.

The developer writes a function, and the serverless provider wraps that function in a container that can be monitored, cloned, and distributed on any number of servers.

The benefit of breaking an application down into functions is that each function can scale and be deployed separately. For instance, if one endpoint in our API aggregates 90 percent of our traffic, that one function can be distributed and scaled far more easily than the entire application.

In a FaaS system, functions are expected to start within milliseconds to handle individual requests. There are also restrictions on execution length that need to be considered: you will get timeout errors if your functions handle heavy processing tasks. There are strategies for these use cases that I will not get into in depth here.
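As a small taste, though, here is one common guard, sketched in Python against the AWS Lambda context object (the do_work and requeue helpers are placeholders I made up for the example):

```python
def do_work(item):
    # Placeholder for the real per-item processing.
    return item

def requeue(items):
    # Placeholder: e.g., send the remainder back to an SQS queue.
    print(f"re-queueing {len(items)} items")

def handler(event, context):
    # Lambda's context exposes get_remaining_time_in_millis(), which we
    # use to stop cleanly before the platform kills the function.
    items = event.get("items", [])
    processed = []
    for item in items:
        if context.get_remaining_time_in_millis() < 5_000:  # 5 s safety margin
            break
        processed.append(do_work(item))
    remaining = items[len(processed):]
    if remaining:
        requeue(remaining)
    return {"processed": len(processed), "requeued": len(remaining)}
```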

In PaaS systems, there is typically an application thread that keeps running for lengthy periods of time and handles multiple requests. FaaS services are usually charged per execution time of the function, whilst PaaS services charge for the running time of the thread in which the server application runs. So imagine the money saved by adopting the right serverless architecture for your project.
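To make that concrete, here is a back-of-the-envelope sketch. The rates are assumptions that roughly mirror AWS Lambda’s published pay-per-use pricing at the time of writing, so treat the exact numbers as illustrative:

```python
# Back-of-the-envelope FaaS cost estimate. Both rates below are
# assumptions for illustration (roughly AWS Lambda's public pricing
# at the time of writing); substitute your provider's real numbers.
PRICE_PER_GB_SECOND = 0.0000166667   # compute charge per GB-second
PRICE_PER_MILLION_REQUESTS = 0.20    # flat per-request charge

def monthly_faas_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    gb_seconds = invocations * avg_duration_s * memory_gb
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# 1M requests a month, 200 ms each, 512 MB of memory:
print(f"${monthly_faas_cost(1_000_000, 0.2, 0.5):.2f}")  # ≈ $1.87
```

Compare that with paying for an always-on server that idles between those same requests, and the appeal is obvious.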

In a microservices architecture, applications are loosely coupled, fine-grained, and lightweight. Microservice architectures were engineered to break down monolithic applications into small services that can be developed, managed, and scaled independently. FaaS takes that a step further by breaking the architecture down into even smaller units called functions.

The trend is clear: the unit of work is getting smaller and smaller. We’re moving from monoliths to microservices, and now to functions. Some authors even talk about nanoservices.

With the rise of containers, many cloud providers realized that a serverless-functions architecture could give developers better flexibility to build their applications without worrying about operations (ops).

AWS was the first cloud provider to launch this kind of service, under the name Lambda; then others followed the trend:

· Microsoft Azure => Azure Functions

· Google Cloud => Google Cloud Functions (and Firebase Cloud Functions)

The industry has also seen open-source alternatives being offered:

· IBM’s OpenWhisk = Apache-licensed

· Kubeless = built over Kubernetes

· OpenFaaS = built over Docker

· Oracle Fn

CONS

I’ll start with the cons of Serverless. First, if you don’t know what you’re doing, Serverless can end up being expensive. Let’s say you don’t really know what concurrency within Lambdas means, and you set yours to some ridiculously high number. Let’s also say you’re building a big-data ETL pipeline that reads from and writes to a database while also making external API calls. This can get quite expensive if you don’t know the pricing model of AWS (or whichever cloud service you use). If something just “fixes” your problem and you don’t look into how much it costs, you could run into a situation where simply using an EC2 instance would actually be cheaper.

I happen to believe Serverless is a skill you might have on top of other skills, but it can also narrow down your job search, since a lot of dev teams are probably not using it. Although it’s been adopted by a large number of developers and teams, it’s not used everywhere, so looking for a job, or for people to hire, might not be as easy. That said, I personally wouldn’t pass on hiring someone simply because they have no Serverless experience; if they were willing to learn and had a decent understanding of cloud services, it’s something I believe you can definitely learn on the job. Not a deal breaker in the slightest, but worth noting.

It’s not impossible to get good insights into your lambdas; you can use tools like CloudWatch alarms and dashboards to get a good idea of how your Lambda function is running. For example, at my current role we’ve used CloudWatch quite a bit to get insights into how often our lambdas were failing. This is particularly important to us because we have SQS triggers that hit these lambdas, so a failure simply puts the message back in the queue. This isn’t a bad thing (it’s our design), but we do like to have insights into the hundreds of lambdas being run during an ETL pipeline. So although it’s not impossible, it’s worth noting that this isn’t as easy to monitor. At the time of this writing, AWS has just released some powerful Lambda monitoring tools, so this could actually be a moot point.

I almost feel like this next point is actually a pro of Serverless, but I have had cases where it felt like a downside. There have been times when I’ve had to go into the AWS UI, and the verbiage or the UX of the services isn’t great, or at least doesn’t map well onto what I’m used to when writing my YAML files. So I’ve had moments where I didn’t know how to create something in the AWS UI even though I’ve done it dozens of times in my Serverless code. I do feel this is partly an upside of Serverless, because you actually have to learn more about the resources you’re creating, but that doesn’t make the UI any easier. A really minor point, but again worth noting.

PROS

The best part of Serverless, in my opinion, is that you manage all your resources in the same repository as your codebase. Even if you do microservices, all the resources the microservice you’re building uses will live in the same codebase. This is by far the biggest upside to using Serverless; if nothing else, it’s far more streamlined this way. Without Serverless, you’d have to go into your cloud service and essentially guess which resources belong to the codebase you’re working on. Perhaps you have an org set up for each project you work on, so it’s easy, but nevertheless, I think everyone can agree that keeping all of the resources coupled with the code that uses them is a huge upside.

I would also add that because your resources are in your codebase, you are in a way forced to truly understand the resources you’re creating. If I’m missing a required property, for instance, the resource won’t deploy, and I have to go look at the AWS documentation to fix it. Because you’re not just mindlessly clicking through a UI flow, you have to actually know about the properties you’re setting. This really enabled me to learn a lot about AWS.

The Serverless community is huge, and there are tons of resources for anyone who needs help. I have yet to come across a problem that isn’t solved by a Serverless-supported plugin or a simple fix on my end. For example, retrieving the AccountId of the current scope was always a pain point for us. One solution we had was to add an accounts property under the custom section of our serverless.yml, giving us a dictionary that maps each env to its account_id:

custom:
  accounts:
    dev: 111111111
    qa: 222222222
    uat: 333333333
    prod: 444444444

Since we know the stage of the scope, we can retrieve the account id using something like this:

${self:custom.accounts.${self:provider.stage}}

As you can see, this is pretty messy and not as intuitive as one would hope. But Serverless has a supported plugin called serverless-pseudo-parameters that allows you to do something like this:

#{AWS::AccountId}

Of course, this is limited to the resource files and won’t work everywhere, but it’s still a valid solution to a rather annoying problem. This is just one example; if you ever need help, there’s a rich ecosystem of helpful folks willing to tackle any issue you come across (if you can’t figure it out yourself).

FaaS providers also expose metrics that can help monitor a serverless function’s performance. Some typical metrics are listed below:

  • Executions: Number of times a function was run in the last sample period, categorized by status (ok, timeout, or error).
  • Errors: Number of times a function failed in the last sample period due to internal errors such as timeouts, out-of-memory conditions, insufficient privileges, or unhandled exceptions. Ideally, this value should be zero; a consistent non-zero trend requires troubleshooting.
  • Throttles: Number of times a function was stopped from running in the last sample period because the rate at which it was being called exceeded its allowed limit of concurrent runs.
  • Duration (execution time): The time, in milliseconds or nanoseconds, that a function was running.
  • Memory usage: Maximum amount of memory used by the function during execution.

Note how there are no disk- or CPU-related metrics here. This is expected, because the underlying storage and computing resources are abstracted away.
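On AWS, for example, these metrics are published to CloudWatch under the AWS/Lambda namespace. As a hedged illustration (the function name and region are placeholders), here is a minimal boto3 sketch that pulls the recent error counts for one function:

```python
from datetime import datetime, timedelta

import boto3

# Fetch the last hour of "Errors" datapoints for one Lambda function.
# The function name and region are placeholders for this sketch.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,               # 5-minute buckets
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```

Ideally, every sum printed is zero; anything else is a function that needs attention.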

Monitoring serverless with Sumo Logic

Sumo Logic is a cloud-native, Software-as-a-Service (SaaS) platform for machine data analytics. It’s used for storing, analyzing and creating insights from machine-generated data and logs. It’s also a powerful SIEM (Security Information and Event Management) tool.

Users can easily subscribe to Sumo Logic and start sending logs and performance data from their on-premises or cloud-hosted assets. The ingested data can then be meaningfully interpreted with Sumo Logic “apps”.

Apps are pre-configured searches and dashboards for different types of data sources. Sumo Logic comes with a number of out-of-the-box apps that help quickly turn raw data into critical insights.

My name is Jay Dwayne: a former baby, still a firefighter, and currently a software engineer. Thank you for making it to the end of this article.

Stay Awesome!


Jay Dwayne

As a software engineer, designer, graphic artist, guitarist, and martial artist, Jay brings a unique blend of skills and passions to every project he undertakes.