A serverless architecture can mean lower costs and greater agility, but you’ll still need to make a business case and consider factors like security and storage before migrating selected workloads.
Serverless computing promises to free developers and operations staff alike from the shackles of underlying hardware, systems and protocols. The good news is that the move to a serverless architecture can often be made quickly and relatively painlessly. However, IT managers still need to pay close attention to the same stack components on which their current applications are built and run.
How is a serverless architecture like previous, more traditional technology architectures, and how does it differ? Despite the name, a serverless architecture is not entirely devoid of servers. Rather, it's a cloud-based environment often referred to as either Backend-as-a-Service (BaaS), in which underlying capabilities are delivered by third parties, or Function-as-a-Service (FaaS), in which capabilities are spun up on demand on a short-lived basis.
In a FaaS environment, "you just need to upload your application codes to the environment provided, and the service will take care of deploying and running the applications for you," says Alex Ough, CTO architect for Sungard Availability Services.
A serverless architecture "still requires servers," says Miguel Veliz, systems engineer at Schellman and Company. "The main difference between traditional IT architecture and serverless architecture," he adds, "is that the person using the architecture does not own the physical or cloud servers, so they don't pay for unused resources. Instead, customers will load their code into a platform, and the provider will take care of executing the code or function, only charging the customer for executing time and resources needed to run."
Or, as Chip Childers, CTO of Cloud Foundry, prefers to define serverless, "computing resources that do not require any configuration of operating systems by the user."
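In practice, a FaaS function is little more than a handler the platform invokes on demand. The sketch below uses the AWS Lambda Python handler convention as an illustration; the exact signature and return shape vary by provider, and the local call at the end simply simulates what the platform would do.

```python
import json

def handler(event, context):
    """Entry point the platform calls per request; 'event' carries the input.

    There is no server code here: deployment, scaling and invocation are the
    provider's job, which is the core of the FaaS model described above.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, you can exercise the same function directly:
result = handler({"name": "serverless"}, None)
```

The function itself is ordinary application code; what changes is who runs it and when.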
So, with everything managed or spun up through third parties, there isn't as much of a need to worry about annoying details such as storage, processing and security, right? Not quite. These are all factors in the migration from traditional development and operations settings to cloud-based serverless environments. Here are some further considerations you'll need to weigh when developing a serverless architecture:
The business case
Before anything else is initiated in a serverless architecture development process, the business case needs to be weighed to justify the move. The economics of serverless may be extremely compelling, but they still need to be evaluated in light of architectural investments already made, and how the move will serve pressing business requirements. "Serverless adoption must be a business and economic decision," says Dr. Ratinder Ahuja, CEO at ShieldX. "The presumption is that over time and across functions, paying for a slice of computing for the short period of time that a piece of logic executes is more economical than a full stack virtual machine or container that stays online for a long time. This approach should be validated before organizations embark on a serverless journey."
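Ahuja's presumption can be sanity-checked with a back-of-envelope model comparing pay-per-invocation pricing against an always-on machine. All prices below are illustrative assumptions for the sketch, not any provider's actual rates; plug in your own numbers before drawing conclusions.

```python
# Illustrative (assumed) prices -- not real provider rates.
ALWAYS_ON_VM_PER_MONTH = 70.00        # assumed monthly cost of a small VM
FAAS_PRICE_PER_GB_SECOND = 0.0000167  # assumed per GB-second compute price
FAAS_PRICE_PER_MILLION_CALLS = 0.20   # assumed per-request price

def faas_monthly_cost(invocations, avg_duration_s, memory_gb):
    """Monthly FaaS bill: compute (GB-seconds used) plus a per-request charge."""
    compute = invocations * avg_duration_s * memory_gb * FAAS_PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * FAAS_PRICE_PER_MILLION_CALLS
    return compute + requests

# A bursty workload (2M short, small calls a month) undercuts the VM...
bursty = faas_monthly_cost(2_000_000, avg_duration_s=0.2, memory_gb=0.128)
# ...while a heavy, sustained workload can cross over and favor the VM.
busy = faas_monthly_cost(100_000_000, avg_duration_s=0.5, memory_gb=0.512)
```

The crossover point, wherever it lands for your workloads, is exactly the validation step Ahuja recommends before committing.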
Migration - and blending
As serverless computing is inherently a cloud-based phenomenon, the best place to start is looking at what cloud providers have to offer. "If lock-in is not a concern, and you want to start quickly, a fully managed solution like the ones provided by the major cloud providers is one way to start," says William Marktio Olivera, senior principal product manager for Red Hat.
However, as a serverless architecture expands from there, Olivera recommends additional approaches, such as container technology, to ensure the seamless movement of code and applications between environments. "As soon as you start considering running your application on more than one cloud provider, or you might have a mix of workloads running on-premises and on a hybrid cloud, Kubernetes becomes a natural candidate for infrastructure abstraction and workload scheduling, and that's consistent across any cloud provider or on premises," he says. "If you already have Kubernetes as part of your infrastructure, it makes even more sense to simply deploy a serverless solution on top of it and leverage the operational expertise. For those cases, Knative is an emerging viable option that has the backing of multiple companies sharing the same building blocks for serverless on Kubernetes, making sure you have consistency and workload portability."
Serverless functions run in containers, and "these containers appear ephemeral and invisible to the application designer," says Scott Davis, VP of software development at Limelight Networks and former CTO at VMware. "Under the covers there is a pool of reusable containers managed by the infrastructure provider and used on demand to execute a serverless function. When a function completes, the host container is reset to a pristine state and readied for its next assignment. Since serverless functions only live for a single API call, any persistent state must be stored externally for subsequent functions that need it."
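The statelessness constraint Davis describes can be sketched in a few lines: any value that must survive an invocation is written to a store outside the function's container. The dict-backed class below is a stand-in for a real durable service such as Redis or DynamoDB, assumed here purely for illustration.

```python
class ExternalStore:
    """Stand-in for a durable key-value service outside the function's container."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value

def count_visits(event, store):
    """A function that would lose its count between invocations without the store,
    because each call may land in a freshly reset container."""
    visits = store.get("visits") + 1
    store.put("visits", visits)
    return {"visits": visits}

store = ExternalStore()           # lives across invocations, unlike the function
first = count_visits({}, store)   # simulate two separate, stateless invocations
second = count_visits({}, store)
```

If the counter lived in a local variable inside the function, the reset-to-pristine container lifecycle would silently discard it after every call.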
While a transition from on-premises assets to serverless can be accomplished relatively swiftly, the move to serverless should be taken with deliberation. Not everything will be ready to go serverless at once. "Legacy software is anything you've already deployed, even if that was yesterday," Childers says. "Changes take time in any non-trivial enterprise environment, and often a rush to re-platform without rethinking or redesigning software can be a wasted effort. Software with pent-up business demand to make changes -- or new software projects -- are the logical projects to consider more modern architectures like serverless environments."
Not every workload "is a perfect candidate for serverless workloads," Olivera agrees. "Long-running tasks that can be split into multiple steps or ultra-low-latency applications are good examples of workloads that may take a few years to be considered as good candidates for serverless platforms. I recommend approaching the migration as an experiment -- a new API that is being implemented with a reasonable SLA or a new microservice are good candidates. Then you can progress to single-page applications and some other backend functionalities of web applications. The learnings from running those experiments at scale should be enough to inform the next steps and prove the benefits of serverless architecture."
This blending of legacy environments with serverless will likely go on for some time. "Organizations should embrace a different path forward, combining their existing -- and often monolithic -- applications with modern APIs, which can be used from newer serverless components as functionality engines," says Claus Jepsen, deputy CTO and chief architect at Unit4. "Serverless architectures can complement and enrich the existing architecture by providing a new abstraction that supports building new services and offerings."