Serverless Architecture FAQ
Published: Sep 23, 2025
Though servers have typically been the norm among organizations since the inception of the computer, architectures have slowly evolved over time. Going serverless has been the latest trend within the last decade, but many still have plenty of questions regarding this potential option.
Because the components of your environment are critically relevant to your compliance, we, as long-time cybersecurity assessors, have to know our way around the different avenues organizations can take in constructing theirs.
To shed some light on the details of serverless architecture, we're going to answer some questions, both basic and highly technical, so that you can gain a more complete grasp of this approach.
Concepts to Know About Serverless Architecture
What we’re about to tell you can be classified into the following concepts:
- The basics
- Containers, microservices, and Kubernetes
- Sidecars
- mTLS
The Basics
Discover answers to the most frequently asked questions about serverless architecture below.
What does it mean to become serverless?
Going serverless means shifting from using and managing dedicated servers to a cloud-based, event-driven compute model. Your organization doesn't handle that infrastructure directly; rather, the serverless platform provider automatically allocates, scales, and manages the backend resources for you.
What’s the appeal of serverless architecture?
Organizations are turning to serverless architecture for a number of benefits, including:
- Cost efficiency: With serverless, you pay only for the time and resources you actually use, and you also save on the infrastructure maintenance that servers require.
- Scalability: Serverless platforms automatically adjust capacity based on demand.
- Faster time to market: Serverless architecture allows developers to concentrate on business logic instead of server configuration or infrastructure.
- Resilience built-in: Serverless services handle failures by restarting instances, and multiple instances can be spun up on-demand without downtime.
- Environmentally friendly: Sustainability concerns are on the rise, and serverless architecture, with its lack of idle server instances, can reduce carbon footprints and energy usage.
Are there any drawbacks to serverless architecture?
As with everything, yes, there are some drawbacks to serverless architecture, including:
- Cold starts: Serverless functions may take time to initialize if they haven't run recently, causing delays on first requests.
- Vendor risks: Using serverless functions ties you to the provider, opening you up to lock-in and to any security risks in their ecosystem.
- Integration challenges: Traditional tools may not work well with a newly implemented serverless approach, which means extra effort when logging, debugging, and tracing distributed systems.
- Architecture complexity: Similarly, while going serverless does allow for more flexibility, managing its many small functions across different services can increase operational complexity.
- Cost predictability problems: If not well-monitored, serverless's pay-per-use model can become unexpectedly expensive during high-traffic events.
When and why should organizations go serverless?
While server-based infrastructure offers more control and is ideal for applications needing predictable performance, serverless architecture provides flexibility, cost efficiency, and scalability.
So then, organizations may consider going serverless if they’re involved with:
- Event-driven applications (e.g., notifications, file uploads); see the sketch after these lists
- APIs that have unpredictable workloads
- Batch processing and data transformation tasks
- Prototyping and short-term projects
That being said, remaining server-based may be more appropriate if your work features:
- Applications requiring constant, predictable performance
- Long-running tasks (beyond platform execution time limits)
- Workloads with extreme cost sensitivity
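To make the event-driven case concrete, here's a minimal sketch of a serverless function: an AWS Lambda handler (in Python) that reacts to a file upload in S3. The handler name and the print statement are illustrative rather than a production implementation; the event shape follows AWS's documented S3 notification format.

```python
import json


def lambda_handler(event, context):
    # An S3 notification can batch multiple records into one invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Business logic goes here (e.g., validate or transform the file).
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("Processed upload event")}
```

Note that the platform, not your team, decides how many copies of this function run at once; that is the scalability and maintenance trade-off described above.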
Containers, Microservices, and Kubernetes
Containers, microservices, and Kubernetes all represent different components within the landscape of modern application development, and each can complement a serverless architecture:
- Containers: Both containers and serverless simplify deployment and improve scalability.
- Microservices: Both serverless and microservices focus on independent functionality units.
- Kubernetes: Both Kubernetes and serverless handle scaling and resource management (serverless does so automatically, while Kubernetes requires configuration).
What are the differences between microservices, Kubernetes, and Docker containers?
Though they’re sometimes mistakenly conflated, there are significant differences between microservices, Kubernetes, and Docker containers.
- Docker: Containers that package and run applications in a repeatable manner to provide consistency across different stages of the application lifecycle (e.g., development, testing, and production).
- Microservices: An architectural style in which applications are composed of small, independent services rather than a single monolithic application.
- While not containers themselves, microservices can be deployed using them.
- Kubernetes: A platform that can be configured to automate the deployment and management of containerized applications at scale.
- Can manage containers, including those that run microservices, across a cluster.
As for their relationship with each other: Kubernetes provides the orchestration and scaling for an environment that is designed as microservices and deployed using containers.
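To illustrate that relationship, here's a minimal sketch of a Kubernetes Deployment, written as a Python dict mirroring the usual YAML manifest, that runs three replicas of a hypothetical Docker-packaged microservice (the image and label names are invented for illustration):

```python
# The three concepts in one object: Kubernetes (the Deployment) orchestrates
# replicas of a microservice packaged as a Docker image.
payments_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "payments"},
    "spec": {
        "replicas": 3,  # Kubernetes keeps three copies running
        "selector": {"matchLabels": {"app": "payments"}},
        "template": {
            "metadata": {"labels": {"app": "payments"}},
            "spec": {
                "containers": [{
                    "name": "payments",               # the microservice...
                    "image": "example/payments:2.1",  # ...as a Docker image
                    "ports": [{"containerPort": 8080}],
                }],
            },
        },
    },
}
```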
Aren’t microservices and containers conceptually immutable? How is it possible to run a database inside of them if they’re read-only?
Containers are not inherently immutable, though the Docker images they are created from are, and containers can be made immutable. Even when they aren't, there's good reason to avoid writing to them: while you can write files inside a container, those changes are lost when the container is destroyed, which is likely why so many regard containers as immutable (whether they are or not).
Regarding microservices: within this architecture, each service typically has its own database to ensure loose coupling. These databases persist data separately from the containers running the service, meaning you don't need to worry about losing those changes. To rephrase, microservice databases can run in containers because those containers write to external persistent storage volumes that are not part of the container's immutable layer.
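As a minimal sketch of that idea, here's a pod definition (a Python dict mirroring the Kubernetes YAML, with hypothetical names) that runs a database container while persisting its data on a volume outside the container's writable layer:

```python
db_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-db"},
    "spec": {
        "containers": [{
            "name": "postgres",
            "image": "postgres:16",
            # Writes to this path land on the mounted volume, so the data
            # survives even when the container itself is destroyed.
            "volumeMounts": [{
                "name": "data",
                "mountPath": "/var/lib/postgresql/data",
            }],
        }],
        "volumes": [{
            "name": "data",
            # Backed by a PersistentVolumeClaim provisioned separately.
            "persistentVolumeClaim": {"claimName": "orders-db-pvc"},
        }],
    },
}
```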
If my environment is immutable, access control to the containers is irrelevant, right?
No, immutability does not negate the importance of access controls. Even if your containers are immutable, meaning they don't allow modifications at runtime, access controls still determine which accounts can deploy, start, stop, or configure them, so they remain very much relevant.
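As a minimal sketch of what those controls look like in practice, here's a Kubernetes RBAC Role (a Python dict mirroring the YAML manifest, with a hypothetical role and namespace) that limits which accounts can create or delete pods:

```python
deployer_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-deployer", "namespace": "prod"},
    "rules": [{
        "apiGroups": [""],  # "" is the core API group
        "resources": ["pods"],
        # Accounts bound to this role can deploy and remove pods,
        # but not modify them in place.
        "verbs": ["get", "list", "create", "delete"],
    }],
}
```

Even though the containers themselves never change at runtime, a rule like this decides who is allowed to replace them.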
What is a service-mesh in relation to containers or microservices?
A service-mesh is a dedicated communication layer of the infrastructure used solely for service-to-service communication. Service-meshes typically handle the basics like:
- Authentication;
- Service discovery;
- Load balancing; and
- Encryption.
What are some common mistakes when implementing containers or microservices?
Common issues that can arise when implementing containers include:
- Not updating sidecars,
- Misconfiguration, and
- Excessive privileges (which would also mean that the main container has too many privileges); see the least-privilege sketch below.
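On the excessive-privileges point, here's a minimal sketch of a container-level securityContext (a Python dict mirroring the Kubernetes YAML) that keeps both the main container and its sidecars to least privilege; the UID is an arbitrary illustrative value:

```python
hardened_security_context = {
    "runAsNonRoot": True,
    "runAsUser": 10001,                 # any non-root UID
    "privileged": False,
    "allowPrivilegeEscalation": False,
    "readOnlyRootFilesystem": True,     # reinforces immutability
    "capabilities": {"drop": ["ALL"]},  # add back only what's truly needed
}
```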
Why do I need to leverage a dedicated team when running a microservices architecture?
We sometimes get this question from those who just want to run containers from a personal machine via the CLI or GUI. But the reasoning for a team can be put like this: you can keep your own kitchen clean all by yourself, but could you keep a large restaurant's kitchen clean by yourself?
Similarly, microservices architecture is about managing function and complexity. In production, a dedicated team will manage the varied services and provide the support functions necessary to maintain them, scale them, keep communications correctly configured, and understand how they need to be updated, be it as a single instance or as a group.
What is a Kubernetes API server?
The Kubernetes API server is the central management point of a Kubernetes cluster. It exposes the Kubernetes API, which is used by clients (like kubectl) to interact with the cluster.
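As a minimal sketch of that interaction, here's how a client can talk to the API server using the official Kubernetes Python client (installable as the `kubernetes` package); the namespace is illustrative:

```python
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config, just as kubectl does
v1 = client.CoreV1Api()     # core API group (pods, services, and so on)

# Each of these calls is an authenticated request to the API server.
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)
```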
Do Kubernetes containers pull from a Kubernetes API server?
No, Kubernetes containers do not directly pull from the API server. Rather, a kubelet, the primary "node agent" that runs on each node, interacts with the API server to get information about which containers should be running; the container images themselves are pulled from a container registry.
How do Custom Resource Definitions (CRDs) work and are they relevant to my cybersecurity?
CRDs allow users to define custom resources in Kubernetes, which can extend the API functionality to support application-specific objects.
So, if CRDs are used to define security-related configurations, they become relevant to security. Moreover, understanding and controlling CRDs is important as they can define new permissions and access control points.
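As a minimal sketch, here's what a security-related CRD might look like, written as a Python dict mirroring the YAML manifest; the group, kind, and fields (a hypothetical "FirewallPolicy") are invented for illustration:

```python
firewall_policy_crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    # The name must take the form <plural>.<group>.
    "metadata": {"name": "firewallpolicies.security.example.com"},
    "spec": {
        "group": "security.example.com",
        "scope": "Namespaced",
        "names": {
            "plural": "firewallpolicies",
            "singular": "firewallpolicy",
            "kind": "FirewallPolicy",
        },
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "properties": {"spec": {
                    "type": "object",
                    "properties": {
                        "allowedPorts": {
                            "type": "array",
                            "items": {"type": "integer"},
                        },
                    },
                }},
            }},
        }],
    },
}
# Once applied, the API server accepts FirewallPolicy objects, so RBAC
# should govern who may create or change them.
```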
How Does Function-as-a-Service Fit in Here?
Function-as-a-Service (FaaS) is a category of cloud offerings that run your code on demand. FaaS offerings are increasingly popular because the responsibility stack sits largely with the cloud provider, code can be deployed with minimal prerequisites, and scalability is effectively built in (see the sketch after the list below).
For major cloud providers, these are:
- AWS Lambda
- Azure Functions
- Google Cloud Functions
- Oracle Functions (somewhat differently deployed using the open-source Fn Project)
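To show how little scaffolding FaaS requires, here's a minimal sketch of an HTTP-triggered AWS Lambda handler in Python; the field names follow AWS's documented API Gateway proxy event format, and the greeting logic is illustrative. The other providers' offerings use analogous entry points.

```python
import json


def handler(event, context):
    # Query parameters may be absent entirely, hence the `or {}` guard.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```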
Sidecars
A sidecar is a software design pattern in which an independent, secondary process or container runs alongside, and shares a lifecycle with, the primary application, providing additional features like logging, monitoring, authentication, or communication.
Within the context of serverless architecture, sidecars can play a significant role when individual functions or workloads are deployed in environments where additional capabilities or integrations are required.
Do sidecars function like an API to the container? Are they proxying traffic coming into the container?
Yes, sidecars do function like an API to the container, but only in some ways; it is more accurate to describe them as auxiliary processes that extend container capabilities. Both sidecars and APIs can expose functionality, have similar interaction patterns with their containers, and can abstract functionality away from them.
But while APIs stand alone, usually across a network, sidecars, as extension or helper processes that enhance the container's capabilities, are co-located with the container and tightly integrated into the same environment.
As for traffic, sidecars do often proxy it and can add capabilities like mTLS without the main container needing to be aware of those details. They can also enforce policies and provide monitoring and logging.
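Here's a minimal sketch of that layout as a Python dict mirroring a Kubernetes pod manifest: the application and a proxy sidecar share the same pod, so the sidecar can terminate mTLS and handle logging without the main container knowing. The images and ports are hypothetical.

```python
pod_with_sidecar = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-api"},
    "spec": {
        "containers": [
            {   # Primary container: business logic only.
                "name": "app",
                "image": "example/orders-api:1.0",
                "ports": [{"containerPort": 8080}],
            },
            {   # Sidecar: receives mTLS traffic and forwards plaintext to
                # the app over the pod's shared loopback interface.
                "name": "mtls-proxy",
                "image": "example/mtls-proxy:1.0",
                "ports": [{"containerPort": 8443}],
            },
        ],
    },
}
```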
Sidecars seem like they add complications to containers, which are intended to simplify things—is that true?
While containers are meant to be simple, they often need to interact with the broader environment. Sidecars allow you to add these capabilities without complicating the main container.
How do sidecars contribute to or take away from the security of the system?
Sidecars can enhance security by handling things like mTLS, though they do also increase the attack surface. As such, it's important to ensure sidecars are properly configured and kept up-to-date.
When implementing sidecars, are there specific things to consider?
Yes, focus on ensuring sidecars are from trusted sources, are updated regularly, and are configured correctly. Ensure they are not inadvertently granting excessive permissions to the main container.
Mutual TLS (mTLS)
Defined as a security protocol that requires both parties in a connection to verify each other using certificates, mTLS (Mutual Transport Layer Security) can be an essential component in serverless architecture.
As serverless applications often involve distributed, individual functions communicating over networks, mTLS can ensure secure interactions between those functions through its authentication, encryption, and integrity checks.
Can you leverage mTLS to build a Zero-Trust architecture (ZTA)?
Yes. In a classic client-server connection, the client authenticates the server but not vice versa. In mutual TLS, however, each endpoint authenticates the other, which supports ZTA, where the core principle is "never trust, always verify."
Does mTLS maintain identity and authentication capabilities with a trust anchor?
Yes. mTLS very much requires a trust anchor, such as a Certificate Authority (CA), to issue and validate certificates. During mutual authentication, both endpoints present their certificates, which are verified against the trusted CA. Both endpoint identities are authenticated in this manner.
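As a minimal sketch of that handshake in code, here's mutual TLS with Python's standard ssl module, where both sides load their own certificate plus the CA that acts as the trust anchor; all file paths and the hostname are hypothetical:

```python
import socket
import ssl

# Server side: present a certificate and demand one from the client.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
server_ctx.load_verify_locations(cafile="ca.pem")  # the trust anchor
server_ctx.verify_mode = ssl.CERT_REQUIRED         # this makes TLS "mutual"

# Client side: verify the server against the same CA and present our own cert.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")
client_ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")

with socket.create_connection(("service.internal", 8443)) as sock:
    with client_ctx.wrap_socket(sock, server_hostname="service.internal") as tls:
        # If either side fails verification against ca.pem, the handshake
        # raises ssl.SSLError and no application data is exchanged.
        print("Negotiated", tls.version(), "with", tls.getpeercert()["subject"])
```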
Does mTLS use the familiar asymmetric and symmetric encryption concepts (PKI) or no?
This is still TLS, so both encryption concepts apply: asymmetric encryption is used for authentication and key exchange, and the resulting symmetric key is used to encrypt data in transit. mTLS is similar to PKI in that it exists to establish trust; it differs in that PKI is a framework for sharing trusted public keys, while mTLS is about verifying endpoints at the beginning of communication.
Balancing The Benefits and Risks of Going Serverless
Serverless architecture offers organizations scalability, cost efficiency, and faster innovation, but it also comes with challenges like vendor lock-in, cold starts, and added complexity. The key is to understand both the benefits and limitations so you can align this model with your organization’s goals, workloads, and security requirements. Whether you’re exploring event-driven applications, APIs, or data processing, careful planning and ongoing oversight will ensure you maximize the advantages of serverless while minimizing risks.
If you’re considering a move to serverless or are already operating in the cloud, Schellman can help assess your architecture to ensure your environment meets the highest standards of security and compliance. As you navigate the opportunities and challenges of serverless, contact us today to learn more.
About Sully Perella
Sully Perella is a Senior Manager at Schellman who leads the PIN and P2PE service lines. His focus also includes the Software Security Framework and 3-Domain Secure services. Having previously served as a networking, switching, computer systems, and cryptological operations technician in the Air Force, Sully now maintains multiple certifications within the payments space. Active within the payments community, he helps draft new payments standards and speaks globally on payment security.