Introduction
The overhead of starting a Lambda invocation, commonly called a Cold Start, is one of the most common problems faced on Serverless platforms. For latency-critical workloads, Cold Starts are something to avoid to keep things running smoothly. Several strategies are available to improve the performance of your Lambda functions, and this article highlights the contrast between them on several points.
Lambda Performance Optimization Strategies
Having to wait a few seconds for your Lambda functions to respond can be very detrimental for applications and businesses that move to the Cloud, so here are the strategies to optimize the performance of your Lambda functions:
- Lambda SnapStart
- Provisioned Concurrency
- Custom Warmer
Lambda SnapStart
How it Works
SnapStart is a performance optimization technique that reduces a Lambda function’s initialization time. The strategy is fully managed by AWS: it prevents Cold Starts by creating a snapshot of the initialized function when you publish a version, and subsequent invocations resume from that cached snapshot. With SnapStart, Cold Start latency is improved by up to 90%.
Pricing
Using SnapStart requires no additional cost; it’s free.
Supported Runtime
At the moment, it’s only available for the Java 11 runtime (Amazon Corretto).
Complexity to Set Up
It’s available through the AWS Console and doesn’t require any changes to your source code: you just have to activate it and let it do its magic.
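For teams that prefer scripting over the console, the same setting can be applied through the Lambda API. Below is a minimal Python sketch that only builds the request parameters; the function name and the boto3 calls in the comments are illustrative placeholders, not part of this article:

```python
def snapstart_update_params(function_name):
    """Build the parameters that enable SnapStart on a function.

    SnapStart only applies to versions published AFTER this configuration
    change, so a publish_version call must follow it.
    """
    return {
        "FunctionName": function_name,
        "SnapStart": {"ApplyOn": "PublishedVersions"},
    }

# With boto3 and valid AWS credentials, you would apply it like this:
#   import boto3
#   client = boto3.client("lambda")
#   client.update_function_configuration(**snapstart_update_params("my-java-fn"))
#   client.publish_version(FunctionName="my-java-fn")
```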
Limit
One consequence is that ephemeral data and credentials have no expiry guarantees, since invocations resume from a snapshot. For example, if your code uses a library that creates an expiring token during initialization, that token may already have expired when a new instance of the function is resumed via SnapStart.
Also, if your code establishes a long-lived connection to a network service during the init phase, that connection won’t survive the restore and will be lost during the invocation.
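One way to guard against the stale-token problem is to validate init-time state on every invocation instead of trusting it. Here is a minimal sketch, assuming a hypothetical fetch_token credential provider and a known token lifetime:

```python
import time

class RefreshingToken:
    """Sketch: re-validate init-time state on every invocation.

    A token created during init may be stale after a SnapStart restore,
    so the handler checks expiry at invocation time and refreshes only
    when needed. fetch_token is a placeholder for a real provider.
    """

    def __init__(self, fetch_token, ttl_seconds=300):
        self._fetch = fetch_token
        self._ttl = ttl_seconds
        self._value = None
        self._expires_at = 0.0  # forces a refresh on first use

    def get(self):
        # Stale, never fetched, or restored from an old snapshot: refresh.
        if time.monotonic() >= self._expires_at:
            self._value = self._fetch()
            self._expires_at = time.monotonic() + self._ttl
        return self._value
```

The key point is to call get() inside the handler on each invocation, not once at module level during init.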
Here’s a more detailed article that covers its impacts and how to set it up:
Provisioned Concurrency
How it Works
Fully managed by AWS, Provisioned Concurrency keeps your function warm and ready to respond in double-digit milliseconds at the scale you provision. With this option enabled, you choose how many instances of your function run simultaneously for incoming requests, unlike the on-demand model, where Lambda decides when to launch a new instance per request.
The particularity of this feature is its startup speed: all setup happens before the invocation, including running the initialization code, so the function is kept in a ready state with your code downloaded and the underlying container fully prepared. Note that this feature is only available on published versions or aliases of your function.
Pricing
There are additional costs related to it:
- You pay for how long the provisioned capacity stays active.
- You pay for the number of concurrent instances you keep available (and the memory configured for them).
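To see how the two charges combine, here is a rough back-of-the-envelope sketch; the rate used below is a placeholder, not an actual AWS price (check the Lambda pricing page for your region):

```python
def provisioned_concurrency_cost(instances, memory_mb, hours_active,
                                 rate_per_gb_second):
    """Rough cost sketch for Provisioned Concurrency.

    You are billed on the concurrency configured, the memory allocated,
    and how long it stays active. rate_per_gb_second is a placeholder;
    real rates vary by region.
    """
    gb = memory_mb / 1024
    seconds = hours_active * 3600
    return instances * gb * seconds * rate_per_gb_second

# e.g. 10 instances of a 512 MB function kept active for 100 hours
# at an illustrative rate of $0.000004 per GB-second:
cost = provisioned_concurrency_cost(10, 512, 100, 0.000004)  # → $7.20
```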
Supported Runtime
It is available for all runtimes.
Complexity to Set Up
The Provisioned Concurrency option is available through the AWS Console, the Lambda API, the AWS CLI, AWS CloudFormation, or Application Auto Scaling, and it doesn’t require any changes to your existing source code.
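As an example of the API route, the sketch below builds the parameters for the PutProvisionedConcurrencyConfig call; the function name and alias in the comments are placeholders:

```python
def provisioned_concurrency_params(function_name, qualifier, executions):
    """Build parameters for lambda.put_provisioned_concurrency_config.

    The qualifier must be a published version number or an alias;
    Provisioned Concurrency cannot target $LATEST.
    """
    return {
        "FunctionName": function_name,
        "Qualifier": qualifier,
        "ProvisionedConcurrentExecutions": executions,
    }

# With boto3 and valid AWS credentials:
#   import boto3
#   boto3.client("lambda").put_provisioned_concurrency_config(
#       **provisioned_concurrency_params("my-fn", "live", 10))
```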
Limit
Provisioned Concurrency is not supported with Lambda@Edge.
Custom Warmer
How it Works
This strategy prevents Cold Starts by explicitly keeping the function warm, thanks to a pinging mechanism: Amazon EventBridge rules schedule invocations of the function at a specific frequency, so the function is triggered automatically at the interval you choose (generally every 15 minutes).
This pattern is implemented by several open-source libraries, but you are also free to build a custom one yourself.
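A warmer schedule boils down to one EventBridge rule and one target. The sketch below only builds the parameter dicts for the PutRule and PutTargets calls; the rule name, target id, and the "warmer" payload key are conventions chosen for this example, not a standard:

```python
def warmer_schedule(function_arn, rate_minutes=15):
    """Build EventBridge PutRule/PutTargets parameters for a warmer.

    The constant JSON payload lets the function's handler recognize
    warm-up pings and skip its real work.
    """
    unit = "minute" if rate_minutes == 1 else "minutes"
    rule = {
        "Name": "lambda-warmer",
        "ScheduleExpression": f"rate({rate_minutes} {unit})",
        "State": "ENABLED",
    }
    targets = {
        "Rule": "lambda-warmer",
        "Targets": [{
            "Id": "warm-target",
            "Arn": function_arn,
            "Input": '{"warmer": true}',
        }],
    }
    return rule, targets

# With boto3: events = boto3.client("events"), then
#   events.put_rule(**rule); events.put_targets(**targets)
# (the function also needs a resource-based permission allowing
# events.amazonaws.com to invoke it).
```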
Pricing
There is no additional charge for the scheduling itself, since Amazon EventBridge rules are free; the warm-up pings are billed as regular Lambda invocations, but they are so short that the cost is usually negligible.
Supported Runtime
You can use it with any runtime you need.
Complexity to Set Up
Putting a warming strategy in place requires some changes to the source code, since the Warmer periodically triggers real invocations of the function.
The function must detect when a call comes from the Warmer and handle it with a particular behavior, as in the following example:
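Here is a minimal sketch of such a handler in Python; the "warmer" key matches whatever payload your scheduled rule sends, and is a convention of this sketch rather than an AWS standard:

```python
def handler(event, context):
    """Sketch of a warmer-aware Lambda handler."""
    if isinstance(event, dict) and event.get("warmer"):
        # Warm-up ping from the scheduler: skip the real work
        # and return immediately to keep the invocation cheap.
        return {"warmed": True}
    # ... normal business logic goes here ...
    return {"statusCode": 200, "body": "real response"}
```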
A sample implementation is available in the following repository with the NPM script npm run job:warm:env.
Limit
This approach doesn’t guarantee the elimination of Cold Starts. If the function sits behind a Load Balancer, it won’t always work, since the LB can route requests to instances that aren’t warmed yet. Also, in production environments, when the function scales out to meet traffic, there is no guarantee that the new instances will be warmed.
———————
We have just started our journey to build a network of professionals to grow our free knowledge-sharing community that’ll give you a chance to learn interesting things about topics like cloud computing, software development, and software architectures while keeping the door open to more opportunities.
Does this speak to you? If YES, feel free to Join our Discord Server to stay in touch with the community and be part of independently organized events.
———————
If the Cloud is of interest to you, this video covers the 6 most important concepts you should know about it:
Conclusion
This article highlighted the different strategies you can put in place to improve the performance of your Lambda functions so they respond to your requests more quickly. Some are services provided by AWS, while one seems like a hack but is still recommended by the Cloud community.
Thanks for reading this article. Like, recommend, and share if you enjoyed it. Follow us on Facebook, Twitter, and LinkedIn for more content.