by Emily Dunenfeld
LLRT (Low Latency Runtime), created by AWS, is the newest of the 14+ popular JavaScript runtimes. So why do we need another? Unlike other popular JavaScript runtimes, such as Node.js, LLRT is built specifically for serverless applications, especially AWS Lambda. That focus lets it meaningfully reduce cold start times and the costs that come with them.
Note: LLRT is still an experimental package and is therefore not yet recommended for production workloads.
The cold start problem in Lambda is something AWS has been trying to improve for a while with measures like SnapStart and Provisioned Concurrency. In short, a cold start happens when a Lambda function is invoked after a period of inactivity: a new execution environment has to spin up and initialize the function, which adds a delay. That latency not only hurts user experience and application responsiveness but can also increase costs, since in some configurations AWS bills for the time it takes to start the execution environment.
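To make the distinction concrete, here is a minimal, hypothetical handler (the client and names are illustrative, not from any LLRT example) showing which code pays the cold start penalty:

```typescript
// Hypothetical Lambda handler: module-scope code runs once, during the
// INIT phase of a fresh execution environment (the cold start); the
// handler body runs on every invocation, warm or cold.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

// Paid once per cold start: module loading, client construction, config.
const client = new DynamoDBClient({});

export const handler = async (event: unknown) => {
  // Paid on every invocation.
  return { statusCode: 200, body: "ok" };
};
```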
Choosing a fast JavaScript runtime is another way to reduce cold start times. JavaScript runtimes are generally faster to start than Java runtimes, execute code efficiently, and load quickly.
LLRT is the newest JavaScript runtime, created by AWS. Unlike its predecessors, which are more general-purpose, LLRT is purpose-built for serverless applications. According to AWS benchmarks, it leads to 10x faster startups and 2x lower overall costs on Lambda compared with other JavaScript runtimes.
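If you want to experiment with it, LLRT ships as a Lambda custom runtime: you bundle your handler, then package it alongside the LLRT bootstrap binary from the project's GitHub releases. As a minimal sketch (stack and asset names are hypothetical, and the exact packaging steps are described in the LLRT README), an AWS CDK definition might look like this:

```typescript
// Hypothetical AWS CDK sketch: running a function on LLRT as a custom runtime.
// Assumes "dist" contains your bundled index.mjs plus the LLRT `bootstrap`
// binary downloaded from the LLRT GitHub releases.
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

const app = new cdk.App();
const stack = new cdk.Stack(app, "LlrtDemoStack");

new lambda.Function(stack, "LlrtFunction", {
  // LLRT runs as a custom runtime on Amazon Linux 2023.
  runtime: lambda.Runtime.PROVIDED_AL2023,
  architecture: lambda.Architecture.ARM_64,
  handler: "index.handler",
  code: lambda.Code.fromAsset("dist"),
});

app.synth();
```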
A few design decisions make LLRT particularly well suited to addressing Lambda cold starts: it is written in Rust, it uses the lightweight QuickJS engine rather than a larger JIT-compiling engine like V8, and it ships with many AWS SDK v3 clients prebundled into the runtime.
Those decisions came with tradeoffs, and the tradeoffs bring limitations. The biggest is that, because LLRT was made for serverless use cases, it won't work as well in scenarios requiring heavy computational processing or the extensive APIs commonly found in general-purpose runtimes like Node.js.
This is due in part to the decision to forgo a JIT compiler: you get faster startups, but compute-heavy tasks run slower. Because of this tradeoff, LLRT is not intended for tasks like “large data processing, Monte Carlo simulations or performing tasks with hundreds of thousands or millions of iterations.”
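As a toy illustration (not from the LLRT docs) of the kind of loop that quote describes, consider a Monte Carlo estimate of pi; a hot numeric loop like this is exactly where a JIT-compiling engine such as V8 pulls ahead of an interpreter:

```typescript
// Toy example of what LLRT is *not* built for: millions of iterations of
// tight numeric work. Without a JIT, this loop runs interpreted and will
// be markedly slower than under Node.js/V8.
function estimatePi(samples: number): number {
  let inside = 0;
  for (let i = 0; i < samples; i++) {
    const x = Math.random();
    const y = Math.random();
    if (x * x + y * y <= 1) inside++; // point landed inside the unit circle
  }
  return (4 * inside) / samples;
}

console.log(estimatePi(10_000_000));
```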
There are also many APIs that are not supported, or not yet supported (see the compatibility list in the LLRT repository). However, on Yan Cui's podcast, LLRT's creator, Richard Davison, said the goal is for LLRT to become WinterCG compliant. That would keep LLRT interoperable with Node.js, so you can switch back to Node.js if you need APIs that LLRT doesn't support.
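In practice, that portability argues for writing handlers against broadly supported, standards-based APIs where you can. A minimal, hypothetical sketch (the URL is a placeholder):

```typescript
// Hypothetical handler written against WinterCG-style standard APIs
// (here, fetch) rather than Node-only modules, so the same bundle can
// move between LLRT and Node.js 18+ unchanged. Check LLRT's
// compatibility list before relying on any given API.
export const handler = async () => {
  const res = await fetch("https://api.example.com/status"); // placeholder URL
  return { statusCode: res.status, body: await res.text() };
};
```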
Another important consideration is billing. While you may save money thanks to shorter execution times, LLRT is technically a “custom runtime,” so you are charged while the runtime spins up, something AWS does not charge for with its own managed runtimes. (Illustratively, with hypothetical numbers: a 50 ms LLRT init is billed, while a 250 ms Node.js init on a managed runtime is not, so the faster execution has to offset that newly billed init time.) Finally, to reiterate, LLRT is still an experimental package that is subject to change and is therefore not yet recommended for production workloads.
Node.js was developed in 2009 and contributed significantly to the rise of full-stack JavaScript development by adding server-side capabilities that were, at the time, unique. It has since matured into a robust platform with a vast ecosystem of tools, APIs, and frameworks, and it remains the leader among JavaScript runtimes.
However, such an extensive toolset, plus the need to maintain backward compatibility for such a widely used platform, can mean slower execution and slower adoption of new use cases. Modern JavaScript runtimes, like Bun and Deno, were created to improve performance, reduce complexity, and add modern functionality.
There is a huge difference in intended use between comprehensive runtimes like Bun, Deno, and Node.js and LLRT. LLRT's contributors emphasize that while Bun and Deno position themselves as replacements, or drop-in replacements, for Node.js, LLRT is not intended as a “drop-in replacement for Node.js, nor will it ever be.”
As we mentioned, LLRT stands apart through its focus on serverless (mostly Lambda) use cases. That focus is evident in the design choices covered above: Rust as the implementation language, QuickJS as the engine, and a deliberately smaller set of supported APIs.
LLRT is actually more comparable to JavaScript runtimes that are highly optimized for specific use cases than to the general-purpose runtimes. Most relevantly, Workerd (also in beta) was created by Cloudflare specifically for Cloudflare Workers.
We've previously done an in-depth pricing comparison of Lambda vs. Workers and found Workers to be more cost-efficient in certain scenarios, due in large part to Workers not charging for duration and not being affected by cold starts. By significantly reducing cold start times, LLRT can lower the cost of running serverless functions on Lambda, which could drive adoption in use cases where cold starts previously made Lambda prohibitive.