Cost & Performance optimization in Laravel Vapor

Updated: Jul 31, 2019 — 4 min read · #vapor

Laravel Vapor uses several AWS resources to efficiently get your application up and running in the serverless cloud. The building block of the whole thing is AWS Lambda; it's where the actual computing happens. Calculating the cost of the compute part of your application can be a bit confusing, so let's simplify it with an example:

If your application runs on a lambda that has 0.5GB (512MB) of allocated memory, receives 2 million requests per month, and has an average execution time of 0.5 seconds (500ms) per request, you get charged for two things:

Your gigabyte-seconds, which equal the total compute seconds multiplied by the allocated memory. So in our example it's 0.5GB * 2,000,000 * 0.5s = 500,000 GB-s.

You get 400,000 GB-s for free each month, so you'll only pay for 100,000 GB-s, which costs: 100,000 * 0.00001667 = $1.67.

You'll also pay for the number of invocations. You get 1 million invocations for free each month, so you'll only pay for the remaining 1 million: 1 * $0.20 = $0.20.

So the total cost would be $1.67 + $0.20 = $1.87.

Looks cheap? Yes! But if your average execution time is 1 second instead, you'll pay $10.20! Also, the duration is rounded up to the nearest 100ms, so if the average execution time is 901ms you'll still pay for the full second.
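The arithmetic above can be sketched as a small cost estimator. This is a rough sketch using the prices quoted in this article (AWS pricing and the 100ms rounding rule may have changed since):

```python
# Rough AWS Lambda cost estimator, using the prices quoted in this article:
# $0.00001667 per GB-second, $0.20 per million requests, a free tier of
# 400,000 GB-s and 1 million invocations per month, billed in 100ms steps.
import math

GB_SECOND_PRICE = 0.00001667
PRICE_PER_MILLION_INVOCATIONS = 0.20
FREE_GB_SECONDS = 400_000
FREE_INVOCATIONS = 1_000_000

def monthly_cost(memory_gb: float, requests: int, avg_duration_ms: float) -> float:
    # Duration is billed in 100ms increments, rounded up.
    billed_seconds = math.ceil(avg_duration_ms / 100) * 0.1
    gb_seconds = memory_gb * requests * billed_seconds
    compute_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_PRICE
    invocation_cost = (
        max(requests - FREE_INVOCATIONS, 0) / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    )
    return compute_cost + invocation_cost

print(round(monthly_cost(0.5, 2_000_000, 500), 2))   # the example above: 1.87
print(round(monthly_cost(0.5, 2_000_000, 901), 2))   # 901ms bills as a full second: 10.2
```

Note how the 901ms average costs exactly the same as a full second, because of the round-up to the next 100ms step.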

As you can see, the performance of your code and the memory you choose to allocate significantly affect how much you have to pay.

Optimize your application

In a previous post we mentioned that Vapor takes your Laravel application and converts it to a single lambda function that AWS invokes when needed. However, one of the problems with serving your entire app from a single lambda is that you have to configure the lambda for the most resource-hungry parts of your app, even when they don't run that often.

For example, if one part of your app runs a simple query while another does heavy reporting analysis, you have to size your lambda for the heavy reporting at all times, so you set the memory to 3GB instead of 128MB. This is a waste of money, since a 3GB lambda costs much more per second of execution than a 128MB one.
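To put a number on that, compute cost scales linearly with allocated memory, since the GB-second price is fixed. A quick back-of-the-envelope comparison (using the price quoted earlier and ignoring the free tier for simplicity):

```python
# Compute cost scales linearly with allocated memory: the GB-second price
# is fixed, so the same total execution time on a 3GB (3072MB) function
# costs 24x what it does on a 128MB one (free tier ignored for simplicity).
GB_SECOND_PRICE = 0.00001667  # USD per GB-second, as quoted above

def compute_cost(memory_mb: int, total_seconds: float) -> float:
    return (memory_mb / 1024) * total_seconds * GB_SECOND_PRICE

# One hour of total execution time per month at each size:
print(round(compute_cost(128, 3600), 4))   # 0.0075
print(round(compute_cost(3072, 3600), 4))  # 0.18
print(round(compute_cost(3072, 3600) / compute_cost(128, 3600)))  # 24x
```

An hour of compute is cheap either way, but the 24x multiplier adds up fast when you scale the request volume to the millions from the earlier example.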

With its built-in queue system, Laravel encourages you to run heavy work in the background. If you follow this advice, your HTTP lambda will stay light and won't require many resources.

However, the CLI lambda is what you really need to watch out for. If one queued job only requires 128MB of memory while another requires 3GB, it doesn't make sense to run the CLI lambda with high memory all the time. Instead, consider splitting the heavy work into multiple jobs: rather than running it inside a single lambda invocation that requires a lot of resources, you dispatch a chain of multiple jobs and Vapor invokes the lambda once per job.

This works in most cases. However, at some point running many invocations of a small lambda will cost you more than running a single invocation of one large lambda. Once you notice this, you should consider moving that part of your app into its own app (a microservice).

Given that you can create unlimited projects on Vapor, splitting your project into multiple smaller projects when it makes sense won't add any cost.

Another thing you can do is create a new Vapor environment configured to handle a specific queue, and push all heavy work to that queue. You can then give this environment high resources while keeping your default environment on modest ones.
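A sketch of what that could look like in vapor.yml. The project id, name, environment names, and memory figures below are placeholders, and you should verify the exact option names against the Vapor documentation for your version:

```yaml
# vapor.yml sketch; ids, names, and memory figures are placeholders
id: 12345
name: my-app
environments:
    production:
        memory: 512
        cli-memory: 512
        queues:
            - default
    reporting:
        cli-memory: 3008
        queues:
            - reports
```

Jobs dispatched with `->onQueue('reports')` would then be processed by the high-memory `reporting` environment, while everything else stays on the cheaper defaults.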

Regulate communication with external services

You pay for the execution time of your function even when it's just sitting there waiting for a Guzzle request to finish. There's not much you can do about it if your app really has to make that HTTP request, but at least set a reasonable timeout so a slow upstream doesn't make your function wait forever, time out, and grow your bill.

Configure robust error handling for your queues

Some of your queued jobs may fail; if you don't set a proper number of retries, Laravel will keep retrying these jobs and increase your running costs. Make sure you have proper error handling in place, such as setting a maximum number of attempts on each job, so jobs aren't retried forever.

Configure the maximum concurrency

Using Laravel Vapor, you can easily set the maximum number of concurrent invocations for your HTTP function. This helps regulate and throttle the number of invocations at any given point in time and reduces the risk of a denial-of-service attack.
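In vapor.yml this is a small per-environment setting. A sketch, assuming a `concurrency` option as described in the Vapor documentation; the environment name and the value of 50 are placeholders, so verify against the docs for your version:

```yaml
# vapor.yml sketch; environment name and limit are placeholders
environments:
    production:
        concurrency: 50   # cap on simultaneous HTTP lambda invocations
```

Requests beyond the cap are throttled by AWS rather than spinning up more containers, which puts a ceiling on both load and cost.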

How Vapor deals with Cold Starts

When Vapor first invokes your lambda, AWS starts a fresh container, downloads your code, and initializes the layers needed to run it. This is quite a bit of work, and depending on the size of your project it could take a few seconds before your code even runs. This is called a "cold start".

For handling HTTP requests, this isn't acceptable. Vapor handles it by allowing you to keep a fixed number of containers warm so they're ready to handle new requests.

Keeping containers warm is actually free; you only pay when your code runs. So the first time Vapor starts a container, you pay only for the duration of the initial container setup plus a few milliseconds of Vapor's own work. After that, the container is up and ready for requests.

To keep the container alive, Vapor hits it with a request every 5 minutes. By then the container is already warm, so you pay for just a few milliseconds of Vapor's work and no container initialization is needed.

For the CLI lambda, Vapor keeps one container warm at all times, since it runs schedule:run every minute. So when no scheduled jobs are running, that container is free to run queued jobs or any manual CLI command you wish to run.

However, if that single container is busy, or your queue fills with jobs and you need more containers, AWS will start new containers to handle the incoming invocations. You'll pay for the container initialization, but then those containers stay warm to process jobs and CLI commands. Once the high demand stops, the containers shut down, and AWS will start new ones if needed in the future.

By Mohamed Said

Hello! I'm a web developer, cyclist, runner, swimmer, and freediver. Nice to meet you! I currently work at Laravel. You can find me on Twitter, Github, and Strava. You can also check my blog.
