Achieving low latency with Amazon CloudFront
Building a web application with high availability and eventual consistency
This was a project I did recently, in the first half of 2021. It was a collaboration with Ascenda Loyalty, where my team was tasked with building a high-performing application using Amazon Web Services (AWS). The application was required to be highly scalable and available while achieving eventual consistency.
It is important to specify eventual consistency rather than strict consistency: by Brewer's Theorem (the CAP Theorem), a distributed system cannot provide both high availability and strict consistency without sacrificing partition tolerance.
The application's main function is to aggregate results from different hotel suppliers and display them to consumers with low latency. Because our application connects to multiple hotel suppliers with different endpoints, whose response times range from 2 seconds to 20 seconds, a key challenge was returning the fastest results to consumers immediately while eventually delivering the best, complete set of results.
Our team explored various ways to fulfil this requirement, considering the services available on AWS as well as design and communication patterns. On the programming side, we aggregate results using asynchronous calls to the multiple suppliers, so that consumers receive the first results right away while the remaining results are aggregated in the background. My teammate has written a nice article on this here, where we also use AWS ElastiCache for Redis.
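The aggregation pattern described above can be sketched with Python's asyncio: fire off all supplier requests concurrently and cache each result as soon as it arrives, so the fastest suppliers reach consumers first while slower ones fill in later. This is only an illustrative sketch: the supplier names, the simulated delays, and the plain dict standing in for ElastiCache for Redis are all assumptions, not our production code.

```python
import asyncio

# Hypothetical suppliers with very different response times
# (stand-ins for the real supplier APIs, which took 2-20 seconds).
async def fetch_supplier(name: str, delay: float) -> dict:
    await asyncio.sleep(delay)  # simulate the supplier's latency
    return {"supplier": name, "hotels": [f"{name}-hotel-{i}" for i in range(2)]}

async def aggregate(cache: dict) -> None:
    tasks = [
        asyncio.create_task(fetch_supplier("fast", 0.1)),
        asyncio.create_task(fetch_supplier("medium", 0.3)),
        asyncio.create_task(fetch_supplier("slow", 0.6)),
    ]
    # Handle each supplier's results as soon as they arrive, caching them
    # so later requests see a progressively more complete result set.
    for finished in asyncio.as_completed(tasks):
        result = await finished
        cache[result["supplier"]] = result["hotels"]
        print(f"cached results from {result['supplier']}")

cache: dict = {}
asyncio.run(aggregate(cache))
```

Consumers polling the cache between supplier completions see partial results immediately, which is exactly the eventual-consistency behaviour we wanted.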
Amazon Web Services (AWS)
On the infrastructure side, our team decided to use Amazon Elastic Container Service (ECS) to meet the scalability requirement and Amazon CloudFront to ensure high availability. Amazon CloudFront also acts as a second layer of caching on top of the results cached in AWS ElastiCache for Redis.
This is our overall AWS architecture diagram.
From the diagram above, it is also clear that Amazon CloudFront provides high availability for our static React JS web application stored in the S3 bucket. Amazon S3 is designed for 99.999999999% (11 nines) data durability because it redundantly stores objects across multiple Availability Zones, so the static site hosted on S3 is unlikely to go down. Furthermore, versioning has been enabled on the S3 bucket containing the static site, guarding against accidental deletions and allowing us to preserve, retrieve, and restore every version of any object.
To ensure high availability and scalability of the application, autoscaling for the ECS service was implemented. This ensures that the backend services are fault tolerant and can handle the traffic load by scaling in and out appropriately (depending on the traffic).
The Application Load Balancer detects whether the Fargate tasks are healthy using the health check endpoint in the backend services. If the health check endpoint is unreachable, it deems the task unhealthy, deregisters it from the target group, terminates it, and launches a replacement. Because we deployed the ECS service across multiple Availability Zones, if one zone becomes unavailable, tasks in the other zones keep the application running, and more tasks are launched automatically depending on the traffic. Through autoscaling, we provision only the capacity needed to handle the current demand, saving costs in the long run.
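Assuming a target-tracking policy on CPU utilisation (a common choice for ECS service autoscaling; the exact policy type is an assumption here), capacity is adjusted roughly in proportion to how far the observed metric is from its target. A simplified sketch of that calculation, ignoring the cooldowns and min/max capacity bounds the real autoscaler also applies:

```python
import math

def desired_task_count(current_tasks: int, current_cpu: float, target_cpu: float) -> int:
    """Approximate the desired task count under target-tracking scaling.

    Target tracking keeps the metric near its target by scaling capacity
    proportionally: desired ~= current * (actual / target). Simplified
    sketch only; real ECS autoscaling also honours cooldowns and bounds.
    """
    if current_cpu <= 0:
        return current_tasks  # no load information; keep capacity unchanged
    return max(1, math.ceil(current_tasks * current_cpu / target_cpu))

# 4 tasks running at 90% CPU with a 50% target scale out to 8 tasks.
print(desired_task_count(4, 90.0, 50.0))
```

The same formula scales in when load drops, which is how the service keeps only the right amount of capacity provisioned.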
We used JMeter to load test our application with 500 users and a ramp-up of 1, pointing at the two different endpoints (the Application Load Balancer and CloudFront). Looking at the results, the requests made to CloudFront performed much better, with roughly an 80% reduction in latency. The error rate was 0% as well, which we attribute to the high availability and autoscaling of AWS ECS.
Though this project was daunting, especially for AWS newcomers like us, it was certainly rewarding to see results like these, which showed that our implementation held up well.
Some feedback provided on our architecture by Ascenda Loyalty:
“Hosting of frontend on S3 -> nice idea. Looks like they’re building a true SPA”
If you are interested in what we have accomplished, you can view a demo of our application here!
Thank you for reading :D