Throttling requests before they hit your application
Submitted by Aditya Patawari (@adityapatawari) on Thursday, 15 March 2018
Most APIs get abused by users, sometimes intentionally, sometimes by mistake. If we throttle requests inside the app, we waste precious resources that should have been used to serve legitimate traffic. In this talk we will figure out a way to throttle traffic before it hits the application.
- How do you throttle?
- Using middleware? But by then the request has already hit your app.
- If you are on AWS, you can use WAF, but its rate-based rules count requests over a five-minute window and match on client IP.
- Let us think of an open and portable alternative. How about Nginx?
- Nginx can rate limit for us. A dumb solution, but it can help avoid spikes. We can rate limit based on IPs, user names and lots of other parameters.
- But we can only put a single limit per parameter. We need it to be smarter.
- Nginx supports Lua. Can we use it?
- Enter OpenResty, the Nginx bundle pre-compiled with Lua, by folks at Cloudflare.
- Let us add Redis to the mix and push user info there.
- More money, more API requests. We will be rich!!
- Each user will get a fixed number of tokens every minute. Once the tokens are finished, they can't use the API anymore.
- Requests get rejected by Nginx itself, so there is no need to bother the app about them.
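The "dumb" per-IP rate limiting above maps to Nginx's stock `limit_req` module. A minimal sketch, assuming the zone name `api_limit`, the rate of 10 requests/second and the upstream address are all placeholders you would tune:

```nginx
http {
    # One shared-memory bucket per client IP: 10 req/s, 10 MB of state.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        listen 80;

        location /api/ {
            # Allow short bursts of 20 extra requests, reject the rest.
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;              # reply 429 instead of the default 503
            proxy_pass http://127.0.0.1:8080;  # placeholder upstream
        }
    }
}
```

Swapping `$binary_remote_addr` for another variable (a user-name cookie, an API-key header) gives per-parameter limits, but still only one fixed rate per zone, which is exactly the limitation the outline calls out.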
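The OpenResty + Redis token scheme from the outline might be sketched like this. This is an illustration, not the speaker's implementation: it assumes OpenResty with the bundled `lua-resty-redis` library, a Redis instance on `127.0.0.1:6379`, a separate job that refills each user's `tokens:<user>` key every minute, and a hypothetical `X-Api-Key` header identifying the caller:

```nginx
http {
    server {
        listen 80;

        location /api/ {
            access_by_lua_block {
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(100)  -- milliseconds

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    -- Redis is down: failing open here; fail closed if you prefer
                    ngx.log(ngx.ERR, "redis connect failed: ", err)
                    return
                end

                -- Identify the caller; an API-key header is assumed here
                local user = ngx.req.get_headers()["X-Api-Key"] or "anonymous"

                -- Spend one token; DECR returns the value after decrementing.
                -- A refill job (not shown) resets "tokens:<user>" each minute.
                local left = red:decr("tokens:" .. user)
                red:set_keepalive(10000, 100)  -- return connection to the pool

                if left and left < 0 then
                    ngx.status = 429
                    ngx.say("rate limit exceeded")
                    return ngx.exit(429)
                end
            }
            proxy_pass http://127.0.0.1:8080;  # placeholder upstream
        }
    }
}
```

Because the check runs in the access phase, an exhausted user is rejected by Nginx itself and the request never reaches the application, and the per-user token count in Redis is what lets paying users buy bigger quotas.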
Basic knowledge of what an API is and of how Nginx can be used to proxy_pass. No in-depth knowledge is required.
Aditya Patawari is a consultant specializing in cloud management, infrastructure management and container technologies. He has helped several organizations set up and manage their infrastructure.
He has given talks and workshops on containers and related technologies in India and abroad (including Rootconf, FOSDEM, Flock and FUDCon). He is a contributor to the Kubernetes project and the Fedora Project.