Necessary tooling and monitoring for performance-critical applications
Submitted by Manan Bharara (@mananbharara) on Thursday, 2 February 2017
Section: Full talk (40 mins) Technical level: Intermediate
In a competitive market built on trust and confidence, ensuring stable performance is essential. However, with Continuous Delivery and frequent deployments, making sure that nothing breaks becomes a hard problem to solve.
The audience can look forward to an understanding of the performance and error monitoring setup at https://www.otto.de/, a hugely successful e-commerce application in Germany. I plan to share experiences around how the monitoring setup equips developers with enough knowledge to debug and fix critical performance issues before users notice them, and how we used such metrics to create automated alarms that alert project teams to performance problems as well as security threats. The monitoring setup I plan to discuss is built entirely from multiple open-source projects, put together to create a solid and reliable monitoring and alerting framework.
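The proposal does not name the specific tools behind these automated alarms, but the core idea of a metric-driven alert can be sketched in a few lines. The function below is a hypothetical illustration, not the actual otto.de setup: it checks whether the p95 of a batch of response-time samples exceeds a threshold and, if so, produces an alert message that a notification channel could forward to the team.

```python
def check_latency_alert(response_times_ms, threshold_ms=800):
    """Return an alert message if p95 latency exceeds the threshold, else None.

    `response_times_ms` is a batch of request latencies (in milliseconds)
    scraped from a monitoring backend; the threshold is illustrative.
    """
    if not response_times_ms:
        return None
    ordered = sorted(response_times_ms)
    # Index of the 95th percentile, clamped to the last element.
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    p95 = ordered[idx]
    if p95 > threshold_ms:
        return f"ALERT: p95 latency {p95} ms exceeds {threshold_ms} ms"
    return None
```

In a real pipeline this check would run on a schedule against a time-series store, with the alert routed to chat or paging tools rather than returned as a string.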
I also plan to share thoughts on how data mining concepts such as anomaly/outlier detection can be applied to identify potential risks at a granular level with the fastest possible feedback.
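As a minimal sketch of the outlier-detection idea mentioned above (the talk's actual method is not specified here), a simple z-score test flags metric values that sit far from the mean of recent observations. The function name and the 3-standard-deviation cutoff are assumptions for illustration only.

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Return the values lying more than `threshold` standard deviations
    from the mean of the sample -- a basic statistical anomaly check."""
    if len(values) < 2:
        return []
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

A production setup would typically use a rolling window and account for seasonality (e.g. daily traffic patterns), but the principle of scoring each point against recent history is the same.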
I’m a senior developer at ThoughtWorks. I have experience working with a variety of technology stacks and varying infrastructure setups. I place a strong emphasis on ensuring quality and performance through the use of modern tools, preferably open-source. I firmly believe that in a competitive market like e-commerce, stable performance is essential, and that not having proper metrics in place can adversely affect the success of an application.