Understand how to achieve maximum efficiency from your infrastructure using preemption-friendly Docker applications and load-aware container scheduling.
In any deployment, some resources are not optimally used because of the very sensible practice of over-provisioning to handle peak loads. The flip side is that when load is low, many machines run under-utilized. In the cloud you have the option of scaling down, but it is not ideal because of the cost of non-reserved instances and the time taken to scale down or back up. In a data centre, auto-scaling is out of the question.
In this session we discuss how to use Docker to schedule applications when the infrastructure is under-utilized, remove them when load increases, and deal with the problems that come with this. As an example, we describe how we run Hadoop jobs on web nodes while load on the infrastructure is low and de-allocate them when load picks back up, while maintaining the resiliency of the web nodes.
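To make the idea concrete, here is a minimal sketch of a load-aware scheduler using the standard Docker CLI. The `hadoop-worker` image, container name, and load-average watermarks are hypothetical placeholders for illustration, not the actual setup discussed in the session.

```python
#!/usr/bin/env python3
"""Minimal sketch: start a preemptible Hadoop worker container when host
load is low, and preempt it when load rises. Image name, container name,
and thresholds are illustrative assumptions."""
import os
import subprocess
import time

CONTAINER = "preemptible-hadoop-worker"   # hypothetical container name
IMAGE = "hadoop-worker:latest"            # hypothetical image
LOW_WATERMARK = 2.0    # start the worker when 1-min load drops below this
HIGH_WATERMARK = 6.0   # preempt the worker when 1-min load rises above this


def container_running(name):
    """Return True if a container with this name is currently running."""
    out = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name={name}"],
        capture_output=True, text=True)
    return bool(out.stdout.strip())


def start_worker():
    # Run detached; --rm removes the container once it stops.
    subprocess.run(["docker", "run", "-d", "--rm", "--name", CONTAINER, IMAGE])


def preempt_worker():
    # docker stop sends SIGTERM first, giving a preemption-friendly job a
    # grace period to checkpoint or hand back its tasks before SIGKILL.
    subprocess.run(["docker", "stop", CONTAINER])


while True:
    load1, _, _ = os.getloadavg()            # 1-minute load average
    running = container_running(CONTAINER)
    if load1 < LOW_WATERMARK and not running:
        start_worker()
    elif load1 > HIGH_WATERMARK and running:
        preempt_worker()
    time.sleep(30)
```

The key design point is that the scheduled workload must tolerate being stopped at any time: the web nodes always win, and the batch work is treated as strictly opportunistic.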
Basic knowledge of Docker
Currently working as an engineer at Qubole, a Big Data as a Service company, on the HBase as a Service and cluster orchestration team. Previously worked at Directi and HackerRank as a backend engineer.