## About Rootconf 2018 and who should attend
Rootconf is India’s best conference on DevOps, SRE and IT infrastructure. Rootconf attracts systems and operations engineers to share real-world knowledge about building reliable systems.
The 2018 edition is a single-track conference. Day 1 – 10 May – features talks on security. Colin Charles (chief evangelist at Percona), Pukhraj Singh (former national cybersecurity manager at UIDAI), Shamim Reza (open source enthusiast), Alisha Gurung (network engineer at Bhutan Telecom) and Derick Thomas (former network engineer at VSNL and Bharti Airtel) will touch on important aspects of infrastructure, database, network and enterprise security.
Day 2 – 11 May – is filled with case studies and stories about legacy code, immutable infrastructure, root-cause analysis, handling dependencies and monitoring. Talks from Exotel, Kayako, Intuit, Helpshift and DigitalOcean, among others, will help you evaluate DevOps tools and architecture patterns.
If you are a:
- DevOps programmer
- Systems engineer
- VP of engineering
- IT manager
you should attend Rootconf.

Talks and discussions will cover:
- DevSecOps
- Microservices - tooling, architecture, costs and culture
- Mistakes that startups make when planning infrastructure
- Handling technical debt
- How to plan a container strategy for your organization
- Evaluating AWS for scale
- Future of DevOps
Rootconf is a conference for practitioners, by practitioners.
The call for proposals is closed. If you are interested in speaking at Rootconf events in 2018, submit a proposal here: rootconf.talkfunnel.com/rootconf-round-the-year-2018/
Venue: NIMHANS Convention Centre, Lakkasandra, Hombegowda Nagar, Bengaluru, Karnataka 560029.
For more information about Rootconf, sponsorships, outstation events, contact firstname.lastname@example.org or call 7676332020.
## Compute Intensive applications on DC/OS
Deep learning needs no introduction these days. As data grows and the patterns hidden in it become more complex, deep learning use cases have surged. All that data and processing demands costly infrastructure for running deep learning models, and the cost matters even more once GPUs enter the picture.
This talk revolves around my learning and experience over the past year building such an infrastructure to meet the expectations of data scientists (especially for deep learning on GPUs), including why, and why not, Kubernetes for our use case.
It covers the how's of creating infinitely scalable infrastructure for data science tasks while keeping an eye on optimizing cost.
- Defining the term “Optimized Infrastructure for Data Science”.
- Identifying factors for optimizing infrastructure for deep learning models.
- The what and why of GPUs.
- Deep learning on Kubernetes – why we rejected the approach.
- Introduction to DC/OS.
- Dynamic GPU usage with DC/OS for running distributed TensorFlow jobs.
- Demo: preparing Docker for our use case.
- Demo: deep learning model training showing dynamic allocation of GPUs using DC/OS.
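To give a flavour of the GPU scheduling the outline above refers to: on DC/OS, workloads are typically submitted to Marathon as app definitions, and Marathon accepts a `gpus` resource field when Mesos GPU support is enabled. The sketch below builds a minimal such definition in Python; the app id, image name, command and resource figures are hypothetical placeholders, not taken from the talk.

```python
# Minimal sketch of a Marathon app definition requesting a GPU on DC/OS.
# All names and numbers are illustrative placeholders.
import json

app_definition = {
    "id": "/deep-learning/tf-trainer",   # hypothetical app id
    "cmd": "python train.py",            # hypothetical training entry point
    "cpus": 4.0,
    "mem": 8192,                         # memory in MiB
    "gpus": 1,                           # Marathon's GPU resource field
    "instances": 1,
    "container": {
        "type": "MESOS",                 # GPUs require the Mesos containerizer
        "docker": {"image": "example/tf-gpu:latest"},  # hypothetical image
    },
}

# Serialize for submission, e.g. `dcos marathon app add app.json`
print(json.dumps(app_definition, indent=2))
```

Scaling such a job up or down is then a matter of changing `instances` (or `gpus`) and resubmitting, which is what makes dynamic GPU allocation across training jobs practical.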
Swapnil has 9+ years of experience and currently works as a Technical Architect (Big Data) at Exadatum Softwares Services Pvt Ltd. Before Exadatum, he worked at Snapdeal as a Lead Engineer and at Schlumberger as a Senior Data Engineer.
Swapnil has worked in the BFSI, ad-serving and e-commerce domains, with Hadoop, Spark and GCP as his primary tech stack.
His current area of interest is developing infinitely scalable and optimized infrastructure using Docker and Kubernetes/Mesosphere.
Swapnil has served as a Cloudera Certified trainer for Hadoop Admin and Developer courses. He believes in learning and sharing what he learns with the community, and is a frequent speaker at meetups and conferences, including Anthill Inside, Rootconf, Expert Talks, Dr. Dobb's and an IEEE international conference.