
Rootconf Pune edition

On security, network engineering and distributed systems

Make a submission

Accepting submissions till 21 Aug 2019, 10:30 AM

St. Laurn Hotel, Pune

Tickets


## About Rootconf Pune:

Rootconf Pune is a conference for:

  1. DevOps engineers
  2. Site Reliability Engineers (SRE)
  3. Security and DevSecOps professionals
  4. Software engineers
  5. Network engineers

The Pune edition will cover talks on:

  1. InfoSec and application security for DevOps programmers
  2. DNS and TLS 1.3
  3. SRE and distributed systems
  4. Containers and scaling

Speakers from Flipkart, Hotstar, Red Hat, Trusting Social, Appsecco and InfraCloud Technologies, among others, will share case studies from their experience of building security, SRE and DevOps practices in their organizations.

## Workshops:

Two workshops will be held, one the day before and one the day after Rootconf Pune:

  1. Full-day Prometheus training workshop on 20 September, conducted by Goutham V, contributor to Prometheus and developer at Grafana Labs. Details about the workshop are available here: https://hasgeek.com/rootconf/2019-prometheus-training-pune/
  2. Full-day DNS deep dive workshop on 22 September by Ashwin Murali: https://hasgeek.com/rootconf/2019-dns-deep-dive-workshop-pune/

## Event venue:
Rootconf Pune will be held on 21 September at St. Laurn Hotel, Koregaon Park, Pune-411001.

## Sponsors:

Click here to view the Sponsorship Deck.
Email sales@hasgeek.com for bulk ticket purchases and for sponsoring the Rootconf series.


Rootconf Pune 2019 sponsors:


### Platinum Sponsor

CloudCover

### Bronze Sponsors

UpCloud, Sumo Logic, Trusting Social

### Community Partners

Shreshta IT, Hotstar

## To know more about Rootconf, check out the following resources:

  1. hasgeek.com/rootconf
  2. hasgeek.com/rootconf/2019
  3. https://hasgeek.tv/rootconf/2019

For information about the event, tickets (bulk discounts automatically apply on 5+ and 10+ tickets) and speaking, call Rootconf on 7676332020 or write to info@hasgeek.com.

Hosted by

Rootconf is a community-funded platform for activities and discussions on Site Reliability Engineering (SRE), infrastructure costs (including cloud costs and their optimization), and security (including cloud security).

Abhishek A Amralkar

@aamralkar

Running Spark on Kubernetes

Submitted Jul 10, 2019

Apache Spark is an essential tool for data scientists, offering a robust platform for a variety of applications ranging from large-scale data transformation to analytics to machine learning.

Each time a data scientist brings a new application or model, it uses a different set of libraries and dependencies. We use a standalone, self-managed Spark cluster, so distributing these dependencies across the cluster every time is becoming difficult. Running multiple jobs in parallel is also tricky because of conflicting dependencies.

Data scientists and ML engineers are now adopting container-based applications to improve their workflows, packaging dependencies and creating reproducible artefacts.
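That packaging step could be sketched roughly as follows (a hypothetical example: the image name, registry and file layout are placeholders, and the pinned dependency versions are illustrative only). Baking a PySpark job and its libraries into one image gives every run an identical, reproducible environment:

```shell
# Hypothetical sketch: assumes Docker is installed; all names are placeholders.
cat > Dockerfile <<'EOF'
FROM python:3.7-slim
# Pin the job's dependencies so every run uses the same versions
RUN pip install pyspark==2.4.3 numpy pandas
COPY my_job.py /app/my_job.py
ENTRYPOINT ["python", "/app/my_job.py"]
EOF

docker build -t example.com/my-spark-job:1.0 .
docker push example.com/my-spark-job:1.0   # any cluster node can now pull it
```

Once the image is in a registry, the dependency-distribution problem described above disappears: the cluster pulls the image instead of the team copying libraries onto each node.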

We are living in the container deployment era. Containers make it easy to bundle your application with all its dependencies and run it on any cloud or on-premise. But containers are ephemeral, which means they can be killed at any time; when you run your application in containers, you need to make sure there is no downtime and that replacement containers restart on their own.

That is where a tool like Kubernetes comes in, playing an important role in managing containers with zero downtime. Kubernetes can take care of scaling requirements, failover, deployment patterns, and more.

Kubernetes is one of the fastest-growing and most widely adopted technologies in the DevOps universe.
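As a minimal sketch of that self-healing behaviour (hypothetical names throughout: assumes `kubectl` is configured against a reachable cluster, and `example.com/spark-app:1.0` is a placeholder image), a Deployment asks Kubernetes to keep a fixed number of replicas alive and to replace any container that dies:

```shell
# Hypothetical sketch: requires kubectl configured against a live cluster.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-app
spec:
  replicas: 2                 # Kubernetes keeps two pods running at all times
  selector:
    matchLabels:
      app: spark-app
  template:
    metadata:
      labels:
        app: spark-app
    spec:
      containers:
      - name: spark-app
        image: example.com/spark-app:1.0   # placeholder image
EOF

# Simulate a failure: the Deployment controller starts a replacement pod
kubectl delete pod -l app=spark-app
kubectl get pods -l app=spark-app
```

This is the property the paragraph above relies on: the operator declares the desired state, and Kubernetes restarts or reschedules containers to maintain it.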

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

Outline

We will go through why organizations should consider running Spark jobs on Kubernetes rather than on Spark's built-in standalone resource manager:

  • Containerization

  • Better resource utilization

  • Overcoming the standalone scheduler's limitations

We will also walk through a demo of running a basic Spark job and monitoring it.
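The submission step of such a demo could look roughly like this (a sketch only: the API server address, registry and jar path are placeholders, and it assumes a local Spark distribution with Kubernetes support, available since Spark 2.3). `spark-submit` points at the Kubernetes API server, which then launches the driver and executors as pods:

```shell
# Sketch only: <k8s-apiserver> and <your-registry> are placeholders;
# requires a Spark distribution locally and a reachable Kubernetes cluster.
./bin/spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<your-registry>/spark:2.4.3 \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.3.jar
```

The driver pod requests executor pods from the API server, so `kubectl get pods` shows them appear while the job runs and terminate when it finishes, which is also where pod-level monitoring hooks in.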

Requirements

NIL

Speaker bio

Sandesh Deshmane
Big Data Architect
Talentica Software
https://www.linkedin.com/in/sandesh-deshmane-79997718/

AND

Abhishek leads the Cloud Infrastructure / DevSecOps team at Talentica Software, where he designs the next generation of cloud infrastructure in a cost-effective and reliable manner without compromising on infrastructure and application security. He has experience working across various technology domains such as data center security, cloud operations, cloud automation, infrastructure tooling and cloud security.
His current focus is on Security Operations and Clojure.

Slides

https://docs.google.com/presentation/d/1u5EQ4Z6Z9CTdxuRD1JMP2OJW15B5T_E1cJp9APxsYXs/edit?usp=sharing

