Rootconf is HasGeek’s annual conference – and now a growing community – around DevOps, systems engineering, DevSecOps, security and cloud. The annual Rootconf conference takes place in May each year, with the exception of 2019, when the conference will be held in June.
Besides the annual conference, we also run meetups, one-off public lectures, debates and open houses on DevOps, systems engineering, distributed systems, legacy infrastructure, and topics related to Rootconf.
This is the place to submit proposals for your work, and get them peer reviewed by practitioners from the community.
Topics for submission:
We seek proposals – for short and long talks, as well as workshops and tutorials – on the following topics:
- Case studies of the shift from batch processing to stream processing
- Real-life examples of service discovery
- Case studies on the move from a monolith to service-oriented architecture
- Network security
- Monitoring, logging and alerting – running small-scale and large-scale systems
- Cloud architecture – implementations and lessons learned
- Optimizing infrastructure
- Immutable infrastructure
- Aligning people and teams with infrastructure at scale
- Security for infrastructure
If you have questions or queries, write to us at firstname.lastname@example.org
Serverless Spark on Kubernetes
In a world of serverless computing, users tend to be frugal about expenditure on compute, storage and other resources, and paying for these when they aren’t in use becomes a significant factor. Offering Spark as a service on the cloud presents unique challenges. Apache Spark has evolved from deployment on bare-metal machines, to running in containers, to a serverless offering that benefits users in terms of ease of use and cost while still providing the same experience of using Spark. The purpose of this talk is to discuss the requirements of data scientists and how they want to use Apache Spark. This talk covers the challenges involved in providing serverless Spark clusters, and shares the specific issues one can encounter when running a large number of machines with Docker containers in production. It will also cover the hurdles for Spark with Function-as-a-Service offerings, and how we can overcome them by running Spark on Kubernetes while still achieving the goal of running Spark as serverless.
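For context on the approach described above, here is a minimal sketch of submitting a Spark application to a Kubernetes cluster using Spark’s native Kubernetes scheduler backend (available since Spark 2.3). The API server address, container image and application jar below are illustrative placeholders, not specifics from the talk:

```shell
# Submit a Spark job to Kubernetes in cluster mode.
# <k8s-apiserver>, <registry>/spark:<tag> and the jar path are placeholders.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<registry>/spark:<tag> \
  local:///opt/spark/examples/jars/spark-examples.jar
```

With this setup the driver runs as a Kubernetes pod and requests executor pods on demand; when the job finishes, the executor pods are torn down, which is what makes a pay-per-use, serverless-style model possible.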
My talk will be based on my blog post: https://medium.com/@rachit1arora/why-run-spark-on-kubernetes-51c0ccb39c9b
I will cover an introduction, so no prior knowledge is required.
Rachit Arora is a Senior Architect at IBM India Software Labs. He is a key designer of IBM’s cloud offerings for the Hadoop ecosystem. He has extensive experience in architecture, design and agile development. Rachit is an expert in application development on cloud architectures, and in development using Hadoop and its ecosystem. He is also writing a book on big data analytics, which will be published in 2019.
Rachit has been an active speaker on big data technologies at various conferences, including ContainerCon NA 2016, Container Camp Sydney 2017, Microxchg Berlin 2018 and DataWorks Summit 2018.