Rootconf is India’s principal conference where systems and operations engineers share real-world knowledge about building resilient and scalable systems.
We are now accepting submissions for our next edition, which will take place in Bangalore on 14-15 April 2016.
The theme for this edition is learning from failure. We are keen to explore how DevOps practitioners think about failure when designing, building and scaling their systems. We invite presentations related to failure in database systems, servers and network infrastructure.
We encourage presentations that relate to failure not only in terms of avoidance but also in terms of mitigation and education. How do we decide which parts of our systems cannot fail? What measures do we take to mitigate failure when it does inevitably happen? And most importantly: what lessons can be learned from failure?
This year’s edition spans two days of hands-on workshops and conference. We are inviting proposals for:
- Full-length 40-minute talks.
- Crisp 15-minute talks.
- Sponsored sessions of 15 minutes (limited slots available; subject to editorial scrutiny and approval).
- Hands-on workshop sessions of 3 or 6 hours' duration.
Proposals will be filtered and shortlisted by an Editorial Panel. We urge you to add links to videos / slide decks when submitting proposals. This will help us understand your past speaking experience. Blurbs or blog posts covering the relevance of a particular problem statement and how it is tackled will help the Editorial Panel better judge your proposals.
We expect you to submit an outline of your proposed talk – as a mind map, a text document or draft slides – within two weeks of submitting your proposal.
We will notify you about the status of your proposal within three weeks of submission.
Selected speakers must participate in one to two rounds of rehearsals before the conference. This is mandatory and helps you prepare well for the conference.
There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to accommodate anyone exceptional. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and made available to speakers delivering full sessions (40 minutes or longer).
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
- Paper submission deadline: 31 January 2016
- Schedule announcement: 29 February 2016
- Conference dates: 14-15 April 2016
Rootconf will be held at the MLR Convention Centre, J P Nagar.
For more information about speaking proposals, tickets and sponsorships, contact firstname.lastname@example.org or call +91-7676332020.
Handling logs, events and metrics using Heka
- Intended to benefit folks building and operating distributed systems
- Goal is to use one (clean, consistent and fast) pipeline for collecting data (logs, events, metrics) instead of using a mishmash of different technologies and tools (such as StatsD, Graphite, logstash, etc.)
For any reasonably large distributed/SOA system, good monitoring is a must for smooth operations. Services emit different kinds of data for diagnosability and instrumentation: logs, events and metrics. The semantics of these three types of data are fundamentally different. For example, events must be delivered with millisecond latencies and not a single event can be dropped. Metrics are higher in volume, can tolerate occasional loss, and can be aggregated at slightly larger (say, one-minute) intervals. Log collection can tolerate a longer latency still (~10 minutes); log volume is typically huge, and logs should ideally be indexed and archived.
Typically, complex distributed systems use a combination of tools and architectures to collect these different kinds of data. For example, StatsD (with Graphite or InfluxDB) is used for metrics collection; Logstash takes care of logs, which are piped to Kibana; and custom solutions (typically built on distributed queues) handle events.
We, at Exotel, have built a single pipeline for data collection using Heka, an incredibly powerful and versatile data collection and processing framework developed by Mozilla. Using a pipeline built on top of Heka, we collect all three types of data in a consistent way. We have written a library (currently in Go) which any service uses for logging, eventing and instrumentation. The goal of this talk is to explain our data pipeline architecture, in the hope that it is useful for others building and operating distributed systems.
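To illustrate the idea of one client library feeding one pipeline, here is a minimal sketch. This is a hypothetical API invented for this note – not Exotel's actual library – and the channel stands in for whatever transport (e.g. a Heka TCP input) carries messages downstream:

```go
package main

import (
	"fmt"
	"time"
)

// Message is a single envelope for all three data classes, in the
// spirit of Heka's unified message format (a type plus free-form fields).
type Message struct {
	Type      string // "log", "event" or "metric"
	Timestamp time.Time
	Payload   string
	Fields    map[string]interface{}
}

// Client is a hypothetical stand-in for a unified instrumentation
// library: one API, three entry points, one output pipeline.
type Client struct {
	Out chan Message
}

func NewClient(buffer int) *Client {
	return &Client{Out: make(chan Message, buffer)}
}

func (c *Client) Log(line string) { c.emit("log", line, nil) }

func (c *Client) Event(name string, fields map[string]interface{}) {
	c.emit("event", name, fields)
}

func (c *Client) Metric(name string, value float64) {
	c.emit("metric", name, map[string]interface{}{"value": value})
}

func (c *Client) emit(typ, payload string, fields map[string]interface{}) {
	c.Out <- Message{Type: typ, Timestamp: time.Now(), Payload: payload, Fields: fields}
}

func main() {
	c := NewClient(8)
	c.Log("service started")
	c.Event("call.connected", map[string]interface{}{"sid": "abc123"})
	c.Metric("call.setup_ms", 42)
	close(c.Out)
	for m := range c.Out {
		fmt.Println(m.Type, m.Payload)
	}
}
```

The point of the sketch is the shape, not the code: because every record shares one envelope, a single downstream pipeline can route each message by its type rather than maintaining three separate collection stacks.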
Basic knowledge of StatsD, Graphite, Kafka and Logstash
I am the co-founder and CTO of Exotel. Prior to Exotel, I was with Microsoft. I love distributed systems – building them, scaling them, making them robust and performant, and, most importantly, keeping them maintainable.