Rootconf 2016

Rootconf is India’s principal conference where systems and operations engineers share real-world knowledge about building resilient and scalable systems.

We are now accepting submissions for our next edition, which will take place in Bangalore on 14-15 April 2016.

Theme

The theme for this edition will be learning from failure. We are keen to explore how DevOps practitioners think about failure when designing, building and scaling their systems. We invite presentations related to failure in database systems, servers and network infrastructure.

We encourage presentations that relate to failure not only in terms of avoidance but also in terms of mitigation and education. How do we decide which parts of our systems cannot fail? What measures do we take to mitigate failure when it does inevitably happen? And most importantly: what lessons can be learned from failure?

Format

This year’s edition spans two days of hands-on workshops and conference sessions. We are inviting proposals for:

  • Full-length 40-minute talks.
  • Crisp 15-minute talks.
  • Sponsored sessions, 15-minute duration (limited slots available; subject to editorial scrutiny and approval).
  • Hands-on workshop sessions, 3-hour and 6-hour durations.

Selection process

Proposals will be filtered and shortlisted by an Editorial Panel. We urge you to add links to videos and slide decks when submitting proposals; these will help us understand your past speaking experience. Blurbs or blog posts covering the relevance of a particular problem statement and how it is tackled will help the Editorial Panel judge your proposals better.

We expect you to submit an outline of your proposed talk – in the form of a mind map, a text document, or draft slides – within two weeks of submitting your proposal.

We will notify you about the status of your proposal within three weeks of submission.

Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare well for the conference.

There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to cover costs for anyone exceptional. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and available only to speakers delivering full sessions (40 minutes or longer).

Commitment to open source

HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.

Key dates and deadlines

  • Paper submission deadline: 31 January 2016
  • Schedule announcement: 29 February 2016
  • Conference dates: 14-15 April 2016

Venue

Rootconf will be held at the MLR Convention Centre, J P Nagar.

Contact

For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.


Hosted by

Rootconf is a forum for discussions about DevOps, infrastructure management, IT operations, systems engineering, SRE and security (from an infrastructure defence perspective).

Harsh

@mindfuck

Internals of Hadoop, Hive and HBase, and how we made them scalable and highly available

Submitted Jan 20, 2016

This talk discusses the internal architecture of Hadoop (HDFS), HBase and Hive. I will also discuss how we designed our data in Hive and HBase based on our needs, what problems we faced in our production cluster, and how we made it scalable and highly available.

Outline

In this talk, I will discuss my experience with Hadoop, Hive and HBase. I will first talk about the HDFS architecture and its internals (HDFS blocks, I/O operations during reads and writes, etc.), and cover the basic workflow of MapReduce with the YARN architecture.
For Hive, I will discuss the Hive workflow, how the execution engine runs a DAG of stages (MapReduce jobs), and our use case for Hive.
Then I will discuss HBase: its complete architecture (block cache, MemStore, etc.), its internal flow, and how it is linked with HDFS. I will also talk about how we designed the HBase row key for our use case; a sketch of this idea follows below. Finally, I will discuss scaling and high availability of our production cluster.
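
As a concrete illustration of the row-key point above, here is a minimal sketch of one common HBase key design: salting the key so that monotonically increasing natural keys (such as timestamps) spread across regions instead of hammering a single one. The bucket count, key format and class name below are illustrative assumptions, not the actual design from the speaker’s cluster.

    import java.nio.charset.StandardCharsets;

    public final class SaltedRowKey {
        // Assumed bucket count; in practice this is often matched to the
        // number of pre-split regions in the table.
        private static final int SALT_BUCKETS = 16;

        // Prefix the natural key with a one-byte salt derived from its hash,
        // so sequential writes are spread across SALT_BUCKETS regions.
        static byte[] build(String naturalKey) {
            byte[] key = naturalKey.getBytes(StandardCharsets.UTF_8);
            byte salt = (byte) ((naturalKey.hashCode() & 0x7fffffff) % SALT_BUCKETS);
            byte[] salted = new byte[key.length + 1];
            salted[0] = salt;
            System.arraycopy(key, 0, salted, 1, key.length);
            return salted;
        }

        public static void main(String[] args) {
            // Hypothetical natural key: entity id plus event timestamp.
            byte[] rowKey = build("user42|2016-01-20T10:00:00Z");
            System.out.println("salt bucket: " + rowKey[0]);
        }
    }

The trade-off is standard: salting evens out write load, but scans must fan out across all buckets, so it suits write-heavy, point-read workloads better than range-scan-heavy ones.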

Requirements

Basic knowledge of file system internals.

Speaker bio

I am Harsh. I worked for 1.6 years at Directi (Media.net) as a DevOps Engineer, and I am currently working at LinkedIn as a Site Reliability Engineer. At Directi, I worked primarily on Hadoop technology and learned its internals.
https://www.linkedin.com/in/sharmaharsh1

