About Rootconf 2019:
The seventh edition of Rootconf is a two-track conference with:
- Security talks and tutorials in audi 1 and 2 on 21 June.
- Talks on DevOps, distributed systems and SRE in audi 1 and audi 2 on 22 June.
Topics and schedule:
View full schedule here: https://hasgeek.com/rootconf/2019/schedule
Rootconf 2019 includes talks and Birds of a Feather (BOF) sessions on:
- OSINT and its applications
- Key management, encryption and its costs
- Running a bug bounty programme in your organization
- PolarDB architecture as Cloud Native Architecture, developed by Alibaba Cloud
- SRE and running distributed teams
- Routing security
- Log analytics
- Enabling SRE via automated feedback loops
- TOR for DevOps
Who should attend Rootconf?
- DevOps programmers
- DevOps leads
- Systems engineers
- Infrastructure security professionals and experts
- DevSecOps teams
- Cloud service providers
- Companies with heavy cloud usage
- Providers of the pieces on which an organization’s IT infrastructure runs – monitoring, log management, alerting, etc.
- Organizations dealing with large network systems where data must be protected
- VPs of engineering
- Engineering managers looking to optimize infrastructure and teams
For information about Rootconf and bulk ticket purchases, contact firstname.lastname@example.org or call 7676332020. Only community sponsorships are available.
Rootconf 2019 sponsors:
A Beast to Process Kafka Events
Building an event processing library comes with its own baggage.
Zero data loss takes the highest priority, followed by performance and scalability. The tool scales to millions of messages per minute through architecture rather than language choice, while staying generic enough to be deployed for any schema or table with a configuration change.
This talk walks you through the journey of building this library and the lessons learned along the way.
We built our own event processing library to consume events from Kafka and push them to BigQuery. All of our microservices are event sourced. We handle a high load of 21K messages/second on a few topics, across hundreds of topics.
In this talk, we will cover:
- Why we built our custom event processing tool, Beast
- Customising code for each input/output combination, and the old way of deployment
- Limitations of existing systems for our use case
- Ensuring no data loss
- How we tested the application for data loss
- How we monitor for data loss in BigQuery
- How we achieved performance that handles high throughput with acceptable latency
- Architecture (processing with queues), and why we didn’t pick Redis
- Why we couldn’t use the Go language
- How we achieved scalability using Kubernetes
- Load testing
- Chaos testing
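The "no data loss" and "processing with queues" points above can be sketched together: a consumer thread hands batches to a bounded in-memory queue, a sink worker pushes them onward, and the offset is committed only after a successful push. This is a hypothetical simplification under assumed names, not the actual Beast code; Kafka and BigQuery are replaced with in-memory stand-ins.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of a queue-based, commit-after-write pipeline (assumption, not Beast's API).
// A crash before the push replays the uncommitted batch instead of losing it.
public class PipelineSketch {
    // Stand-in for a Kafka record batch: the payloads plus the last offset in the batch.
    record Batch(List<String> messages, long lastOffset) {}

    private final BlockingQueue<Batch> queue;             // bounded: provides backpressure
    private long committedOffset = -1;                    // advanced only after a push
    private final List<String> sink = new ArrayList<>();  // stand-in for the BigQuery table

    public PipelineSketch(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Consumer side: blocks when the queue is full instead of dropping data.
    public void consume(Batch batch) {
        try {
            queue.put(batch);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }

    // Sink side: push first, commit after; a failed push never commits its offset.
    public void drainOne() {
        try {
            Batch batch = queue.take();
            sink.addAll(batch.messages());        // a real sink would insert into BigQuery here
            committedOffset = batch.lastOffset(); // safe to commit only now
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }

    public long committedOffset() { return committedOffset; }
    public List<String> sinkContents() { return List.copyOf(sink); }
}
```

The bounded queue is what lets the consumer and sink run at different speeds without unbounded memory growth, which is one reason an external broker such as Redis is not strictly needed between the two stages.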
Enhancements (ease of deployment):
- A parser to generate config from proto
- Auto-updating the table schema for new fields in the proto
These learnings are generic, irrespective of language.
The following will make the session more effective:
- A basic understanding of Kafka or pub/sub tools
- A basic use case for BigQuery
- Basics of building applications in Java
Dinesh Kumar is a software developer passionate about building products for impact. He works at Gojek, handling backend services that serve millions of users. He is a Go enthusiast and an active volunteer and co-organiser in the Go community. Artist at times.