About the event
Cloud server management brings as many challenges as it offers conveniences. It is time to unpack questions about:
- Resource allocation: how best to allocate manpower, time, money and infrastructure capacity?
- Scaling: how best to utilize capacity in the present, and factors involved in planning for the future?
- Security: which scenarios must you plan for, and how best to secure your data, applications and systems?
Who should submit a talk
If you:
- work with cloud servers,
- plan and manage infrastructure, or
- make decisions on technology and architecture for your organization,
submit a talk for any of the three events in this series.
Each event is single-day, with about 4-5 short and long talks, 2-3 demos, one BOF, and a three-hour workshop on configuration management.
We are accepting proposals for:
- 30-minute talks covering conceptual topics and case studies.
- Crisp 15-minute talks on new tools and techniques in cloud server management.
- 5-10 minute demos.
- Birds of a Feather (BOF) sessions, led by 1-3 people from the community, on a relevant topic.
- 3-hour hands-on workshops on configuration management.
Proposals will be shortlisted and reviewed by an editorial team consisting of practitioners from the community. Make sure your abstract contains the following information:
- Key insights you will present, or takeaways for the audience.
- Overall flow of the content.
You must submit links to videos of talks you have delivered in the past, or record and upload a two-minute video explaining what your talk is about and why it is relevant for this event.
Also consider submitting links to:
- A detailed outline, or
- A mind map explaining the structure of the talk, or
- Draft slides
along with your proposal.
Honorarium for selected speakers; travel grants
Selected speakers and workshop instructors will receive an honorarium of Rs. 3,000 each, at the end of their talk. Confirmed speakers and instructors also get a pass to the conference and networking dinner. We do not provide free passes for speakers’ colleagues and spouses.
Travel grants are available for domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans.
If you require a grant, request it when you submit your proposal in the field where you add your location. Rootconf Miniconf is funded through ticket purchases and sponsorships; travel grant budgets vary.
Cloud Server Management Miniconf in Chennai: 25 November, 2017
Cloud Server Management Miniconf in Mumbai: 8 December, 2017
Cloud Server Management Miniconf in Delhi: 9 December, 2017
For more information about speaking, Rootconf, the Miniconf series, sponsorships, or tickets, contact email@example.com or call 7676332020.
Building and scaling a log analytics platform - a serverless approach
Serverless architectures have been around for the past few years, and there has been considerable skepticism surrounding them. Some might argue that serverless is just another marketing buzzword, but it offers more than that. In this talk we will discuss what serverless is, when to use it (and when not to), and how Amazon Web Services can be used to implement a real-time, production-grade serverless logging pipeline. By the end of the talk, the audience will have an introduction to serverless and will know how to design, deploy, and scale infrastructure using it.
As a Product Engineer who uses serverless functions in the products I build, I have experienced first-hand how serverless architectures can be leveraged to design resource-efficient, highly scalable infrastructure. Our infrastructure provider, AWS, offers a range of FaaS components, from the popular Lambda functions to managed services like Athena, Kinesis Firehose, and QuickSight. In this talk, after introducing serverless, we will walk through how we use these services in production to optimize resources and reduce maintenance time. By the end of the talk we will have implemented an end-to-end logging pipeline and brought the generated sample logs to the presentation tier for business insights. The system we set up can scale to handle thousands of microservices and billions of log messages.
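A minimal sketch of the ingestion side of such a pipeline, assuming log events arrive as Python dicts and are forwarded to Kinesis Firehose. The record format and the 500-records-per-call batch limit follow the Firehose `put_record_batch` API; the function names and event fields are illustrative, not details from the talk.

```python
import json

# Kinesis Firehose accepts at most 500 records per put_record_batch call.
FIREHOSE_MAX_RECORDS = 500

def to_firehose_record(event: dict) -> bytes:
    """Serialize one log event as a newline-terminated JSON record,
    so downstream consumers (S3 / Athena) see newline-delimited JSON."""
    return (json.dumps(event, separators=(",", ":")) + "\n").encode()

def batch_records(events, max_records=FIREHOSE_MAX_RECORDS):
    """Yield lists of Firehose records no larger than the API batch limit."""
    batch = []
    for event in events:
        batch.append({"Data": to_firehose_record(event)})
        if len(batch) == max_records:
            yield batch
            batch = []
    if batch:
        yield batch

# In a real deployment a Lambda handler would forward each batch, e.g.:
#   firehose = boto3.client("firehose")
#   for batch in batch_records(events):
#       firehose.put_record_batch(DeliveryStreamName=STREAM, Records=batch)
```

The batching helper is kept free of AWS calls so it can be unit-tested locally; only the commented-out forwarding step touches the network.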
- What our microservices look like
- Architecting a logging framework
- Requirements for the framework
- Metrics we needed
- The conventional approach
- Kafka, cold storage, ELK
- Problems we faced
- Going serverless
- What is serverless?
- The FaaS logging architecture
- AWS Athena
- AWS Lambda
- Kinesis Firehose
- Kinesis Analytics
- AWS QuickSight
- AWS S3
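One way the outline's Athena step might look in practice: a hedged sketch, assuming the Firehose delivery stream writes newline-delimited JSON to S3. The bucket, database, column names, and query below are illustrative assumptions, not details from the talk.

```python
# Athena can define an external table directly over the Firehose output
# prefix in S3, using the OpenX JSON SerDe to parse one JSON log per line.
# All names here (bucket, columns, database) are hypothetical.
ATHENA_LOGS_DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS logs (
  ts      string,
  level   string,
  service string,
  message string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-log-bucket/firehose-output/'
""".strip()

# An example business-insight query over the ingested logs.
ERROR_RATE_QUERY = """
SELECT service, count(*) AS errors
FROM logs
WHERE level = 'ERROR'
GROUP BY service
ORDER BY errors DESC
""".strip()

# A real deployment would submit these with boto3, e.g.:
#   athena = boto3.client("athena")
#   athena.start_query_execution(
#       QueryString=ERROR_RATE_QUERY,
#       QueryExecutionContext={"Database": "logs_db"},
#       ResultConfiguration={"OutputLocation": "s3://example-log-bucket/athena-results/"},
#   )
```

Because Athena queries data in place on S3, there is no cluster to run or scale, which is the low-maintenance property the abstract highlights.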
Naren is a Product Engineer focused on building robust backends and scalable systems. He works on open source projects in his spare time, loves speaking at tech conferences, and is currently helping MadStreetDen scale their Artificial Intelligence products. In his four years of industry experience he has worn plenty of hats, like those of a Trainer, an Embedded Engineer, and a Backend/Product Engineer, and sometimes even helmets when he's out cycling.
When he’s not stirring up code, you can find him whipping up a delicious gluten-free treat or travelling/cycling.
- Website: http://www.dudewho.codes
- Previous talks: http://blog.dudewho.codes/talks
- Github: https://www.github.com/DudeWhoCode