About the event
Cloud server management brings with it as many challenges as it offers conveniences. It is time to unbundle questions about:
- Resource allocation: how best to allocate manpower, time, money and infrastructure capacity?
- Scaling: how best to utilize capacity in the present, and factors involved in planning for the future?
- Security: which scenarios must you plan for, and how best to secure your data, applications and systems?
Who should submit a talk
If you:
- work with cloud servers,
- plan and manage infrastructure, or
- make decisions on technology and architecture for your organization,
submit a talk for any of the three events in this series.
Each event is single-day, with about 4-5 short and long talks, 2-3 demos, one BOF, and a three-hour workshop on configuration management.
We are accepting proposals for:
- 30-minute talks – which cover conceptual topics and case studies.
- Crisp 15-minute talks – on new tools and techniques in cloud server management.
- 5-10 min demos.
- Birds of a Feather (BOF) sessions, led by 1-3 people from the community, on a relevant topic.
- 3-hour hands-on workshops on configuration management.
Proposals will be shortlisted and reviewed by an editorial team consisting of practitioners from the community. Make sure your abstract contains the following information:
- Key insights you will present, or takeaways for the audience.
- Overall flow of the content.
You must submit links to videos of talks you have delivered in the past, or record and upload a two-minute video explaining what your talk is about and why it is relevant for this event.
Also consider submitting links to:
- A detailed outline, or
- A mindmap explaining the structure of the talk, or
- Draft slides
along with your proposal.
Honorarium for selected speakers; travel grants
Selected speakers and workshop instructors will receive an honorarium of Rs. 3,000 each, at the end of their talk. Confirmed speakers and instructors also get a pass to the conference and networking dinner. We do not provide free passes for speakers’ colleagues and spouses.
Travel grants are available for domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans.
If you require a grant, request it when you submit your proposal in the field where you add your location. Rootconf Miniconf is funded through ticket purchases and sponsorships; travel grant budgets vary.
Cloud Server Management Miniconf in Chennai: 25 November, 2017
Cloud Server Management Miniconf in Mumbai: 8 December, 2017
Cloud Server Management Miniconf in Delhi: 9 December, 2017
For more information about speaking, Rootconf, the Miniconf series, sponsorships or tickets, contact firstname.lastname@example.org or call 7676332020.
Doing Data Science on Cloud
With the increase in data size for running data science (DS) models, it is important to look at infrastructure options that provide enough scalability to run DS algorithms successfully. Optimal use of infrastructure in terms of cost is the need of the hour: for example, running a task on multiple GPUs for a finite amount of time. This talk proposes a discussion around a generic infrastructure.
Almost all cloud vendors (AWS, Google, Microsoft) provide services for this situation. The talk will primarily compare the advantages and disadvantages of such services across providers. It will also look at the various options for running tasks within a particular provider, and discuss MLaaS (machine learning as a service) offerings.
In short, answers to the following questions will be addressed.
For generic infrastructure on the cloud:
- How to support altogether different flavours of DS as well as non-DS jobs with a cloud vendor? For example:
  - Constructing a numpy file
  - Running Spark jobs for transformations
  - Any new hypothetical task on a new technology
- How to work with different versions of languages supported out of the box?
- How to have an auto-scalable infrastructure which is cost-effective?
- How to have a cloud-vendor-independent deployment for your DS jobs?
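One common way to approach the vendor-independence question (a sketch, not necessarily the talk's method) is to keep every cloud-specific setting in environment variables, so the same job script runs unchanged whether the storage URI points at AWS, Google Cloud, or a local disk. The variable names and bucket URIs below are hypothetical.

```python
import os

def load_job_config(env=os.environ):
    """Read cloud-specific settings from the environment so the
    DS job itself stays vendor-neutral."""
    return {
        # e.g. "s3://my-bucket/input" on AWS or "gs://my-bucket/input" on GCP
        "input_uri": env.get("DS_INPUT_URI", "file:///tmp/input"),
        "output_uri": env.get("DS_OUTPUT_URI", "file:///tmp/output"),
        "workers": int(env.get("DS_WORKERS", "2")),
    }

# The same script, pointed at a GCP bucket purely via configuration:
config = load_job_config({"DS_INPUT_URI": "gs://demo-bucket/input",
                          "DS_WORKERS": "8"})
print(config["input_uri"])
print(config["workers"])
```

Deploying then reduces to setting three variables per vendor, rather than branching on provider-specific SDK calls inside the job.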
For MLaaS services:
- How can we install a library which is not pre-installed?
- How to use custom hardware resources?
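For the first MLaaS question, hosted notebook environments generally let you shell out to pip from within the running interpreter. A minimal sketch of that pattern (the package name is just an example):

```python
import subprocess
import sys

def pip_install_command(package):
    """Build the pip invocation for installing a package into the
    current interpreter's environment, as typically done from a
    hosted notebook cell."""
    return [sys.executable, "-m", "pip", "install", "--user", package]

# In a notebook you would actually execute it, e.g.:
# subprocess.check_call(pip_install_command("xgboost"))
print(pip_install_command("xgboost"))
```

Using `sys.executable -m pip` rather than a bare `pip` ensures the package lands in the same environment the notebook kernel is running in.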
- Importance of running DS on the cloud
- Introduction to MLaaS
- Demo: running DS models using TensorFlow and Keras on Google Cloud ML (using GPUs)
- Doing data science using workbenches: Sense.io, Domino Data Lab, Google Datalab
- Demo: running a simple model on Google Datalab
- Demo: predicting an image with the Google Vision API using REST calls
- Discussion on how to develop a generic, infinitely scalable infrastructure: why, what and how?
- Demo: running multiple R jobs to show the auto-scaling feature of the infrastructure
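For the Vision API demo mentioned in the outline, the REST call amounts to POSTing a base64-encoded image to the `images:annotate` endpoint. A sketch of building that request body (the feature choice and API-key handling are illustrative, not taken from the talk):

```python
import base64
import json

VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_annotate_request(image_bytes, feature="LABEL_DETECTION", max_results=5):
    """Build the JSON body for a Vision API images:annotate call:
    one request carrying the base64-encoded image and the features
    we want detected."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": feature, "maxResults": max_results}],
        }]
    }

body = build_annotate_request(b"...raw image bytes here...")
print(json.dumps(body)[:60])
# To send it you would POST to VISION_ENDPOINT + "?key=YOUR_API_KEY"
# (e.g. with requests.post(url, json=body)) and read the
# labelAnnotations list out of the JSON response.
```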
Swapnil currently contributes to the Schlumberger data science team, applying analytics in the field of oil and natural gas. Prior to this, he was part of Snapdeal's real-time analytics team as Lead Engineer, and has also worked as a Cloudera trainer. He believes in learning and sharing his learning with the community, and is a frequent speaker at meetups and an active presenter at conferences.
With more than eight years of experience, Swapnil has contributed to the BFSI, ad serving and eCommerce domains, with Hadoop, Spark and GCP as his primary tech stack.
Past conferences & Meetups:
- Dr. Dobb's Conference, Bangalore: April 11-12, 2014