## About the conference and topics for submitting talks:
The Fifth Elephant is rated as India’s best data conference. It is a conference for practitioners, by practitioners. In 2018, The Fifth Elephant will complete its seventh edition.
The Fifth Elephant is an evolving community of stakeholders invested in data in India. Our goal is to strengthen and grow this community by presenting talks, panels and Off The Record (OTR) sessions that present real insights about:
- Data engineering and architecture: tools, frameworks, infrastructure, architecture, case studies and scaling.
- Data science and machine learning: fundamentals, algorithms, streaming, tools, domain specific and data specific examples, case studies.
- The journey and challenges in building data driven products: design, data insights, visualisation, culture, security, governance and case studies.
- Talks around an emerging domain: such as IoT, finance, e-commerce, payments or data in government.
You should attend and speak at The Fifth Elephant if your work involves:
- Engineering and architecting data pipelines.
- Building ML models, pipelines and architectures.
- ML engineering.
- Analyzing data to build features for existing products.
- Using data to predict outcomes.
- Using data to create / model visualizations.
- Building products with data – either as product managers or as decision scientists.
- Researching concepts and deciding on algorithms for analyzing datasets.
- Mining data with greater speed and efficiency.
- Developer evangelists from organizations which want developers to use their APIs and technologies for machine learning, full stack engineering, and data science.
## Perks for submitting proposals:
Submitting a proposal, especially with our process, is hard work. We appreciate your effort.
We offer each proposer one conference ticket at a discounted price, and a t-shirt.
We only accept one speaker per talk. This is non-negotiable. Workshops may have more than one instructor.
For proposals that mention more than one person as a collaborator, we offer the discounted ticket and t-shirt only to the person with whom the editorial team corresponded directly during the evaluation process.
The Fifth Elephant is a two-day conference with two tracks on each day. Track details will be announced with a draft schedule in February 2018.
We are accepting sessions with the following formats:
- Full talks of 40 minutes.
- Crisp talks of 20 minutes.
- Off the Record (OTR) sessions on focussed topics / questions. An OTR is 60-90 minutes long and typically has up to four facilitators and one moderator.
- Workshops and tutorials of 3-6 hours duration on Machine Learning concepts and tools, full stack data engineering, and data science concepts and tools.
- Pre-events: Birds of a Feather (BOF) sessions, talks, and workshops for open houses and pre-events in Bangalore and other cities between October 2017 and June 2018. Reach out to firstname.lastname@example.org should you be interested in speaking and/or hosting a community event between now and the conference in July 2018.
The first filter for a proposal is whether the technology or solution you are referring to is open source or not. The following criteria apply for closed source talks:
- If the technology or solution is proprietary, and you want to speak about your proprietary solution to make a pitch to the audience, you should pick up a sponsored session. This involves paying for the speaking slot. Write to email@example.com
- If the technology or solution is in the process of being open sourced, we will consider the talk only if the solution is open sourced at least three months before the conference.
- If your solution is closed source, you should consider proposing a talk explaining why you built it in the first place; what options you considered (business-wise and technology-wise) before deciding to develop the solution; or what specific use case left you without existing options and necessitated creating the in-house solution.
The criteria for selecting proposals, in the order of importance, are:
- Key insight or takeaway: what can you share with participants that will help them in their work and in thinking about the ML, big data and data science problem space?
- Structure of the talk and flow of content: a detailed outline – either as mindmap or draft slides or textual description – will help us understand the focus of the talk, and the clarity of your thought process.
- Ability to communicate succinctly, and how you engage with the audience. You must submit a link to a two-minute preview video explaining what your talk is about and what the key takeaway is for the audience.
No one submits the perfect proposal in the first instance. We therefore encourage you to:
- Submit your proposal early so that we have more time to iterate if the proposal has potential.
- Talk to us on our community Slack channel: https://friends.hasgeek.com if you want to discuss an idea for your proposal, and need help / advice on how to structure it. Head over to the link to request an invite and join #fifthel.
Our editorial team helps potential speakers hone their speaking skills, sharpen the focus of their talks, and fine-tune and rehearse content at least twice before the main conference.
## How to submit a proposal (and increase your chances of getting selected):
The following guidelines will help you in submitting a proposal:
- Focus on why, not how. Explain to participants why you made a business or engineering decision, or why you chose a particular approach to solving your problem.
- The journey is more important than the solution alone. We are interested in the journey, not just the outcome. Share as much detail as possible about how you solved the problem; glossing over details does not help participants grasp real insights.
- Focus on what participants from other domains can learn/abstract from your journey / solution. Refer to these talks from The Fifth Elephant 2017, which participants liked most: http://hsgk.in/2uvYKI9 and http://hsgk.in/2ufhbWb
- We do not accept how-to talks unless they demonstrate the latest technology. If you are demonstrating new tech, show enough to motivate participants to explore the technology later. Refer to talks such as these to structure your proposal: http://hsgk.in/2vDpag4 and http://hsgk.in/2varOqt
- Similarly, we don’t accept talks on topics that have already been covered in the previous editions. If you are unsure about whether your proposal falls in this category, drop an email to: firstname.lastname@example.org
- Content that can be read off the internet does not interest us. Our participants are keen to listen to use cases and experience stories that will help them in their practice.
To summarize, we do not accept talks that gloss over details or try to deliver high-level knowledge without covering depth. Talks have to be backed with real insights and experiences for the content to be useful to participants.
## Passes and honorarium for speakers:
We pay an honorarium of Rs. 3,000 to each speaker and workshop instructor at the end of their talk/workshop. Confirmed speakers and instructors also get a pass to the conference and networking dinner. We do not provide free passes for speakers’ colleagues and spouses.
## Travel grants for outstation speakers:
Travel grants are available for international and domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans. If you require a grant, request it when you submit your proposal in the field where you add your location. The Fifth Elephant is funded through ticket purchases and sponsorships; travel grant budgets vary.
## Last date for submitting proposals is: 31 March 2018.
You must submit the following details along with your proposal, or within 10 days of submission:
- Draft slides, mind map or a textual description detailing the structure and content of your talk.
- Link to a self-recorded, two-minute preview video, where you explain what your talk is about, and the key takeaways for participants. This preview video helps conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond the solution you have built, or your use case. Please note that the preview video should be submitted irrespective of whether you have spoken at past editions of The Fifth Elephant.
- If you submit a workshop proposal, you must specify the target audience for your workshop; duration; number of participants you can accommodate; pre-requisites for the workshop; link to GitHub repositories and a document showing the full workshop plan.
For more information about the conference, sponsorships, or any other information contact email@example.com or call 7676332020.
## Using data to make data processing reliable again
Data-driven performance management of Big Data infrastructure is very different from performance management of standard applications like web servers. A single cluster receives multiple simultaneous, discrete applications, each of which can comprise up to hundreds of thousands of tasks of varying complexity. If these jobs are not tuned properly, it is easy either to blow up costs because of an underutilized cluster, or to starve jobs and miss SLAs because of a shortage of resources.
This talk is targeted towards engineers who administer Big Data Clusters and would like to improve the efficiency and utilization of their clusters using a data-driven methodology.
Say you have been storing job characteristics for the SQL queries run on your cluster:
- Query schedule, start and end times
- Number of Map and Reduce tasks
- Cumulative CPU seconds and Memory seconds
- Data scanned, processed, and written
And you also know the layout of the data that forms the input to these queries:
- Column types, shape and range
- Partitioned columns and size of those partitions
- Data serialization format
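The two record types above can be sketched as simple dataclasses. This is a minimal sketch for illustration only; all field names are assumptions, since the proposal does not specify an actual schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class JobStats:
    """One record per SQL query run on the cluster (field names are illustrative)."""
    query_id: str
    start_ts: float           # query start time, epoch seconds
    end_ts: float             # query end time, epoch seconds
    num_map_tasks: int
    num_reduce_tasks: int
    cpu_seconds: float        # cumulative CPU-seconds across all tasks
    memory_mb_seconds: float  # cumulative memory-seconds
    bytes_scanned: int        # data scanned and processed
    bytes_written: int        # data written

@dataclass
class TableLayout:
    """One record per input table (field names are illustrative)."""
    table_name: str
    column_types: Dict[str, str]    # column name -> type (captures shape/range metadata too)
    partition_columns: List[str]
    partition_sizes: Dict[str, int] # partition value -> size in bytes
    serde_format: str               # data serialization format, e.g. "ORC" or "Parquet"
```

Storing these two streams side by side, keyed by query and table, is what makes the join between "how a job behaved" and "what data it read" possible.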
With these two datasets, stored over a period of time, we will try to answer the following questions:
- What do we know about the most expensive jobs running on our cluster?
- Can we identify the most common anti-patterns in our ad hoc workload and take defensive action against those suspect queries?
- Can we identify clusters of tables that are frequently joined together and recommend a better data layout/schema to reduce database load?
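The first of these questions reduces to a ranking over the stored job statistics. A minimal sketch, assuming each job record carries a cumulative `cpu_seconds` field (an illustrative field name, not a schema from the talk):

```python
def top_expensive_jobs(job_stats, n=3):
    """Return the n jobs with the highest cumulative CPU-seconds.

    job_stats: list of dicts, one per query; CPU-seconds is used here
    as the cost proxy, though memory-seconds or bytes scanned would
    work the same way.
    """
    return sorted(job_stats, key=lambda j: j["cpu_seconds"], reverse=True)[:n]

# Example workload with made-up numbers:
jobs = [
    {"query_id": "q1", "cpu_seconds": 120.0},
    {"query_id": "q2", "cpu_seconds": 9800.0},
    {"query_id": "q3", "cpu_seconds": 450.0},
]
print([j["query_id"] for j in top_expensive_jobs(jobs, n=2)])  # ['q2', 'q3']
```

In practice the interesting part is not the sort but choosing the cost metric: CPU-seconds penalizes compute-heavy queries, while bytes scanned surfaces queries that miss partition pruning.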
Although other parameters, such as cluster configuration and cluster resource allocation, also affect a job's performance, we will keep the scope of this talk limited to job statistics and data layout. We will also discuss analysis of only SQL workloads, which form the major percentage of jobs running on Hive, Spark or Presto clusters.
To serve these needs, we built Tenali, Qubole's SQL parser and analyzer, which we intend to open source shortly. Tenali is a collection of scoping rules and heuristics that, given a set of queries and corresponding job characteristics, generates insights to improve job efficiency.
The talk will cover:
- Types of data and how we capture them at Qubole.
- The design of Tenali and its approach to capturing table lineage and data flows.
- Some well-known algorithms and their performance on these datasets.
- Examples of how we use this data to improve the efficiency of our data offerings at Qubole.
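The question about clusters of frequently joined tables can be approached with a simple co-occurrence count over the workload. The sketch below assumes the table names per query have already been extracted (the job a SQL parser like the one described above would perform); the function and data are illustrative, not Tenali's actual implementation.

```python
from collections import Counter
from itertools import combinations

def frequent_join_pairs(query_tables, min_count=2):
    """Count how often each pair of tables appears together in a query.

    query_tables: list of sets, one set of table names per query.
    Returns pairs seen at least min_count times; frequently co-occurring
    pairs are candidates for denormalization or co-partitioning.
    """
    pairs = Counter()
    for tables in query_tables:
        # sort so each pair has a canonical (a, b) ordering
        for a, b in combinations(sorted(tables), 2):
            pairs[(a, b)] += 1
    return {p: c for p, c in pairs.items() if c >= min_count}

# Hypothetical three-query workload:
workload = [
    {"orders", "customers"},
    {"orders", "customers", "items"},
    {"orders", "items"},
]
print(frequent_join_pairs(workload))
```

A production system would weight each co-occurrence by the query's cost rather than counting all queries equally, so that expensive joins dominate the layout recommendation.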
Prerequisites:
- Understanding of data tools like Hadoop, Hive, Spark, etc.
- Familiarity with ML nomenclature like classification, clustering, nearest neighbour, etc.
Devjyoti works at Qubole as a Data Engineer, helping the company gain insights into the performance of its data processing tools.