The Fifth Elephant 2017

On data engineering and application of ML in diverse domains

##Theme and format
The Fifth Elephant 2017 is a four-track conference on:

  1. Data engineering – building pipelines and platforms; exposure to the latest open source tools for data mining and real-time analytics.
  2. Application of Machine Learning (ML) in diverse domains such as IoT, payments, e-commerce, education, ecology, government, agriculture, computational biology, social network analysis and emerging markets.
  3. Hands-on tutorials on data mining tools, and ML platforms and techniques.
  4. Off-the-record (OTR) sessions on privacy issues concerning data; building data pipelines; failure stories in ML; interesting problems to solve with data science; and other relevant topics.

The Fifth Elephant is a conference for practitioners, by practitioners.

Talk submissions are now closed.

You must submit the following details along with your proposal, or within 10 days of submission:

  1. Draft slides, mind map or a textual description detailing the structure and content of your talk.
  2. Link to a self-recorded, two-minute preview video where you explain what your talk is about and the key takeaways for participants. This preview video helps the conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond your use case. Please note that the preview video must be submitted irrespective of whether you have spoken at past editions of The Fifth Elephant.
  3. If you submit a workshop proposal, you must specify the target audience for your workshop; duration; number of participants you can accommodate; pre-requisites for the workshop; link to GitHub repositories and documents showing the full workshop plan.

##About the conference
This year marks the sixth edition of The Fifth Elephant. The conference is a renowned gathering of data scientists, programmers, analysts, researchers and technologists working in the areas of data mining, analytics, machine learning and deep learning across domains.

We invite proposals for the following sessions, with a clear focus on the big picture and insights that participants can apply in their work:

  • Full-length, 40-minute talks.
  • Crisp, 15-minute talks.
  • Sponsored sessions, of 15 minutes and 40 minutes duration (limited slots available; subject to editorial scrutiny and approval).
  • Hands-on tutorials and workshop sessions of 3-hour and 6-hour duration where participants follow instructors on their laptops.
  • Off-the-record (OTR) sessions of 60-90 minutes duration.

##Selection Process

  1. Proposals will be filtered and shortlisted by an Editorial Panel.
  2. Proposers, editors and community members must respond to comments as openly as possible so that the selection process is transparent.
  3. Proposers are also encouraged to vote and comment on other proposals submitted here.

Selection Process Flowchart

We will notify you if we move your proposal to the next round or reject it. A speaker is NOT confirmed for a slot unless we explicitly confirm it in an email or over any other medium of communication.

Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you to prepare well for the conference.

There is only one speaker per session. Entry is free for selected speakers.

##Travel grants
Partial or full grants, covering travel and accommodation, are made available to speakers delivering full sessions (40 minutes) and workshops. Grants are limited, and are given in order of preference to students, women, persons of non-binary genders, and speakers from Asia and Africa.

##Commitment to Open Source
We believe in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), you should consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but we ask that you support the conference in return for the audience it gives you. Your session will be marked on the schedule as a “sponsored session”.

##Important Dates:

  • Deadline for submitting proposals: June 10
  • First draft of the conference schedule: June 20
  • Tutorial and workshop announcements: June 20
  • Final conference schedule: July 5
  • Conference dates: 27-28 July

##Contact
For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.

Hosted by

The Fifth Elephant - known as one of the best data science and Machine Learning conferences in Asia - has transitioned into a year-round forum for conversations about data and ML engineering; data science in production; and data security and privacy practices.

Ketan Khairnar

@ketankhairnar

Unless you measure it, you can’t improve it - Data pipelines for your business KPIs and KRAs

Submitted Jun 8, 2017

Abstract

Any business can gain an unfair advantage through actionable insights using data pipelines and some common sense. We’re already experiencing this through our interactions online (Amazon, Medium) and through mobile apps (Uber, Ola and many more).

Why is data infrastructure important?

The important advantages of data pipelines are:

  • a surge in productivity, through a clean CQRS interface (a minimal sketch follows this list)
  • informed decision making, through trends, aggregations and leaderboards
  • a platform for A/B testing your business goals
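
To make the CQRS point concrete, here is a minimal Java sketch of how the command (write) and query (read) sides of the Airbnb clone’s booking flow could be separated. The interface and method names are illustrative assumptions, not part of the workshop material.

```java
import java.util.List;

// Command side: state-changing operations; each one also appends
// an event to the pipeline (booking created, booking cancelled, ...).
interface BookingCommands {
    String createBooking(String listingId, String guestId); // returns booking id
    void cancelBooking(String bookingId, String reason);
}

// Query side: read models built from the event stream, serving
// the trends, aggregations and leaderboards mentioned above.
interface BookingQueries {
    List<String> topListingsByRevenue(int limit);
    long cancellationsSince(long epochMillis);
}
```

The split keeps the write path lean while the read models can be rebuilt or extended from the event log as reporting needs change.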

In this workshop, you’ll be building a data pipeline stack for your Airbnb clone. You’ll be wearing multiple hats, including:

  • Engineering or SRE team

  • Customer support team

  • Product managers

  • CXO

    and come up with quick, nimble solutions for the questions at hand. You’ll also make sure to grow, change and evolve your data pipeline along with the business needs.

Add to this two important interaction paradigms:

  • conversational patterns using chat bots - NLP and a custom bot server
  • real-time dashboards to track KPIs and KRAs, for different stakeholders

Note: Actual businesses are 10x more complex, but this is a good starting point to experiment with and explore these ideas.

Outline

Course Content

Key actionable insights are worth the effort of building a data highway network within your company.

This workshop introduces data pipelines as a concept and helps participants build one for a pseudo business, i.e. an Airbnb clone. It will help them become conversant with the technology as well as with thinking about data engineering: the data you generate and the data you consume.

There are many vendors in this segment, but the impedance mismatch between your engineering practices and data on one hand and their feature sets on the other is a big issue. Building your own data pipelines is worth the effort as long as you stick to the basics. We’ll talk about these key architectural decisions as well.

Key Takeaways:

  • Telemetry and audit events within the application (a minimal producer sketch follows this list)
  • Time series databases and the schema patterns associated with them
  • A few important patterns - event sourcing, polyglot persistence, CQRS
  • Transactions and events as a log
  • Chat bots using NLP as an interaction pattern for customer support, as well as for SRE/engineering remote troubleshooting
  • KPI & KRA dashboards using Grafana
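
To illustrate the telemetry/audit-event takeaway, here is a minimal producer sketch, assuming the AWS SDK for Java (v1) and a hypothetical Kinesis stream called audit-events; the event fields are illustrative only, not the workshop’s actual schema.

```java
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordRequest;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class AuditEventEmitter {
    private final AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

    // Append one audit event to the stream. Partitioning by booking id keeps
    // all events for a booking ordered on the same shard.
    public void emit(String bookingId, String action) {
        String json = String.format(
            "{\"bookingId\":\"%s\",\"action\":\"%s\",\"ts\":%d}",
            bookingId, action, System.currentTimeMillis());
        PutRecordRequest request = new PutRecordRequest()
            .withStreamName("audit-events")   // hypothetical stream name
            .withPartitionKey(bookingId)
            .withData(ByteBuffer.wrap(json.getBytes(StandardCharsets.UTF_8)));
        kinesis.putRecord(request);
    }
}
```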

Requirements

  • Basic programming skills in Java, JavaScript, etc.
  • Basic understanding of AWS services: EC2, S3, Kinesis, Lambda (a consumer sketch follows this list)
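
As a rough sketch of how the Kinesis and Lambda prerequisites fit together, here is a hypothetical Lambda handler (using the aws-lambda-java-events library) that consumes the audit events; the logging stands in for whatever an exercise would actually do with the event, such as writing a point to a time series store.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;

import java.nio.charset.StandardCharsets;

public class AuditEventConsumer implements RequestHandler<KinesisEvent, Void> {
    @Override
    public Void handleRequest(KinesisEvent event, Context context) {
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            // Decode the raw Kinesis payload back into the JSON audit event.
            String json = StandardCharsets.UTF_8
                .decode(record.getKinesis().getData())
                .toString();
            // Placeholder: update a time series store or KPI dashboard here.
            context.getLogger().log("audit event: " + json);
        }
        return null;
    }
}
```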

We can accommodate 40 participants to make sure we finish on time (3 hours).

We’ll be sharing AWS service account keys, with a custom-built app stack, for each participant. It would take 3-4 hours to complete the whole set of exercises. I’ll share the costs associated with it shortly.

Speaker bio

Ketan has been working on key data pipeline projects for the last few years. Building a transaction log for ad-tech transactions, along with audit and throughput event streams, helped his earlier employer (a startup) solve very disparate goals through the same engineering infrastructure (reducing latency and optimising the business).

In his current job, Ketan, Rupesh, Sumeet and the rest of the team have built a complete fault monitoring solution for a few thousand servers using a few off-the-shelf open source components. This includes a home-grown alert management component, a bot server and custom reporting jobs. They consider all of these to be applications on a continuously evolving data pipeline.
This not only helps them solve production issues, but also helps product management decide which features give the most bang for the buck, and helps the SRE team remotely troubleshoot data using a conversational pattern with smart bot integration in Slack.

Slides

https://www.slideshare.net/morbid/fifth-elephant-2017-data-pipeline-workshop

