Theme and format
The Fifth Elephant 2017 is a four-track conference on:
- Data engineering – building pipelines and platforms; exposure to latest open source tools for data mining and real-time analytics.
- Application of Machine Learning (ML) in diverse domains such as IoT, payments, e-commerce, education, ecology, government, agriculture, computational biology, social network analysis and emerging markets.
- Hands-on tutorials on data mining tools, and ML platforms and techniques.
- Off-the-record (OTR) sessions on privacy issues concerning data; building data pipelines; failure stories in ML; interesting problems to solve with data science; and other relevant topics.
The Fifth Elephant is a conference for practitioners, by practitioners.
Talk submissions are now closed.
You must submit the following details along with your proposal, or within 10 days of submission:
- Draft slides, mind map or a textual description detailing the structure and content of your talk.
- Link to a self-recorded, two-minute preview video where you explain what your talk is about and the key takeaways for participants. This preview video helps conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond your use case. Please note that the preview video should be submitted irrespective of whether you have spoken at past editions of The Fifth Elephant.
- If you submit a workshop proposal, you must specify the target audience for your workshop; duration; number of participants you can accommodate; pre-requisites for the workshop; link to GitHub repositories and documents showing the full workshop plan.
About the conference
This year is the sixth edition of The Fifth Elephant. The conference is a renowned gathering of data scientists, programmers, analysts, researchers, and technologists working in the areas of data mining, analytics, machine learning and deep learning from different domains.
We invite proposals for the following sessions, with a clear focus on the big picture and insights that participants can apply in their work:
- Full-length, 40-minute talks.
- Crisp, 15-minute talks.
- Sponsored sessions, of 15 minutes and 40 minutes duration (limited slots available; subject to editorial scrutiny and approval).
- Hands-on tutorials and workshop sessions of 3-hour and 6-hour duration where participants follow instructors on their laptops.
- Off-the-record (OTR) sessions of 60-90 minutes duration.
- Proposals will be filtered and shortlisted by an Editorial Panel.
- Proposers, editors and community members must respond to comments as openly as possible so that the selection process is transparent.
- Proposers are also encouraged to vote and comment on other proposals submitted here.
We will notify you if we move your proposal to the next round or reject it. A speaker is NOT confirmed for a slot unless we explicitly say so in an email or over any other medium of communication.
Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you to prepare well for the conference.
There is only one speaker per session. Entry is free for selected speakers.
Partial or full grants, covering travel and accommodation, are made available to speakers delivering full sessions (40 minutes) and workshops. Grants are limited, and are given in order of preference to students, women, persons of non-binary genders, and speakers from Asia and Africa.
Commitment to Open Source
We believe in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like for it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), you should consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support the conference in return for giving you an audience. Your session will be marked on the schedule as a “sponsored session”.
- Deadline for submitting proposals: June 10
- First draft of the conference schedule: June 20
- Tutorial and workshop announcements: June 20
- Final conference schedule: July 5
- Conference dates: 27-28 July
For more information about speaking proposals, tickets and sponsorships, contact firstname.lastname@example.org or call +91-7676332020.
How Machine Learning Algorithms Evolved at Haptik While Its Chatbot Catered to 200 Million Messages
The evolution of automated messaging, which started in 1966 with the first chatbot, ELIZA, has now reached a stage where chatbots have found application in several industry domains such as personal assistance, customer care, banking, e-commerce and healthcare. With early experiments showing positive results, chatbots are no longer merely applications to play around with; they have proven their utility in solving real problems. As a result, data scientists now need to figure out how to fuse NLP, conventional machine learning algorithms and deep learning systems into a single dialogue system which scales easily across datasets from different domains and is capable of digesting training data from real conversations.
During our journey at Haptik, we ended up building and customizing different machine learning modules specifically focused on building chatbots for narrow domains, targeted at end-to-end completion of a specific task such as making travel bookings, gift recommendation and ordering, or lead generation for different businesses. I would specifically like to share how our machine learning stack grew organically and finally reached a stable state containing an ideal mix of simple and complex machine learning algorithms.
In order of increasing complexity:
Introduction [2-3 mins]
Highlighting different problems that chatbots are solving today, with a few examples. Introducing why dialogue systems need to scale and efficiently utilize research that has happened over the last five decades.
Keep it simple, start collecting data [5-7 mins]
How to build a simple system from ground zero which is good enough to go live and helps you collect the next million messages.
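As a rough illustration of this bootstrap stage (the rules and responses below are invented for the example, not Haptik's actual code), a first version can be little more than a handful of pattern-to-reply rules plus a fallback that logs everything it could not handle as future training data:

```python
import re

# A handful of regex-to-response rules; good enough to go live.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\b(book|flight|travel)\b", re.I), "Sure, where would you like to travel?"),
    (re.compile(r"\b(price|cost|fare)\b", re.I), "Let me check the fares for you."),
]

unhandled_log = []  # messages no rule matched: the "next million messages" of training data

def reply(message):
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    unhandled_log.append(message)  # collect for later analysis and model building
    return "Let me connect you to one of our experts."
```

The point of the sketch is the logging: even when the bot cannot answer, every miss becomes labelled-by-a-human data once an agent handles it.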
Analyse your conversations, refine the content, make it a little smarter [8 mins]
Cluster your data and extend your system to use retrieval/classification algorithms to make it a bit more intelligent.
When you have enough data [8 mins]
Use complex deep learning models alongside simpler approaches, and utilize every bit of conversational data available to you in the most efficient way.
Architecture to stack all the above algorithms [8 mins]
Make sure simpler conversations are handled by simple algorithms and complex ones remain in your control, while your chatbot responds fast and accurately.
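One common way to realise this stacking (a sketch of the general cascade pattern; the layer names and thresholds are invented for illustration) is to route each message through increasingly expensive models and return the first answer whose confidence clears that layer's bar:

```python
def cascade(message, layers):
    """layers: list of (handler, threshold); each handler returns (reply, confidence)."""
    for handler, threshold in layers:
        reply, confidence = handler(message)
        if reply is not None and confidence >= threshold:
            return reply
    return "Passing you to a human agent."  # final safety net

# Dummy handlers standing in for the rule-based, retrieval and
# deep learning stages described above.
def rule_layer(msg):
    return ("Hello!", 1.0) if "hello" in msg.lower() else (None, 0.0)

def retrieval_layer(msg):
    return ("Refunds take 5-7 days.", 0.8) if "refund" in msg.lower() else (None, 0.0)

def deep_layer(msg):
    return ("Could you tell me more?", 0.5)  # always answers, with modest confidence

LAYERS = [(rule_layer, 0.9), (retrieval_layer, 0.6), (deep_layer, 0.4)]
```

The cheap layers answer the bulk of simple traffic quickly, so the expensive model only sees what they could not handle, which keeps both latency and behaviour under control.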
Challenges [5-7 minutes]
Open challenges existing in the industry and how to foresee/avoid them.
Basic understanding of what Chatbots are and what Machine Learning is
I have worked as a Researcher, Engineer and Machine Learning Scientist during different stages of my career. I love to invent, patent, build and architect end-to-end machine learning solutions that make our lives easier. One of my achievements is creating a chatbot which has seen more than 200 million messages from different domains and is still learning, with a long way to go. I love to share my learnings with the community by open-sourcing code and actively participating in Data Science meetups.