The Fifth Elephant 2017
On data engineering and application of ML in diverse domains
Thu, 27 Jul 2017, 08:15 AM – 10:00 PM IST
Fri, 28 Jul 2017, 08:15 AM – 06:25 PM IST
## Theme and format
The Fifth Elephant 2017 is a four-track conference on:
The Fifth Elephant is a conference for practitioners, by practitioners.
Talk submissions are now closed.
You must submit the following details along with your proposal, or within 10 days of submission:
## About the conference
This year marks the sixth edition of The Fifth Elephant. The conference is a renowned gathering of data scientists, programmers, analysts, researchers, and technologists working on data mining, analytics, machine learning, and deep learning across diverse domains.
We invite proposals for the following sessions, with a clear focus on the big picture and insights that participants can apply in their work:
## Selection Process
We will notify you if we move your proposal to the next round or reject it. A speaker is NOT confirmed for a slot unless we explicitly say so in an email or other communication.
Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare well for the conference.
There is only one speaker per session. Entry is free for selected speakers.
## Travel grants
Partial or full grants covering travel and accommodation are available to speakers delivering full sessions (40 minutes) and workshops. Grants are limited and are given in order of preference to students, women, persons of non-binary genders, and speakers from Asia and Africa.
## Commitment to Open Source
We believe in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed, or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), you should consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support the conference in return for the audience it gives you. Your session will be marked on the schedule as a “sponsored session”.
## Important Dates
## Contact
For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.
Harjindersingh Mistry (@harjinder_hari) · Submitted Jun 9, 2017
Many applications we use today are powered by cloud and mobile. One of the critical components that drives engagement on cloud platforms is the recommendation engine. Recommendation systems are becoming all-pervasive: the transactions and interactions we have with a platform decide the next set of recommended items. As both the number of users and the number of products offered on the platform grow, we face two kinds of challenges: engineering and machine learning.
This talk is about how we designed a real-time recommendation engine at Red Hat when the transaction data is both big and wide. We define wide data as data where the number of items in a transaction basket exceeds 1,000. Examples of big and wide data include financial instruments traded by a portfolio manager in a day, products shipped from a warehouse, and software components in a cloud platform.
The standard approaches have been market basket analysis (frequent pattern mining), collaborative filtering (matrix factorization), and, more recently, deep learning.
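As a rough illustration of the frequent-pattern-mining baseline, here is a minimal PySpark sketch using Spark ML’s `FPGrowth` (available in `pyspark.ml.fpm` from Spark 2.2 onwards). The toy baskets, column names, and thresholds are illustrative assumptions, not the speaker’s actual data or configuration.

```python
# Minimal sketch of the frequent-itemset baseline with Spark's FP-Growth.
# The toy data, column names, and thresholds are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("fp-growth-baseline").getOrCreate()

# Each row is one transaction basket; wide data would have 1000+ items per row.
baskets = spark.createDataFrame(
    [(0, ["pkg-a", "pkg-b", "pkg-c"]),
     (1, ["pkg-a", "pkg-c"]),
     (2, ["pkg-b", "pkg-c", "pkg-d"])],
    ["id", "items"])

fp = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fp.fit(baskets)

model.freqItemsets.show()        # frequent itemsets with their counts
model.associationRules.show()    # rules usable as "baskets with X also contain Y"
```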
Apache Spark lends itself nicely to building the data science pipeline: ingestion, data processing, and machine learning. Out of the box, it provides a parallel implementation of the FP-Growth algorithm for mining frequent itemsets. But as our data became wider, model training performance took a hit. This talk discusses how we used another popular recommendation algorithm in Spark, Alternating Least Squares (ALS), to generate frequent itemsets. The new approach was faster and scaled well for big and wide data.
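The abstract does not spell out the ALS-based implementation, but a hedged sketch of the general idea might look like the following: explode each basket into implicit-feedback (basket, item) pairs, fit Spark ML’s `ALS`, and treat items with similar latent factors as candidates for frequent co-occurrence. All data, column names, and parameters below are assumptions for illustration, not the speaker’s actual method.

```python
# Hedged sketch: ALS item factors as a faster stand-in for frequent itemsets
# on wide baskets. Data, column names, and parameters are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-wide-baskets").getOrCreate()

# Each basket is exploded into (basket, item, 1.0) implicit-feedback rows.
pairs = spark.createDataFrame(
    [(0, 10, 1.0), (0, 11, 1.0), (0, 12, 1.0),
     (1, 10, 1.0), (1, 12, 1.0),
     (2, 11, 1.0), (2, 12, 1.0), (2, 13, 1.0)],
    ["basket", "item", "present"])

als = ALS(userCol="basket", itemCol="item", ratingCol="present",
          implicitPrefs=True, rank=8, regParam=0.01, maxIter=10)
model = als.fit(pairs)

# Items whose latent factors are close tend to co-occur in baskets; cosine
# similarity over these factors can approximate frequent itemsets.
model.itemFactors.show(truncate=False)
```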
The engineering and data science approaches are novel. Attendees will learn how to build recommendation systems on the cloud, the challenges involved, and some ideas on how to overcome them.
Who is this presentation for: Data scientists and product managers.
Audience level: Beginner.
Takeaway: Attendees will learn how to build recommendation systems for wide data, and how to use Apache Spark to build an ML pipeline for a recommendation system.
Prerequisite knowledge: An interest in data science. It will help if attendees know some examples of recommendation systems.
Harjinder Mistry is currently a member of the Developer Tools team at Red Hat, where he is incorporating data science into next-generation developer tools powered by Spark. Prior to Red Hat, he was a member of the IBM Analytics team, where he developed Spark ML pipeline components of the IBM Analytics Platform. Earlier, he spent several years in the DB2 SQL Query Optimizer team, building and fixing the mathematical model that decides the query execution plan. He holds an M.Tech. degree from IIIT Bangalore, India.
https://speakerdeck.com/harjinderhari/recommendation-engine-for-wide-transactions