Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
Track 1: Discovering Insights and Driving Decisions
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
- Statistical Visualizations
- Unsupervised Learning
- Supervised Learning
- Semi-Supervised Learning
- Active Learning
- Reinforcement Learning
- Monte Carlo techniques and probabilistic programming
- Deep Learning
across various data modalities, including multivariate data, text, speech, time series, images, video, and transactions.
Track 2: Speed at Scale
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
- Distributed and Parallel Computing
- Real Time Analytics and Stream Processing
- MapReduce and Graph Computing frameworks
- Kafka, Spark, Hadoop, MPI
- Stories of parallelizing sequential programs
- Cost/Security/Disaster Management of Data
Commitment to Open Source
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Are these the same pair of shoes? - Matching retail products at scale
Matching identical products from different retail websites is one of the hardest and most impactful problems in the space of product intelligence. This talk will cover the breadth of algorithms and models we use for matching products across customer catalogs, as well as some practical aspects of taking these algorithms and models to production.
Product matching is the problem of resolving product entities across e-commerce sites. This involves a complex sequence of tasks:
1) automatic extraction of key information regions from raw HTML (for example, product titles, UPCs, etc.)
2) categorising products into a unified taxonomy
3) semantic parsing of product titles and specifications
4) standardization of attributes such as brands, colours, etc.
5) grouping products into clusters of matched products based on a similarity function or inference model
This is a challenging problem because unique, universally agreed-upon identifiers are not always available, and product details are noisy and often sparse. We therefore have to develop a contextual understanding of product specifications, which are often expressed differently by retailers, merchants, aggregators, etc.
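The grouping step (5) can be sketched in miniature. The snippet below is an illustrative toy, not the talk's actual models: it uses Jaccard similarity over normalized title tokens as the similarity function and union-find to merge matched pairs into clusters. The function names and the threshold value are made up for this example.

```python
# Toy sketch of step 5: cluster products whose titles are similar enough.
from itertools import combinations

def tokens(title):
    """Lowercase alphanumeric tokens of a product title."""
    return set("".join(c if c.isalnum() else " " for c in title.lower()).split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b)

def cluster(titles, threshold=0.6):
    """Group titles into clusters via union-find over high-similarity pairs."""
    parent = list(range(len(titles)))

    def find(i):
        # Path-halving find.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    toks = [tokens(t) for t in titles]
    for i, j in combinations(range(len(titles)), 2):
        if jaccard(toks[i], toks[j]) >= threshold:
            parent[find(i)] = find(j)  # union the two clusters

    groups = {}
    for i in range(len(titles)):
        groups.setdefault(find(i), []).append(titles[i])
    return list(groups.values())
```

In practice the similarity function would be learned and would look at parsed attributes, not raw title tokens, but the cluster-by-pairwise-similarity shape is the same.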
To scale the matching problem to half a billion products, we also need to prune and bucket candidate pairs effectively while maintaining good recall. Matches need to be highly precise, since customers may use them for sensitive tasks such as price comparison, competitive analysis, and catalog enrichment. We employ an ensemble of online and offline algorithms and models to perform matching at scale across a large number of stores, categories, and brands.
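The pruning-and-bucketing idea can be illustrated with a simple blocking sketch: products are bucketed by a cheap key so that the expensive pairwise comparison runs only within buckets rather than over all N^2 pairs. The key used here (brand plus first title token) and the field names are hypothetical, chosen only to make the example concrete; real systems would use more recall-preserving schemes such as multiple keys or locality-sensitive hashing.

```python
# Illustrative blocking sketch: compare only within buckets, not across all pairs.
from collections import defaultdict
from itertools import combinations

def blocking_key(product):
    """Hypothetical cheap key: brand plus first title token (assumed fields)."""
    brand = product.get("brand", "").lower()
    title_tokens = product.get("title", "").lower().split()
    first = title_tokens[0] if title_tokens else ""
    return (brand, first)

def candidate_pairs(products):
    """Yield only the pairs that share a blocking key."""
    buckets = defaultdict(list)
    for p in products:
        buckets[blocking_key(p)].append(p)
    for bucket in buckets.values():
        yield from combinations(bucket, 2)
```

With four products of which two share a key, this yields one candidate pair instead of six; at half a billion products, that kind of reduction is what makes pairwise matching tractable at all.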
Nikhil Ketkar leads the data science team at Indix (www.indix.com), which does the R&D around product categorization, standardization, matching, search relevance, and ranking. He brings a decade of experience in making data-driven decisions and building machine learning models.