Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
- Statistical Visualizations
- Unsupervised Learning
- Supervised Learning
- Semi-Supervised Learning
- Active Learning
- Reinforcement Learning
- Monte Carlo techniques and probabilistic programming
- Deep Learning
Across various data modalities including multivariate data, text, speech, time series, images, video, and transactions.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
- Distributed and Parallel Computing
- Real Time Analytics and Stream Processing
- MapReduce and Graph Computing frameworks
- Kafka, Spark, Hadoop, MPI
- Stories of parallelizing sequential programs
- Cost/Security/Disaster Management of Data
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Making a contextual recommendation engine using Python and Deep Learning at ParallelDots
ParallelDots ( paralleldots.com ) is a recommendation engine for publishers to increase engagement and monetization on their websites. For the end user, it solves the problem of information overload by providing a set of relevant stories and history around whatever they are reading. ParallelDots offers a set of recommendation engines, including a highly accurate related-posts widget, automated timeline views over news articles, and related content on social media along with the sentiment around it. These are implemented as microservices in Python, Go and Julia. One of the recommendation engines ParallelDots offers to online publishers is its state-of-the-art related-posts plugin, which uses Deep Learning.
This talk walks through the technology decisions and algorithm choices we made while building an accuracy-enhanced, cost-optimized search engine for related posts.
Attendees will walk out knowing what Deep Learning is, the main types of Deep Learning algorithms, and the Python libraries available for them. They will also learn why we found Deep Learning techniques better than traditional topic models and how we use them to build a search engine for related documents.
We will also talk about the hacks we used to scale the web services to handle thousands of concurrent recommendations. Join us to learn what Deep Learning is, how it improves accuracy over traditional algorithms, and how we took it to production.
Making a Recommendation Engine at ParallelDots
a. Why ordinary full-text search will not work: incorrect tagging and slow search queries.
b. ParallelDots’ MVP with Topic Models: Issues with accuracy and scaling.
c. The decision to use Deep Learning and the aims of the new architecture: no funds for a distributed system, yet related posts must be found among millions of documents in reasonable time.
Basics of Deep Learning
a. Deep Neural Networks
b. Types of Deep Neural Networks: Convolutional, DBNs, Recurrent, and Recursive. How they differ in structure, neuron types, and training.
c. Backpropagation and its variants
d. Features of various Deep Learning libraries in Python.
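As background for the basics covered in this section, here is a minimal sketch of a feedforward network trained with plain backpropagation on toy XOR data. It is illustrative only, built with NumPy for brevity; it is not the architecture or library stack used at ParallelDots.

```python
import numpy as np

# Tiny two-layer network trained with batch gradient descent and
# backpropagation on the XOR problem (illustrative sketch only).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: gradients of mean squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

mse = float(np.mean((out - y) ** 2))
preds = (out > 0.5).astype(int)
```

The libraries surveyed in the talk wrap exactly this forward/backward loop behind higher-level layer and optimizer abstractions.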
Deep Learning in NLP
a. Solving problem of high-dimensionality using word embeddings.
b. Common approaches to word embedding.
c. Modelling language as a series of characters using Recurrent Neural Networks.
d. Models we use: Named Entity Recognition with neural nets.
e. Models we use: Combining word embeddings using heuristics and recursive neural networks.
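To illustrate the idea of combining word embeddings, here is a minimal sketch that averages hand-made toy vectors (stand-ins for learned embeddings, not the output of any real model) and compares documents by cosine similarity. All vectors and words below are hypothetical.

```python
import numpy as np

# Toy "embeddings" -- stand-ins for vectors a real word-embedding
# model would learn; the values are illustrative only.
EMB = {
    "stock":    np.array([0.9, 0.1, 0.0]),
    "market":   np.array([0.8, 0.2, 0.1]),
    "crash":    np.array([0.7, 0.0, 0.3]),
    "football": np.array([0.0, 0.9, 0.2]),
    "match":    np.array([0.1, 0.8, 0.3]),
}

def doc_vector(words):
    """Average the word vectors -- the simplest way to combine embeddings."""
    vecs = [EMB[w] for w in words if w in EMB]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

finance = doc_vector(["stock", "market", "crash"])
finance2 = doc_vector(["market", "crash"])
sports = doc_vector(["football", "match"])
# Documents about the same topic end up closer in embedding space.
```

Averaging is only the baseline; the heuristic and recursive-network combiners discussed in the talk replace this mean with learned composition functions.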
Searching related posts at scale
a. Using search data structures to take related-posts lookups from O(n) to O(log(n))
b. Space-partitioning trees for nearest-neighbour search, e.g. KD-Tree, Ball Tree, VP Tree.
c. Why we chose the VP Tree, and which Python libraries to use to implement it.
d. Parallelization: why data parallelism via Python's multiprocessing is not ideal, and our work towards a shared-memory parallel version.
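As an illustration of the space-partitioning idea above, here is a bare-bones vantage-point tree in pure Python. This is a sketch only; a production implementation (or the libraries discussed in the talk) would handle bulk queries, metrics other than Euclidean distance, and degenerate splits.

```python
import math

def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build(points):
    """Build a VP tree: pick a vantage point, split the rest at the
    median distance into an inner and an outer subtree."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return (vp, 0.0, None, None)
    dists = [dist(vp, p) for p in rest]
    mu = sorted(dists)[len(dists) // 2]  # median distance = split radius
    inner = [p for p, d in zip(rest, dists) if d < mu]
    outer = [p for p, d in zip(rest, dists) if d >= mu]
    return (vp, mu, build(inner), build(outer))

def nearest(node, q, best=None):
    """Return (point, distance) of the nearest neighbour of q."""
    if node is None:
        return best
    vp, mu, inner, outer = node
    d = dist(vp, q)
    if best is None or d < best[1]:
        best = (vp, d)
    # Descend into the side the query falls in first; only cross the
    # split boundary if a closer point could still lie on the far side.
    if d < mu:
        best = nearest(inner, q, best)
        if d + best[1] >= mu:
            best = nearest(outer, q, best)
    else:
        best = nearest(outer, q, best)
        if d - best[1] <= mu:
            best = nearest(inner, q, best)
    return best
```

The triangle-inequality pruning in `nearest` is what turns a linear scan over all documents into a logarithmic-time search on average.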
Scaling up the system
a. Hacks to scale up recommendations.
b. Using Go's channels to deduplicate requests.
Muktabh is one of the co-founders of ParallelDots, where he handles data science and software architecture. Previously, he worked at Opera Solutions and as a consultant data scientist, helping solve data-driven problems in healthcare, internet, procurement, retail, and personal finance. He has a degree in Information Systems from BITS Pilani, Pilani. Social handles: in.linkedin.com/in/muktabh / @muktabh