
Anthill Inside 2019

A conference on AI and Deep Learning

Make a submission

Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore


## About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks in the main conference hall
  2. Poster sessions featuring novel ideas and projects
  3. Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
  • Myths and realities of labelling datasets for Deep Learning.
  • Practical experience with using Knowledge Graphs for different use cases.
  • Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
  • Pros and cons of using custom and open source tooling for AI/DL/ML.

# Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


# Sponsors:

Sponsorship slots for Anthill Inside 2019 are open.


Anthill Inside 2019 sponsors:


# Bronze Sponsors

  • iMerit
  • Impetus

# Community Sponsors

  • GO-JEK
  • iPropal
  • LightSpeed
  • Semantics3
  • Google
  • Tact.AI
  • Amex

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including tools, techniques and approaches for integrating AI and Deep Learning in products and businesses, and engineering for AI.

ANKIT JAIN

@ankitjain22

Design a real-time anomaly detection application using Spark and Machine Learning

Submitted Jun 15, 2019

Our team works on a streaming application that produces real-time comparisons between us and our competitors.
The comparisons are consumed by our market managers and hotel partners, who use them to make day-to-day decisions and prioritize their business actions for the day.
Given the importance and scale of this data, it is critical that our streaming application always produces correct, anomaly-free comparisons, and that when anomalies do occur we detect them and fix the root cause quickly.
That motivated us to build an application that reads the comparisons in real time and quickly detects anomalies in them.
We use Spark Streaming as the framework, and leverage the scikit-learn library in Python for the machine learning algorithms.

The best part of our journey is that we started with nothing but the name of the machine learning algorithm we wanted to implement, and chalked out the entire design for applying it to our streaming data in less than a week. That is what motivated me to share this journey with you.

Outline

The session, as mentioned in the abstract, is primarily about our journey, which began with nothing more than the motivation to identify anomalous streaming outputs produced by our system.

I will elaborate on the following steps for designing and implementing such pipelines quickly:

Step 1: Find a motivation. Machine learning comes into the picture when static, rules-based learning from past data does not work: when the data has many dimensions and complex relationships between them.

Step 2: Think about what you are trying to do, intuitively. In our case we are trying to find anomalous data, so it naturally follows that some form of outlier detection is what we need. The next step is to survey the algorithms that are available or already implemented. We chose the Local Outlier Factor (LOF) algorithm for our use case; a small illustration follows below.
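For intuition, here is a minimal, self-contained LOF sketch with scikit-learn; the synthetic data and parameters are illustrative, not from our system:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(42)
normal = 0.3 * rng.randn(100, 2)                       # dense cluster of "normal" points
anomalies = rng.uniform(low=-4, high=4, size=(10, 2))  # scattered anomalies
X = np.vstack([normal, anomalies])

# With novelty=False (the default), fit_predict labels the same batch it was
# fit on: 1 for inliers, -1 for outliers.
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)
print("anomalies flagged:", int((labels == -1).sum()))
```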

Step 3: Know your data. The system design depends heavily on the kind of data you train on, how frequently you train, and what kind of predictions you need. In our case, we train on a combination of pre-computed and streaming data, and the predictions are real-time; a sketch of that combination follows below.
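As a sketch of what combining pre-computed and streaming data can look like in Spark (the paths, schema and column names are assumptions for illustration), Spark allows a streaming DataFrame to be joined with a static one:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("comparison-enrichment").getOrCreate()

# Pre-computed baselines, refreshed by an offline job (hypothetical path).
baselines = spark.read.parquet("/data/comparison_baselines")

# Unbounded streaming DataFrame of incoming comparisons (hypothetical schema).
stream = (
    spark.readStream
    .schema("hotel_id STRING, price DOUBLE, event_time TIMESTAMP")
    .parquet("/data/comparisons_stream")
)

# Each incoming record is enriched with its pre-computed baseline before scoring.
enriched = stream.join(baselines, on="hotel_id", how="left")
```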

Step 4: Choose a framework based on the scale and specifics of the problem. This is where we chose to build a Spark application; and because we were implementing pure outlier detection (with novelty set to False), we went with Structured Streaming, since windowing, watermarking and the concept of an unbounded DataFrame were a direct fit for our problem. A sketch of these features follows below.
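Here is a minimal sketch of those Structured Streaming features — the unbounded DataFrame, event-time windows and watermarking. The Kafka source, topic and schema below are assumptions for illustration, not our production setup:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("comparison-anomalies").getOrCreate()

# Unbounded DataFrame over a stream of comparisons (hypothetical topic/schema).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "comparisons")
    .load()
    .selectExpr("CAST(value AS STRING) AS json")
    .select(F.from_json("json", "hotel_id STRING, price DOUBLE, event_time TIMESTAMP").alias("e"))
    .select("e.*")
)

# The watermark bounds how long Spark waits for late data; the window groups
# events by event time rather than arrival time.
windowed = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "hotel_id")
    .agg(F.avg("price").alias("avg_price"))
)

query = windowed.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```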

Step 5: Write the code. We first developed a small prototype in Python and then tested each and every line of code in PySpark. In particular, we checked whether every element and library function plays well with Spark DataFrames: whether it is serializable, and whether it runs into performance issues. One way of keeping these concerns manageable is sketched below.
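As one illustration (not our exact code): `foreachBatch`, available since Spark 2.4, hands each micro-batch to a plain Python function as a static DataFrame, so the scikit-learn calls stay ordinary Python. The column names here are hypothetical:

```python
from sklearn.neighbors import LocalOutlierFactor

N_NEIGHBORS = 20

def score_batch(batch_df, batch_id):
    # Collecting a micro-batch with toPandas() keeps the scikit-learn call in
    # plain Python and sidesteps DataFrame serialization issues; it is only
    # viable while micro-batches stay small. For larger batches the model
    # would have to run inside a pandas UDF, where every referenced object
    # must pickle cleanly to reach the executors.
    pdf = batch_df.select("hotel_id", "price").toPandas()
    if len(pdf) > N_NEIGHBORS:  # LOF needs more samples than n_neighbors
        lof = LocalOutlierFactor(n_neighbors=N_NEIGHBORS)
        pdf["label"] = lof.fit_predict(pdf[["price"]])
        print(f"batch {batch_id}: {int((pdf['label'] == -1).sum())} anomalies")

# Attached to a streaming DataFrame such as `enriched` or `windowed` from the
# earlier sketches:
# query = enriched.writeStream.foreachBatch(score_batch).start()
```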

I will try to explain each of the steps with diagrams so that they are very intuitive to understand.

Requirements

No requirements, as this is a crisp talk.

Speaker bio

I am Ankit Jain. I completed my B.Tech in Computer Engineering from Delhi Technological University in 2015, and have been working as a Software Developer at the Expedia Group, Gurgaon, since then.

My interests include Spark, Scala, streaming systems and Big Data. The most recent addition to the list is machine learning, and I am astounded by the kind of cool problems we can solve with it.

