## About the 2019 edition:
The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule
The conference has three tracks:
- Talks in the main conference hall track
- Poster sessions featuring novel ideas and projects in the poster session track
- Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
  - Myths and realities of labelling datasets for Deep Learning.
  - Practical experience with using Knowledge Graphs for different use cases.
  - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
  - Pros and cons of using custom and open source tooling for AI/DL/ML.
## Who should attend Anthill Inside:
Anthill Inside is a platform for:
- Data scientists
- AI, DL and ML engineers
- Cloud providers
- Companies which make tooling for AI, ML and Deep Learning
- Companies working with NLP and Computer Vision who want to share their work and learnings with the community
For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to email@example.com
Sponsorship slots for Anthill Inside 2019 are open.
## Tools for AI & ML at scale
As applications become more business-critical, and as application teams receive monitoring data for these mission-critical applications as a continuous stream, it becomes difficult to monitor them manually and to build dashboards and reports around them.
It is becoming increasingly clear that the only way to fix this is to have the right tools in place: tools that help teams monitor their applications automatically, using various machine learning techniques.
We at AppDynamics have built a product that addresses this requirement using AI and machine learning. Our algorithms continuously monitor business-critical applications, find anomalies and their root causes, and give users insights that would otherwise have taken days or weeks to uncover.
This talk covers the various open source tools we used while implementing this solution.
Our ML/AI platform learns the normal behavior of an application’s data and finds anomalies instantly. We also leverage our understanding of the application’s architecture and the correlations between different metrics. Once anomalies are detected, we automatically correlate them with events for the fastest possible root cause analysis.
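To make "learns the normal behavior and finds anomalies" concrete, here is a minimal sketch of one common approach: flag a metric value as anomalous when it deviates far from the rolling statistics of recent history. This is an illustrative baseline, not AppDynamics’ actual algorithm, and the function name and thresholds are assumptions.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(values, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the rolling mean of the previous `window` points.
    Illustrative baseline only, not the production algorithm."""
    history = deque(maxlen=window)
    anomalies = []
    for i, v in enumerate(values):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            # sigma > 0 guards against a perfectly flat history
            if sigma > 0 and abs(v - mu) > threshold * sigma:
                anomalies.append(i)
        history.append(v)  # the point joins the baseline after the check
    return anomalies

# A steady metric with one spike: only the spike index is flagged.
series = [100.0 + (i % 3) for i in range(40)]
series[30] = 500.0
print(detect_anomalies(series))  # → [30]
```

Production systems add seasonality models and per-metric baselines on top of this idea, but the core notion of "learn normal, score deviation" is the same.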
Collecting, ingesting, storing, and processing billions of events per second in real time to monitor an application is not a simple process. To do this seamlessly, we needed to identify the right set of tools, as well as the infrastructure and cost requirements for running them.
Some of the open source tools we use to achieve this are:
- Apache HBase
- Apache Kafka / Kafka Streams
- Confluent Avro Schema Registry
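As a rough illustration of the kind of processing these tools perform, the sketch below simulates in plain Python what a Kafka Streams windowed aggregation does: events keyed by metric name are grouped into fixed-size time windows and averaged. The `Event` type, field names, and window size are all hypothetical; a real deployment would do this with Kafka Streams windowed state stores, not an in-memory dictionary.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    metric: str       # metric name, e.g. "latency_ms" (illustrative)
    timestamp: float  # seconds since epoch
    value: float

def windowed_averages(events, window_seconds=60):
    """Group events into (metric, window-start) buckets and average each
    bucket, mimicking a Kafka Streams tumbling-window aggregation."""
    buckets = defaultdict(list)
    for e in events:
        window_start = int(e.timestamp // window_seconds) * window_seconds
        buckets[(e.metric, window_start)].append(e.value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

events = [
    Event("latency_ms", 0, 120.0),
    Event("latency_ms", 30, 80.0),
    Event("latency_ms", 61, 200.0),
]
print(windowed_averages(events))
# → {('latency_ms', 0): 100.0, ('latency_ms', 60): 200.0}
```

At billions of events per second, the same logic runs partitioned across Kafka topics, with HBase as the durable store and the Schema Registry keeping producers and consumers agreed on the Avro event schema.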
In this session, we will discuss why we chose this set of tools, given some of the challenges of doing machine learning at scale.
Saurabh is a Principal Software Engineer at AppDynamics, where he works on building solutions around real-time streaming data pipelines and machine learning algorithms for automated anomaly detection and root cause analysis of problems.
His interests lie in combining data science with software engineering to solve real-time, business-critical problems.