Anthill Inside 2019

A conference on AI and Deep Learning


Anuradha K

@anuradhak

Network health prediction and optimization recommendations using deep learning neural network models and reinforcement learning

Submitted May 7, 2019

Time series prediction of network parameters, detection of network health, and network performance optimization have long been interesting problems for researchers in the machine learning and data mining community. These use cases appear across industries such as retail, telecom, and transport, with a particularly strong presence in telecom. However, achieving good prediction accuracy and efficiency remains a challenge. Traditional approaches typically extract discriminative features from the original time series using dynamic time warping (DTW) or shapelet transformation, and then apply conventional ML models on top of these transformations to obtain decent accuracy. These methods are mostly ad hoc, and their performance is limited because feature extraction and prediction are handled as separate processes.

Recommending optimal network parameters is normally done by feeding more data into traditional supervised models. The challenge with supervised learning models is that they are data hungry: if the data is insufficient, they fail to converge, and mining patterns in the data becomes difficult.

To address the first challenge, we propose end-to-end neural network architectures such as univariate/multivariate LSTM, CNN, and LSTM-CNN (LSTM-Convolutional Neural Network) models, which incorporate feature extraction and prediction in a single framework. To address the second challenge, deep reinforcement learning is used to recommend optimal parameters from the predicted network parameters, which in turn can lead to good network health. We performed a comprehensive empirical evaluation of the proposed methods on a large number of benchmark datasets; the approach based on deep learning neural network models and deep reinforcement learning for network parameter optimization provides good accuracy compared to existing models.
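A minimal, illustrative sketch of the first component — an LSTM-CNN style forecaster that learns features and predictions end to end — assuming Keras/TensorFlow. The window sizes and the synthetic series are placeholders, not the speaker's actual setup or data:

```python
# Sketch only: CNN front-end extracts local features from each subsequence,
# an LSTM models the temporal dependence across subsequences.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense, Flatten, TimeDistributed

def make_windows(series, n_steps=8, n_subseq=2):
    # Hypothetical windowing: 8-step history split into 2 subsequences of 4 steps.
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    X = np.array(X).reshape(-1, n_subseq, n_steps // n_subseq, 1)
    return X, np.array(y)

model = Sequential([
    TimeDistributed(Conv1D(filters=64, kernel_size=2, activation='relu'),
                    input_shape=(None, 4, 1)),
    TimeDistributed(MaxPooling1D(pool_size=2)),
    TimeDistributed(Flatten()),
    LSTM(50, activation='relu'),
    Dense(1)                      # one-step-ahead forecast of the network KPI
])
model.compile(optimizer='adam', loss='mse')

# Synthetic KPI-like series as a stand-in for real network measurements.
series = np.sin(np.linspace(0, 20, 500)) + np.random.normal(0, 0.1, 500)
X, y = make_windows(series)
model.fit(X, y, epochs=5, verbose=0)
```

And a similarly hedged sketch of the second component: a DQN-style agent that maps predicted KPIs to a recommended parameter setting. The KPI count, action set, and reward/TD update below are assumptions for illustration only:

```python
# Sketch only: state = vector of predicted KPIs, actions = discrete candidate
# parameter configurations; a small Q-network scores each configuration.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

N_KPIS = 4       # e.g. predicted throughput, latency, jitter, loss (assumed)
N_ACTIONS = 3    # hypothetical candidate parameter settings

q_net = Sequential([
    Dense(32, activation='relu', input_shape=(N_KPIS,)),
    Dense(32, activation='relu'),
    Dense(N_ACTIONS)             # Q-value per candidate configuration
])
q_net.compile(optimizer='adam', loss='mse')

def recommend(predicted_kpis, epsilon=0.1):
    """Epsilon-greedy choice of a parameter setting from predicted KPIs."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    q_values = q_net.predict(predicted_kpis.reshape(1, -1), verbose=0)
    return int(np.argmax(q_values))

def td_update(state, action, reward, next_state, gamma=0.95):
    """One temporal-difference update of the Q-network (placeholder reward)."""
    target = q_net.predict(state.reshape(1, -1), verbose=0)
    next_q = q_net.predict(next_state.reshape(1, -1), verbose=0)
    target[0, action] = reward + gamma * np.max(next_q)
    q_net.fit(state.reshape(1, -1), target, epochs=1, verbose=0)
```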

Outline

INTRODUCTION
AUTHORS INFORMATION
PROBLEM STATEMENT/CONTEXT
EXISTING TOOLS/SOLUTIONS
PROPOSED SOLUTION
BENEFITS OF OUR PROPOSED SOLUTION
CURRENT SOLUTION SCENARIO
PROPOSED SOLUTION SCENARIO
DIFFERENT MODELS/APPROACHES AND RESULTS
CNN-LSTM ARCHITECTURE
COMPLETE PROPOSED SOLUTION ARCHITECTURE
TRADE-OFFS IN PROCESS FOR SOLUTION
PRIVACY, REGULATORY AND ETHICAL CONSIDERATIONS FOR THE DESIGNED SOLUTION
KEY TAKEAWAYS
ADDITIONAL DETAILS THAT WILL BE SHARED DURING PRESENTATION
QUESTIONS???

Requirements

Anaconda and Python should be installed on your laptop.

Speaker bio

https://www.linkedin.com/in/anuradha-karuppasamy-b5378813b/

I am a Senior Data Scientist at Ericsson GAIA (R&D) with 18+ years of experience across telecom, retail services, sales and transportation, and financial and banking domains, and I have delivered projects of various sizes. I have played several roles, including senior data scientist, AI technology architect, delivery manager, analytics tech lead, data scientist, statistical data analyst, solution project manager, technical/project lead, and developer. My key capabilities are machine learning, analytics, artificial intelligence, deep learning, reinforcement learning, Python, IBM Watson, Azure ML, biometrics (facial and emotion recognition), predictive modeling, and speech to text.

Slides

https://drive.google.com/file/d/1ZtFsyuW6Cm5m112YZPYDTco8WbBUpSVK/view?usp=sharing



Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to participate.