In 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is on analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.
Proposals will be reviewed using 5 criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights the proposal shares with the audience that they did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience what was the philosophy underlying your use of an application, not how an application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g. Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage systems (Memcached, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about the technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
Filtering the noise from an avalanche of Google Analytics Metrics: Anomaly Detection
At Tatvic, we have built an Anomaly Detection Engine that alerts the analyst to sporadic changes in Google Analytics metrics. The analyst can also drill down into the possible root causes of an anomaly, enabling quicker business decisions.
This talk will focus on the methods used for Anomaly Detection as well as its utility to the end user.
Analysis and drill-down used to be a simpler problem: plot each metric on its own dashboard and update it regularly. As Google Analytics has vastly expanded its coverage of metrics, a dashboard per metric no longer solves the problem, and the analyst can easily miss key changes in the data. Imagine the loss to an eCommerce website if the Page Load Time of its home page spikes. Anomaly Detection Systems help the analyst spot these sporadic patterns faster and take quicker business actions. Once an anomalous metric is discovered, it is possible to drill down into why it happened.
The challenge lies in detecting anomalous patterns with a fair degree of accuracy, and in transforming them into insights that are immediately useful. Some of the open source technologies used are the Python Scientific Stack (algorithms), MongoDB (backend) and d3.js (frontend).
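The abstract does not describe the detection algorithm itself; as a minimal illustrative sketch, one common baseline approach is a rolling z-score that flags a metric value deviating sharply from its recent history. The function, thresholds and sample data below are hypothetical, not Tatvic's actual engine:

```python
# Illustrative sketch only: a simple rolling z-score anomaly detector.
# The actual engine's algorithm is not described in the abstract; this
# just shows the general shape of flagging sporadic metric changes.
from statistics import mean, stdev

def detect_anomalies(series, window=7, threshold=3.0):
    """Return indices where a value deviates more than `threshold`
    standard deviations from the trailing `window` of observations."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Example: daily page load times (seconds) with a sudden spike at index 8
load_times = [1.1, 1.0, 1.2, 1.1, 0.9, 1.0, 1.1, 1.0, 5.0, 1.1]
print(detect_anomalies(load_times))  # the spike at index 8 is flagged
```

A production system would need to handle seasonality and trend (e.g. weekly traffic cycles) rather than a flat trailing window, which is where the more sophisticated methods discussed in the talk come in.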
Kushan Shah is a Web Analyst at Tatvic. He works towards solving business problems using a combination of data and algorithms.