In 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.
Proposals will be reviewed using 5 criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already have.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to firstname.lastname@example.org
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSDs or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage (Memcache, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about the technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
Machine learning + Interactive visualization: A pragmatic approach to fixing knowledge bases
We wish to explore how recommenders and visualization can help fix problems inherent to knowledge bases. We will tackle one such problem: incorrect or missing assignment of tags to articles in a knowledge base. We will also demonstrate how off-the-shelf software from the Hadoop ecosystem can be used to improve the richness of this data through processing and visualization, and will show how the concept works with a couple of examples.
Many publicly available knowledge bases offer extremely rich data sets: they are almost entirely human-edited, written in natural language, and tagged using a system of tags that is likely non-curated. We wanted to explore whether we could improve the richness of the tagging using software from the Hadoop ecosystem, aiding comprehension of the results with visualization.
To begin with, we defined a relationship between tags and, through a series of operations, extracted numerical data about this relationship from the knowledge base. We then used a recommender to suggest further relations between tags. Finally, all of the results were visualized as an interactive graph to speed up understanding.
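The steps above (define a tag relationship, extract numbers for it, feed them to a recommender) might be sketched as follows. This is an illustrative sketch in Python rather than the Mahout/MapReduce stack the authors used; the sample data, the co-occurrence definition of the tag relationship, and the cosine-similarity scoring are all assumptions for the sake of the example:

```python
from collections import defaultdict
from itertools import combinations
import math

# Toy knowledge-base sample: article -> set of tags (illustrative data only).
articles = {
    "Alan Turing": {"computer science", "mathematics", "cryptography"},
    "Enigma machine": {"cryptography", "history"},
    "Lambda calculus": {"mathematics", "computer science", "logic"},
    "Kurt Goedel": {"mathematics", "logic"},
}

# Step 1: define the tag-tag relationship as co-occurrence on articles,
# and extract numerical counts for it from the data.
cooccur = defaultdict(int)   # (tag_a, tag_b) -> number of shared articles
tag_count = defaultdict(int)  # tag -> number of articles carrying it
for tags in articles.values():
    for t in tags:
        tag_count[t] += 1
    for a, b in combinations(sorted(tags), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

# Step 2: a simple item-based recommender: score candidate tags for a
# given tag by cosine similarity over the co-occurrence counts.
def similarity(a, b):
    return cooccur[(a, b)] / math.sqrt(tag_count[a] * tag_count[b])

def recommend(tag, k=3):
    scored = [(t, similarity(tag, t)) for t in tag_count if t != tag]
    return sorted((s for s in scored if s[1] > 0), key=lambda s: -s[1])[:k]
```

In the pipeline described above, the resulting scores would then drive the interactive graph visualization (tags as nodes, suggested relations as weighted edges); at Wikipedia scale the counting step would run as a MapReduce job rather than in memory.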
When we did this for Wikipedia, it helped us spot some interesting existing relationships, as well as missing ones that should be edited in. We are in the process of trying this out on another such database.
In the session, we will cover how we arrived at the problem and how we solved it. We will discuss the details of the examples and some of the interesting results we found as we explored the visualization. Here is a list of the tools we used, although we will discuss them only briefly during the talk:
- Hadoop MapReduce
- Apache Mahout
Viraj is a Software Architect at GS Lab. For the last eight years at GS Lab, he has worked in the area of Web applications. His current areas of focus are data analytics and visualization, design and development of scalable Web applications, and exploring data-analytics use cases in Web-based products. As part of this effort, he led a team of engineers to develop a social news-reader application with a recommendation engine that suggests news and products based on users’ reading habits, their social relations and likes.
Before moving to analytics, he worked in Web application security, developing Web-based attacks for an enterprise Web security assessment product. He holds an M.Tech. in Computer Science from IIT Kharagpur and, before that, an M.Sc. in Mathematics from the University of Pune.