By 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools had stabilized and were thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. This year's conference focuses on analytics: the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics, including real-time and opportunity analytics, and the technologies and models used for analyzing data.
Proposals will be reviewed against the following criteria:
- Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
- Novelty – what has been done beyond the obvious.
- Insights – what does the proposal share with the audience that they did not already know?
- Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
- Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not merely how the application was used.
- Presentation skills – the proposer's presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to email@example.com
- Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
- Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright) or caching/storage (Memcached, Cassandra, Redis, etc.)
- Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
- Big data and security
- Big data and the Internet of Things
- Data Usage and BI (Business Intelligence) in different sectors
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about these technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
Realizing Large-scale Distributed Deep Learning Networks over GraphLab
The main objective is to give an overview of our cutting-edge work on realizing distributed deep learning networks over GraphLab. The objectives of the talk can be summarized as follows:
- First-hand experience and insights into implementing distributed deep learning networks.
- A thorough view of GraphLab (including descriptions of code) and the extensions required to implement these networks.
- Details of how the extensions were implemented in the GraphLab source; they have been submitted to the community for evaluation.
- An arrhythmia detection use case as an application of the large-scale distributed deep learning network.
Large-scale distributed deep learning networks are a holy grail of the machine learning/AI/data science fields, with applications in image processing, speech recognition and video analytics. We have implemented such a network over GraphLab, the open-source graph processing framework.

As can be expected, several extensions to GraphLab were required. The key extension was the ability to run multiple instances of the distributed GraphLab engine in the same cluster, which allows training to be parallelized over the data. Another missing abstraction was that of a layer, which aggregates several nodes (GraphLab vertices). An additional requirement was mass communication between two layers: the ability to send a message to all vertices that belong to a layer.

We have implemented these abstractions in GraphLab and have consequently realized the deep learning network over a cluster of nodes. We have used this network for arrhythmia detection from ECG images. The talk will also cover some of the performance studies we have conducted on our distributed deep learning network.
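The two abstractions described above – a layer that groups vertices, and mass communication that delivers a message to every vertex in a layer – can be sketched in plain Python. This is a conceptual illustration only; the class and method names are ours, not GraphLab's actual API:

```python
from dataclasses import dataclass, field

# Conceptual sketch only: `Vertex`, `Layer` and `broadcast` are illustrative
# names, not GraphLab's API. The sketch shows a layer grouping vertices and
# layer-to-layer broadcast messaging, as described in the abstract.

@dataclass
class Vertex:
    vid: int
    inbox: list = field(default_factory=list)

@dataclass
class Layer:
    name: str
    vertices: list

    def broadcast(self, message) -> None:
        # Mass communication: deliver one message to every vertex
        # that belongs to this layer.
        for v in self.vertices:
            v.inbox.append(message)

# Two small adjacent layers, as in a deep network.
hidden = Layer("hidden", [Vertex(i) for i in range(3)])
output = Layer("output", [Vertex(i) for i in range(2)])

# Activations computed in `hidden` are broadcast to all of `output`.
hidden_activations = [0.5, -0.2, 0.9]
output.broadcast(hidden_activations)

for v in output.vertices:
    print(v.vid, v.inbox)  # every output vertex received the same message
```

In a graph-parallel framework, each vertex would run this receive step as part of its vertex program; the layer abstraction simply spares the programmer from addressing each vertex individually.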
Dr. Vijay Srinivas Agneeswaran has a Bachelor's degree in Computer Science & Engineering from SVCE, Madras University (1998), an MS (By Research) from IIT Madras (2001) and a PhD from IIT Madras (2008). He was a post-doctoral research fellow at the LSIR Labs, Swiss Federal Institute of Technology, Lausanne (EPFL) for a year. He has spent the last seven years creating intellectual property and building big data products at Oracle, Cognizant and Impetus, where he is now Director, Big Data Labs. He has built PMML support into Spark/Storm and is building a big data governance product for role-based, fine-grained access control inside Hadoop YARN. He has been a professional member of the ACM and the IEEE for more than eight years. He has filed patents with the US and European patent offices (with two issued US patents) and published in leading journals and conferences, including IEEE Transactions. His research interests include distributed systems (cloud, grid and peer-to-peer computing) as well as machine learning for big data and other emerging technologies.
- Profile: http://in.linkedin.com/in/vijaysrinivasagneeswaran/
- Strata NY 2013 talk: http://strataconf.com/stratany2013/public/schedule/proceedings
- Big Data Journal paper: http://online.liebertpub.com/doi/pdfplus/10.1089/big.2013.0018
- Presenting in NoSQL Now 2014: http://nosql2014.dataversity.net/agenda.cfm?confid=81&scheduleDay=PRINT