In 2014, infrastructure components such as Hadoop, the Berkeley Data Analytics Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. This year's conference therefore focuses on analytics: the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics, including real-time and opportunity analytics, and the technologies and models used for analyzing data.
Proposals will be reviewed against five criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights the proposal shares with the audience that they did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not merely how the application was used.
Presentation skills – the proposer's presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSDs or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage (Memcached, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about the technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
Using Cascalog and Clojure to make the elephant move!
The intent is to highlight the benefits gained by using Clojure, a functional language that runs on the JVM, and Cascalog, a data-processing library for Hadoop. Participants will be exposed to:
The rationale behind Cascalog
The basic abstractions of the library
Use cases where a high level functional language and a declarative programming library helped immensely
Short code snippets showing extraction of interesting customer behavioural patterns using Cascalog and Clojure
How functional abstractions might help bridge the gap between the lab culture of data science and so-called production environments
Machine learning practitioners often spend 70–80% of their time cleaning, transforming and scrubbing data. Quick iterations and the ability to create a large number of features out of raw data are prerequisites for any serious machine learning activity. There is also a large gap between the skill sets and objectives of data scientists and those of programmers. Data scientists need to slice and dice through real-world big data, and implementing all the low-level plumbing of multiple, dependent Hadoop jobs is a serious impediment to this goal. On the other hand, companies cannot afford to keep lab tools running, as these have a large impedance mismatch with production environments.
We have seen the Cascalog library and the functional abstractions of Clojure provide a sweet spot. They let data scientists focus on their job of understanding data without worrying about complicated class hierarchies or unintuitive domain-specific languages. At the same time, Cascalog and Clojure integrate seamlessly with a JVM-based stack. The result is a fun and productive way to process data at scale!
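To give a flavour of the declarative style described above, here is a minimal sketch in Cascalog's documented query syntax. The `purchases` dataset and its field names are hypothetical, invented purely for illustration; Cascalog accepts a plain Clojure sequence as a generator, so the same query shape scales from a REPL experiment to a Hadoop job.

```clojure
(ns example.core
  (:require [cascalog.api :refer :all]
            [cascalog.ops :as c]))

;; Hypothetical in-memory dataset of [customer-id product] tuples.
(def purchases
  [["alice" "book"]
   ["alice" "pen"]
   ["bob"   "book"]])

;; Declarative query: count purchases per customer.
;; It reads like a logic program; Cascalog compiles it down to
;; the underlying Cascading/Hadoop flows.
(?<- (stdout)
     [?customer ?cnt]
     (purchases ?customer _)
     (c/count ?cnt))
```

The point the talk makes is visible even in this toy: the analyst states *what* relationship should hold between variables, not *how* to wire up mappers, reducers and intermediate jobs.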
Harshad Saykhedkar is a senior data scientist at Sokrati, a digital advertising startup based in Pune. He has spent the past four years applying machine learning to problems in the advertising, banking and telecom sectors. He has used multiple tools (R, Python, SAS) along the way and has lately fallen in love with the Clojure ecosystem, hacking his way to create data processing tools at Sokrati. Harshad holds a master's degree in Operations Research from the Indian Institute of Technology, Mumbai.
- LinkedIn profile (http://in.linkedin.com/pub/harshad-saykhedkar/17/569/b99)
- Link to a machine learning workshop, conducted as part of a run-up event for The Fifth Elephant in Mumbai (https://hasgeek.tv/fifthelephant/the-fifth-elephant-2014-run-up-mumbai/943-introduction-to-machine-learning-workshop)