By 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is on analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.
Proposals will be reviewed using six criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions and retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know?
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience what philosophy underlay your use of an application, not how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to firstname.lastname@example.org
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSDs or memory), databases (PostgreSQL, MySQL, Infobright) or caching/storage (Memcache, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about these technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
De-dup @ Scale: Experiments with DynamoDB
What should you know if you want to integrate DynamoDB into your BigData application?
In this talk, I wish to share the experiences my team and I had building a BigData application using DynamoDB. Through this work, we had the opportunity to explore the options you have when using DynamoDB, the issues you may face, and the trade-offs you may need to make. By sharing these experiences, I hope to equip developers with what to watch out for in similar circumstances. The talk will also highlight some of the underlying concepts of a NoSQL technology such as DynamoDB.
Mobile device vendors and app writers try to capture very fine-grained application events from millions of devices around the world. They then mine this data using BigData technologies to extract insights about device and application usage. We worked with a client building an ETL application for mining this data using AWS technologies. An interesting part of this application was handling a de-duplication step for the events being captured. We used DynamoDB to build the de-dup step.
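The de-dup pattern described here typically relies on DynamoDB’s conditional writes: a put that succeeds only when the event’s key is not already present. The sketch below is a minimal in-memory model of that semantics – the table shape, attribute names and event fields are illustrative assumptions, not the client’s actual schema. With boto3, the same idea would use `put_item` with `ConditionExpression="attribute_not_exists(event_id)"`.

```python
class ConditionalPutError(Exception):
    """Mirrors DynamoDB's ConditionalCheckFailedException: the item
    already exists, so the conditional put is rejected."""


class DedupTable:
    """In-memory stand-in for a DynamoDB table used purely as a seen-set:
    the item key is the event id, and a conditional put rejects duplicates."""

    def __init__(self):
        self._items = {}

    def put_if_absent(self, event_id, item):
        # Models put_item(..., ConditionExpression="attribute_not_exists(event_id)")
        if event_id in self._items:
            raise ConditionalPutError(event_id)
        self._items[event_id] = item


def dedup(events, table):
    """Yield each event exactly once; duplicates fail the conditional put."""
    for event in events:
        try:
            table.put_if_absent(event["event_id"], event)
        except ConditionalPutError:
            continue  # already processed this event id; skip the duplicate
        yield event
```

One trade-off worth noting with the real service: a failed conditional write is still a billed operation, so a stream with many duplicates pays for the rejections – the kind of cost/correctness trade-off the talk alludes to.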
In this talk, I’d like to share the team’s experiences in building the application with DynamoDB. It’s not so much a “What’s DynamoDB” talk as a “What do you need to know to use DynamoDB” talk, particularly when you are integrating it with applications like Elastic MapReduce. In the process, I will explain some underlying concepts and capabilities of DynamoDB.
I have had a long relationship with BigData technologies, dating back to the early days of Hadoop at Yahoo!, where I was a committer on the Hadoop MapReduce and (now retired) Hadoop on Demand components from Hadoop 0.16 (long before it became Hadoop 1.0). Since 2010, I have been at ThoughtWorks, where I’ve been helping our clients on delivery projects involving BigData solutions built with Hadoop and AWS components. I am passionate about distributed computing, and keeping in touch with its rapid growth is a constant struggle I am happy to go through. I also love to share whatever knowledge I gain with others via talks at conferences and my blog at yhemanth.wordpress.com.