By 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is on analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.
Proposals will be reviewed using 5 criteria:
Domain diversity – proposals will be selected from different domains – medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright) or caching/storage (Memcache, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about these technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
How to Make Big Data Real and Valuable ...
The objective of the session is to:
Give the participants a very quick overview of the data landscape and the journey from Legacy Systems/Applications -> Data Integration -> ETL -> Data Warehouse -> Real-Time Streaming, and how all of this culminates in a Big Data architecture.
To make Big Data real, customers need to understand how and where to weave Big Data into existing IT systems. This talk aims to give participants that understanding.
The power of Big Data is multiplied many times over if one can use it to derive business value at the right moment – or, as they say, “at the moment of truth” – when it is needed most and has the maximum impact.
Big Data architecture, if woven seamlessly into a real-time data architecture, can have a lot of impact for customers. This talk will cover in technical detail what real-time means, how it differs from other in-memory systems, and how one can use a hybrid architecture, or Lambda principles, to mix real-time data with persistent data (warehouse, legacy systems) to derive value from a Big Data architecture.
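A minimal sketch of the hybrid, Lambda-style pattern described above, assuming a precomputed batch view over persisted data and a speed-layer view of recent real-time updates (all names and values here are illustrative, not from any specific framework):

```python
# Lambda-style serving sketch: answer queries by merging a batch view
# (computed offline over the persisted master dataset, e.g. the warehouse)
# with a speed-layer view (deltas accumulated since the last batch run).

# Batch layer: page-view counts precomputed over persisted data.
batch_view = {"page_a": 1000, "page_b": 250}

# Speed layer: real-time increments not yet absorbed by the batch layer.
speed_view = {"page_a": 7, "page_c": 3}

def query(key: str) -> int:
    """Serve a combined answer: batch result plus real-time delta."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(query("page_a"))  # 1007: batch count plus recent increments
print(query("page_c"))  # 3: seen only by the speed layer so far
```

The point of the pattern is that neither layer alone gives a complete answer: the batch view is accurate but stale, the speed view is fresh but partial, and the query-time merge combines the two.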
Participants will get a good understanding of how a hybrid architecture combining real-time and persisted data can be used to solve customer use cases.
Participants need a high-level understanding of ETL, Data Warehousing and Data Integration.
I have a background in Middleware, Applications, Security and Data (Real Time, Batch, ETL). I have been in the industry for the last 17 years; I started as a developer and have since played Sales Engineering, Product Management and Technical Evangelist roles.