In 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus of this year’s conference is analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics, including real-time and opportunity analytics, as well as the technologies and models used for analyzing data.
Proposals will be reviewed using six criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share that the audience did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSDs or memory), databases (PostgreSQL, MySQL, Infobright) or caching/storage (Memcached, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big Data and Security
Big Data and the Internet of Things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about these technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
Engineering custom visualisations with advanced d3.js
d3.js is a complex library with a great deal of functionality. At the same time, many ready-made examples are available on the Internet, which promotes a copy-paste culture: one ends up seeing the same charts – Sankey, chord, matrix, force layout, etc. – over and over. The objective of this workshop is to help a d3 developer truly harness the power of d3.js to make custom visualisations.
Example: one of our clients came to us with a list of 20–30 data points that they wanted to visualise. Instead of forcing them into an existing d3 chart, we conceptualised, designed and developed a custom d3 chart called the River Chart.
The idea of the talk is to take a business problem and build a custom visualisation from scratch. It will include advanced topics such as:
- Injecting real-time data with Knockout.js
- Reading complex JSON with Underscore.js, d3.set and d3.nest
- SVG path generators: area, line, arc and Bézier curve generators
- An introduction to the various d3 layouts, and how to create a custom d3 layout
- Using d3 components like brush, zoom, drag and context to improve interactivity
- Leveraging d3 plugins like Sankey, Fisheye, JSONP and key binding
- Adding d3 transitions to introduce Flash-like animations
- Combining the power of visualisation with the simplicity of spreadsheets (Ext JS)
- Building reusable charts with the Miso Project’s Dataset, d3.chart and Storyboard
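Since custom charts are built on top of d3’s path generators, it may help to see the core idea in isolation. The sketch below is plain JavaScript, not d3 or the workshop’s material; makeLine and its accessor arguments are illustrative names. It shows what a generator like d3.svg.line() boils down to: a configurable function that maps an array of data points to the "d" attribute string of an SVG path element.

```javascript
// Minimal sketch of a d3-style SVG line generator (illustrative, not d3 itself).
// makeLine is configured with x/y accessor functions, mirroring d3's
// accessor pattern, and returns a function from data to a path string.
function makeLine(xAccessor, yAccessor) {
  return function (data) {
    return data
      .map(function (d, i) {
        // "M" moves to the first point; "L" draws a line to each subsequent one.
        var cmd = i === 0 ? "M" : "L";
        return cmd + xAccessor(d, i) + "," + yAccessor(d, i);
      })
      .join("");
  };
}

var line = makeLine(
  function (d) { return d.x; },
  function (d) { return d.y; }
);

var path = line([{ x: 0, y: 10 }, { x: 5, y: 20 }, { x: 10, y: 15 }]);
console.log(path); // "M0,10L5,20L10,15"
```

The resulting string can be assigned to a path’s "d" attribute; swapping in different accessors or interpolation logic is what lets a custom generator go beyond the stock charts.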
- You should have basic knowledge of d3.js.
- A computer with (a) the Chrome browser, (b) a Python web server and (c) a text editor (even Notepad will do).