By 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools had stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.
Proposals will be reviewed using six criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know?
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright) or caching/storage (Memcache, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about the technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
Extracting and Employing Domain-Specific Knowledge Graphs (DKGraphs)
Assume that you have an opportunity to work with a vast amount of unstructured and semi-structured text data in a specific domain, e.g. automobiles, agriculture, medical, internet, etc. Your task is to derive business value from this textual data by extracting a domain-specific knowledge graph (DKGraph) and employing it for various business use cases. This problem poses several key challenges:
- Since developing DKGraphs is not a common task, there are limited open-source/commercial tools available for it. One needs to use a combination of NLP, IR and ML techniques to develop DKGraphs. Which NLP and ML techniques are needed to build and employ DKGraphs?
- How do you strike a balance between automation and audits by domain experts?
- How do you employ DKGraphs to derive business value?
In this talk, I will attempt to answer the above questions. I will share my experience of building DKGraphs from scratch in two industries (automobiles and smartphones).
In almost every industry, data is being used to make key business decisions, and nearly 70-80% of that data is semi-structured or unstructured text. One of the key challenges is to make sense of this vast body of text. A DKGraph is a popular way to capture domain knowledge in structured form and enable several use cases, e.g. capturing semantic variations of domain-specific entities, using domain-specific rules together with the DKGraph to find errors/inaccuracies in documentation, and enabling system-level diagnosis and root cause analysis.
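As a minimal illustration of what "capturing domain knowledge in structured form" can look like, the sketch below stores a tiny DKGraph as (subject, relation, object) triples and traverses it. The automobile entities and relations are invented for illustration; they are assumptions, not taken from the actual systems discussed in the talk.

```python
# A minimal sketch of a domain-specific knowledge graph stored as
# (subject, relation, object) triples. The entities and relations below
# are invented automobile-domain examples, purely for illustration.
triples = [
    ("engine", "has_part", "piston"),
    ("engine", "has_part", "crankshaft"),
    ("piston", "can_exhibit", "knocking"),
    ("knocking", "possible_cause", "low_octane_fuel"),
]

def neighbors(entity, relation):
    """Return all objects linked to `entity` via `relation`."""
    return [o for s, r, o in triples if s == entity and r == relation]

# Traversing the graph: which symptoms can parts of the engine exhibit?
symptoms = {
    sym
    for part in neighbors("engine", "has_part")
    for sym in neighbors(part, "can_exhibit")
}
print(symptoms)  # {'knocking'}
```

Even this toy structure hints at the diagnosis use case: chaining `has_part`, `can_exhibit` and `possible_cause` edges is a crude form of root cause analysis.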
In this talk, I shall discuss the methodology, results, challenges, and business impact of two DKGraphs. Here is an outline of my talk:
Brief Introduction to common text analytics tasks and common NLP modules
Introduction to the DKGraphs
In-depth discussion on NLP pipeline to develop DKGraphs (using automobile and smartphone domains)
Domain-specific Linguistic Preprocessing (Sentence boundary detection, Tokenizer, Morphological Analyzer, POS Tagger, Chunker), Domain-specific Entity Extraction and Entity Disambiguation, Relationship Extraction module
DKGraph Creation and Visualization
Machine learning techniques (Bayesian Networks, Factorial Hidden Markov Models) to employ DKGraphs and enable various use cases e.g. root cause analysis
Business Use cases of the DKGraphs in Automobiles and Smartphone industries
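To make the extraction steps in the outline concrete, here is a deliberately toy, pure-Python sketch of sentence splitting, tokenization, dictionary-based entity extraction and co-occurrence relation extraction. The smartphone lexicon and the `exhibits` relation are invented assumptions; a real pipeline would use trained NLP models (POS tagger, chunker, disambiguation) plus domain-expert audits, as the outline describes.

```python
import re

# Invented smartphone-domain lexicon (assumption, for illustration only).
ENTITY_LEXICON = {"battery": "COMPONENT", "screen": "COMPONENT",
                  "overheating": "SYMPTOM", "flickering": "SYMPTOM"}

def split_sentences(text):
    """Naive sentence boundary detection on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokenize(sentence):
    """Lowercase word tokenizer (stand-in for a real morphological analyzer)."""
    return re.findall(r"[a-z]+", sentence.lower())

def extract_entities(tokens):
    """Dictionary lookup as a stand-in for trained entity extraction."""
    return [(t, ENTITY_LEXICON[t]) for t in tokens if t in ENTITY_LEXICON]

def extract_relations(text):
    """Link each COMPONENT to SYMPTOMs mentioned in the same sentence."""
    relations = []
    for sent in split_sentences(text):
        ents = extract_entities(tokenize(sent))
        comps = [e for e, tag in ents if tag == "COMPONENT"]
        syms = [e for e, tag in ents if tag == "SYMPTOM"]
        relations += [(c, "exhibits", s) for c in comps for s in syms]
    return relations

report = "The battery shows overheating under load. The screen has flickering."
print(extract_relations(report))
# [('battery', 'exhibits', 'overheating'), ('screen', 'exhibits', 'flickering')]
```

The extracted triples would then feed DKGraph creation; the talk covers how the rule-based parts are balanced against expert audits in practice.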
Preliminary knowledge of text mining and data mining will be helpful in understanding the deeper concepts discussed in my talk.
I am a data geek/data scientist with both academic training (a PhD) and 10+ years of work experience in building data products from scratch. I have worked in various industries (refineries, automobiles, smartphones, etc.). Apart from data crunching, I love meeting people, outdoor sports, running and biking.
My detailed profile is available at LinkedIn.