By 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools had stabilized and are now thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus of this year’s conference is analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics, including real-time and opportunity analytics, as well as the technologies and models used for analyzing data.
Proposals will be reviewed using six criteria:
Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal offer that the audience did not already have.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not merely how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience precisely and effectively.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSDs or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage (Memcache, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about the technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
What chemistry can teach us about designing better NLP algorithms
The main idea behind this talk is how context is formed in language, and how the location, time, and order of words also have an effect on it.
Machine Learning, Artificial Intelligence and Automated Natural Language Analyses present some of the most interesting challenges for next generation computing. And as much as we’d like to believe otherwise, we are still a long way from developing bots that understand the universe of human language.
It isn’t an easy problem because the idiosyncrasies of our language present certain difficulties for the systematic and logical brain of the machine. For instance, the meaning of a word can change based on the context.
The group has achieved fair and equal representation for all its members.
She is very fair with blue eyes.
Now it’s very easy for the human eye to discern what the intent is, but how will a computer?
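To make the ambiguity concrete, here is a minimal toy sketch (not the speaker’s method, and far simpler than a real NLP system): it guesses the sense of “fair” purely from cue-word overlap with the surrounding sentence. The sense labels and cue words are invented for illustration.

```python
# Toy word-sense disambiguation for "fair": pick the sense whose
# hand-picked cue words overlap most with the rest of the sentence.
SENSE_CUES = {
    "just/equitable": {"representation", "treatment", "equal", "trial"},
    "light-complexioned": {"eyes", "hair", "skin", "complexion"},
}

def guess_sense(sentence: str) -> str:
    """Return the sense of 'fair' with the largest cue-word overlap."""
    words = set(sentence.lower().replace(".", "").split())
    return max(SENSE_CUES, key=lambda sense: len(SENSE_CUES[sense] & words))

print(guess_sense("The group has achieved fair and equal representation."))
# -> just/equitable
print(guess_sense("She is very fair with blue eyes."))
# -> light-complexioned
```

A rule list like this breaks down as soon as the vocabulary grows, which is exactly why context modeling is a hard problem for machines.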
In this talk I am going to explain how natural language processing (NLP) can learn from chemistry in designing smarter engines. Yes, the chemistry of organic bonds and covalent bonds.
I will first show how chemistry and NLP are related and how knowledge of chemical reactions and elements can help us in NLP. Following this, I will compare elements of the periodic table to NLP entities. There are interesting parallels between radioactive elements and isotopes in chemistry and words, places and meanings in the semantic world.
Participants should have a basic knowledge of natural language concepts.
Siva is a developer with Compile, where he works on practical applications for NLP algorithms.