The Fifth Elephant 2014

A conference on big data and analytics

In 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus of this year’s conference is analytics – the infrastructure that powers analytics and how analytics is done.

Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.

Proposals will be reviewed against five criteria:
Domain diversity – proposals will be selected from different domains – medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.

Tickets: http://fifthel.doattend.com

Website: https://fifthelephant.in/2014

For queries about proposals / submissions, write to info@hasgeek.com

Theme

  1. Data Collection and Transport – e.g. Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.

  2. Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage (Memcache, Cassandra, Redis, etc.).

  3. Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.

  4. Real-time analytics

  5. Opportunity analytics

  6. Big data and security

  7. Big data and internet of things

  8. Data Usage and BI (Business Intelligence) in different sectors.

Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about the technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.

Hosted by

The Fifth Elephant - known as one of the best data science and Machine Learning conferences in Asia - has transitioned into a year-round forum for conversations about data and ML engineering, data science in production, and data security and privacy practices.

Manish Shukla

@manishshukla

Migrating a traditional warehouse and its applications to a Big-data platform

Submitted Jun 4, 2014

  1. Understanding the capabilities/limitations of Hadoop platform for efficient migration
  2. Identifying the pitfalls and anti-patterns to avoid

There is no single right solution when you start thinking about migrating a traditional warehouse and its applications to a big data platform. Every problem is different, and so is its solution. In spite of this, there are common mistakes that developers make because of a lack of understanding of the new space. Knowing these pitfalls and common patterns will help existing big data developers, as well as those who are looking for an opportunity to work in this space.

Outline

A prevalent trend in the industry is that, with growing volumes of data and hence exponentially growing processing costs, enterprises find it hard to scale their traditional data warehouses. Approaches such as building specialised data marts that project a view of a subset of the entire data act as short-term tactical fixes, but cause other problems. A more strategic option is to migrate to big data platforms like Hadoop. However, such a migration should be done keeping in mind the capabilities and limitations of the big data system, in order to build an efficient solution.

I see three aspects of a data analytics solution: data ingestion and preparation, data aggregation, and statistical computation (e.g. various forecasting algorithms). In this talk, I will share my experiences in migrating an application to the Hadoop ecosystem. I will describe the options for each of these aspects, and I will also talk about some common pitfalls and anti-patterns that we should identify and avoid.
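To make the three aspects concrete, here is a minimal, hypothetical sketch using PySpark on a Hadoop cluster (the talk does not prescribe a specific engine). The HDFS paths, column names and the 7-day moving average standing in for a forecast are illustrative assumptions, not the actual pipeline from the talk.

    # Hypothetical sketch of the three aspects, using PySpark on Hadoop.
    # Paths, column names and the moving-average "forecast" are illustrative.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("warehouse-migration-sketch").getOrCreate()

    # 1. Data ingestion and preparation: read raw files from HDFS and drop bad rows.
    raw = spark.read.csv("hdfs:///warehouse/raw/sales.csv", header=True, inferSchema=True)
    prepared = raw.dropna(subset=["store_id", "sale_date", "amount"])

    # 2. Data aggregation: roll up sales per store per day.
    daily = (prepared
             .groupBy("store_id", "sale_date")
             .agg(F.sum("amount").alias("daily_sales")))

    # 3. Statistical computation: a 7-day moving average as a stand-in for a forecast.
    w = Window.partitionBy("store_id").orderBy("sale_date").rowsBetween(-6, 0)
    forecast = daily.withColumn("sales_7d_avg", F.avg("daily_sales").over(w))

    forecast.write.mode("overwrite").parquet("hdfs:///warehouse/curated/sales_forecast")

Each stage maps to one aspect above: reading and cleaning raw files from HDFS, rolling up per-store daily totals, and computing a windowed statistic that a real forecasting algorithm would replace.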

Speaker bio

I am a developer at Thoughtworks with around eleven years of experience in various technologies. A few years ago I started working on the Hadoop platform, helping our clients migrate traditional data warehouses and analytical applications to the Hadoop ecosystem. I believe that analytical and transactional systems are going to converge on big data at some point, and hence distributed computing is going to be the solution in the future. I have also presented a similar topic at the Great Indian Developers Summit.
