Jul 2016
Thu 28: 08:30 AM – 06:25 PM IST
Fri 29: 08:30 AM – 06:15 PM IST
Sat 30: 08:45 AM – 05:00 PM IST
Sun 31: 08:15 AM – 06:00 PM IST
The Fifth Elephant is India’s most renowned data science conference. It is a space for discussing cutting-edge developments in machine learning, data science, and the technology that powers data collection and analysis.
Machine Learning, Distributed and Parallel Computing, and High-performance Computing continue to be the themes for this year’s edition of Fifth Elephant.
We are now accepting submissions for our next edition, which will take place in Bangalore on 28–29 July 2016.
#Tracks
We are looking for application-level and tool-centric talks and tutorials on the following topics:
The deadline for submitting proposals is 30 April 2016.
This year’s edition spans two days of hands-on workshops and conference sessions. We are inviting proposals for:
Proposals will be filtered and shortlisted by an Editorial Panel. We urge you to add links to videos / slide decks when submitting proposals. This will help us understand your past speaking experience. Blurbs or blog posts covering the relevance of a particular problem statement and how it is tackled will help the Editorial Panel better judge your proposals.
We expect you to submit an outline of your proposed talk – as a mind map, a text document, or draft slides – within two weeks of submitting your proposal.
We will notify you about the status of your proposal within three weeks of submission.
Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare well for the conference.
There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to cover anyone exceptional. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and made available to speakers delivering full sessions (40 minutes or longer).
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
##Venue
The Fifth Elephant will be held at the NIMHANS Convention Centre, Dairy Circle, Bangalore.
##Contact
For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.
Hosted by
Chandraprakash Bhagtani
@cpbhagtani
Submitted Jun 14, 2016
Today’s processing systems have moved from classical batch data-warehouse reporting to real-time processing and analytics. RDBMS (OLTP) data is one type of data required for analysis and for deriving business insights. The traditional way of ingesting RDBMS data into an analytical system (Hadoop, etc.) is via bulk import or query-based ingestion. This approach has the following issues:
-- A process that periodically copies a snapshot of the entire source consumes too much time and too many resources.
-- Alternative approaches that rely on timestamp columns, triggers, or complex queries often hurt performance and increase complexity.
-- The source system needs many schema and operational changes to support the copy, e.g. adding CDC columns or setting up a read slave.
-- Periodic bulk copies also stress the network and other resources: usage follows a sinusoidal wave, with huge spikes during the copy while the systems sit idle the rest of the time.
-- Since old database schemas were rarely designed with data ingestion in mind, hard deletes and updates without CDC timestamp columns are common. This results in losing facts.
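The weaknesses above can be seen in a minimal sketch of query-based incremental ingestion. The table and column names (`orders`, `updated_at`) are hypothetical examples, not from the talk:

```python
# Naive query-based incremental ingestion using a timestamp column.
# Table/column names (orders, updated_at) are hypothetical examples.
import sqlite3

def incremental_pull(conn, last_seen):
    """Fetch rows modified since the last pull.

    Limitations this illustrates:
      - hard DELETEs never appear in the result, so facts are lost;
      - every source table must carry an indexed CDC timestamp column;
      - each pull is a bursty scan that spikes load on the source DB.
    """
    cur = conn.execute(
        "SELECT id, status, updated_at FROM orders WHERE updated_at > ?",
        (last_seen,),
    )
    return cur.fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, updated_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "placed",  "2016-06-01T10:00:00"),
    (2, "shipped", "2016-06-02T09:00:00"),
])
rows = incremental_pull(conn, "2016-06-01T12:00:00")
# Only row 2 is returned; if row 1 were hard-deleted after a prior pull,
# the deletion would be entirely invisible to this query.
```

Note that the query can only ever report rows that still exist, which is exactly why hard deletes lose facts in this model.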
We have built R3D3, a source-agnostic, distributed change data capture (CDC) platform. It handles thousands of CDC events per second per server and supports strong look-back capabilities and a subscription model. By providing a source-DB-agnostic CDC schema (Avro), it offers pluggable replication to any kind of secondary storage (Hive, HBase, Cassandra). In addition, its rich subscription model allows both batch ingestion and real-time streaming on top of a single pipeline. Additional features include:
-- Replication in near real time.
-- No extra load on the source RDBMS, and no CDC column is required on tables.
-- Fault tolerance, at-least-once semantics, and ordering guarantees.
-- Replays in case of failure.
-- Schema evolution.
-- Safeguarding PII and sensitive data via encryption/masking driven by classification metadata.
-- Real-time publishing of auditing/metrics events, and dashboarding.
-- Bootstrapping (loading history for) a table.
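To make the at-least-once and ordering guarantees concrete, here is a hedged sketch of a subscriber applying generic CDC events idempotently. The event fields (`lsn`, `op`, `table`, `key`, `after`) are illustrative assumptions, not R3D3’s actual Avro schema:

```python
# Hypothetical sketch: a subscriber applying source-agnostic CDC events.
# Field names (lsn, op, table, key, after) are illustrative assumptions.
state = {}          # materialised view of the source table
applied_lsn = -1    # high-water mark; makes at-least-once delivery safe

def apply_event(event):
    """Apply one CDC event; calling it twice with the same event is a no-op."""
    global applied_lsn
    if event["lsn"] <= applied_lsn:      # duplicate from a replay: skip
        return
    if event["op"] in ("INSERT", "UPDATE"):
        state[event["key"]] = event["after"]
    elif event["op"] == "DELETE":        # hard deletes are captured too
        state.pop(event["key"], None)
    applied_lsn = event["lsn"]

events = [
    {"lsn": 1, "op": "INSERT", "table": "orders", "key": 1, "after": {"status": "placed"}},
    {"lsn": 2, "op": "UPDATE", "table": "orders", "key": 1, "after": {"status": "shipped"}},
    {"lsn": 2, "op": "UPDATE", "table": "orders", "key": 1, "after": {"status": "shipped"}},  # redelivered
    {"lsn": 3, "op": "DELETE", "table": "orders", "key": 1, "after": None},
]
for e in events:
    apply_event(e)
# state ends up empty: the delete was captured and the duplicate was skipped
```

Tracking a monotonic position (the `lsn` here) is what lets an at-least-once pipeline redeliver freely on failure without corrupting the downstream view.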
In this talk, I will cover the following:
Chandra has 8 years of experience in big data systems and works as a Staff Engineer at Intuit.