In 2013, commodity hardware and computing capacity for storing and processing large and small volumes of data are easily available on demand. The bigger issues are how to scale data processing, handle data diversity, manage infrastructure costs, decide which technologies work best for different contexts and problems, and build products from the insights and intelligence the data presents.
The Fifth Elephant 2013 is a three-day workshop and conference on big data, storage and analytics, with product demos and hacker corners.
The Fifth Elephant 2013 invites proposals on use cases and real-life examples. Tell us what specific problem you faced, which technology/tools worked for your use case and why, how you have developed business intelligence on the data you are collecting, and which analytics tools and techniques you employ. Our preference is for showcasing original work with clear takeaways for the audience. Please emphasize these in your proposal.
The conference will have two parallel tracks on 12th and 13th July:
- Storage: OLTP, messaging and notifications, databases and big data, NoSQL
- Analytics: Metrics and tools, cloud computing, mathematical modelling and statistical analysis, visualization
This year we are adding a preliminary day of workshops, on 11th July, to provide attendees more in-depth, hands-on training on open source frameworks and tools (Pig, Hadoop, Hive, etc.), commercial solutions (sponsored), programming languages such as R, and visualization techniques and tricks, among others.
We have a demo track for startups and companies who want to showcase their product to customers at The Fifth Elephant 2013 and get feedback. Slots are also open for 4-6 sponsored sessions for companies who want to talk about their technologies and reach out to developers, CTOs, CIOs and product managers at The Fifth Elephant. For more information on demo and sponsored session proposals, write to email@example.com.
HasGeek believes in open source as the foundation of the internet. Our aim is to strengthen these foundations for future generations. If your talk describes a codebase for developers to work with, we require that it is available under a license that does not impose itself on subsequent work. This is typically a permissive open source license (almost anything that is listed at opensource.org/licenses and is not GPL or AGPL), but restrictive and commercial licenses are also considered depending on how they affect the developer’s relationship with the user.
If you’d like to showcase commercial work that makes money for you, please consider supporting the event with a sponsorship.
Voting is open to attendees who have purchased event tickets. If there is a proposal you find notable, please vote for it and leave a comment to initiate discussions. Your vote will be reflected immediately, but will be counted towards selections only if you purchase a ticket. Proposals will also be evaluated by a program committee consisting of:
- Gopal Vijayraghavan, Hortonworks
- Govind Kanshi, Microsoft
- Joydeep Sen Sarma, Qubole
- Srinivasan Seshadri (Sesh), Boltell
Emphasis will be placed on original work and talks which present new insights to the audience.
The program committee will interview the proposers who receive the most votes from attendees and the committee. Proposers must submit draft presentations as part of the selection process, to ensure the talk stays in line with the original proposal and to help the committee build a coherent line-up for the event.
There is only one speaker per session. Attendance is free for selected speakers. HasGeek will cover your travel to and accommodation in Bangalore from anywhere in the world. As our budget is limited, we will give preference to speakers from locations closer to home, but will do our best to cover costs for anyone exceptional. If you are able to raise support for your trip, we will count it towards an event sponsorship.
If your proposal is not accepted, you can buy a ticket at the same rate as was available on the day you proposed. We’ll send you a code.
Discounted tickets are available from http://fifthelephant.doattend.com/
The program committee will announce the first round of selected proposals by the end of April and a second round by the end of May, and will finalize the schedule by 20th June. The funnel will close on 5th June. The event is on 11th-13th July 2013.
Big Data, Real-time Processing and Storm
Participants will learn:
- The concepts and salient features of Storm.
- How Storm can be used to process Big Data in real time.
- Storm through a simple example.
- How Storm relates to Hadoop.
- A case study: real-time analysis of tweets using Storm.
Hadoop is predominantly a batch-processing system. Have you ever wondered how to process Big Data in real time? If yes, this workshop is for you.
For example, Twitter's trends are powered by Storm: tweets are analyzed in real time to find the trending topics and hashtags.
This workshop will introduce the basics of Storm and its salient features. We will discuss how Storm is similar to, and differs from, Hadoop. We will also walk through the source of a WordCount example and run its demo. Finally, we will discuss how Hadoop and Storm together can help process Big Data seamlessly.
If time permits, we will also check a simple demo of real-time processing of tweets using Storm.
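To give a feel for the data flow before the session, here is a plain-Java sketch of what the WordCount topology computes: a "spout" emits sentences, a "split bolt" tokenizes them, and a "count bolt" keeps running totals. This has no Storm dependency, and the class and method names are illustrative, not the workshop's actual code:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCountSketch {

    // The split bolt (tokenize) and count bolt (running totals),
    // collapsed into one method for illustration.
    static Map<String, Integer> count(List<String> sentences) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String sentence : sentences) {            // each sentence = one tuple from the spout
            for (String word : sentence.toLowerCase().split("\\s+")) {
                Integer current = counts.get(word);    // count bolt: increment running total
                counts.put(word, current == null ? 1 : current + 1);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // A tiny stand-in for the spout's stream of sentences.
        List<String> stream = Arrays.asList(
                "the cow jumped over the moon",
                "the man went to the store");
        System.out.println(count(stream));
    }
}
```

In the actual topology, the split and count steps run as separate bolts across workers, and the counts update continuously as tuples arrive, rather than after a finite list is consumed.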
A brief outline of the session has been uploaded to SlideShare and is also embedded in the slides section below.
Please check the slide deck and let me know if you have any feedback or comments on the workshop outline.
Note: For this session, we will be using Storm Local Mode for developing and testing the code. So, any laptop with JDK and Maven should suffice.
Prerequisites:
- Basic understanding of Java.
- Experience working with reasonably large volumes of data.
- Hadoop and MapReduce knowledge is good-to-have, but not mandatory.
- Laptop with latest Oracle JDK 7.0.x and Apache Maven 3.0.x installed.
- Internet connectivity.
- Twitter app:
  - Participants need to create a Twitter app with read-only access on the Twitter Developer portal before the session.
  - Please keep the app's Consumer key, Consumer secret, Access token and Access token secret handy.
  - We will use these credentials to retrieve tweets with Twitter4J in our code.
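For reference, Twitter4J can pick up these four credentials from a twitter4j.properties file on the classpath (key names per Twitter4J's configuration conventions; the values below are placeholders for your own app's credentials):

```
oauth.consumerKey=YOUR_CONSUMER_KEY
oauth.consumerSecret=YOUR_CONSUMER_SECRET
oauth.accessToken=YOUR_ACCESS_TOKEN
oauth.accessTokenSecret=YOUR_ACCESS_TOKEN_SECRET
```

The same values can also be supplied programmatically; we will cover the exact wiring in the session.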
Prashanth Babu is a Research Engineer with NTT DATA. He works on an R&D initiative on Big Data using the Apache Hadoop ecosystem, and is a Cloudera Certified Developer for Apache Hadoop (CCDH).