In 2013, commodity hardware and computing capacity for storing and processing both large and small volumes of data are easily available on demand. The bigger questions are how to scale data processing, handle data diversity, manage infrastructure costs, decide which technologies work best for different contexts and problems, and build products from the insights and intelligence your data presents.
The Fifth Elephant 2013 is a three-day workshop and conference on big data, storage and analytics, with product demos and hacker corners.
Event format, themes and submission guidelines #
The Fifth Elephant 2013 invites proposals on use cases and real-life examples. Tell us what specific problem you faced, which technology/tools worked for your use case and why, how you have developed business intelligence on the data you are collecting, and which analytics tools and techniques you employ. Our preference is for showcasing original work with clear takeaways for the audience. Please emphasize these in your proposal.
The conference will have two parallel tracks on 12th and 13th July:
- Storage: OLTP, messaging and notifications, databases and big data, NoSQL
- Analytics: Metrics and tools, cloud computing, mathematical modelling and statistical analysis, visualization
This year we are adding a preliminary day of workshops, on 11th July, to provide attendees more in-depth, hands-on training on open source frameworks and tools (Pig, Hadoop, Hive, etc.), commercial solutions (sponsored), programming languages such as R, and visualization techniques and tricks, among others.
Product demos and sponsored sessions #
We have a demo track for startups and companies who want to showcase their product to customers at The Fifth Elephant 2013 and get feedback. Slots are also open for 4-6 sponsored sessions for companies who want to talk about their technologies and reach out to developers, CTOs, CIOs and product managers at The Fifth Elephant. For more information on demo and sponsored session proposals, write to email@example.com.
Commitment to open source #
HasGeek believes in open source as the foundation of the internet. Our aim is to strengthen these foundations for future generations. If your talk describes a codebase for developers to work with, we require that it is available under a license that does not impose itself on subsequent work. This is typically a permissive open source license (almost anything that is listed at opensource.org/licenses and is not GPL or AGPL), but restrictive and commercial licenses are also considered depending on how they affect the developer’s relationship with the user.
If you’d like to showcase commercial work that makes money for you, please consider supporting the event with a sponsorship.
Proposal selection process #
Voting is open to attendees who have purchased event tickets. If there is a proposal you find notable, please vote for it and leave a comment to initiate discussions. Your vote will be reflected immediately, but will be counted towards selections only if you purchase a ticket. Proposals will also be evaluated by a program committee consisting of:
- Gopal Vijayraghavan, Hortonworks
- Govind Kanshi, Microsoft
- Joydeep Sen Sharma, Qubole
- Srinivasan Seshadri (Sesh), Boltell
Emphasis will be placed on original work and talks which present new insights to the audience.
The program committee will interview proposers who have received the most votes from attendees and the committee. Proposers must submit presentation drafts as part of the selection process, to ensure the talk is in line with the original proposal and to help the program committee build a coherent line-up for the event.
There is only one speaker per session. Attendance is free for selected speakers. HasGeek will cover your travel to and accommodation in Bangalore from anywhere in the world. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to cover anyone exceptional. If you are able to raise support for your trip, we will count that towards an event sponsorship.
If your proposal is not accepted, you can buy a ticket at the same rate as was available on the day you proposed. We’ll send you a code.
Discounted tickets are available from http://fifthelephant.doattend.com/
The program committee will announce the first round of selected proposals by end of April, a second round by end-May, and will finalize the schedule by 20th June. The funnel will close on 5th June. The event is on 11th-13th July 2013.
Transferring Gigabytes of Data to cloud at 10mbps on your 10mbps link #
TCP/IP is known to be a robust, reliable and fair mode of data transport. But what is the real throughput when you are transferring gigabytes of data to the cloud over a 10 Mbps link, across perhaps 15 network hops?
- Do you get that 10 Mbps upload rate, or is it less?
- Why is it less?
- What are the options?
In this session you will learn about inherent limitations of the TCP/IP stack that make it difficult to extract the maximum throughput from a given broadband/dedicated link. You will also learn how things can be made better by using UDP instead, and what some companies are doing to make this work.
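One of those inherent limitations is easy to see with a back-of-the-envelope calculation: at most one receive window of data can be in flight per round trip, so window size divided by RTT caps a single TCP flow. The sketch below uses illustrative figures (a classic 64 KB window without window scaling, and a 200 ms round trip to a distant cloud region), not numbers from the talk.

```python
def tcp_throughput_ceiling_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on a single TCP flow: at most one receive window
    of data can be in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

# Assumed figures for illustration: 64 KB window, 200 ms RTT.
ceiling = tcp_throughput_ceiling_bps(65535, 0.200)
print(f"{ceiling / 1e6:.2f} Mbps")  # ~2.62 Mbps -- well under the 10 Mbps link
```

TCP window scaling (RFC 7323) lets the window grow past 64 KB, but throughput is still bounded by window/RTT, which is why long, fat pipes need large, tuned buffers at both ends.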
This is not exactly a storage-related talk per se, but it does involve big data: huge amounts of data to be transferred to the cloud at maximum throughput, a throughput that a simple HTTP/FTP upload cannot provide.
There are tools like Aspera, Data Expedition, File Catalyst and others which leverage the UDP protocol instead to provide higher throughput.
We will walk through the limitations of the TCP/IP protocol, look at some data to understand the problem, and talk about the various WAN optimizations people are doing to increase their throughput.
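One well-known way to frame the loss side of that limitation is the Mathis et al. approximation for steady-state, loss-limited TCP (Reno-style congestion control): rate ≈ (MSS/RTT) · (C/√p). The figures below (1460-byte MSS, 200 ms RTT, 0.1% loss) are illustrative assumptions, not measurements from the talk.

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    """Mathis et al. approximation for loss-limited TCP throughput:
    rate ~ (MSS / RTT) * (C / sqrt(p)), with C = sqrt(3/2) ~ 1.22."""
    C = math.sqrt(3.0 / 2.0)
    return (mss_bytes * 8 / rtt_seconds) * (C / math.sqrt(loss_rate))

# Assumed figures: 1460-byte MSS, 200 ms RTT, 0.1% packet loss.
rate = mathis_throughput_bps(1460, 0.200, 0.001)
print(f"{rate / 1e6:.2f} Mbps")  # ~2.26 Mbps even with a generously sized window
```

Even tiny loss rates cap a single TCP flow well below line rate on a long path, which is one reason UDP-based transfer tools keep the pipe full and handle loss recovery themselves.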
Requirements #
- Basic understanding of TCP/IP and UDP.
- Some experience uploading/downloading gigabytes of data to/from the cloud.
Speaker bio #
I head the cloud porting team at Amagi. We are creating a cloud-based infrastructure for TV broadcasters to manage their services over the cloud instead of using a satellite link. This involves transferring gigabytes of data at the maximum possible throughput while avoiding additional latency where possible.