The Fifth Elephant 2015
A conference on data, machine learning, and distributed and parallel computing
Jul 16–18, 2015
Thu 16: 08:30 AM – 06:35 PM IST
Fri 17: 08:30 AM – 06:30 PM IST
Sat 18: 09:00 AM – 06:30 PM IST
Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
Across various data modalities including multi-variate, text, speech, time series, images, video, transactions, etc.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Hosted by
Vishal
@vishalgokhale
Submitted Jun 14, 2015
This short talk aims to “deconstruct” Linear Regression and explain the steps the library functions perform before they hand back the intercept and slope.
We usually use linear regression when we know that our dependent variable has a linear relationship with the independent variable. We use the library functions to identify the parameters and move on.
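In practice that fitting step is a one-liner. For example, with NumPy (one library among many; the talk itself is library-agnostic), the parameters come back with no hint of how they were found:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # points lying exactly on y = 2x + 1

# Fit a degree-1 polynomial; polyfit returns [slope, intercept].
slope, intercept = np.polyfit(x, y, 1)
```

The library picks one line out of infinitely many candidates, which is exactly the step the questions below are asking about.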
But how does the library function choose a single line, out of the infinite possibilities?
How does it know that the line it chooses is the one that fits the data best?
Or rather, what is a best fit, in the first place?
What if the technique it uses has inherent flaws? Can knowing those flaws guide me to a smarter choice of model?
Have these questions come to your mind? Are you still in search of the answers? If yes, this talk is for you.
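For the curious: the standard definition of “best fit” behind these library functions is ordinary least squares, which picks the line minimizing the sum of squared vertical distances from the points to the line. A minimal sketch of the closed-form solution in plain Python (no libraries; the derivation of these formulas is the kind of thing the talk unpacks):

```python
# Simple linear regression via ordinary least squares (OLS):
# choose intercept a and slope b minimizing sum((y - (a + b*x))**2).
# Closed form: b = cov(x, y) / var(x),  a = mean(y) - b * mean(x).

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Sums of squared / cross deviations from the means.
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx       # slope
    a = my - b * mx     # intercept
    return a, b

# Points lying exactly on y = 2x + 1 are recovered exactly.
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Squaring the residuals is a deliberate choice, and it has consequences (sensitivity to outliers, for one) that can inform model selection.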
We won’t go into the depths of every technique and everything under the sun about Linear Regression.
We’ll just scratch the surface and digest one small chunk of Simple Linear Regression.
Enough, though, to get curious minds wiggling!
Participants: Familiarity with fundamental calculus.
Infra: Whiteboard, duster, and markers.
I am a Java programmer and a stats/math enthusiast with 10+ years of coding experience, including nearly 5 years of working with data scientists and wildlife biologists.
As much as I love learning techniques (Linear Regression, for instance), I also love learning about the derivations and philosophy involved. And in case it is not evident, I love to talk about that too.