The Fifth Elephant 2014

A conference on big data and analytics

In 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is on analytics – the infrastructure that powers analytics and how analytics is done.

Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.

Proposals will be reviewed using six criteria:
Domain diversity – proposals will be selected from different domains – medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not know earlier.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience what was the philosophy underlying your use of an application, not how an application was used.
Presentation skills – proposer’s presentation skills will be reviewed carefully and assistance provided to ensure that the material is communicated in the most precise and effective manner to the audience.



For queries about proposals / submissions, write to


  1. Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.

  2. Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSDs or memory), databases (PostgreSQL, MySQL, Infobright) or caching/storage (Memcached, Cassandra, Redis, etc.).

  3. Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.

  4. Real-time analytics

  5. Opportunity analytics

  6. Big data and security

  7. Big data and internet of things

  8. Data Usage and BI (Business Intelligence) in different sectors.

Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about these technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.

Hosted by

All about data science and machine learning

Bargava Subramanian


Machine Learning using R: Crash Course in Classification Methods

Submitted Jun 1, 2014

The aim is to provide the attendees with an overview (implementation-wise) of some of the major classification methods using R. The focus of the workshop will be on breadth rather than depth. A lot of methods will be introduced, but their mathematical properties won’t be discussed in detail.

As a caveat, most real-life problems cannot be solved efficiently without a more detailed understanding of these algorithms. But this workshop should give a quick and dirty start to solving them.

Target Audience: Beginner/Intermediate


The following topics will be covered. The format will be a bit of theory followed by implementation in R.

Introduction to Machine learning

  1. Types of Learning (Supervised/Unsupervised/Reinforcement)
  2. Introduction to Generalization
  3. Train/Test/Validation Datasets
  4. Bias – Variance tradeoff
  5. Overfitting
  6. Cross-validation
  7. Regularization
  8. Grid Search
  9. Hyperparameter Optimization
  10. Feature Selection/Transformation
    a. Greedy feature selection (forward, backward, stepwise)
    b. Non-linear transformations, Kernels
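Several of the topics above (train/test/validation splits, cross-validation) can be sketched in a few lines of base R. This is an illustrative sketch only, not workshop material – the variable names and the choice of a built-in dataset are my own:

```r
# Minimal base-R sketch: train/test split and k-fold cross-validation
# for a logistic-regression classifier on the built-in iris data.
set.seed(42)

# Binary problem: predict whether a flower is virginica
data(iris)
iris$is_virginica <- as.integer(iris$Species == "virginica")

# Hold out 30% of the rows as a test set
n     <- nrow(iris)
test  <- sample(n, size = round(0.3 * n))
train <- setdiff(seq_len(n), test)

fit  <- glm(is_virginica ~ Sepal.Length + Petal.Length + Petal.Width,
            data = iris[train, ], family = binomial)
pred <- as.integer(predict(fit, iris[test, ], type = "response") > 0.5)
test_acc <- mean(pred == iris$is_virginica[test])

# 5-fold cross-validation on the training rows only
k      <- 5
folds  <- sample(rep(1:k, length.out = length(train)))
cv_acc <- sapply(1:k, function(i) {
  tr <- iris[train[folds != i], ]
  va <- iris[train[folds == i], ]
  f  <- glm(is_virginica ~ Sepal.Length + Petal.Length + Petal.Width,
            data = tr, family = binomial)
  p  <- as.integer(predict(f, va, type = "response") > 0.5)
  mean(p == va$is_virginica)
})
cat("test accuracy:", round(test_acc, 3),
    " mean CV accuracy:", round(mean(cv_acc), 3), "\n")
```

Note that the folds are drawn from the training rows only – estimating model quality on data that was also used to fit the model is exactly the kind of leakage the "common pitfalls" section warns about.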

Classification Techniques covered:

  1. Linear Regression
  2. Logistic Regression
  3. LASSO, Ridge and Elastic net regression
  4. kNN
  5. Discriminant Analysis
  6. Decision Trees, CART, CHAID
  7. Support Vector Machines
  8. Naïve Bayes
  9. Ensemble Methods
    a. Boosting
    b. Bagging
    c. Random Forest
    d. Regularized Random Forest
    e. Gradient Boosting Machines
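As a taste of one technique from the list above, here is a minimal sketch of a CART-style decision tree using `rpart` (a recommended package shipped with standard R distributions); the dataset and split are illustrative, not from the workshop:

```r
# CART decision tree on iris: fit on 100 rows, evaluate on the rest
library(rpart)
set.seed(1)

data(iris)
idx  <- sample(nrow(iris), 100)                         # training rows
tree <- rpart(Species ~ ., data = iris[idx, ], method = "class")
pred <- predict(tree, iris[-idx, ], type = "class")     # hold-out predictions
acc  <- mean(pred == iris$Species[-idx])

print(table(predicted = pred, actual = iris$Species[-idx]))
cat("hold-out accuracy:", round(acc, 3), "\n")
```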

Unsupervised learning techniques covered:

  1. Dimensionality Reduction: Principal Component Analysis
  2. K-Means clustering
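Both unsupervised techniques are available in base R, so a minimal sketch needs no extra packages (again illustrative – the workshop's own examples may differ):

```r
# PCA via prcomp() and k-means clustering via kmeans(), base R only
set.seed(7)
data(iris)
X <- scale(iris[, 1:4])            # standardize the four measurements

# Dimensionality reduction: keep the first two principal components
pca    <- prcomp(X)
scores <- pca$x[, 1:2]
var_explained <- sum(pca$sdev[1:2]^2) / sum(pca$sdev^2)

# K-means with 3 clusters in the reduced 2-D space
km <- kmeans(scores, centers = 3, nstart = 25)

print(table(cluster = km$cluster, species = iris$Species))
cat("variance explained by PC1+PC2:", round(var_explained, 3), "\n")
```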

Illustrating common pitfalls

  1. Data snooping
  2. Occam’s Razor

Big Data Analytics (*requires AWS credits for implementation, and time permitting)

  1. Introduction to Big Data and Hadoop
  2. R and Big Data
    a. Hadoop
    b. Linear Model
    c. Random Forest



  1. The attendee should have an aptitude for solving data mining/machine learning problems.
  2. It is preferred that attendees read a bit about R before coming (please see links below).

Any modern laptop configuration will work. It is good to have at least 4 GB of RAM and a dual-core or quad-core machine.


  1. Install the latest version of R from the CRAN website:
  2. Install RStudio:
  3. For Hadoop, AWS credits are needed.

Dataset and required R packages
Please download data from the following location:

Please install the following R packages:
(To install a package, open RStudio and run install.packages("package_name"))

  1. caret
  2. data.table
  3. e1071
  4. foba
  5. gbm
  6. glmnet
  7. mboost
  8. nnet
  9. randomForest
  10. RRF
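Rather than installing the packages one by one, they can be installed in a single call (a convenience sketch; the package names are exactly those listed above):

```r
# Install all listed packages in one go, skipping any already present
pkgs <- c("caret", "data.table", "e1071", "foba", "gbm", "glmnet",
          "mboost", "nnet", "randomForest", "RRF")
missing <- setdiff(pkgs, rownames(installed.packages()))
if (length(missing) > 0) install.packages(missing)
```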

Update (22 July):
Additional packages (please install, if possible)

Speaker bio

Data Analytics professional at Cisco Systems India Pvt Ltd.

