Event schedule: Friday, 11 Aug 2023, 09:00 AM – 06:00 PM IST
Vikas S Shetty
Submitted Apr 17, 2023
Deep neural network (DNN) models are nonlinear and have high variance, which can be frustrating when preparing a final model for making predictions. Getting good results with any model requires certain criteria (data, hyperparameters) to be met, but in real-world scenarios you might end up with bad training data or have a hard time figuring out appropriate hyperparameters.
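As a toy illustration of that variance, the sketch below (a minimal example assuming scikit-learn is available; the dataset and settings are made up for illustration) trains the same network on the same data with five different random initializations. The test accuracies typically differ from seed to seed, even though nothing else changes:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Same data, same architecture; only the initialization seed changes.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for seed in range(5):
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                            random_state=seed)
        clf.fit(X_train, y_train)
        print(f"seed={seed}  test accuracy={clf.score(X_test, y_test):.3f}")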
A successful approach to reducing the variance of neural network models is to train multiple models instead of a single model and to combine their predictions. Ensemble learning has been shown to improve generalization ability effectively, both in theory and in practice. For the same increase in compute, storage, or communication resources, an ensemble system that spends the increase on two or more methods can improve overall accuracy more than spending it all on a single method.
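A minimal sketch of the simplest version of this idea, soft voting, continuing the toy setup above: train several independently initialized copies of the same network, average their predicted class probabilities, and take the argmax.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train five copies of the same network, differing only in initialization.
    models = [MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                            random_state=seed).fit(X_train, y_train)
              for seed in range(5)]

    # Soft voting: average the class probabilities, then take the argmax.
    avg_probs = np.mean([m.predict_proba(X_test) for m in models], axis=0)
    accuracy = (avg_probs.argmax(axis=1) == y_test).mean()
    print(f"ensemble test accuracy={accuracy:.3f}")

Averaging helps because the individual models' errors are not perfectly correlated, so they partially cancel; the cost is one extra forward pass per additional model.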
Using ensemble techniques, which combine multiple learners on a single learning task, has become fairly common practice for machine learning models. For object detection (classification + localization) models, however, applying them is not so straightforward. One of the bigger problems with ensemble techniques is getting good results without a significant drop in model inference time. Each technique comes with its own advantages and disadvantages, and its effectiveness also depends on the models being used. To verify and demonstrate which methods work better, we compiled results for different ensemble techniques (including state-of-the-art techniques) on our own models trained for logo detection.
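To make the detection case concrete, here is a minimal sketch (not the method evaluated in the talk; all names and numbers are illustrative) of one basic way to ensemble detectors: pool the boxes from all models, then run greedy non-maximum suppression (NMS) so overlapping duplicates collapse to the single highest-scoring box. Boxes are assumed to be [x1, y1, x2, y2] arrays.

    import numpy as np

    def box_area(b):
        # Works for a single box (shape (4,)) or a batch (shape (n, 4)).
        return (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])

    def iou(box, boxes):
        # Intersection-over-union of one [x1, y1, x2, y2] box against many.
        x1 = np.maximum(box[0], boxes[:, 0])
        y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2])
        y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        return inter / (box_area(box) + box_area(boxes) - inter)

    def ensemble_nms(boxes_per_model, scores_per_model, iou_thr=0.5):
        # Pool detections from every model, then greedy NMS: repeatedly keep
        # the highest-scoring box and drop all pooled boxes overlapping it.
        boxes = np.concatenate(boxes_per_model)
        scores = np.concatenate(scores_per_model)
        order = scores.argsort()[::-1]
        keep = []
        while order.size:
            i, order = order[0], order[1:]
            keep.append(i)
            order = order[iou(boxes[i], boxes[order]) < iou_thr]
        return boxes[keep], scores[keep]

    # Toy usage: two hypothetical detectors' outputs for one image.
    boxes_a = np.array([[10, 10, 50, 50], [100, 100, 150, 150]], dtype=float)
    boxes_b = np.array([[12, 11, 52, 49]], dtype=float)  # near-duplicate
    scores_a = np.array([0.90, 0.80])
    scores_b = np.array([0.95])
    merged_boxes, merged_scores = ensemble_nms([boxes_a, boxes_b],
                                               [scores_a, scores_b])
    print(merged_boxes, merged_scores)

More refined techniques (for example, Weighted Boxes Fusion) average the coordinates of clustered boxes instead of discarding all but one. Note also that every additional detector in the pool adds a full forward pass per image, which is where the inference-time concern above comes from.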
Sumod K Mohan
Thanks Vikas for the submission. Looks interesting. As the next step of review, could you share a rough outline (a rough split of where you will focus, which topics you will delve into more deeply and which you will gloss over) and some more details on the core topics (why this specific problem necessitated ensembling, what architecture you were using: CNN, ViT; what ensembling techniques you tried out, etc.)? Nice to see that you are weaving the theoretical together with real-world issues (compute, storage, inference time, etc.). That would help us better understand the depth of the talk and the core takeaways, and know where the talk will fit in.
Vikas S Shetty
Hi Sumod, thanks for reaching out, and I'm really sorry for missing this.
A brief outline of the talk:
Hope that helps.
Anwesha Sen
@anwesha25 Editor & Promoter
Hi Vikas, please drop me an email at anwesha@hasgeek.com so I can schedule your talk rehearsal and share next steps. Thank you!