Fri, 11 Aug 2023, 09:00 AM – 06:00 PM IST
Vikas S Shetty
DNN (Deep Neural Network) models are nonlinear and have high variance, which can be frustrating when preparing a final model for making predictions. To get good results with any model, certain criteria (data, hyperparameters) need to be met. In real-world scenarios, however, you might end up with poor training data or have a hard time finding appropriate hyperparameters.
A successful approach to reducing the variance of neural network models is to train multiple models instead of a single model and to combine their predictions. Ensemble learning has been shown, both in theory and in practice, to improve generalization effectively. For the same increase in compute, storage, or communication resources, an ensemble system can often improve overall accuracy more by spreading that increase across two or more methods than by spending it all on a single method.
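For classification, the simplest ensemble is to average the predicted probabilities of several independently trained models. Below is a minimal sketch in PyTorch (the `models` list and input batch `x` are assumed to be supplied by the caller; this illustrates the general idea, not a specific recipe from the talk):

```python
import torch

def ensemble_predict(models, x):
    """Average the softmax outputs of several independently trained
    classifiers; averaging smooths out the high variance of any
    single model's predictions."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=-1) for m in models]
    # Stack to shape (num_models, batch, num_classes), then average.
    return torch.stack(probs).mean(dim=0)
```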
Using ensemble techniques, which combine multiple learners on a single learning task, has become fairly common practice in machine learning, but applying them to object detection (classification + localization) models is not so straightforward. One of the bigger problems with ensemble techniques is getting good results without a significant increase in model inference time. Each technique comes with its own advantages and disadvantages and also depends on the models being used. To verify and demonstrate which methods work better, we compiled results for different ensemble techniques (including state-of-the-art techniques) on our own models trained for logo detection.
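One common baseline for detection ensembles (not necessarily the approach evaluated in the talk) is to pool the boxes from all detectors and run non-maximum suppression over the union; alternatives such as Weighted Boxes Fusion average overlapping boxes instead of discarding them. A minimal class-agnostic sketch using torchvision, assuming each model's output is a (boxes, scores) pair with boxes in (x1, y1, x2, y2) format:

```python
import torch
from torchvision.ops import nms

def ensemble_detections(per_model_outputs, iou_thresh=0.5):
    """Pool (boxes, scores) pairs from several detectors and run
    class-agnostic NMS across the union, keeping only the
    highest-scoring box among overlapping duplicates."""
    boxes = torch.cat([b for b, _ in per_model_outputs])
    scores = torch.cat([s for _, s in per_model_outputs])
    keep = nms(boxes, scores, iou_thresh)
    return boxes[keep], scores[keep]
```

Note that pooling boxes this way multiplies the NMS workload by the number of models, which is one reason inference time is a central concern when ensembling detectors.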