VotingClassifier Weights Boost Accuracy by 8.3%

A recent experiment with sklearn’s VotingClassifier revealed surprising results. Three classifiers were used: GaussianNB with an accuracy of 0.795, LogisticRegression at 0.7925, and RandomForestClassifier leading with 0.94. Two VotingClassifiers were set up with “hard” voting, in which the ensemble’s prediction is decided by majority vote. The first gave equal weight to all three models, while the second gave a higher weight to the RandomForest. Despite both using hard voting, their performance differed significantly: the equally weighted ensemble scored 0.832 on the test set, whereas the one prioritizing RandomForest reached 0.915. The reason is that sklearn’s weights parameter applies to hard voting as well as soft voting: each model’s vote is tallied with its weight, so weighting the strongest classifier more heavily lets it dominate the majority vote and pull the ensemble’s accuracy toward its own.
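
The setup might look like the following minimal sketch. The dataset, the train/test split, and the exact weight values (1, 1, 3 for the weighted ensemble) are assumptions for illustration; the original post does not specify them.

```python
# Minimal sketch of the experiment described above. The dataset, split,
# and weight values are assumptions -- only the classifier choices and
# voting="hard" come from the original post.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

estimators = [
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=42)),
]

# Equal weights: every classifier's vote counts once.
vote_equal = VotingClassifier(estimators, voting="hard", weights=[1, 1, 1])

# Heavier weight on the RandomForest: with voting="hard", sklearn tallies a
# *weighted* majority vote, so the forest's label can outvote the other two.
vote_weighted = VotingClassifier(estimators, voting="hard", weights=[1, 1, 3])

for name, clf in [("equal", vote_equal), ("weighted", vote_weighted)]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```

With a synthetic dataset the exact numbers will differ from the 0.832 and 0.915 reported above, but the pattern (the weighted ensemble tracking the RandomForest more closely) should reproduce.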

Source: stackoverflow.com
