An ensemble of fine-tuned BERT models based on bootstrap aggregating (bagging) is proposed, showing that the F1-score increases sharply when ensembling up to 15 models, with diminishing returns beyond that.
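As a rough illustration of the bagging scheme the summary describes, the sketch below trains several models, each on a bootstrap sample of the training data, and aggregates their predictions by majority vote. The fine-tuned BERT classifiers are replaced here by a trivial threshold model so the bagging mechanics stay visible; all names, the toy data, and the number of models are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_threshold_model(X, y):
    """Stand-in for fine-tuning one BERT model on a bootstrap sample:
    learn a threshold at the midpoint of the two class means (hypothetical)."""
    t = (X[y == 0].mean() + X[y == 1].mean()) / 2
    return lambda X_new: (X_new >= t).astype(int)

def bagging_ensemble(X, y, n_models=15):
    """Train n_models models, each on a bootstrap sample of the data
    (drawn with replacement, same size as the original set)."""
    models = []
    n = len(X)
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # bootstrap sample indices
        models.append(fit_threshold_model(X[idx], y[idx]))
    return models

def predict_majority(models, X_new):
    """Aggregate the ensemble by majority vote over member predictions."""
    votes = np.stack([m(X_new) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy 1-D data: the feature correlates with the binary label.
X = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])

models = bagging_ensemble(X, y, n_models=15)
preds = predict_majority(models, X)
accuracy = (preds == y).mean()
```

In the paper's setting each ensemble member would be a separately fine-tuned BERT model rather than a threshold rule, but the aggregation step is the same: vote (or average scores) across members, with gains flattening as the ensemble grows past roughly 15 models.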
Authors
Julian Risch
Ralf Krestel
Field of Study
Computer Science
Venue Information
Name
Workshop on Trolling, Aggression and Cyberbullying