These leaderboards are used to track progress in propaganda detection.
No benchmarks available.
This work presents SANDS, a new semi-supervised stance detector that starts from very few labeled tweets and achieves macro-F1 scores of 0.55 and 0.49 on US- and India-based datasets respectively, substantially outperforming 17 baselines, particularly on minority stance labels and noisy text.
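As a minimal illustration of the semi-supervised starting point, the sketch below grows a tweet-stance classifier from a handful of labels using scikit-learn's generic self-training wrapper. This is not SANDS itself (which additionally exploits social-network signals); all texts and labels are toy placeholders.

```python
# Generic self-training: fit on the few labeled tweets, then iteratively
# pseudo-label the unlabeled ones the model is confident about.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

labeled_texts = ["tweet one ...", "tweet two ..."]      # very few labeled tweets
labels = [0, 1]
unlabeled_texts = ["tweet three ...", "tweet four ..."]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled_texts + unlabeled_texts)
y = labels + [-1] * len(unlabeled_texts)  # scikit-learn marks unlabeled samples with -1

clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
clf.fit(X, y)
```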
This study analyzes the potential and challenges of LLMs in complex tasks like propaganda detection, and demonstrates that GPT-4 achieves results comparable to the current state-of-the-art RoBERTa-based approach.
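A hedged sketch of the zero-shot setup such a study implies, using the OpenAI chat-completions API; the prompt wording and binary label set are illustrative assumptions, not the study's exact protocol.

```python
# Zero-shot propaganda classification via a chat prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def detect_propaganda(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a media analyst. Answer 'propaganda' or "
                        "'not propaganda' for the given passage."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic output for evaluation
    )
    return response.choices[0].message.content.strip()
```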
This work performs comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC), and proposes BERT-based ensembles that outperform state-of-the-art methods like BERT-large by a margin of 5.6 F1 points.
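One common way to realize such an ensemble is to average the logits of several fine-tuned checkpoints, as in the sketch below; the checkpoint paths are placeholders, and the paper's actual ensemble members and combination rule may differ.

```python
# Logit-averaging ensemble over several fine-tuned BERT checkpoints.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoints = ["./bert-wnc-seed1", "./bert-wnc-seed2", "./bert-wnc-seed3"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
models = [AutoModelForSequenceClassification.from_pretrained(c).eval()
          for c in checkpoints]

def predict(sentence: str) -> int:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        # Average logits across ensemble members, then take the argmax.
        logits = torch.stack([m(**inputs).logits for m in models]).mean(dim=0)
    return int(logits.argmax(dim=-1))
```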
A fast solution to propaganda detection at SemEval-2020 Task 11, based on feature adjustment, using per-token vectorization of features and a simple Logistic Regression classifier to quickly test different hypotheses about the data.
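A minimal sketch of the per-token-features-plus-Logistic-Regression idea; the specific features and labels below are illustrative assumptions, not the submission's actual feature set.

```python
# Vectorize each token into a few hand-crafted features, then fit a
# simple Logistic Regression over per-token labels.
from sklearn.linear_model import LogisticRegression

def token_features(token: str) -> list[float]:
    return [
        len(token),                  # token length
        float(token.isupper()),      # ALL-CAPS tokens
        float(token.istitle()),      # Title-case tokens
        float(token.endswith("!")),  # exclamation marks
    ]

tokens = ["Shocking", "truth", "REVEALED", "!"]
y = [1, 0, 1, 1]  # toy per-token propaganda labels

X = [token_features(t) for t in tokens]
clf = LogisticRegression().fit(X, y)
print(clf.predict([token_features("BREAKING")]))
```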
This paper uses a bi-LSTM architecture in the SI subtask and trains a complex ensemble model for the TC subtask, built from BERT embeddings combined with additional lexical features and extensive label post-processing.
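For the SI (span identification) side, a bi-LSTM tagger can emit one in-span/out-of-span logit per token, as in this sketch; all dimensions are illustrative, and the paper's exact configuration is not reproduced here.

```python
# Bi-LSTM token tagger: one binary logit per token for span identification.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)  # in-span vs. out-of-span

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)  # (batch, seq, emb_dim)
        h, _ = self.lstm(x)        # (batch, seq, 2*hidden)
        return self.out(h)         # (batch, seq, 2) per-token logits

tagger = BiLSTMTagger(vocab_size=30000)
logits = tagger(torch.randint(0, 30000, (1, 12)))  # one 12-token sentence
```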
A user study shows that ProtoTEx's prototype-based explanations help non-experts better recognize propaganda in online news, and a novel interleaved training algorithm effectively handles classes characterized by the absence of indicative features.
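The core prototype idea can be sketched as a classification head that scores inputs by (negative) distance to learned prototype vectors, so each decision is explainable via its nearest prototypes; the sizes below are assumptions and this is not the full ProtoTEx model.

```python
# Prototype-based classification head: nearer prototype => higher class score.
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    def __init__(self, dim: int = 768, n_prototypes: int = 16, n_classes: int = 2):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, encoding: torch.Tensor) -> torch.Tensor:
        # encoding: (batch, dim) sentence embedding from any text encoder.
        dists = torch.cdist(encoding, self.prototypes)  # (batch, n_prototypes)
        return self.classifier(-dists)  # classify from negated distances

head = PrototypeHead()
logits = head(torch.randn(4, 768))  # 4 encoded sentences -> 4 class scores
```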
This research aims to bridge the information gap by providing a multi-labeled propaganda techniques dataset in Mandarin based on a state-backed information operation dataset provided by Twitter, and applies multi-label text classification using fine-tuned BERT.
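A minimal sketch of multi-label fine-tuning with Hugging Face Transformers: setting problem_type switches the model to a BCE-with-logits loss so each technique gets an independent probability. The Mandarin checkpoint name and label count here are assumptions.

```python
# Multi-label classification: each tweet may carry several techniques at once.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_TECHNIQUES = 14  # assumed number of propaganda-technique labels
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=NUM_TECHNIQUES,
    problem_type="multi_label_classification",
)

inputs = tokenizer("示例推文文本", return_tensors="pt")
labels = torch.zeros(1, NUM_TECHNIQUES)
labels[0, [2, 7]] = 1.0  # toy example: this tweet carries techniques 2 and 7

outputs = model(**inputs, labels=labels)  # outputs.loss uses BCEWithLogitsLoss
probs = torch.sigmoid(outputs.logits)     # independent per-technique probabilities
```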
GloVe word representations, the pretrained BERT model, and an LSTM architecture are combined to detect propaganda techniques in news articles for SemEval-2020 Task 11, significantly outperforming the officially released baseline.
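A sketch of how the GloVe-plus-LSTM ingredients typically fit together: pretrained vectors loaded into an embedding layer that feeds an LSTM classifier. The vocabulary, vectors, and sizes here are stand-ins, and the BERT component is omitted.

```python
# Load (stand-in) GloVe vectors into an embedding layer feeding an LSTM.
import numpy as np
import torch
import torch.nn as nn

# Suppose `glove` maps each vocabulary word to its 100-d GloVe vector,
# parsed from e.g. glove.6B.100d.txt; random vectors stand in here.
vocab = ["<pad>", "the", "propaganda", "news"]
glove = {w: np.random.randn(100) for w in vocab}

weights = torch.tensor(np.stack([glove[w] for w in vocab]), dtype=torch.float)
embed = nn.Embedding.from_pretrained(weights, freeze=False, padding_idx=0)
lstm = nn.LSTM(100, 128, batch_first=True)
head = nn.Linear(128, 2)

ids = torch.tensor([[1, 2, 3]])  # "the propaganda news"
h, _ = lstm(embed(ids))
logits = head(h[:, -1])          # classify from the last hidden state
```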