These leaderboards track progress in propaganda detection.
This work presents SANDS, a new semi-supervised stance detector that starts from very few labeled tweets, and achieves a macro-F1 score of 0.55 (0.49) on US (India)-based datasets, outperforming 17 baselines substantially, particularly for minority stance labels and noisy text.
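Semi-supervised detectors of this kind bootstrap from a small labeled seed set by repeatedly pseudo-labeling unlabeled tweets. SANDS itself uses a more elaborate architecture, but the core confidence-filtering step of generic self-training can be sketched as follows (the threshold `tau` and the dictionary-based prediction format are illustrative assumptions, not SANDS' actual interface):

```python
def select_pseudo_labels(predictions, tau=0.9):
    """Keep unlabeled examples whose top-class probability exceeds tau.

    `predictions` is a list of (example, {stance: probability}) pairs;
    confident examples are returned with their argmax stance attached
    as a pseudo-label for the next training round.
    """
    selected = []
    for example, probs in predictions:
        stance, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= tau:
            selected.append((example, stance))
    return selected
```

Raising `tau` trades pseudo-label coverage for precision, which matters most for the minority stance labels the paper highlights.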
This work performs comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC), and proposes BERT-based ensembles that outperform state-of-the-art methods like BERT-large by a margin of 5.6 F1 score.
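The ensembles above combine several fine-tuned BERT variants. A minimal sketch of one common combination scheme, hard (majority) voting over aligned per-model predictions — an illustrative choice, not necessarily the paper's exact ensembling strategy:

```python
from collections import Counter

def majority_vote(per_model_predictions):
    """Hard-voting ensemble: for each example, return the label most models chose.

    `per_model_predictions` is a list of label sequences, one per model,
    all aligned over the same examples; ties break by first occurrence.
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_model_predictions)]
```

Soft voting (averaging per-class probabilities before the argmax) is the usual alternative when calibrated scores are available.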
This study analyzes the potential and challenges of LLMs in complex tasks like propaganda detection, and demonstrates that GPT-4 achieves comparable results to the current state-of-the-art approach using RoBERTa.
It is shown that BERT, while capable of handling imbalanced classes with no additional data augmentation, does not generalise well when the training and test data are sufficiently dissimilar. The work addresses this problem by providing a statistical measure of similarity between datasets and a method for incorporating cost-weighting into BERT when the training and test sets are dissimilar.
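Cost-weighting in this setting usually means scaling each example's loss by the inverse frequency of its class, so minority classes are not drowned out. A minimal sketch of computing such weights (the normalization to a mean weight of 1.0 is one common convention, not necessarily the paper's):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency.

    Normalized so the average weight across classes is 1.0: rare classes
    get weights above 1.0, frequent classes fall below it.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}
```

The resulting dictionary can be passed as a per-class weight to a cross-entropy loss during fine-tuning.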
This paper presents a fast solution to propaganda detection at SemEval-2020 Task 11, based on feature adjustment, using per-token vectorization of features and a simple logistic regression classifier to quickly test different hypotheses about the data.
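Per-token vectorization with a logistic regression head fits in a few lines; the features and the loaded-word lexicon below are illustrative assumptions, not the system's actual feature set:

```python
import math

LOADED_WORDS = {"traitor", "regime", "glorious"}  # toy lexicon (assumption)

def token_features(token):
    """Map one token to a small dense feature vector."""
    return [
        1.0 if token.isupper() or token.istitle() else 0.0,  # capitalization cue
        1.0 if token.lower() in LOADED_WORDS else 0.0,       # loaded-language cue
        1.0 if token.endswith("!") else 0.0,                 # exclamation cue
        min(len(token), 10) / 10.0,                          # scaled token length
    ]

def logistic_score(features, weights, bias=0.0):
    """Logistic-regression probability that the token lies in a propaganda span."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

Because each hypothesis is just a new feature column, this setup makes iteration much cheaper than retraining a neural model.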
This research aims to bridge the information gap by providing a multi-labeled propaganda techniques dataset in Mandarin, based on a state-backed information operation dataset provided by Twitter, and applies multi-label text classification using a fine-tuned BERT model.
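Multi-label classification here means each propaganda technique gets an independent sigmoid output, and every technique scoring above a threshold is predicted for the text. A minimal sketch of that decision step (the 0.5 threshold is the usual default, assumed here):

```python
def predict_techniques(technique_scores, threshold=0.5):
    """Return every propaganda technique whose sigmoid score clears the threshold.

    Unlike single-label softmax classification, decisions are made per
    label independently, so zero, one, or several techniques may be
    returned for the same text.
    """
    return sorted(t for t, s in technique_scores.items() if s >= threshold)
```

During fine-tuning this corresponds to a binary cross-entropy loss per label rather than a single softmax over labels.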
The GloVe word representation, the BERT pretraining model, and an LSTM architecture are combined to detect propaganda techniques in news articles for SemEval-2020 Task 11, significantly outperforming the officially released baseline method.
This work presents ProtoTEx; a user study shows that its prototype-based explanations help non-experts to better recognize propaganda in online news, and a novel interleaved training algorithm effectively handles classes that lack indicative features.
This paper uses a bi-LSTM architecture for the SI subtask and trains a complex ensemble model for the TC subtask, built using embeddings from BERT in combination with additional lexical features and extensive label post-processing.
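SemEval-2020 Task 11 predictions are character spans tagged with techniques, and one simple post-processing step used in such systems is merging overlapping or touching predicted spans before scoring. A sketch under that assumption (the paper's actual post-processing is more extensive):

```python
def merge_spans(spans):
    """Merge overlapping or adjacent (start, end) character spans.

    Spans are sorted by start offset; each span either extends the last
    merged span or opens a new one.
    """
    merged = []
    for start, end in sorted(spans):
        if merged and start <= merged[-1][1]:  # overlaps or touches the previous span
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

When spans carry technique labels, the same merge is typically applied only within each label.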
The experimental results and analysis show that it does not help to use a much larger English corpus annotated with propaganda techniques, regardless of whether it is used in English or after translation to Arabic.