3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in news-classification-11.
A novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM) is proposed, utilizing the conjunctive clauses of the TM to capture lexical and semantic properties of both true and fake news text.
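To illustrate the interpretability angle, the sketch below evaluates a single conjunctive clause over a Boolean bag-of-words document. The clause contents, vocabulary, and voting interpretation are invented for the example and are not taken from the paper.

```python
# Minimal sketch of how one Tsetlin Machine conjunctive clause fires on a
# Boolean bag-of-words document. Clause and vocabulary are illustrative.

def clause_matches(document, include, include_negated):
    """A conjunctive clause fires only if every included literal holds."""
    return (all(document.get(w, 0) == 1 for w in include) and
            all(document.get(w, 0) == 0 for w in include_negated))

# Hypothetical learned clause: "shocking" AND "miracle" AND NOT "reuters"
clause = ({"shocking", "miracle"}, {"reuters"})

doc = {"shocking": 1, "miracle": 1, "cure": 1}   # Boolean bag of words
print(clause_matches(doc, *clause))              # True -> this clause votes "fake"
```

In a full TM, many such clauses vote for or against each class and the class with the highest summed vote wins, which is what makes the learned rules directly readable.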
This paper releases "AraCOVID19-MFH", a manually annotated multi-label Arabic COVID-19 fake news and hate speech detection dataset that contains 10,828 Arabic tweets annotated with 10 different labels.
To the authors' knowledge, this is the first large-scale evaluation of how knowledge graph-based representations can be systematically incorporated into the fake news classification process, and it is demonstrated that knowledge graphs already achieve performance competitive with conventionally accepted representation learners.
A novel practical framework is proposed that utilizes a two-tier attention architecture to decouple the complexity of explanation from the decision-making process; it is applied in the context of a news article classification task.
The work here suggests that 0-1 loss sign activation networks could be further developed to create foolproof models against text adversarial attacks.
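For intuition, the sketch below runs a forward pass through a single-hidden-layer sign activation network in NumPy; the weights are random placeholders, and the 0-1 loss training procedure discussed in the paper is not shown.

```python
# Minimal sketch of a forward pass through a sign activation network.
# Weights are random stand-ins; training with 0-1 loss is out of scope here.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 300)), rng.standard_normal(16)  # hidden layer
w2, b2 = rng.standard_normal(16), 0.0                             # output layer

def predict(x):
    h = np.sign(W1 @ x + b1)            # hard sign activations, no gradients
    return int(np.sign(w2 @ h + b2))    # +1 / -1 class label

x = rng.standard_normal(300)            # e.g. an averaged word-embedding vector
print(predict(x))
```

Because the activations are hard signs rather than smooth functions, gradient-based adversarial attacks have no useful gradient to follow, which is the robustness argument the summary refers to.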
This work presents LSTM-Shuttle, which applies human speed-reading techniques to natural language processing tasks for accurate and efficient comprehension, and shows that it predicts both more accurately and more quickly.
This report describes the entry by the Intelligent Knowledge Management (IKM) Lab in the WSDM 2019 Fake News Classification challenge, which treats the task as natural language inference (NLI) and individually trains a number of the strongest NLI models as well as BERT.
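A minimal sketch of this NLI-style framing with the Hugging Face transformers library is shown below; the checkpoint name, three-way label head, and input strings are placeholders for illustration, not the IKM Lab's actual configuration or trained weights.

```python
# Sketch of framing fake news classification as sentence-pair (NLI-style)
# classification with BERT: does the second title agree with, contradict,
# or stand unrelated to the first? Checkpoint and labels are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)   # e.g. agreed / disagreed / unrelated

inputs = tokenizer("title of the claimed-fake article",
                   "title of the candidate article",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())      # untrained head -> placeholder label
```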
An interesting pattern in the way sentences interact with each other across different kinds of news articles is observed, and a graph neural network-based model is proposed which does away with the need for feature engineering in fine-grained fake news classification.
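The sketch below illustrates the general idea of message passing over a sentence graph using a single generic graph-convolution layer in NumPy; the similarity-based adjacency matrix and random weights are stand-ins, not the paper's architecture.

```python
# Generic sketch: nodes are sentence embeddings, neighbouring sentences
# exchange information via one graph-convolution update, then the node
# states are pooled into an article vector for classification.
import numpy as np

S = np.random.default_rng(1).standard_normal((5, 64))   # 5 sentence embeddings
A = (S @ S.T > 0).astype(float)                         # crude "interaction" graph
A_hat = A + np.eye(len(A))                              # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))                # row-wise degree normalisation

W = np.random.default_rng(2).standard_normal((64, 32))  # learnable projection
H = np.maximum(D_inv @ A_hat @ S @ W, 0)                # one GCN-style layer + ReLU
doc_vec = H.mean(axis=0)                                # pooled article representation
print(doc_vec.shape)                                    # (32,) -> fed to a classifier
```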
This paper focuses on robustness of text classification against word substitutions, aiming to provide guarantees that the model prediction does not change if a word is replaced with a plausible alternative, such as a synonym.
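The toy check below illustrates the property in question: the predicted label should not change when any word is swapped for a listed synonym. The classifier and synonym table are stand-ins, and the brute-force enumeration is only for illustration; the paper itself aims at guarantees rather than exhaustive testing of a toy model.

```python
# Toy robustness check against single-word synonym substitutions.
# SYNONYMS and classify() are placeholders invented for this example.
from itertools import product

SYNONYMS = {"huge": ["big", "large"], "lie": ["falsehood"]}

def classify(tokens):                       # placeholder model: keyword rule
    return "fake" if "lie" in tokens or "falsehood" in tokens else "real"

def robust_under_substitution(tokens):
    options = [[t] + SYNONYMS.get(t, []) for t in tokens]
    labels = {classify(list(variant)) for variant in product(*options)}
    return len(labels) == 1                 # label invariant over all substitutions

print(robust_under_substitution("that huge story is a lie".split()))  # True
```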
The experiments show that training embeddings on the relatively higher-resourced Kinyarwanda yields successful cross-lingual transfer to Kirundi. The design of the created datasets also allows for wider use in NLP beyond text classification in future studies, such as representation learning, cross-lingual learning with more distant languages, or as a basis for new annotations for tasks such as parsing, POS tagging, and NER.