This work introduces the Mafiascum dataset, a collection of over 700 games of Mafia in which players are randomly assigned deceptive or non-deceptive roles and interact via forum postings. From these postings, the authors construct a set of hand-picked linguistic features based on prior deception research.
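Hand-picked linguistic features of this kind are typically simple per-post statistics. A minimal sketch of such a feature extractor (the specific features here are illustrative, not the paper's exact set):

```python
def linguistic_features(post: str) -> dict:
    """Compute a few hand-picked linguistic features for one forum post.

    The features below (length, word length, first-person pronoun ratio,
    question marks) are common in deception research but are only an
    illustrative subset, not the Mafiascum paper's actual feature list.
    """
    words = post.lower().split()
    n = max(len(words), 1)  # avoid division by zero on empty posts
    return {
        "num_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / n,
        "first_person_ratio": sum(w in {"i", "me", "my", "mine"} for w in words) / n,
        "question_marks": post.count("?"),
    }
```

Features like these can then be fed to any standard classifier to score posts from deceptive versus non-deceptive roles.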
In a crowdsourcing study, participants interact with deception detection models trained to distinguish genuine from fake hotel reviews. For a linear bag-of-words model, participants who could see the feature coefficients during a training phase caused a larger reduction in model confidence at test time than a no-explanation control group.
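A linear bag-of-words model of the kind used in that study exposes one coefficient per vocabulary word, which is exactly what participants could inspect. A self-contained sketch (the reviews, labels, and training details are toy illustrations, not the study's data or setup):

```python
import math
from collections import Counter

# Toy corpus: 0 = genuine review, 1 = deceptive review.
reviews = [
    ("amazing stay wonderful staff great location", 0),
    ("best hotel ever everything was simply perfect", 1),
    ("room was clean and the bed was comfortable", 0),
    ("absolutely incredible unbelievable luxury experience", 1),
]

vocab = sorted({w for text, _ in reviews for w in text.split()})

def vectorize(text: str) -> list:
    """Bag-of-words count vector over the fixed vocabulary."""
    counts = Counter(text.split())
    return [counts.get(w, 0) for w in vocab]

# Plain logistic regression trained by stochastic gradient descent.
weights = [0.0] * len(vocab)
bias = 0.0
for _ in range(500):
    for text, y in reviews:
        x = vectorize(text)
        z = sum(w * xi for w, xi in zip(weights, x)) + bias
        p = 1.0 / (1.0 + math.exp(-z))  # predicted P(deceptive)
        g = p - y                        # gradient of log loss w.r.t. z
        weights = [w - 0.1 * g * xi for w, xi in zip(weights, x)]
        bias -= 0.1 * g

# These per-word weights are the "feature coefficients" participants could
# inspect to craft reviews that lower the model's confidence.
coefficients = dict(zip(vocab, weights))
```

Because the model is linear, lowering its confidence is as simple as avoiding words with large positive coefficients and adding words with negative ones, which is what the explanation condition in the study enabled.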
In the authors' experiments, a network trained on low-stakes lies classified high-stakes deception more accurately than low-stakes deception, although using low-stakes lies as an augmentation strategy for the high-stakes dataset decreased its accuracy.
This work introduces DOLOS, the largest gameshow deception detection dataset with rich deceptive conversations, and proposes Parameter-Efficient Crossmodal Learning (PECL), in which a Uniform Temporal Adapter (UT-Adapter) explores temporal attention in transformer-based architectures and a crossmodal fusion module, Plug-in Audio-Visual Fusion (PAVF), combines information from audio and visual features.
This paper surveys available English deception datasets, spanning domains such as social media reviews, court testimony, opinion statements on specific topics, and deceptive dialogues from online strategy games. The authors conduct a correlation analysis of linguistic cues of deception across datasets and perform cross-corpus modeling experiments, which show that cross-domain generalization is challenging to achieve.
This paper considers using depth-wise separable convolutions, rather than conventional convolution layers, inside each branch as a feasible way for an eye-blink detection model to learn efficiently from eye images of different resolutions under diverse conditions.
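A depth-wise separable convolution factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mix across channels, cutting the multiplies per output position from k*k*C_in*C_out to k*k*C_in + C_in*C_out. A minimal NumPy sketch of the operation (shapes and loop structure are illustrative, not the paper's implementation):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_w):
    """Depth-wise separable convolution, valid padding, stride 1.

    x:           input of shape (H, W, C_in)
    depthwise_k: one k x k spatial filter per input channel, shape (k, k, C_in)
    pointwise_w: 1x1 channel-mixing weights, shape (C_in, C_out)
    """
    H, W, C_in = x.shape
    k = depthwise_k.shape[0]
    Ho, Wo = H - k + 1, W - k + 1

    # Depth-wise stage: each channel is filtered independently.
    dw = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):
        for i in range(Ho):
            for j in range(Wo):
                dw[i, j, c] = np.sum(x[i:i + k, j:j + k, c] * depthwise_k[:, :, c])

    # Point-wise stage: a 1x1 convolution is just a matrix product over channels.
    return dw @ pointwise_w
```

Because the spatial filtering and channel mixing are learned separately, the same depth-wise filters can be reused across branches that see eye images at different resolutions, which is what makes the factorization attractive here.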