Linguistic steganography is the task of hiding a secret message inside natural-looking text, typically by encoding the message bits in the token choices of a language model so that the resulting stegotext is statistically and perceptually indistinguishable from ordinary text.
This work proposes a steganography technique based on arithmetic coding with large-scale neural language models that generates realistic-looking cover sentences, as judged by human evaluators, while preserving security by matching the stegotext distribution to the language model's distribution.
A new linguistic steganography method that encodes secret messages using self-adjusting arithmetic coding with a neural language model; it outperforms previous state-of-the-art methods on four datasets by 15.3% and 38.9% in the bits/word and KL metrics, respectively.
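The two summaries above rely on the same interval mechanics: the secret bitstring is treated as a binary fraction, and each generated token narrows the interval according to the model's token probabilities. A minimal sketch, using a fixed toy four-token distribution in place of a real neural language model (all names here are illustrative, not from any of the papers' codebases):

```python
from fractions import Fraction

# Toy next-token distribution. A real system would query a neural LM
# conditioned on the tokens generated so far.
VOCAB = {"the": Fraction(1, 2), "cat": Fraction(1, 4),
         "sat": Fraction(1, 8), "mat": Fraction(1, 8)}

def embed(bits, steps):
    """Map a bitstring to a token sequence via arithmetic decoding:
    treat the bits as a binary fraction and repeatedly pick the token
    whose probability subinterval contains that point."""
    point = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))
    low, high = Fraction(0), Fraction(1)
    tokens = []
    for _ in range(steps):
        cum = low
        for tok, p in VOCAB.items():
            width = (high - low) * p
            if cum <= point < cum + width:
                tokens.append(tok)
                low, high = cum, cum + width
                break
            cum += width
    return tokens

def extract(tokens):
    """Replay the interval narrowing with the same model and emit the
    bits on which the whole final interval agrees."""
    low, high = Fraction(0), Fraction(1)
    for tok in tokens:
        cum = low
        for t, p in VOCAB.items():
            width = (high - low) * p
            if t == tok:
                low, high = cum, cum + width
                break
            cum += width
    bits = []
    while True:
        low2, high2 = low * 2, high * 2
        if high2 <= 1:
            bits.append(0)
        elif low2 >= 1:
            bits.append(1)
            low2 -= 1
            high2 -= 1
        else:
            break  # interval straddles a dyadic boundary: no more shared bits
        low, high = low2, high2
    return bits
```

Because exact `Fraction` arithmetic is used, the receiver recovers the embedded bits as a prefix of the extracted sequence; bits beyond the message length are padding determined by the final interval.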
The proposed method eliminates painstaking rule construction, achieves a high payload capacity for an edit-based model, and is shown to be more secure against automatic detection than a generation-based method while offering better control of the security/payload-capacity trade-off.
A novel provably secure generative linguistic steganographic method, ADG, which recursively embeds secret information by Adaptive Dynamic Grouping of tokens according to the probabilities given by an off-the-shelf language model, is presented.
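The grouping idea can be sketched as follows: at a generation step, pack the vocabulary into 2^k groups of near-equal total probability, let k secret bits select a group, and sample the emitted token from within it. This is a simplified, non-recursive sketch under an assumed toy distribution (the real ADG method applies the grouping adaptively per step with a neural LM; function names here are illustrative):

```python
import random

def group_tokens(dist, k):
    """Greedily partition the vocabulary into 2**k probability-balanced groups."""
    groups = [[] for _ in range(2 ** k)]
    mass = [0.0] * (2 ** k)
    for tok, p in sorted(dist.items(), key=lambda kv: -kv[1]):
        i = mass.index(min(mass))  # next-heaviest token goes to the lightest group
        groups[i].append(tok)
        mass[i] += p
    return groups

def embed_bits(dist, bits, rng):
    """Consume len(bits) secret bits by sampling a token from the selected group."""
    groups = group_tokens(dist, len(bits))
    index = int("".join(map(str, bits)), 2)
    return rng.choice(groups[index])

def extract_bits(dist, token, k):
    """The receiver rebuilds the same groups and reads off the group index."""
    groups = group_tokens(dist, k)
    for i, g in enumerate(groups):
        if token in g:
            return [int(b) for b in format(i, f"0{k}b")]
```

Sampling within the chosen group is what keeps the output distribution close to the model's; the balance of the grouping controls how close.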
This paper demonstrates that segmentation ambiguity indeed causes occasional decoding failures at the receiver’s side, and proposes simple tricks to overcome this problem, which are even applicable to languages without explicit word boundaries.
A secure token-selection principle, in which the sum of the selected tokens' probabilities is positively correlated with statistical imperceptibility, is proposed, along with a lightweight disambiguation approach that finds a maximum-weight independent set (MWIS) in a candidate graph only when candidate-level ambiguity occurs.
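The MWIS step can be illustrated with a brute-force sketch, under assumed inputs: candidate tokens weighted by probability, and "conflict" edges between candidates whose surface forms would tokenize ambiguously (the inputs and names here are illustrative, not the paper's actual graph construction):

```python
from itertools import combinations

def mwis(weights, edges):
    """Return the maximum-weight independent set: the heaviest subset of
    candidates in which no two chosen nodes share a conflict edge."""
    nodes = list(weights)
    conflict = {frozenset(e) for e in edges}
    best, best_w = [], 0.0
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            if any(frozenset(pair) in conflict for pair in combinations(subset, 2)):
                continue  # subset contains a conflicting pair: not independent
            w = sum(weights[n] for n in subset)
            if w > best_w:
                best, best_w = list(subset), w
    return best, best_w
```

Keeping the heaviest conflict-free candidate set retains as much of the model's probability mass as possible, which is why this disambiguation costs little imperceptibility; exact MWIS is NP-hard in general, so the exhaustive search above is only viable for the small per-step candidate graphs this setting produces.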
This paper proposes a novel zero-shot approach to linguistic steganography based on in-context learning, achieving better perceptual and statistical imperceptibility, and designs several new metrics and reproducible language evaluations to measure the imperceptibility of the stegotext.
A novel secure disambiguation method named SyncPool is proposed, which effectively addresses the segmentation ambiguity problem and has the potential to significantly improve the reliability and security of neural linguistic steganography systems.