Adversarial Text refers to a specialised text sequence that is designed to manipulate the prediction of a language model. Adversarial text attacks are typically carried out against Large Language Models (LLMs). Research into different adversarial approaches helps us build effective defence mechanisms that detect malicious text input, and ultimately more robust language models.
(Image credit: Papersgraph)
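
To make the idea concrete, below is a minimal, self-contained sketch of a character-level adversarial text attack. This is illustrative only: the classifier is a toy keyword-based sentiment scorer standing in for a real language model, and the greedy perturbation loop is a hypothetical simplification loosely in the spirit of character-level attacks such as DeepWordBug. All function names in the snippet are assumptions, not part of any published method.

```python
# Toy illustration of a character-level adversarial text attack.
# The "model" is a keyword-based sentiment scorer standing in for a
# real language model; the attack greedily perturbs one character per
# word until the model's prediction flips.

POSITIVE_WORDS = {"good", "great", "excellent", "love"}
NEGATIVE_WORDS = {"bad", "awful", "terrible", "hate"}

def toy_classifier(text: str) -> str:
    """Predict 'positive' or 'negative' by counting keyword hits."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE_WORDS for t in tokens) \
          - sum(t in NEGATIVE_WORDS for t in tokens)
    return "positive" if score > 0 else "negative"

def greedy_char_attack(text: str) -> str:
    """Swap the middle character of each word, keeping each
    perturbation, until the classifier's prediction changes."""
    original = toy_classifier(text)
    words = text.split()
    for i, word in enumerate(words):
        if len(word) < 2:
            continue  # too short to perturb meaningfully
        mid = len(word) // 2
        words[i] = word[:mid] + "*" + word[mid + 1:]
        adversarial = " ".join(words)
        if toy_classifier(adversarial) != original:
            return adversarial  # prediction flipped
    return " ".join(words)

if __name__ == "__main__":
    text = "I love this great movie"
    print(toy_classifier(text))            # -> positive
    adv = greedy_char_attack(text)
    print(adv, "->", toy_classifier(adv))  # small edits flip the label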
These leaderboards are used to track progress in Adversarial Text
No benchmarks available.
Use these libraries to find Adversarial Text models and implementations
No datasets available.
No subtasks available.
Adding a benchmark result helps the community track progress.