Cross-Lingual Semantic Textual Similarity measures the degree of semantic equivalence between two texts written in different languages.
These leaderboards are used to track progress in Cross-Lingual Semantic Textual Similarity.
No benchmarks available.
Use these libraries to find Cross-Lingual Semantic Textual Similarity models and implementations.
No subtasks available.
An unsupervised and very resource-light approach for measuring semantic similarity between texts in different languages, applicable to virtually any pair of languages for which a sufficiently large corpus exists to learn monolingual word embeddings.
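As a concrete illustration, the sketch below shows one common way such a pipeline can be instantiated: monolingual word embeddings are aligned into a shared space with an orthogonal (Procrustes) map, sentences are represented as averaged word vectors, and similarity is measured by cosine. The toy vectors, word lists, and seed dictionary are placeholders, not the paper's actual setup; a fully unsupervised variant would induce the seed translation pairs automatically rather than taking them as given.

```python
import numpy as np

def procrustes(src_mat, tgt_mat):
    """Orthogonal map W minimizing ||src_mat @ W - tgt_mat||_F
    (closed-form Procrustes solution fit on a seed dictionary)."""
    u, _, vt = np.linalg.svd(src_mat.T @ tgt_mat)
    return u @ vt

def sent_vec(tokens, emb, dim):
    """Average the word vectors; zero vector if nothing is in vocabulary."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Toy monolingual embeddings (stand-ins for e.g. fastText vectors).
dim = 4
rng = np.random.default_rng(0)
en = {w: rng.normal(size=dim) for w in ["the", "cat", "sleeps"]}
de = {w: rng.normal(size=dim) for w in ["die", "katze", "schläft"]}

# Seed translation pairs, used only to fit the alignment (a simplification:
# unsupervised methods induce such pairs instead of assuming them).
seed = [("cat", "katze"), ("sleeps", "schläft"), ("the", "die")]
W = procrustes(np.stack([en[s] for s, _ in seed]),
               np.stack([de[t] for _, t in seed]))

# Map the English sentence into the German space, then compare.
v_en = sent_vec("the cat sleeps".split(), en, dim) @ W
v_de = sent_vec("die katze schläft".split(), de, dim)
print(f"cross-lingual similarity: {cosine(v_en, v_de):.3f}")
```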
This work demonstrates that it is possible to turn masked language models (MLMs) into effective lexical and sentence encoders even without any additional data, relying simply on self-supervision, and proposes an extremely simple, fast, and effective contrastive learning technique, termed Mirror-BERT.
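A minimal sketch of this mirror-style contrastive objective is given below, with a toy encoder standing in for a pretrained MLM: the same batch is encoded twice so that different dropout masks yield two views of each sentence, and an InfoNCE loss pulls matching views together while other in-batch pairs serve as negatives. The encoder, hyperparameters, and random data are placeholders; Mirror-BERT itself additionally uses random span masking as augmentation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for a pretrained MLM; dropout provides the 'mirror' views."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.drop = nn.Dropout(0.1)
        self.proj = nn.Linear(dim, dim)

    def forward(self, ids):
        h = self.drop(self.emb(ids)).mean(dim=1)  # mean-pool token states
        return F.normalize(self.proj(h), dim=-1)

def infonce(z1, z2, temp=0.05):
    """z1[i] and z2[i] are two dropout views of the same sentence;
    all other in-batch pairs act as negatives."""
    logits = z1 @ z2.T / temp
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

enc = TinyEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
batch = torch.randint(0, 1000, (8, 12))  # 8 toy 'sentences' of 12 token ids

for step in range(3):
    z1, z2 = enc(batch), enc(batch)  # same inputs, different dropout masks
    loss = infonce(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
```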