Multilingual word embeddings map words from multiple languages into a single shared vector space, so that translations and cross-lingually similar words lie close together. They support tasks such as bilingual lexicon induction, cross-lingual word similarity, and multilingual text classification.
This work proposes a fully unsupervised framework for learning multilingual word embeddings that directly exploits the relations between all language pairs and substantially outperforms previous approaches in experiments on multilingual word translation and cross-lingual word similarity.
This work proposes word alignment methods that require no parallel data and finds that alignments created from embeddings are superior for four and comparable for two language pairs compared to those produced by traditional statistical aligners – even with abundant parallel data.
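The core idea of embedding-based alignment can be illustrated with a minimal sketch: given (multilingual) embeddings for the tokens of a sentence pair, link each source token to its most similar target token. This is a simplified cosine-argmax baseline, not the specific methods proposed in the paper; the function name and threshold parameter are illustrative.

```python
import numpy as np

def align_from_embeddings(src_vecs, tgt_vecs, threshold=0.0):
    """Align source tokens to target tokens by cosine-similarity argmax.

    src_vecs, tgt_vecs: arrays of shape (n_src, d) and (n_tgt, d)
    holding embeddings for the tokens of one sentence pair.
    Returns a list of (src_idx, tgt_idx) alignment links.
    """
    # L2-normalize so the dot product equals cosine similarity.
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = src @ tgt.T  # (n_src, n_tgt) similarity matrix
    links = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= threshold:  # drop weak links
            links.append((i, j))
    return links
```

No parallel corpus is needed at alignment time: only embeddings that place translation-equivalent tokens near each other in the shared space.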
This paper describes Luminoso’s participation in SemEval 2017 Task 2, “Multilingual and Cross-lingual Semantic Word Similarity”, with a system based on ConceptNet that took first place in both subtasks.
This work proposes a novel geometric approach for learning bilingual mappings given monolingual embeddings and a bilingual dictionary that outperforms previous approaches on the bilingual lexicon induction and cross-lingual word similarity tasks.
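A standard baseline for learning such bilingual mappings from a dictionary is orthogonal Procrustes: find the rotation that best maps source embeddings onto their dictionary translations. The sketch below shows this common baseline, not the paper's specific geometric approach; names and shapes are illustrative.

```python
import numpy as np

def procrustes_mapping(X, Y):
    """Learn an orthogonal map W minimizing ||X @ W - Y||_F.

    X, Y: (n, d) arrays where row i of X is the source embedding of
    dictionary entry i and row i of Y is its translation's embedding.
    The closed-form solution is the orthogonal Procrustes problem,
    solved via the SVD of X^T Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt  # W is (d, d) and orthogonal

# Usage: project a source-language vector into the target space with
# src_vec @ W, then retrieve translations by nearest-neighbour search
# among target-language embeddings.
```

Restricting W to be orthogonal preserves distances and angles in the source space, which tends to make the mapping more robust than an unconstrained linear map when the dictionary is small.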
New methods for estimating and evaluating embeddings of words in more than fifty languages in a single shared embedding space are introduced and a new evaluation method is shown to correlate better than previous ones with two downstream tasks.
This work presents ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data, based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams.
This work proposes a novel framework to model correlations between sememes and multilingual words in a low-dimensional semantic space for sememe prediction, and shows that the proposed model achieves consistent and significant improvements over baseline methods in cross-lingual sememe prediction.
It is shown that the proposed approach to learning multimodal multilingual embeddings for matching images with their relevant captions in two languages enables better generalization, achieving state-of-the-art performance on the text-to-image and image-to-text retrieval tasks and on the caption-caption similarity task.