Cross-lingual entity linking (XEL) is the task of using data and models from a language with ample resources (e.g., English) to perform entity linking, i.e., assigning a unique knowledge-base identity to entity mentions in text, in another, commonly low-resource, language.
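For readers new to the task, here is a minimal illustrative sketch of the standard two-stage pipeline; the toy KB, entity IDs, and naive string similarity below are invented for illustration and are not drawn from any of the papers listed here. Candidate generation first retrieves plausible KB entries for a mention, and disambiguation then selects one.

```python
from difflib import SequenceMatcher

# Toy multilingual knowledge base: entity ID -> aliases in several
# languages. Real systems derive these from Wikipedia/Wikidata titles,
# redirects, and cross-language links.
KB_ALIASES = {
    "Q64": ["Berlin", "Берлин", "برلين"],
    "Q90": ["Paris", "Париж", "باريس"],
}

def generate_candidates(mention, k=5):
    """Candidate generation: rank KB entities by string similarity
    between the mention and any known alias."""
    scored = []
    for entity_id, aliases in KB_ALIASES.items():
        best = max(SequenceMatcher(None, mention.lower(), a.lower()).ratio()
                   for a in aliases)
        scored.append((best, entity_id))
    scored.sort(reverse=True)
    return scored[:k]

def link(mention):
    """Disambiguation: naively take the top-scoring candidate; real
    systems rerank candidates using sentence context and entity priors."""
    candidates = generate_candidates(mention)
    return candidates[0][1] if candidates else None

print(link("Берлин"))  # -> "Q64"
```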
These leaderboards are used to track progress in cross-lingual entity linking.
No benchmarks available.
Use these libraries to find cross-lingual entity linking models and implementations
This work develops the first XEL approach that jointly combines supervision from multiple languages, training a single entity linking model that improves upon individually trained models for each language.
This work proposes pivot-based entity linking, which leverages information from a high-resource “pivot” language to train character-level neural entity linking models that are transferred to the low-resource source language in a zero-shot manner.
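A rough sketch of the pivot-based idea, assuming PyTorch and placeholder dimensions (the actual architecture in the paper differs): a character-level encoder is trained to score (mention, entity name) pairs on a related high-resource pivot language, then applied unchanged to the low-resource language, which shares much of the pivot's orthography.

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Encodes a string (given as a tensor of character IDs) into a vector."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, char_ids):          # (batch, seq_len) int64
        _, (h, _) = self.rnn(self.emb(char_ids))
        return h[-1]                      # final hidden state, (batch, dim)

def link_score(encoder, mention_ids, entity_name_ids):
    """Cosine similarity between mention and entity-name encodings.
    The encoder is trained on pivot-language (mention, entity) pairs
    and reused as-is (zero-shot) for the low-resource language."""
    return nn.functional.cosine_similarity(
        encoder(mention_ids), encoder(entity_name_ids))
```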
This work examines the effect of resource assumptions, quantifies how much the availability of these resources affects the overall quality of existing XEL systems, and proposes three improvements to both entity candidate generation and disambiguation that make better use of the limited resources available in resource-scarce scenarios.
This paper assesses the problems faced by current entity candidate generation methods for low-resource XEL, then proposes three improvements that reduce the disconnect between entity mentions and KB entries and improve the robustness of the model in low-resource scenarios.
It is concluded that the low-resource-language (LRL) setting requires cross-lingual resources from outside Wikipedia; a simple yet effective zero-shot XEL system, QuEL, which utilizes search-engine query logs, shows an average increase in gold-candidate recall and end-to-end linking accuracy over state-of-the-art baselines.
This work proposes a method of “soft gazetteers” that incorporates ubiquitously available information from English knowledge bases, such as Wikipedia, into neural named entity recognition models through cross-lingual entity linking.
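To make the “soft gazetteer” idea concrete, a hedged sketch follows; the toy linker and entity-type map are stand-ins for illustration, not the authors' code. Each candidate span is linked against the English KB, and the resulting candidate list is collapsed into one continuous feature per entity type, which the NER model consumes alongside word embeddings.

```python
NE_TYPES = ["PER", "ORG", "LOC"]

# Stand-ins: a real system would use Wikidata entity types and the XEL
# candidate lists produced by systems like those described above.
ENTITY_TYPE = {"Q64": "LOC", "Q76": "PER"}

def toy_linker(span):
    # Returns (score, entity_id) candidates for a span.
    return [(0.9, "Q64"), (0.4, "Q76")] if span == "Берлин" else []

def soft_gazetteer_features(span):
    """One feature per NE type: the score of the best-scoring XEL
    candidate of that type. Unlike a hard gazetteer lookup, the
    features are continuous, so partial or noisy matches still help."""
    feats = {t: 0.0 for t in NE_TYPES}
    for score, entity_id in toy_linker(span):
        t = ENTITY_TYPE.get(entity_id)
        if t in feats:
            feats[t] = max(feats[t], score)
    return [feats[t] for t in NE_TYPES]

print(soft_gazetteer_features("Берлин"))  # -> [0.4, 0.0, 0.9]
```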
A unified representation model for multilingual KB construction and completion, Prix-LM, is proposed, which integrates useful multilingual and KB-based factual knowledge into a single model and demonstrates its effectiveness on standard entity-related tasks.
An analysis of cross-lingual citations, based on over one million English papers spanning three scientific disciplines and three decades, finds an increasing rate of citations to publications written in Chinese, cross-lingual citations going primarily to local non-English languages, and consistency in citation intent between cross- and monolingual citations.
This work presents an in-depth analysis of the candidate generation problem in the context of cross-lingual entity linking, with a focus on low-resource languages, and proposes a lightweight and simple solution based on the construction of indexes whose design is motivated by more complex transfer-learning-based neural approaches.
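A hedged sketch of the kind of index-based candidate generation this line of work motivates, using a toy KB, character trigrams, and Jaccard scoring; the actual indexes in the paper differ in their details.

```python
from collections import defaultdict

def char_ngrams(s, n=3):
    s = f"#{s.lower()}#"  # boundary markers
    return {s[i:i + n] for i in range(len(s) - n + 1)}

# Toy KB; real indexes are built over millions of Wikipedia titles.
KB_NAMES = {"Q64": "Berlin", "Q90": "Paris", "Q84": "London"}

index = defaultdict(set)  # n-gram -> IDs of entities containing it
for eid, name in KB_NAMES.items():
    for g in char_ngrams(name):
        index[g].add(eid)

def candidates(mention, k=5):
    """Score KB entries by Jaccard overlap of character n-grams; cheap
    to build and tolerant of the small spelling variation that
    transliteration from a low-resource language introduces."""
    m_grams = char_ngrams(mention)
    hits = defaultdict(int)
    for g in m_grams:
        for eid in index.get(g, ()):
            hits[eid] += 1
    scored = [(c / len(m_grams | char_ngrams(KB_NAMES[eid])), eid)
              for eid, c in hits.items()]
    return sorted(scored, reverse=True)[:k]

print(candidates("Berliin"))  # "Berlin" (Q64) ranks first
```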