A Survey for Multi-modal Knowledge Graphs. Papers integrating Knowledge Graphs (KGs) and multi-modal learning, focusing on two principal lines of research: KG-driven Multi-Modal (KG4MM) learning, where KGs support multi-modal tasks, and Multi-Modal for Knowledge Graph (MM4KG) learning, which extends KG studies into the multi-modal knowledge graph (MMKG) realm.
This survey aims to serve as a comprehensive reference for researchers already working in, or considering entering, KG and multi-modal learning research, offering insights into the evolving landscape of MMKG research and supporting future work.
A medical conversational question answering (CQA) system based on a multi-modal knowledge graph, named "LingYi", is designed as a pipeline framework to maintain high flexibility, making it better suited to providing medical services to patients.
A novel Multi-modal Siamese Network for Entity Alignment (MSNEA) is proposed to align entities across different MMKGs, comprehensively leveraging multi-modal knowledge by exploiting inter-modal effects.
Modality-Aware Negative Sampling (MANS) for multi-modal knowledge graph embedding (MMKGE) is proposed to address these problems, and empirical results demonstrate that MANS outperforms existing negative sampling (NS) methods.
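The core idea behind modality-aware negative sampling can be illustrated with a minimal sketch: instead of corrupting a triple with a uniformly random entity, prefer entities whose visual features resemble the true one, producing harder negatives. The function name, scoring rule, and data shapes below are illustrative assumptions, not MANS's exact formulation.

```python
import numpy as np

def modality_aware_negatives(triple, entity_img_emb, k=3):
    """Corrupt the tail of (h, r, t) with the k entities whose image
    embeddings are most cosine-similar to the true tail's embedding.
    Hypothetical sketch; MANS's actual sampling strategy differs."""
    h, r, t = triple
    # L2-normalize so the dot product below is cosine similarity
    emb = entity_img_emb / np.linalg.norm(entity_img_emb, axis=1, keepdims=True)
    sim = emb @ emb[t]          # similarity of every entity to the true tail
    sim[t] = -np.inf            # never sample the true tail itself
    cand = np.argsort(-sim)[:k] # k most visually similar entities
    return [(h, r, int(c)) for c in cand]

# toy usage: 5 entities with 4-dim image embeddings, corrupt (h=0, r=0, t=1)
img = np.random.default_rng(42).normal(size=(5, 4))
negs = modality_aware_negatives((0, 0, 1), img, k=2)
```

Visually similar negatives are harder for the scoring model to separate, which is the usual rationale for modality-aware sampling over uniform corruption.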
This paper constructs AspectMMKG, the first MMKG with aspect-related images obtained by matching images to different entity aspects, and proposes an aspect-related image retrieval (AIR) model that aims to correct and expand the aspect-related images in AspectMMKG.
A modality adversarial and contrastive framework (MACO) is proposed to solve the modality-missing problem in MMKGC; it achieves state-of-the-art results and serves as a versatile framework to bolster various MMKGC models.
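The modality-missing setup MACO targets can be sketched simply: some entities lack image features, so synthetic features are generated from the embeddings of modalities that are present. MACO trains its generator adversarially; as a deliberately simpler stand-in, this sketch fits a least-squares linear map on entities that have both modalities, then imputes the rest. All shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
struct = rng.normal(size=(100, 16))                       # structural embeddings
W_true = rng.normal(size=(16, 8))
img = struct @ W_true + 0.01 * rng.normal(size=(100, 8))  # observed image features

have_img = np.arange(100) < 80   # assume the last 20 entities miss image features

# Fit a linear generator on entities with both modalities
# (MACO would train this adversarially against a discriminator)
W, *_ = np.linalg.lstsq(struct[have_img], img[have_img], rcond=None)

# Impute the missing image features from structural embeddings
img_filled = img.copy()
img_filled[~have_img] = struct[~have_img] @ W
```

The imputed features can then be fed to any MMKGC model as if the modality were complete, which is what lets MACO act as a plug-in for existing models.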
A novel MMEA transformer, called MoAlign, is proposed that hierarchically introduces neighbor features, multi-modal attributes, and entity types to enhance the alignment task; it outperforms strong competitors and achieves excellent entity alignment performance.
DESAlign is proposed, a robust method that addresses the over-smoothing caused by semantic inconsistency and interpolates missing semantics using existing modalities, together with a training strategy for multi-modal knowledge graph learning based on a generalizable theoretical principle.
Adaptive Multi-modal Fusion and Modality Adversarial Training (AdaMF-MAT) is proposed to unleash the power of imbalanced modality information for MMKGC, achieving new state-of-the-art results on three public MMKGC benchmarks.
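Adaptive fusion of imbalanced modalities can be sketched as a weighted sum of per-modality embeddings, where softmax weights downweight weak or noisy modalities. The function name and the source of the scores are assumptions for illustration; AdaMF additionally learns these weights and couples them with adversarial training, which this sketch omits.

```python
import numpy as np

def adaptive_fuse(modal_embs, scores):
    """Fuse per-modality entity embeddings with softmax weights.
    Hypothetical sketch of adaptive multi-modal fusion: higher-scoring
    (more reliable) modalities dominate the fused representation."""
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return sum(wi * e for wi, e in zip(w, modal_embs))

# toy usage: structure, image, and text embeddings with unequal reliability
scores = np.array([2.0, 0.5, -1.0])
embs = [1.0 * np.ones(4), 2.0 * np.ones(4), 3.0 * np.ones(4)]
fused = adaptive_fuse(embs, scores)
```

Because the weights sum to one, the fused embedding stays a convex combination of the modality embeddings, so a single dominant modality cannot push the representation outside their span.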