(Drone -> Satellite) Given one drone-view image or video, the task aims to find the most similar satellite-view image to localize the target building in the satellite view.
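A minimal retrieval sketch of this setup, assuming a generic shared encoder (the `encode` placeholder below is illustrative, not any particular model): the drone query and the satellite gallery are embedded, descriptors are L2-normalized, and candidates are ranked by cosine similarity.

```python
# Minimal sketch of drone -> satellite retrieval; `encode` is a placeholder,
# not any specific paper's model.
import torch
import torch.nn.functional as F

def encode(images: torch.Tensor) -> torch.Tensor:
    """Placeholder encoder: global-average-pool RGB images into a toy descriptor.
    In practice this would be a trained CNN/transformer shared across views."""
    return images.mean(dim=(2, 3))                     # (N, 3) toy descriptor

drone_query = torch.rand(1, 3, 256, 256)               # one drone-view image
satellite_gallery = torch.rand(100, 3, 256, 256)       # candidate satellite tiles

q = F.normalize(encode(drone_query), dim=1)            # L2-normalize descriptors
g = F.normalize(encode(satellite_gallery), dim=1)

similarity = q @ g.T                                   # cosine similarity, shape (1, 100)
topk = similarity.topk(k=5, dim=1).indices             # indices of the 5 best satellite matches
print(topk)
```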
These leaderboards are used to track progress in Drone-view Target Localization.
Use these libraries to find Drone-view Target Localization models and implementations.
No subtasks available.
It is argued that drones could serve as a third platform for the geo-localization problem, and a strong CNN baseline is proposed on this challenging dataset, named University-1652, which is the first drone-based geo-localization dataset and enables two new tasks, i.e., drone-view target localization and drone navigation.
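A hedged sketch of such an instance-classification baseline, under the common formulation of treating each building ID as a class with a classifier shared across views; the backbone, feature size, and class count below are illustrative placeholders rather than the paper's exact configuration.

```python
# Hedged sketch of an instance-classification baseline: view-specific backbones
# feed a shared building-ID classifier; all sizes and names are illustrative.
import torch
import torch.nn as nn

num_buildings = 701                          # illustrative number of training building IDs

def make_backbone() -> nn.Module:
    # stand-in for e.g. a ResNet trunk producing a 512-d descriptor
    return nn.Sequential(nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 512))

drone_net, satellite_net = make_backbone(), make_backbone()
classifier = nn.Linear(512, num_buildings)   # shared head ties the two views together

drone_imgs = torch.rand(4, 3, 256, 256)
sat_imgs = torch.rand(4, 3, 256, 256)
labels = torch.randint(0, num_buildings, (4,))

loss = nn.functional.cross_entropy(classifier(drone_net(drone_imgs)), labels) + \
       nn.functional.cross_entropy(classifier(satellite_net(sat_imgs)), labels)
loss.backward()   # at test time the 512-d features (not the logits) are used for retrieval
```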
It is argued that neighbor areas can be leveraged as auxiliary information, enriching discriminative clues for geo-localization, and a simple and effective deep neural network, called Local Pattern Network (LPN), is introduced to take advantage of contextual information in an end-to-end manner.
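As one way to picture contextual, part-based pooling in the spirit of LPN, the sketch below splits a feature map into square rings around the image center and average-pools each ring; the ring count and the omission of per-part classifiers are simplifications made for illustration.

```python
# Minimal square-ring pooling sketch: rings are indexed by normalized
# Chebyshev distance from the center; details are simplified for illustration.
import torch

def ring_pool(feat: torch.Tensor, num_rings: int = 4) -> torch.Tensor:
    """feat: (B, C, H, W) -> (B, num_rings, C) per-ring average features."""
    B, C, H, W = feat.shape
    ys = torch.linspace(-1, 1, H).abs().unsqueeze(1).expand(H, W)
    xs = torch.linspace(-1, 1, W).abs().unsqueeze(0).expand(H, W)
    dist = torch.max(ys, xs)                                       # Chebyshev distance in [0, 1]
    ring_idx = (dist * num_rings).clamp(max=num_rings - 1).long()  # (H, W) ring labels
    parts = []
    for r in range(num_rings):
        mask = (ring_idx == r).float()                             # spatial units in ring r
        pooled = (feat * mask).sum(dim=(2, 3)) / mask.sum().clamp(min=1)
        parts.append(pooled)                                       # contextual feature of ring r
    return torch.stack(parts, dim=1)

feat_map = torch.rand(2, 256, 16, 16)
print(ring_pool(feat_map).shape)                                   # torch.Size([2, 4, 256])
```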
This paper revisits re-ranking and demonstrates that it can be reformulated as a high-parallelism Graph Neural Network (GNN) function, arguing that the first phase amounts to building the k-nearest-neighbor graph, while the second phase can be viewed as spreading the message within the graph.
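The two phases map naturally onto a short sketch: build a k-nearest-neighbor graph from pairwise similarities, then spread messages by aggregating neighbor features before re-scoring. The function name, k, and number of propagation rounds below are illustrative choices, not the paper's exact GNN formulation.

```python
# Re-ranking viewed as k-NN-graph message passing; an illustrative sketch,
# not the cited paper's implementation.
import torch
import torch.nn.functional as F

def gnn_rerank(query: torch.Tensor, gallery: torch.Tensor, k: int = 5, rounds: int = 2):
    feats = F.normalize(torch.cat([query, gallery]), dim=1)   # (Q+G, D)
    sim = feats @ feats.T
    # Phase 1: build the k-nearest-neighbor graph (row-normalized adjacency).
    topk = sim.topk(k + 1, dim=1).indices                     # +1 keeps the self edge
    adj = torch.zeros_like(sim).scatter_(1, topk, 1.0)
    adj = adj / adj.sum(dim=1, keepdim=True)
    # Phase 2: spread messages on the graph, i.e. aggregate neighbor features.
    for _ in range(rounds):
        feats = F.normalize(adj @ feats, dim=1)
    refined = feats[: len(query)] @ feats[len(query):].T      # refined query-gallery scores
    return refined.argsort(dim=1, descending=True)

ranked = gnn_rerank(torch.rand(3, 64), torch.rand(50, 64))
print(ranked[:, :5])    # top-5 gallery indices per query after re-ranking
```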
A simple and efficient transformer-based structure called Feature Segmentation and Region Alignment (FSRA) is introduced to enhance the model's ability to understand contextual information as well as the distribution of instances, and it achieves state-of-the-art results in both drone-view target localization and drone navigation.
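One hedged reading of heat-based feature segmentation: rank transformer patch tokens by an activation "heat" score, group them into a fixed number of regions, and pool each region. The grouping rule and region count below are assumptions made for illustration, not the official FSRA procedure.

```python
# Hedged sketch of heat-based segmentation of patch tokens into regions.
import torch

def segment_regions(tokens: torch.Tensor, num_regions: int = 3) -> torch.Tensor:
    """tokens: (B, N, D) patch tokens -> (B, num_regions, D) region features."""
    heat = tokens.abs().mean(dim=2)                    # (B, N) per-token heat score
    order = heat.argsort(dim=1, descending=True)       # hottest tokens first
    B, N, D = tokens.shape
    sorted_tokens = torch.gather(tokens, 1, order.unsqueeze(-1).expand(B, N, D))
    regions = sorted_tokens.chunk(num_regions, dim=1)  # contiguous heat ranges
    return torch.stack([r.mean(dim=1) for r in regions], dim=1)

patch_tokens = torch.rand(2, 196, 384)                 # e.g. ViT tokens for a 224x224 image
print(segment_regions(patch_tokens).shape)             # torch.Size([2, 3, 384])
```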
In this paper, we study the cross-view geo-localization problem to match images from different viewpoints. The key motivation underpinning this task is to learn a discriminative viewpoint-invariant visual representation. Inspired by the human visual system for mining local patterns, we propose a new framework called RK-Net to jointly learn the discriminative Representation and detect salient Keypoints with a single Network. Specifically, we introduce a Unit Subtraction Attention Module (USAM) that can automatically discover representative keypoints from feature maps and draw attention to the salient regions. USAM contains very few learning parameters but yields significant performance improvement and can be easily plugged into different networks. We demonstrate through extensive experiments that (1) by incorporating USAM, RK-Net facilitates end-to-end joint learning without the prerequisite of extra annotations. Representation learning and keypoint detection are two highly-related tasks. Representation learning aids keypoint detection. Keypoint detection, in turn, enriches the model capability against large appearance changes caused by viewpoint variants. (2) USAM is easy to implement and can be integrated with existing methods, further improving the state-of-the-art performance. We achieve competitive geo-localization accuracy on three challenging datasets, i.e., University-1652, CVUSA and CVACT. Our code is available at https://github.com/AggMan96/RK-Net.
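One plausible, assumption-laden reading of a subtraction-based attention unit is sketched below: each unit is contrasted with its 3x3 neighborhood by a fixed subtraction kernel, and high-contrast positions are emphasized. This is an illustrative stand-in, not the official USAM implementation (see the linked repository for that).

```python
# Illustrative subtraction-based attention; NOT the official USAM code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubtractionAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # fixed Laplacian-style kernel: center minus the mean of its 8 neighbors
        kernel = torch.full((1, 1, 3, 3), -1.0 / 8)
        kernel[0, 0, 1, 1] = 1.0
        self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # depthwise subtraction response, pooled over channels into one saliency map
        response = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        attention = torch.sigmoid(response.abs().mean(dim=1, keepdim=True))
        return x * attention + x                    # re-weight features, keep a residual path

feat = torch.rand(2, 64, 32, 32)
print(SubtractionAttention(64)(feat).shape)         # torch.Size([2, 64, 32, 32])
```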
This work presents a simplified but effective architecture based on contrastive learning with a symmetric InfoNCE loss that outperforms the current state of the art and introduces two types of sampling strategies for hard negatives.
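A minimal sketch of a symmetric InfoNCE objective for paired drone/satellite embeddings, assuming matching pairs share the same batch index; the temperature value and the helper name are illustrative. The hard-negative sampling strategies mentioned above would govern which images are placed in the same batch, so that the off-diagonal entries become difficult negatives.

```python
# Symmetric InfoNCE over a batch of paired drone/satellite embeddings;
# an illustrative sketch with a simplified batch setup.
import torch
import torch.nn.functional as F

def symmetric_info_nce(drone_emb, sat_emb, temperature: float = 0.07):
    d = F.normalize(drone_emb, dim=1)
    s = F.normalize(sat_emb, dim=1)
    logits = d @ s.T / temperature                    # (B, B) similarity matrix
    targets = torch.arange(len(d))                    # the diagonal holds the true pairs
    # every non-matching image in the batch acts as a negative, in both directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = symmetric_info_nce(torch.rand(8, 512), torch.rand(8, 512))
print(loss)
```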