3260 papers • 126 benchmarks • 313 datasets
Change Detection is a computer vision task that involves detecting changes in an image or video sequence over time. The goal is to identify areas that have undergone changes, such as appearance changes, the appearance or disappearance of objects, or changes in the scene's background. Image credit: "A TRANSFORMER-BASED SIAMESE NETWORK FOR CHANGE DETECTION"
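At its simplest, pixel-wise change detection on a pair of co-registered images can be done by thresholding the per-pixel intensity difference. The sketch below is a naive illustration of that idea (the function name and threshold value are made up for this example), not any of the learned methods listed on this page:

```python
def change_mask(img_a, img_b, threshold=30):
    """Naive per-pixel change detection: mark pixels whose absolute
    intensity difference between two co-registered grayscale images
    (given as equal-shaped 2-D lists) exceeds a threshold."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

before = [[10, 10], [10, 200]]
after_ = [[12, 10], [10, 40]]
print(change_mask(before, after_))  # → [[0, 0], [0, 1]]
```

Small illumination shifts fall under the threshold, while the large intensity change in the bottom-right pixel is flagged; deep methods exist precisely because such fixed thresholds fail under real appearance variation.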
These leaderboards are used to track progress in Change Detection
Use these libraries to find Change Detection models and implementations
Experiments on short speech turn comparison and speaker change detection show that TristouNet brings significant improvements over the current state-of-the-art techniques for both tasks.
This work proposes a novel Siamese-based spatial–temporal attention neural network, which improves the F1-score of the baseline model from 83.9 to 87.3 with acceptable computational overhead and introduces a CD dataset LEVIR-CD, which is two orders of magnitude larger than other public datasets of this field.
Change detection is an important task in remote sensing (RS) image analysis. It is widely used in natural disaster monitoring and assessment, land resource planning, and other fields. As a pixel-to-pixel prediction task, change detection is sensitive to the use of original position information. Recent change detection methods often focus on extracting deep change semantic features but ignore shallow-layer information containing high-resolution, fine-grained features; this often leads to uncertainty at the edges of changed targets and to missed detection of small targets. In this letter, we propose a densely connected Siamese network for change detection, namely SNUNet-CD (a combination of a Siamese network and NestedUNet). SNUNet-CD alleviates the loss of localization information in the deep layers of the network through compact information transmission between encoder and decoder, and between decoders. In addition, an Ensemble Channel Attention Module (ECAM) is proposed for deep supervision. Through ECAM, the most representative features at different semantic levels can be refined and used for the final classification. Experimental results show that our method improves greatly on many evaluation criteria and achieves a better tradeoff between accuracy and computational cost than other state-of-the-art (SOTA) change detection methods.
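The Siamese idea recurring in these abstracts is that both image epochs pass through an encoder with shared weights, and change is scored on the feature difference. The toy sketch below illustrates only that weight-sharing principle (the scalar "encoder" and the weight value are stand-ins, not SNUNet-CD's architecture):

```python
def encode(img, weight):
    """Toy shared 'encoder': scales every pixel by the same parameter.
    In a real Siamese network both branches share convolutional weights;
    here the single shared `weight` plays that role."""
    return [[weight * p for p in row] for row in img]

def siamese_change_map(img_a, img_b, weight=0.5):
    # Both epochs pass through the *same* encoder (weight sharing),
    # then change is scored as the per-pixel feature difference.
    fa, fb = encode(img_a, weight), encode(img_b, weight)
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(fa, fb)]

t0 = [[100, 100], [100, 100]]
t1 = [[100, 100], [100, 180]]
print(siamese_change_map(t0, t1))  # → [[0.0, 0.0], [0.0, 40.0]]
```

Because the two branches share parameters, identical content maps to identical features, so unchanged regions score exactly zero regardless of the encoder's weights.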
This work proposes a bitemporal image transformer (BIT) to efficiently and effectively model contexts within the spatial-temporal domain, and it significantly outperforms the purely convolutional baseline at three times lower computational cost and with three times fewer model parameters.
This paper presents three fully convolutional neural network architectures that perform change detection using a pair of coregistered images, and proposes two Siamese extensions of fully convolutional networks that use heuristics about the problem to achieve the best results.
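The architectural distinction in that paper is between early fusion (stack both epochs and encode once) and Siamese fusion (encode each epoch with shared weights, then compare features). A minimal sketch of the two fusion styles, using a made-up dot-product "encoder" in place of a convolutional network:

```python
def encode(pixels, weights):
    """Toy 'network': dot product of input pixels with shared weights.
    Stand-in for a convolutional encoder; names are illustrative only."""
    return sum(w * p for w, p in zip(weights, pixels))

def early_fusion(patch_a, patch_b, weights):
    # Early-fusion style: concatenate both epochs, encode the stack once.
    return encode(patch_a + patch_b, weights)

def siamese_diff(patch_a, patch_b, weights):
    # Siamese-difference style: encode each epoch with the *same*
    # weights, then fuse by taking the absolute feature difference.
    return abs(encode(patch_a, weights) - encode(patch_b, weights))

a, b = [1, 2], [1, 5]
print(early_fusion(a, b, [0.5, 0.5, 0.5, 0.5]))  # → 4.5
print(siamese_diff(a, b, [0.5, 0.5]))            # → 1.5
```

Note the design tradeoff: early fusion needs an encoder over the concatenated input, while the Siamese variant reuses one encoder for both epochs and builds the comparison into the fusion step.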
A novel deep learning framework for urban change detection which combines state-of-the-art fully convolutional networks (similar to U-Net) for feature representation and powerful recurrent networks (such as LSTMs) for temporal modeling is presented.
A deeply supervised image fusion network (IFN) is proposed for change detection in high resolution bi-temporal remote sensing images and outperforms four benchmark methods derived from the literature, by returning changed areas with complete boundaries and high internal compactness compared to the state-of-the-art methods.
xBD provides pre- and post-event multi-band satellite imagery from a variety of disaster events with building polygons, classification labels for damage types, ordinal labels of damage level, and corresponding satellite metadata, and will be the largest building damage assessment dataset to date.
A powerful feature extraction model entitled multi-scale feature convolution unit (MFCU) is adopted for change detection in multi-temporal VHR images and two novel deep siamese convolutional neural networks are designed for unsupervised and supervised change detection, respectively.
DILATE (DIstortion Loss including shApe and TimE), a new objective function for training deep neural networks that aims at accurately predicting sudden changes, is introduced; it explicitly incorporates two terms supporting precise shape and temporal change detection.
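The two-term structure of such a loss can be sketched as a weighted sum of a shape term and a timing term. The sketch below is only an illustration of that decomposition, not the paper's formulation: it uses classic hard-min DTW where DILATE uses a differentiable soft-DTW, and a naive largest-jump index offset as a made-up timing proxy:

```python
def dtw_cost(x, y):
    """Classic dynamic-time-warping alignment cost with a squared-error
    ground metric. Illustrative only: a differentiable (soft) variant
    would be needed inside a trainable loss."""
    inf = float("inf")
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def dilate_like_loss(pred, target, alpha=0.5):
    # Weighted sum of a shape term and a (crude) temporal term: the
    # timing proxy is the index of the largest jump in each series.
    def jump_index(s):
        return max(range(1, len(s)), key=lambda i: abs(s[i] - s[i - 1]))
    shape = dtw_cost(pred, target)
    temporal = abs(jump_index(pred) - jump_index(target))
    return alpha * shape + (1 - alpha) * temporal
```

For pred = [0, 0, 1, 1] against target = [0, 1, 1, 1], the DTW shape cost is 0 (the step shape matches after warping), but the timing proxy penalizes the one-step-late jump, so the combined loss is nonzero: this is exactly why a pure shape term cannot detect a sudden change arriving at the wrong time.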