3260 papers • 126 benchmarks • 313 datasets
Hyperspectral Image Classification is a task in the field of remote sensing and computer vision. It involves the classification of pixels in hyperspectral images into different classes based on their spectral signature. Hyperspectral images contain information about the reflectance of objects in hundreds of narrow, contiguous wavelength bands, making them useful for a wide range of applications, including mineral mapping, vegetation analysis, and urban land-use mapping. The goal of this task is to accurately identify and classify different types of objects in the image, such as soil, vegetation, water, and buildings, based on their spectral properties. (Image credit: Shorten Spatial-spectral RNN with Parallel-GRU for Hyperspectral Image Classification)
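As a concrete illustration of classifying pixels by their spectral signatures, below is a minimal sketch using scikit-learn. The file names, the SVM classifier, and the train/test split are assumptions for illustration, not part of any particular benchmark protocol.

```python
# Minimal sketch of pixel-wise hyperspectral classification, assuming the image
# cube and ground-truth labels are already available as NumPy arrays
# (e.g. from the Indian Pines scene). The file names are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

cube = np.load("hyperspectral_cube.npy")   # shape: (height, width, bands)
gt = np.load("ground_truth.npy")           # shape: (height, width), 0 = unlabeled

# Flatten the cube so each pixel becomes one sample with its full spectrum.
h, w, bands = cube.shape
X = cube.reshape(-1, bands)
y = gt.reshape(-1)

# Keep only labeled pixels.
mask = y > 0
X, y = X[mask], y[mask]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.8, stratify=y, random_state=0)

# Standardize each band and classify every pixel by its spectral signature.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)
print("Overall accuracy:", clf.score(scaler.transform(X_test), y_test))
```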
These leaderboards are used to track progress in Hyperspectral Image Classification.
Use these libraries to find Hyperspectral Image Classification models and implementations.
A novel deep convolutional neural network that is deeper and wider than other existing deep networks for hyperspectral image classification, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors.
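For context, spatio-spectral networks of this kind typically operate on local neighborhoods of pixel vectors rather than single spectra. Below is a short sketch of extracting such neighborhoods; the patch size and helper name are illustrative, not taken from the paper.

```python
import numpy as np

def extract_patches(cube, coords, size=5):
    """Extract size x size neighbourhoods of pixel vectors around each pixel of
    interest, so a model can jointly use the centre spectrum and its spatial
    context. `cube` is (H, W, bands); `coords` is a list of (row, col)."""
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    patches = [padded[r:r + size, c:c + size, :] for r, c in coords]
    return np.stack(patches)          # (num_pixels, size, size, bands)

cube = np.random.rand(145, 145, 200)  # e.g. an Indian-Pines-sized cube
patches = extract_patches(cube, [(0, 0), (72, 72)], size=5)
print(patches.shape)                  # (2, 5, 5, 200)
```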
HSI-CNN is a novel convolutional neural network framework designed around the characteristics of hyperspectral image data, and it also provides ideas for the processing of one-dimensional data.
This article proposes a unified band selection (BS) framework, the BS Network (BS-Net), consisting of a band attention module (BAM) that explicitly models the nonlinear interdependencies between spectral bands and a reconstruction network (RecNet) that restores the original HSI from the learned informative bands, resulting in a flexible architecture.
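A minimal PyTorch sketch of a band-attention-style layer follows, assuming a squeeze-and-excitation-like design over the spectral dimension; the layer sizes and global-pooling choice are assumptions, not the exact BS-Net architecture.

```python
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    """Toy band attention: produce one weight per spectral band from a global
    summary of the input cube, then reweight the bands. A sketch of the idea
    behind a band attention module, not BS-Net's exact architecture."""
    def __init__(self, num_bands, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_bands, num_bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_bands // reduction, num_bands),
            nn.Sigmoid(),
        )

    def forward(self, x):             # x: (batch, bands, height, width)
        summary = x.mean(dim=(2, 3))  # global average over spatial dims
        weights = self.mlp(summary)   # (batch, bands), one weight per band
        return x * weights[:, :, None, None]

x = torch.randn(2, 200, 9, 9)                 # e.g. 200-band patches
print(BandAttention(num_bands=200)(x).shape)  # torch.Size([2, 200, 9, 9])
```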
The proposed multitask deep learning method classifies multiple hyperspectral data sets in a single training process and successfully demonstrates its ability to utilize samples from multiple data sets to enhance network performance.
Results indicate that the Fourier scattering transform is highly effective at representing spectral data when compared with other state-of-the-art spectral-spatial classification methods.
This work proposes a lightweight CNN model (a 3D CNN followed by a 2D CNN) that significantly reduces computational cost by distributing spatial-spectral feature extraction across a lighter architecture, together with a preprocessing step carried out to improve the classification results.
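A minimal PyTorch sketch of such a 3D-followed-by-2D CNN over spectral-spatial patches is shown below; the channel counts, kernel sizes, and 30-band input are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Hybrid3D2DCNN(nn.Module):
    """Sketch of a lightweight 3D-then-2D CNN over spectral-spatial patches.
    Channel counts, kernel sizes and the 30-band input are illustrative."""
    def __init__(self, bands=30, patch=11, num_classes=16):
        super().__init__()
        # 3D convolutions mix neighbouring bands and pixels jointly.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(inplace=True),
        )
        # After folding the spectral axis into channels, 2D convolutions
        # refine the spatial features at much lower cost.
        depth = bands - 7 + 1 - 5 + 1          # spectral size after 3D convs
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * depth, 64, kernel_size=3), nn.ReLU(inplace=True),
        )
        side = patch - 2 - 2 - 2               # spatial size after all convs
        self.head = nn.Linear(64 * side * side, num_classes)

    def forward(self, x):                      # x: (batch, 1, bands, H, W)
        x = self.conv3d(x)
        b, c, d, h, w = x.shape
        x = self.conv2d(x.reshape(b, c * d, h, w))
        return self.head(x.flatten(1))

patches = torch.randn(4, 1, 30, 11, 11)        # 4 patches, 30 bands, 11x11
print(Hybrid3D2DCNN()(patches).shape)          # torch.Size([4, 16])
```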
SpectralNET, a wavelet CNN that is a variation of the 2D CNN for multi-resolution HSI classification, is proposed, yielding a better model that can classify multi-resolution HSI data with high accuracy.
This article performs superpixel generation on intermediate features during network training to adaptively produce homogeneous regions, obtain graph structures, and further generate spatial descriptors, which serve as graph nodes, resulting in a spectral-spatial graph reasoning network (SSGRN).
This work rethinks HS image classification from a sequential perspective with transformers and proposes a novel backbone network called SpectralFormer, which is capable of learning spectrally local sequence information from neighboring bands of HS images, yielding groupwise spectral embeddings.
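The groupwise spectral embedding idea can be sketched by treating a pixel's spectrum as a sequence of overlapping band groups and encoding it with a standard transformer; the group size, stride, and model width below are assumptions, not SpectralFormer's actual settings.

```python
import torch
import torch.nn as nn

class GroupwiseSpectralTransformer(nn.Module):
    """Sketch of encoding a pixel's spectrum as a sequence of overlapping band
    groups with a transformer. Group size, stride and model width are
    assumptions, not SpectralFormer's exact configuration."""
    def __init__(self, bands=200, group=8, stride=4, dim=64, num_classes=16):
        super().__init__()
        self.group, self.stride = group, stride
        num_tokens = (bands - group) // stride + 1
        self.embed = nn.Linear(group, dim)      # groupwise spectral embedding
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, spectra):                              # (batch, bands)
        tokens = spectra.unfold(1, self.group, self.stride)  # (batch, tokens, group)
        x = self.encoder(self.embed(tokens) + self.pos)
        return self.head(x.mean(dim=1))          # pool tokens, then classify

spectra = torch.randn(4, 200)                         # 4 pixels, 200 bands each
print(GroupwiseSpectralTransformer()(spectra).shape)  # torch.Size([4, 16])
```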