HighRes-net is presented, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion; by learning deep representations of multiple views, it can super-resolve low-resolution signals and enhance Earth Observation data at scale.
Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details. Multi-frame Super-Resolution (MFSR) offers a more grounded approach to this ill-posed problem, by conditioning on multiple low-resolution views. This is important for satellite monitoring of human impact on the planet -- from deforestation to human rights violations -- which depends on reliable imagery. To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss. Co-registration of low-resolution views is learned implicitly through a reference-frame channel, with no explicit registration mechanism. We learn a global fusion operator that is applied recursively on an arbitrary number of low-resolution pairs. We introduce a registered loss, by learning through ShiftNet to align the SR output with the ground truth. We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth Observation data at scale. Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.
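To make the pipeline described above concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released implementation) of two of the central ideas: implicit co-registration via a shared reference-frame channel, and a global pairwise fusion operator applied recursively over an arbitrary number of low-resolution views. The module names (`Encoder`, `FuseBlock`, `HighResNetSketch`), layer sizes, and the choice of a median reference frame are illustrative assumptions; the registered ShiftNet loss is omitted.

```python
# Minimal sketch of HighRes-net's core ideas, assuming a PyTorch setting.
# Each low-res view is stacked with a shared reference frame (implicit
# co-registration), encoded, then fused pairwise and recursively until a
# single hidden state remains, which is decoded and up-sampled.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.PReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.PReLU(),
        )

    def forward(self, x):
        return self.net(x)

class FuseBlock(nn.Module):
    """Fuses a pair of hidden states into one; weights shared across all pairs."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * hidden, hidden, 3, padding=1), nn.PReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.PReLU(),
        )

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))

class HighResNetSketch(nn.Module):
    def __init__(self, hidden=64, scale=3):
        super().__init__()
        self.encode = Encoder(in_ch=2, hidden=hidden)  # view + reference channel
        self.fuse = FuseBlock(hidden)
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, scale, stride=scale), nn.PReLU(),
            nn.Conv2d(hidden, 1, 1),
        )

    def forward(self, lr_views):
        # lr_views: (batch, num_views, H, W); reference frame = per-pixel median
        ref = lr_views.median(dim=1).values                           # (B, H, W)
        states = [
            self.encode(torch.stack([lr_views[:, i], ref], dim=1))    # (B, hidden, H, W)
            for i in range(lr_views.shape[1])
        ]
        # Recursive pairwise fusion: roughly halve the number of states each round
        while len(states) > 1:
            fused = [self.fuse(states[i], states[i + 1])
                     for i in range(0, len(states) - 1, 2)]
            if len(states) % 2:           # carry an unpaired state forward unchanged
                fused.append(states[-1])
            states = fused
        return self.decode(states[0])                                  # (B, 1, sH, sW)

# Example: 16 low-res 32x32 views -> one 96x96 super-resolved output
sr = HighResNetSketch()(torch.randn(4, 16, 32, 32))
print(sr.shape)  # torch.Size([4, 1, 96, 96])
```

Because the fusion operator is shared across all pairs and applied recursively, the same trained network can fuse any number of low-resolution views at inference time.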
Michel Deudon, Alfredo Kalaitzis, Israel Goytom, Md Rifat Arefin, Zhichao Lin, K. Sankaran, Julien Cornebise