
Manifold learning and deep generative networks for heterogeneous change detection from hyperspectral and synthetic aperture radar images

Masari I.;Moser G.;Serpico S. B.
2024-01-01

Abstract

Unsupervised change detection is a critical tool for damage assessment after a natural disaster. We emphasize heterogeneous change detection methods, which support the case of highly heterogeneous images at the two observation dates, providing greater flexibility than traditional homogeneous methods. This adaptability is vital for swift responses in the aftermath of natural disasters. In this framework, we address the challenging case of detecting changes between a hyperspectral image and a synthetic aperture radar image. This case has intrinsic difficulties, namely the difference in the nature of the measured physical quantities, combined with the great difference in dimensionality between the two imaging domains. To address these challenges, a novel method is proposed based on the integration of a manifold learning technique with deep learning networks trained to perform an image-to-image translation task. The method works in a fully unsupervised manner, which facilitates fast deployment in real-world scenarios. From an application-oriented perspective, we focus on flooded-area mapping using the PRISMA and COSMO-SkyMed missions. The experimental validation on two datasets, a semi-simulated one and a real one associated with flooding, suggests that the proposed method allows for accurate detection of flooded areas and other ground changes.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1229685
