New Publication: NIR-GAN – Synthetic NIR from RGB images

NIR-GAN: Synthesizing Near-Infrared from RGB with Location Embeddings and Task-Driven Losses

We’re excited to share that our work, “Near-Infrared Band Synthesis From Earth Observation Imagery With Learned Location Embeddings and Task-Driven Loss Functions,” has been published open access by IEEE.

Most remote sensing workflows depend on near-infrared (NIR) information—think NDVI/NDWI for vegetation and water—but many RGB-only archives simply don’t have it. NIR-GAN closes that gap by learning to generate a realistic NIR band directly from RGB, so practitioners can compute familiar indices and train multispectral models even when NIR isn’t available.
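As a minimal sketch of what this enables, the standard NDVI and NDWI indices can be computed from an RGB archive once a NIR band has been synthesized. The function names and the random placeholder arrays below are illustrative, not part of the published code:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-6):
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + eps)

# rgb: H x W x 3 reflectance array; nir_hat stands in for the NIR band
# synthesized by a model such as NIR-GAN (random values here for illustration).
rgb = np.random.rand(4, 4, 3).astype(np.float32)
nir_hat = np.random.rand(4, 4).astype(np.float32)

vi = ndvi(nir_hat, rgb[..., 0])   # vegetation index from synthesized NIR
wi = ndwi(rgb[..., 1], nir_hat)   # water index from synthesized NIR
```

With reflectance values in [0, 1], both indices fall in [-1, 1], so downstream thresholding and monitoring workflows apply unchanged.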

What we built

  • NIR-GAN (conditional GAN): An image-to-image model that predicts NIR from RGB.

  • Learned location embeddings: Compact vectors encoding geographic & climatic context to improve generalization across regions and sensors.

  • Task-driven loss design: Differentiable, index-derived losses (e.g., NDVI/NDWI) that optimize the synthesized NIR specifically for downstream RS tasks—not just for pixel fidelity.

  • Cross-sensor robustness: Effective across data sources with different spectral/spatial characteristics.
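To illustrate the task-driven loss idea, the sketch below combines a plain pixel-fidelity term with an NDVI-derived term. This is a numpy sketch of the concept, not the paper's implementation: the exact loss composition and the weights `w_pix` / `w_ndvi` are assumptions, and in training the same expressions would run under an autograd framework so that gradients flow through the index:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    return (nir - red) / (nir + red + eps)

def ndvi_loss(nir_pred, nir_true, red):
    """Mean absolute NDVI error between synthesized and reference NIR.
    Every operation is differentiable, so the same expression can serve
    as a training loss in an autograd framework."""
    return np.mean(np.abs(ndvi(nir_pred, red) - ndvi(nir_true, red)))

def combined_loss(nir_pred, nir_true, red, w_pix=1.0, w_ndvi=1.0):
    """Illustrative mix of pixel fidelity and an index-derived term."""
    pixel = np.mean(np.abs(nir_pred - nir_true))  # plain L1 pixel fidelity
    return w_pix * pixel + w_ndvi * ndvi_loss(nir_pred, nir_true, red)

# Toy check: a perfect prediction scores zero; a degraded one scores higher.
red = np.full((2, 2), 0.3, dtype=np.float32)
nir = np.full((2, 2), 0.6, dtype=np.float32)
perfect = combined_loss(nir, nir, red)
degraded = combined_loss(nir + 0.1, nir, red)
```

The index term penalizes exactly the errors that distort downstream vegetation products, which pixel-wise losses alone treat no differently from errors in irrelevant regions.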

Results at a glance

  • Up to 26.6 dB PSNR on Sentinel-2 for the synthesized NIR band.

  • Visually realistic outputs that preserve structural patterns relevant for vegetation/water monitoring.

  • Enables creation of partly synthetic multispectral datasets, improving training coverage where NIR is missing.
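For reference, the PSNR figure above can be reproduced with the standard definition; this is a generic numpy sketch (the evaluation's exact `data_range` is an assumption):

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB, for reflectance in [0, data_range]."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return np.inf  # identical images: infinitely high fidelity
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a constant error of 0.25 gives MSE = 0.0625.
pred = np.full((8, 8), 0.5)
target = np.full((8, 8), 0.25)
score = psnr(pred, target)  # 10 * log10(1 / 0.0625) ≈ 12.04 dB
```

Higher is better; 26.6 dB on Sentinel-2 corresponds to a small average reconstruction error relative to the band's dynamic range.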

Why it matters

  • Compute NDVI/NDWI and related indices on RGB-only archives.

  • Augment training data for multispectral models without expensive acquisitions.

  • Increase coverage for land-cover mapping, change detection, and environmental monitoring where NIR is unavailable.

🔗 Read the full article (Open Access): IEEE Xplore

Figure: example result and histogram.
Figure: time series of NDVI development, tracking NDVI over crop cycles and seasonality.

Recent Posts


No-Code SR Demo is now live!

This demo, aimed at non-technical users, allows you to enter your coordinates and create a super-resolution product from your custom Sentinel-2 acquisition. Immediately judge whether SR can be useful for your application!

OpenSR Team @Living Planet Symposium

The OpenSR team joined ESA’s Living Planet Symposium 2025 to present our latest advances in Sentinel-2 super-resolution, dataset standards, and workflows. From latent diffusion models to FAIR-compliant data access with TACO, our tools aim to make high-resolution Earth observation more accessible and actionable.

New Release: OpenSR-UseCases Package

A lightweight validation toolkit to benchmark segmentation performance across low-, super-, and high-resolution imagery. Quantifies how well super-resolution models improve object detection and segmentation accuracy in real-world tasks. Ideal for researchers who want to go beyond visual inspection and measure actual downstream performance gains.

New Preprint: A Radiometrically and Spatially Consistent Super-Resolution Framework for Sentinel-2

We’ve published a new preprint presenting SEN2SR, a deep learning framework for super-resolving Sentinel-2 imagery with radiometric and spatial fidelity. The model leverages harmonized synthetic data, hard constraints, and xAI tools to achieve artifact-free enhancements at 2.5 m resolution.

RGB-NIR Latent Diffusion Super-Resolution Model Released!

Our latent diffusion model for the RGB-NIR bands of Sentinel-2 has been released, including weights.

New Publication: LDSR-S2 Model Paper

Our diffusion-based super-resolution model for Sentinel-2 imagery has been published in IEEE JSTARS! The open-access paper introduces a latent diffusion approach with pixelwise uncertainty maps—pushing the boundaries of trustworthy generative modeling in Earth observation.

SEN2NAIP v2.0 Released — A Major Boost for Sentinel-2 Super-Resolution

We’ve released SEN2NAIP v2.0, a large-scale dataset designed for training and validating super-resolution models on Sentinel-2 imagery. The dataset includes thousands of real and synthetic HR-LR image pairs, making it a cornerstone for future SR research in Earth Observation.

New Publication: SEN2NAIP published in ‘Scientific Data’

The dataset paper has been published in 'Scientific Data'.

The OpenSR team contributes to Flood Mapping for the Valencian Flash Floods

Our team at the University of Valencia has released an interactive satellite flood map of the recent Valencia flash floods, using Landsat-8 and Sentinel-2 imagery combined with a machine learning segmentation model. Leveraging super-resolution techniques, we enhanced Sentinel-2 data to 2.5m resolution, enabling more precise flood extent mapping for post-disaster analysis.

OpenSR-Utils Preview Released: A package to handle patching, tiling and overlapping for SR Products

We’ve released a preview of OpenSR-Utils, a Python package to apply super-resolution models on raw Sentinel-2 imagery. With multi-GPU support, georeferenced output, and automatic patching, it’s a practical toolkit for real-world remote sensing pipelines.

SUPERIX: Intercomparison Exercise

Presenting SUPERIX: a community-driven benchmark to rigorously compare super-resolution models for Sentinel-2 data. Using real-world datasets and tailored metrics, SUPERIX aims to uncover the true impact of SR techniques on remote sensing accuracy.

Team attends ESA SUREDOS Workshop in Frascati

Our team attended the ESA SUREDOS Workshop to discuss the role of super-resolution in enhancing Earth Observation data. The event explored cutting-edge deep learning techniques and the importance of reliable, domain-specific validation for scientific and operational EO applications.

New Publication: OpenSR-Test theoretical framework has been published

OpenSR-test is now published in IEEE GRSL! Our new paper introduces a rigorous benchmark for evaluating super-resolution in remote sensing with real-world datasets and meaningful metrics.

OpenSR-Test Framework and Datasets Released

Our framework to validate super-resolution results is now published. It can take any SR model and compute sophisticated validation metrics over multiple datasets, enhancing the comparability of methodologies.