OpenSR-SRGAN – Modular Framework for Multispectral SR (Code and Publication)

Earth-observation data is evolving fast, and so is the need for tools that help us push its limits. Today we’re excited to introduce OpenSR-SRGAN, our new open-source framework designed to make GAN-based super-resolution practical, reproducible, and easy to experiment with across a wide range of satellite sensors and spectral bands.

OpenSR-SRGAN is now publicly available and published as a software package (DOI included in the paper). It’s part of the growing OpenSR ecosystem developed at the Image & Signal Processing Group (IPL), University of Valencia, and funded by ESA’s Φ-Lab through the OpenSR project.

🚀 Why We Built OpenSR-SRGAN

GANs remain incredibly powerful tools for generating high-frequency detail, but anyone who has trained them knows the pain: instability, weird dynamics between generator and discriminator, sensitivity to hyperparameters… and that’s before dealing with multispectral inputs.

OpenSR-SRGAN solves these pain points by offering:

  • A unified, modular architecture for generators & discriminators

  • Config-driven experimentation via simple YAML files

  • Native support for arbitrary spectral bands (RGB, NIR, SWIR, and beyond)

  • Training stabilization mechanisms built-in (warmup, label smoothing, EMA, etc.)

  • Seamless integration with the rest of the OpenSR ecosystem: SEN2NAIP dataset, opensr-test, and opensr-utils

Instead of modifying code for every new experiment or sensor, users only edit a configuration file. This keeps the workflow clean, reproducible, and beginner-friendly.
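To give a feel for this workflow, here is a minimal sketch of what such a configuration might look like. The key names and values below are purely illustrative and are not the package’s actual schema; consult the example configs in the repository for the real format.

```yaml
# Illustrative sketch only — key names are hypothetical,
# not the actual OpenSR-SRGAN configuration schema.
generator:
  type: RRDB              # swap architectures by changing this flag
  scale: 4
discriminator:
  type: PatchGAN
data:
  bands: [B2, B3, B4, B8] # RGB + NIR
training:
  pretrain_epochs: 10     # generator-only pretraining
  label_smoothing: 0.1
  ema_decay: 0.999
losses:                   # weights are placeholders
  l1: 1.0
  perceptual: 0.1
  adversarial: 0.005
```

The point is that every experimental choice lives in one declarative file, so an experiment is fully described by its config.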

All of this is described in detail in the software paper (see pages 1–3 for the motivation and contributions).

🧠 What’s Inside the Framework?

1. Interchangeable Generators

You can switch between several backbone styles (SRResNet, RCAB, RRDB, ESRGAN, large-kernel attention, even a conditional/noise-augmented cGAN generator) simply by editing a flag in your config.
Appendix A (page 8) lists all generator types and their characteristics.

2. Multiple Discriminator Options

Depending on your goal, pick:

  • a global SRGAN discriminator,

  • a PatchGAN, or

  • a deep ESRGAN-style discriminator.
    See Table 2 on page 8 for a quick comparison.

3. Built-In Training Stabilization

Out of the box, the framework includes:

  • generator pretraining,

  • adversarial ramp-up,

  • warmup schedules,

  • label smoothing,

  • TTUR,

  • Exponential Moving Average (EMA) of weights (Sections 4.3 and 4.3.2).

These features make training far less chaotic than standard GAN implementations.
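As an illustration of one of these mechanisms, the weight-averaging idea behind EMA can be sketched in a few lines. This is a generic textbook version over a plain parameter dictionary, not the framework’s actual code:

```python
import numpy as np

def ema_update(shadow, current, decay=0.999):
    """One EMA step: shadow <- decay * shadow + (1 - decay) * current.

    `shadow` and `current` are dicts mapping parameter names to arrays.
    Evaluating with the slowly-moving shadow weights smooths out the
    oscillations that adversarial training induces in the generator.
    Generic illustration only; the package's implementation may differ.
    """
    return {k: decay * shadow[k] + (1.0 - decay) * current[k] for k in shadow}
```

After each optimizer step, the shadow weights are nudged toward the live weights; inference then uses the shadow copy.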

4. A Config-First Workflow

Everything—architecture, dataset, losses, optimizers, training schedules—is controlled from a YAML file.
You don’t touch model code unless you want to.

🌍 Designed for the EO Community

OpenSR-SRGAN is not meant to be “the best SR model ever.”
Instead, it’s designed to be:

  • a reliable baseline,

  • a testbed for new ideas,

  • a reproducible benchmark, and

  • a practical tool for multispectral and multisensor workflows.

It offers a clean engineering foundation so researchers can focus on ideas, not boilerplate.

📦 Get Started

The source code and ready-to-run examples are available at:

👉 https://github.com/ESAOpenSR/SRGAN

If you’re already using other OpenSR tools like opensr-test or opensr-utils, this package slots right in.

📊 Examples From the Paper

The paper includes two reference experiments demonstrating how the framework handles both RGB super-resolution and multispectral/SWIR reconstruction.

Example 1 — 4× RGB-NIR Super-Resolution (SEN2NAIP)

As shown in Figure 2 on page 10, the framework enhances Sentinel-2 RGB/NIR from 10m to 2.5m, producing noticeably sharper buildings, field boundaries, and road structures.
The setup uses an RCAB-based generator and an SRGAN discriminator, trained with L1 + perceptual + adversarial loss.
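In such a setup the generator objective is typically a weighted sum of the three terms. A minimal sketch, with placeholder weights rather than the paper’s actual values:

```python
def composite_loss(l1, perceptual, adversarial,
                   w_l1=1.0, w_perc=0.1, w_adv=0.005):
    """Weighted combination of the three generator loss terms.

    The weight values here are illustrative placeholders; in a
    config-driven setup they would come from the YAML file.
    """
    return w_l1 * l1 + w_perc * perceptual + w_adv * adversarial
```

Keeping the adversarial weight small relative to the pixel-wise term is a common way to get sharp detail without hallucinated artifacts.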

Example 2 — 8× Multispectral SWIR Super-Resolution

Shown in Figure 3 on page 12, OpenSR-SRGAN successfully reconstructs 20m Sentinel-2 bands from synthetically degraded 160m inputs, recovering edges and small structures while preserving band-wise spectral consistency.
This experiment uses a PatchGAN discriminator with emphasis on L1 + SAM losses.
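SAM measures the angle between the predicted and reference spectral vectors at each pixel, which is why it is well suited to enforcing band-wise spectral consistency. A generic NumPy version of the metric (standard definition, not the package’s exact implementation):

```python
import numpy as np

def spectral_angle(pred, target, eps=1e-8):
    """Mean Spectral Angle Mapper (radians) between two (H, W, C) images.

    For each pixel, computes the angle between the C-dimensional spectral
    vectors of `pred` and `target`, then averages over all pixels.
    Lower is better; 0 means spectrally identical (up to scale).
    """
    dot = np.sum(pred * target, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1)
    cos = np.clip(dot / (norm + eps), -1.0, 1.0)  # guard against rounding
    return float(np.mean(np.arccos(cos)))
```

Because the angle is invariant to per-pixel brightness scaling, SAM complements L1: L1 keeps intensities right, SAM keeps the band ratios right.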

The tables on pages 11–12 summarize configurations, PSNR/SSIM/SAM results, and model performance.

Example 3 — 4× Single-Channel Medical Image SR

As an additional experiment, we test how well the general framework generalizes to other domains. Judged on standard image metrics alone, SRGAN produces competitive results. Importantly, the modular design lets users immediately start experimenting with their own data, normalizations, and architectures.

 
