No-Code SR Demo is now live!

 

Open In Colab

Super-Resolve Sentinel-2 in Your Browser — No Code Required 🚀

Have you ever looked at a 10 m Sentinel-2 scene and wished you could zoom in without downloading huge stacks of raw data or writing Python? Now you can. The brand-new LDSR-S2 “no-code” Colab notebook lets anyone—researcher, analyst or student—run state-of-the-art latent-diffusion super-resolution entirely in the browser on Google’s free GPUs.

Why it’s worth a try

  • Runs on free Colab hardware – one click, no installs, no GPU at home required.

  • Pick your AOI visually – scroll an OpenStreetMap basemap and click to capture lat/lon, or type coordinates if you already know them.

  • Define your time window & cloud filter – avoid cloudy scenes with a slider (e.g. “max 10 % CC”).

  • Interactive preview – the notebook shows the native 10 m RGB preview and asks “Does this look OK?” before spending GPU time.

  • Instant evaluation – in ~30 s you’ll see a side-by-side LR vs. SR PNG and get ready-to-download GeoTIFFs for both. Load them in QGIS/ArcGIS and decide if LDSR-S2 fits your pipeline.
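If you want a quick way to check whether the RGB+NIR GeoTIFFs are usable in your own pipeline, something like the NumPy sketch below computes NDVI from a four-band patch. The band order (R, G, B, NIR) and the toy array are assumptions for illustration only; the notebook itself requires no code.

```python
import numpy as np

def ndvi(patch: np.ndarray) -> np.ndarray:
    """Compute NDVI from a (4, H, W) array assumed to be ordered R, G, B, NIR.

    The band order here is an assumption -- check the GeoTIFF's band
    descriptions in QGIS or with your reader of choice before relying on it.
    """
    red = patch[0].astype(np.float32)
    nir = patch[3].astype(np.float32)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

# Toy 4-band patch standing in for a downloaded sr.tif
patch = np.zeros((4, 2, 2), dtype=np.float32)
patch[0] = 0.1  # red reflectance
patch[3] = 0.5  # NIR reflectance
print(ndvi(patch).round(2))  # vegetation-like NDVI of ~0.67 everywhere
```

The same function works on the native-resolution lr.tif, so you can compare index maps before and after super-resolution.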

 

 

Open In Colab

 

Step-by-Step Guide

1. Open the notebook via the badge above. You’ll land in Google Colab.

2. Switch to a GPU runtime: Runtime → Change runtime type → GPU. Free-tier GPUs are fine.

3. Run every cell: Runtime → Run all. Colab installs dependencies and loads the model automatically.

4. In the GUI, select start/end dates and the maximum cloud cover. Narrow time windows fetch faster.

5. Click “Enter coordinates” or “Select on map.”

  • Typing coordinates shows input boxes.

  • The map lets you pan/zoom and click once.

6. Hit “Load Scene.” The Sentinel-2 10 m preview appears.

7. If the preview looks good, press “Use this scene.” Otherwise, choose “Get different scene.”

8. Wait ~5 s while the GPU works.

9. View the LR vs. SR comparison PNG. It is saved as example.png.

10. Download the GeoTIFFs lr.tif (native 10 m) and sr.tif (2.5 m SR): Files sidebar → right-click → Download. Both files are fully georeferenced and drop straight into QGIS/ArcGIS.
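Under the hood, steps 4–7 amount to filtering candidate acquisitions by date and cloud cover. A rough, illustrative sketch of that logic (the scene records and the `pick_scenes` helper below are made up for this example; the field names loosely mirror STAC conventions used by the Planetary Computer):

```python
from datetime import date

# Toy scene records mimicking the catalog metadata the notebook queries;
# the records themselves are invented for illustration.
scenes = [
    {"id": "S2A_2024-06-01", "datetime": date(2024, 6, 1), "eo:cloud_cover": 42.0},
    {"id": "S2B_2024-06-08", "datetime": date(2024, 6, 8), "eo:cloud_cover": 7.5},
    {"id": "S2A_2024-06-15", "datetime": date(2024, 6, 15), "eo:cloud_cover": 3.1},
]

def pick_scenes(scenes, start, end, max_cc):
    """Keep scenes inside [start, end] with cloud cover <= max_cc,
    least cloudy first -- roughly what the notebook's GUI sliders do."""
    hits = [s for s in scenes
            if start <= s["datetime"] <= end and s["eo:cloud_cover"] <= max_cc]
    return sorted(hits, key=lambda s: s["eo:cloud_cover"])

best = pick_scenes(scenes, date(2024, 6, 1), date(2024, 6, 30), max_cc=10.0)
print([s["id"] for s in best])  # only the two scenes under 10 % cloud cover remain
```

If the least-cloudy hit still looks bad in the preview, “Get different scene” simply moves on to the next candidate in the filtered list.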

 

Under the Hood 🔧

  • Data provider: Microsoft Planetary Computer (Sentinel-2 L2A, BOA reflectance).

  • Model: LDSR-S2 v1.0 (latent diffusion).

  • Patch size: 128 × 128 px → 512 × 512 px (10 m → 2.5 m effective spatial resolution).

  • Multispectral: RGB + NIR.
 
  • Outputs:

    • lr.tif – original 10 m four-band patch.

    • sr.tif – super-resolved 2.5 m four-band patch.

    • example.png – side-by-side LR vs. SR preview.
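The patch-size numbers above form a quick sanity check: a 4× upscale turns 128 px at 10 m into 512 px at 2.5 m, while the ground footprint (1280 m per side) stays unchanged. A minimal sketch of that arithmetic (the `sr_output_size` helper is hypothetical, not part of the notebook):

```python
def sr_output_size(lr_px: int, lr_res_m: float, scale: int = 4):
    """Return (pixels per side, metres per pixel) after an SR upscale.

    LDSR-S2 uses a 4x factor: 128 px at 10 m -> 512 px at 2.5 m.
    The ground footprint is preserved: lr_px * lr_res_m stays constant.
    """
    return lr_px * scale, lr_res_m / scale

px, res = sr_output_size(128, 10.0)
print(px, res)  # 512 2.5
print(128 * 10.0 == px * res)  # True: same 1280 m footprint
```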

See if LDSR-S2 Fits Your Use-Case

  • Trying to classify roofs?

  • Need crisper disaster-response maps?

  • Curious if SR helps object detection?

Open the notebook, click a location, and find out in minutes—no coding, no local GPU, zero setup. If it works, grab the GeoTIFFs and drop them into your workflow; if not, change dates or pick another AOI and iterate instantly.

Happy super-resolving! Don’t forget to let us know if these results are useful for you!
