Uncertainty quantification#

R.A. Collenteur, University of Graz, WIP (May-2021)

This notebook shows how to compute the uncertainty of the model simulation using the built-in uncertainty quantification options of Pastas:

  • Confidence interval of simulation

  • Prediction interval of simulation

  • Confidence interval of step response

  • Confidence interval of block response

  • Confidence interval of contribution

  • Custom confidence intervals

To compute the confidence intervals, parameter sets are drawn from a multivariate normal distribution based on the Jacobian matrix obtained during parameter optimization. This method of quantifying uncertainty rests on assumptions about the model residuals (or noise) that should be checked. This notebook deals only with parameter uncertainties, not with model structure uncertainties.
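
To make this concrete, below is a minimal, self-contained sketch of the sampling step (with toy numbers, not the Pastas internals): the parameter covariance is approximated from the Jacobian of the residuals, and parameter sets are drawn from a multivariate normal distribution around the optimal values.

import numpy as np

# Toy sketch of the sampling step: approximate the covariance of the
# estimated parameters from the Jacobian J of the residuals and the
# residual variance s2, then draw parameter sets around the optimum p_opt.
J = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.8]])  # toy Jacobian (3 obs, 2 pars)
s2 = 0.1  # toy residual variance
pcov = s2 * np.linalg.inv(J.T @ J)  # linearized covariance approximation
p_opt = np.array([1.0, 2.0])  # toy optimal parameters
samples = np.random.default_rng(0).multivariate_normal(p_opt, pcov, size=1000)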

[1]:
import pandas as pd
import pastas as ps

import matplotlib.pyplot as plt

ps.set_log_level("ERROR")
ps.show_versions()
Python version: 3.10.8
NumPy version: 1.23.5
Pandas version: 2.0.1
SciPy version: 1.10.1
Matplotlib version: 3.7.1
Numba version: 0.57.0
LMfit version: 1.2.1
Latexify version: Not Installed
Pastas version: 1.0.1

Create a model#

We first create a toy model to simulate the groundwater levels in southeastern Austria. We will use this model to illustrate how the different methods for uncertainty quantification can be used.

[2]:
gwl = (
    pd.read_csv("data_wagna/head_wagna.csv", index_col=0, parse_dates=True, skiprows=2)
    .squeeze()
    .loc["2006":]
    .iloc[0::10]
)
evap = pd.read_csv(
    "data_wagna/evap_wagna.csv", index_col=0, parse_dates=True, skiprows=2
).squeeze()
prec = pd.read_csv(
    "data_wagna/rain_wagna.csv", index_col=0, parse_dates=True, skiprows=2
).squeeze()

# Model settings
tmin = pd.Timestamp("2007-01-01")  # Needs warmup
tmax = pd.Timestamp("2016-12-31")

ml = ps.Model(gwl)
sm = ps.RechargeModel(
    prec, evap, recharge=ps.rch.FlexModel(), rfunc=ps.Exponential(), name="rch"
)
ml.add_stressmodel(sm)

# Add the ARMA(1,1) noise model and solve the Pastas model
ml.add_noisemodel(ps.ArmaModel())
ml.solve(tmin=tmin, tmax=tmax, noise=True)
Fit report GWL                          Fit Statistics
======================================================
nfev    32                     EVP               74.63
nobs    365                    R2                 0.75
noise   True                   RMSE               0.19
tmin    2007-01-01 00:00:00    AIC            -2051.60
tmax    2016-12-31 00:00:00    BIC            -2020.40
freq    D                      Obj                0.63
warmup  3650 days 00:00:00     ___
solver  LeastSquares           Interp.              No

Parameters (8 optimized)
======================================================
                optimal      stderr     initial   vary
rch_A          0.522819      ±9.99%    0.529381   True
rch_a         63.841472     ±13.23%   10.000000   True
rch_srmax    421.002411     ±42.74%  250.000000   True
rch_lp         0.250000        ±nan    0.250000  False
rch_ks       756.867252    ±206.24%  100.000000   True
rch_gamma      4.808335     ±20.19%    2.000000   True
rch_kv         1.000000        ±nan    1.000000  False
rch_simax      2.000000        ±nan    2.000000  False
constant_d   262.646990  ±2.55e-02%  263.166264   True
noise_alpha  122.983409     ±24.84%   10.000000   True
noise_beta     8.492102     ±12.35%    1.000000   True
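
Note the large standard errors on some of the parameters (e.g., over 200% for rch_ks): the multivariate normal approximation used below may only roughly describe the uncertainty of such poorly constrained parameters.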

Diagnostic Checks#

Before we perform the uncertainty quantification, we should check if the underlying statistical assumptions are met. We refer to the notebook on Diagnostic checking for more details on this.

[3]:
ml.plots.diagnostics();
[Figure: diagnostic plots of the model noise]
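
The hypothesis tests underlying these plots can also be obtained as a table. A minimal sketch, assuming the ps.stats.diagnostics function applied to the noise series, with an nparam argument to correct the tests for the number of optimized parameters:

# Sketch: tabular diagnostics of the noise series (ps.stats.diagnostics is
# part of Pastas; the nparam argument is assumed to correct the tests for
# the number of optimized parameters).
ps.stats.diagnostics(ml.noise(), nparam=int(ml.parameters["vary"].sum()))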

Confidence intervals#

After the model is calibrated, a fit attribute is added to the Pastas Model object (ml.fit). This object contains information about the optimization (e.g., the Jacobian matrix) and a number of methods that can be used to quantify uncertainties.

[4]:
ci = ml.fit.ci_simulation(alpha=0.05, n=1000)
ax = ml.plot(figsize=(10, 3))
ax.fill_between(ci.index, ci.iloc[:, 0], ci.iloc[:, 1], color="lightgray")
ax.legend(["Observations", "Simulation", "95% Confidence interval"], ncol=3, loc=2)
[4]:
<matplotlib.legend.Legend at 0x7fe4c537be80>
[Figure: simulated heads with 95% confidence interval]

Prediction interval#
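
In contrast to the confidence interval of the simulation, the prediction interval also accounts for the noise and is therefore wider. For the default alpha=0.05, roughly 95% of the head observations are expected to fall within this interval.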

[5]:
ci = ml.fit.prediction_interval(n=1000)
ax = ml.plot(figsize=(10, 3))
ax.fill_between(ci.index, ci.iloc[:, 0], ci.iloc[:, 1], color="lightgray")
ax.legend(["Observations", "Simulation", "95% Prediction interval"], ncol=3, loc=2)
[5]:
<matplotlib.legend.Legend at 0x7fe4c4ffa4a0>
[Figure: simulated heads with 95% prediction interval]

Uncertainty of step response#

[6]:
ci = ml.fit.ci_step_response("rch")
ax = ml.plots.step_response(figsize=(6, 2))
ax.fill_between(ci.index, ci.iloc[:, 0], ci.iloc[:, 1], color="lightgray")
ax.legend(["Simulation", "95% Prediction interval"], ncol=3, loc=4)
[6]:
<matplotlib.legend.Legend at 0x7fe4c5010be0>
[Figure: step response with 95% confidence interval]

Uncertainty of block response#

[7]:
ci = ml.fit.ci_block_response("rch")
ax = ml.plots.block_response(figsize=(6, 2))
ax.fill_between(ci.index, ci.iloc[:, 0], ci.iloc[:, 1], color="lightgray")
ax.legend(["Simulation", "95% Prediction interval"], ncol=3, loc=1)
[7]:
<matplotlib.legend.Legend at 0x7fe4bef43700>
[Figure: block response with 95% confidence interval]

Uncertainty of the contributions#

[8]:
ci = ml.fit.ci_contribution("rch")
r = ml.get_contribution("rch")
ax = r.plot(figsize=(10, 3))
ax.fill_between(ci.index, ci.iloc[:, 0], ci.iloc[:, 1], color="lightgray")
ax.legend(["Simulation", "95% Prediction interval"], ncol=3, loc=1)
plt.tight_layout()
[Figure: contribution of recharge with 95% confidence interval]

Custom confidence intervals#

It is also possible to compute confidence intervals manually, for example to estimate the uncertainty of the recharge or of fit statistics (e.g., the SGI or the NSE). We call ml.fit.get_parameter_sample to draw random parameter samples from a multivariate normal distribution defined by the optimal parameters and the covariance matrix. Next, we use these parameter sets to obtain multiple realizations of a quantity of interest, here the annual recharge.

[9]:
params = ml.fit.get_parameter_sample(n=1000, name="rch")
data = {}

# Here we run the model n times with different parameter samples
for i, param in enumerate(params):
    data[i] = ml.stressmodels["rch"].get_stress(p=param)

df = pd.DataFrame.from_dict(data, orient="columns").loc[tmin:tmax].resample("A").sum()
ci = df.quantile([0.025, 0.975], axis=1).transpose()

r = ml.get_stress("rch").resample("A").sum()
ax = r.plot.bar(figsize=(10, 2), width=0.5, yerr=[r - ci.iloc[:, 0], ci.iloc[:, 1] - r])
ax.set_xticklabels(labels=r.index.year, rotation=0, ha="center")
ax.set_ylabel("Recharge [mm a$^{-1}$]")
ax.legend(ncol=3);
[Figure: annual recharge with 95% confidence interval]
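
The same pattern applies to other derived quantities mentioned above, for example the Standardized Groundwater Index (SGI). A minimal sketch, assuming ps.stats.sgi computes the SGI of a (here monthly-averaged) simulated heads series:

# Sketch: 95% interval of the SGI of the simulated heads (assumes
# ps.stats.sgi, which is part of Pastas, applied to monthly mean heads).
params = ml.fit.get_parameter_sample(n=100)
sgi = {}
for i, param in enumerate(params):
    sim = ml.simulate(p=param, tmin=tmin, tmax=tmax).resample("M").mean()
    sgi[i] = ps.stats.sgi(sim)

sgi_ci = pd.DataFrame.from_dict(sgi).quantile([0.025, 0.975], axis=1).transpose()
ax = ps.stats.sgi(ml.simulate(tmin=tmin, tmax=tmax).resample("M").mean()).plot(
    figsize=(10, 3), color="k"
)
ax.fill_between(sgi_ci.index, sgi_ci.iloc[:, 0], sgi_ci.iloc[:, 1], color="lightgray")
ax.set_ylabel("SGI [-]")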

Uncertainty of the NSE#

The code pattern shown above can be used for many types of uncertainty analyses. Another example is provided below, where we compute the uncertainty of the Nash-Sutcliffe efficiency (NSE).

[10]:
import numpy as np
from scipy.stats import norm

params = ml.fit.get_parameter_sample(n=1000)
data = []

# Here we run the model n times with different parameter samples
for param in params:
    sim = ml.simulate(p=param)
    data.append(ps.stats.nse(obs=ml.observations(), sim=sim))

fig, ax = plt.subplots(1, 1, figsize=(4, 3))
ax.hist(data, bins=50, density=True)
ax.axvline(ml.stats.nse(), linestyle="--", color="k")
ax.set_xlabel("NSE [-]")
ax.set_ylabel("density [-]")

# Fit a normal distribution to the sampled NSE values and plot its PDF
mu, std = norm.fit(data)
xmin, xmax = ax.get_xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
ax.plot(x, p, "k", linewidth=2)
[10]:
[<matplotlib.lines.Line2D at 0x7fe4c07e9330>]
[Figure: distribution of the NSE with fitted normal PDF]