Bayesian uncertainty analysis#
R.A. Collenteur, Eawag, June 2023
This notebook shows how an MCMC algorithm can be used to estimate the model parameters and quantify the (parameter) uncertainties of a Pastas model using a Bayesian approach. For this, the EmceeSolver is introduced, which is based on the emcee Python package.
Besides Pastas, the following Python packages have to be installed to run this notebook:
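They are all imported below; if any are missing, they can be installed from PyPI first (a hedged example, assuming pip is used):
# Run once in a notebook cell if the packages are missing:
# !pip install pastas emcee corner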
import corner
import emcee
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pastas as ps
ps.set_log_level("ERROR")
ps.show_versions()
Pastas version: 1.9.0
Python version: 3.11.10
NumPy version: 2.2.5
Pandas version: 2.2.3
SciPy version: 1.15.2
Matplotlib version: 3.10.1
Numba version: 0.61.2
1. Create a Pastas Model#
The first step is to create a Pastas Model, including the RechargeModel to simulate the effect of precipitation and evaporation on the heads. Here, we first estimate the model parameters using the standard least-squares approach.
head = pd.read_csv(
"data/B32C0639001.csv", parse_dates=["date"], index_col="date"
).squeeze()
evap = pd.read_csv("data/evap_260.csv", index_col=0, parse_dates=[0]).squeeze()
rain = pd.read_csv("data/rain_260.csv", index_col=0, parse_dates=[0]).squeeze()
ml = ps.Model(head)
ml.add_noisemodel(ps.ArNoiseModel())
# Select a recharge model
rch = ps.rch.FlexModel()
rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Gamma(), name="rch")
ml.add_stressmodel(rm)
ml.solve(tmin="1990")
ax = ml.plot(figsize=(10, 3))
Fit report head Fit Statistics
================================================
nfev 35 EVP 87.37
nobs 351 R2 0.87
noise True RMSE 0.07
tmin 1990-01-01 00:00:00 AICc -2048.61
tmax 2005-10-14 00:00:00 BIC -2014.39
freq D Obj 0.49
warmup 3650 days 00:00:00 ___
solver LeastSquares Interp. No
Parameters (9 optimized)
================================================
optimal initial vary
rch_A 0.473059 0.630436 True
rch_n 0.673876 1.000000 True
rch_a 311.365023 10.000000 True
rch_srmax 75.519158 250.000000 True
rch_lp 0.250000 0.250000 False
rch_ks 51.169319 100.000000 True
rch_gamma 2.204776 2.000000 True
rch_kv 1.999952 1.000000 True
rch_simax 2.000000 2.000000 False
constant_d 0.790444 1.359779 True
noise_alpha 42.563856 15.000000 True
Warnings! (1)
================================================
Response tmax for 'rch' > than warmup period.

2. Use the EmceeSolver#
We will now use the EmceeSolve solver to estimate the model parameters and their uncertainties. This solver wraps the emcee package, which implements several variants of MCMC. A good understanding of emcee helps when using this solver, so we recommend checking out its documentation as well.
To set up the solver, a number of decisions need to be made:
- Determine the priors of the parameters
- Choose a (log) likelihood function
- Choose the number of steps and thinning
2a. Choose and set the priors#
The first step is to choose and set the priors of the parameters. This is done with the ml.set_parameter method and its dist argument (short for distribution). Any continuous distribution from scipy.stats can be chosen (https://docs.scipy.org/doc/scipy/tutorial/stats/continuous.html), for example uniform, norm, or lognorm. Here, for the sake of the example, we set all prior distributions to a normal distribution.
# Set the initial parameters to a normal distribution
for name in ml.parameters.index:
ml.set_parameter(name, dist="norm")
ml.parameters
| | initial | pmin | pmax | vary | name | dist | stderr | optimal |
|---|---|---|---|---|---|---|---|---|
| rch_A | 0.630436 | 0.00001 | 63.043598 | True | rch | norm | 0.047591 | 0.473059 |
| rch_n | 1.000000 | 0.01000 | 100.000000 | True | rch | norm | 0.031958 | 0.673876 |
| rch_a | 10.000000 | 0.01000 | 10000.000000 | True | rch | norm | 64.922102 | 311.365023 |
| rch_srmax | 250.000000 | 0.00001 | 1000.000000 | True | rch | norm | 42.285668 | 75.519158 |
| rch_lp | 0.250000 | 0.00001 | 1.000000 | False | rch | norm | NaN | 0.250000 |
| rch_ks | 100.000000 | 0.00001 | 10000.000000 | True | rch | norm | 59.173792 | 51.169319 |
| rch_gamma | 2.000000 | 0.00001 | 20.000000 | True | rch | norm | 0.201373 | 2.204776 |
| rch_kv | 1.000000 | 0.25000 | 2.000000 | True | rch | norm | 0.424189 | 1.999952 |
| rch_simax | 2.000000 | 0.00000 | 10.000000 | False | rch | norm | NaN | 2.000000 |
| constant_d | 1.359779 | NaN | NaN | True | constant | norm | 0.052620 | 0.790444 |
| noise_alpha | 15.000000 | 0.00001 | 5000.000000 | True | noise | norm | 6.540665 | 42.563856 |
Pastas will use the initial value of the parameter as the loc argument of the distribution (e.g., the mean of a normal distribution), and the stderr as the scale argument (e.g., the standard deviation of a normal distribution). Only for parameters with a uniform distribution are the pmin and pmax values used to determine the prior. By default, all parameters are assigned a uniform prior.
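For example, to centre the normal prior of a single parameter on a chosen value, the initial and stderr columns can be adjusted. A minimal sketch with made-up numbers; note that writing stderr directly into the parameters DataFrame is an assumption here, as set_parameter does not expose it:
# Illustrative only: a normal prior for rch_A with mean 0.47 and standard
# deviation 0.05. The loc of the prior is taken from the "initial" column,
# the scale from the "stderr" column.
ml.set_parameter("rch_A", initial=0.47, dist="norm")
ml.parameters.loc["rch_A", "stderr"] = 0.05  # direct DataFrame edit (assumed acceptable)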
2b. Create the solver instance#
The next step is to create an instance of the EmceeSolve solver class. At this stage, all the settings for how the Ensemble Sampler is created need to be provided (https://emcee.readthedocs.io/en/stable/user/sampler/). Important settings are nwalkers, moves, and objective_function. More advanced options are to parallelize the MCMC algorithm (parallel=True) and to set a backend to store the results. Here's an example:
# Choose the objective function
ln_prob = ps.objfunc.GaussianLikelihoodAr1()
# Create the EmceeSolver with some settings
s = ps.EmceeSolve(
nwalkers=20,
moves=emcee.moves.DEMove(),
objective_function=ln_prob,
progress_bar=True,
parallel=False,
)
In the above code we created an EmceeSolve instance with 20 walkers, which take steps according to the DEMove move algorithm (see the Emcee docs), and a Gaussian likelihood function that assumes AR(1)-correlated errors. Different objective functions are available; see the Pastas documentation for the different options.
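The moves argument accepts anything the underlying emcee EnsembleSampler accepts, including a weighted mixture of moves. A minimal sketch, assuming EmceeSolve passes the list straight through to the sampler (the 80/20 weights are purely illustrative):
# Sketch: combine two differential-evolution moves; each proposal uses
# DEMove 80% of the time and DESnookerMove 20% of the time.
s_mix = ps.EmceeSolve(
    nwalkers=20,
    moves=[
        (emcee.moves.DEMove(), 0.8),
        (emcee.moves.DESnookerMove(), 0.2),
    ],
    objective_function=ps.objfunc.GaussianLikelihoodAr1(),
)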
Depending on the likelihood function, a number of additional parameters need to be inferred. These parameters are not added to the Pastas Model instance, but are available from the solver object and can be changed with its set_parameter method. In this example, where we use the GaussianLikelihoodAr1 function, sigma and theta are estimated: the unknown standard deviation of the errors and the autoregressive parameter.
s.parameters
| | initial | pmin | pmax | vary | stderr | name | dist |
|---|---|---|---|---|---|---|---|
| ln_var | 0.05 | 1.000000e-10 | 1.00000 | True | 0.01 | ln | uniform |
| ln_phi | 0.50 | 1.000000e-10 | 0.99999 | True | 0.20 | ln | uniform |
s.set_parameter("ln_var", initial=0.0028, vary=False, dist="norm")
s.parameters
| | initial | pmin | pmax | vary | stderr | name | dist |
|---|---|---|---|---|---|---|---|
| ln_var | 0.0028 | 1.000000e-10 | 1.00000 | False | 0.01 | ln | norm |
| ln_phi | 0.5000 | 1.000000e-10 | 0.99999 | True | 0.20 | ln | uniform |
2c. Run the solver and solve the model#
After setting the parameters and creating an EmceeSolve solver instance, we are ready to run the MCMC analysis by calling ml.solve. We can pass the same arguments that we normally provide to this method (e.g., tmin or fit_constant). Here we use the initial parameters from our least-squares solve, and we do not fit a noise model, because we take autocorrelated errors into account through the likelihood function.
All arguments that are not used by ml.solve, for example steps and tune, are passed on to the run_mcmc method of the sampler (see the Emcee docs). The most important is the steps argument, which determines how many steps each walker takes.
# Use the solver to run MCMC
ml.del_noisemodel()
ml.solve(
solver=s,
initial=False,
fit_constant=False,
tmin="1990",
steps=100,
tune=True,
)
Fit report head Fit Statistics
================================================
nfev nan EVP 87.96
nobs 351 R2 0.88
noise False RMSE 0.07
tmin 1990-01-01 00:00:00 AICc -1830.99
tmax 2005-10-14 00:00:00 BIC -1804.29
freq D Obj nan
warmup 3650 days 00:00:00 ___
solver EmceeSolve Interp. No
Parameters (7 optimized)
================================================
optimal initial vary
rch_A 0.481599 0.473059 True
rch_n 0.684901 0.673876 True
rch_a 308.972030 311.365023 True
rch_srmax 53.212608 75.519158 True
rch_lp 0.250000 0.250000 False
rch_ks 31.080152 51.169319 True
rch_gamma 2.402847 2.204776 True
rch_kv 1.795541 1.999952 True
rch_simax 2.000000 2.000000 False
constant_d 0.778646 0.000000 False
Warnings! (1)
================================================
Response tmax for 'rch' > than warmup period.
3. Posterior parameter distributions#
The results from the MCMC analysis are stored in the sampler object, accessible through the ml.solver.sampler variable. The object ml.solver.sampler.flatchain contains a Pandas DataFrame with the \(n\) parameter samples, where \(n\) is calculated as follows:
\(n = \frac{\left(\text{steps}-\text{burn}\right)\cdot\text{nwalkers}}{\text{thin}} \)
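As a quick sanity check for this notebook, assuming the settings used above (steps=100, nwalkers=20) and no burn-in or thinning applied by flatchain itself:
# steps=100 and nwalkers=20 come from the cells above; burn=0 and thin=1
# because flatchain applies no burn-in or thinning by default.
steps, burn, nwalkers, thin = 100, 0, 20, 1
n = (steps - burn) * nwalkers // thin
print(n, len(ml.solver.sampler.flatchain))  # both should be 2000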
Corner.py#
Corner is a simple but great Python package that makes creating corner plots easy. A couple of lines of code suffice to create a plot of the parameter distributions and the covariances between the parameters.
# Corner plot of the results
fig = plt.figure(figsize=(8, 8))
labels = list(ml.parameters.index[ml.parameters.vary]) + list(
ml.solver.parameters.index[ml.solver.parameters.vary]
)
labels = [label.split("_")[1] for label in labels]
best = list(ml.parameters[ml.parameters.vary].optimal) + list(
ml.solver.parameters[ml.solver.parameters.vary].optimal
)
axes = corner.corner(
ml.solver.sampler.get_chain(flat=True, discard=50),
quantiles=[0.025, 0.5, 0.975],
labelpad=0.1,
show_titles=True,
title_kwargs=dict(fontsize=10),
label_kwargs=dict(fontsize=10),
max_n_ticks=3,
fig=fig,
labels=labels,
truths=best,
)
plt.show()

4. What happens to the walkers at each step?#
The walkers take steps in different directions at each iteration. After a number of steps, the direction of the steps is expected to become random, a sign that an optimum has been found. This can be checked by looking at the autocorrelation, which should become insignificant after a number of steps (a quick check is sketched after the chain plots below). Here we only show how to obtain the different chains; their interpretation is outside the scope of this notebook.
fig, axes = plt.subplots(len(labels), figsize=(10, 7), sharex=True)
samples = ml.solver.sampler.get_chain(flat=True)
for i in range(len(labels)):
ax = axes[i]
ax.plot(samples[:, i], "k", alpha=0.5)
ax.set_xlim(0, len(samples))
ax.set_ylabel(labels[i])
ax.yaxis.set_label_coords(-0.1, 0.5)
axes[-1].set_xlabel("step number")
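For a quick quantitative check of that autocorrelation, emcee can estimate the integrated autocorrelation time of each parameter chain. A minimal sketch; with only 100 steps the chain is almost certainly too short for a reliable estimate, hence quiet=True:
# Estimate the integrated autocorrelation time per parameter. quiet=True
# downgrades emcee's "chain is too short" error to a warning for this run.
tau = ml.solver.sampler.get_autocorr_time(quiet=True)
print(tau)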

5. Plot some simulated time series to display uncertainty#
We can now draw parameter sets from the chain and simulate the uncertainty in the head simulation.
# Plot results and uncertainty
ax = ml.plot(figsize=(10, 3))
plt.title(None)
chain = ml.solver.sampler.get_chain(flat=True, discard=50)
inds = np.random.randint(len(chain), size=100)
for ind in inds:
params = chain[ind]
p = ml.parameters.optimal.copy().values
p[ml.parameters.vary] = params[: ml.parameters.vary.sum()]
_ = ml.simulate(p, tmin="1990").plot(c="gray", alpha=0.1, zorder=-1)
plt.legend(["Measurements", "Simulation", "Ensemble members"], numpoints=3)
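A possible extension, not part of the original notebook: summarise the same posterior draws as a 95% credible band instead of plotting individual ensemble members. A sketch reusing the chain and inds defined above:
# Build a 95% band from the same 100 posterior draws as above.
sims = []
for ind in inds:
    p = ml.parameters.optimal.copy().values
    p[ml.parameters.vary] = chain[ind][: ml.parameters.vary.sum()]
    sims.append(ml.simulate(p, tmin="1990"))
band = pd.concat(sims, axis=1).quantile([0.025, 0.975], axis=1).transpose()
ax = ml.plot(figsize=(10, 3))
ax.fill_between(band.index, band[0.025], band[0.975], color="gray", alpha=0.3)
ax.legend(["Measurements", "Simulation", "95% band"])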
