{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# MCMC uncertainty\n", "*R.A. Collenteur, University of Graz, November 2019*\n", "\n", "In this notebook it is shown how the MCMC-algorithm can be used to estimate the model parameters for a Pastas model. Besides Pastas the following Python Packages have to be installed to run this notebook:\n", "\n", "- [emcee](https://emcee.readthedocs.io/en/stable/user/faq/)\n", "- [lmfit](https://lmfit.github.io/lmfit-py/)\n", "- [corner](https://corner.readthedocs.io)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "\n", "import pastas as ps\n", "import corner\n", "import emcee as mc\n", "\n", "import matplotlib.pyplot as plt\n", "\n", "ps.set_log_level(\"ERROR\")\n", "ps.show_versions(lmfit=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Create a Pastas Model\n", "The first step is to create a Pastas Model object, including the RechargeModel to simulate the effect of precipitation and evaporation on the groundwater heads." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# read observations and create the time series model\n", "obs = pd.read_csv(\"data/head_nb1.csv\", parse_dates=[\"date\"], index_col=\"date\").squeeze()\n", "rain = pd.read_csv(\n", " \"data/rain_nb1.csv\", parse_dates=[\"date\"], index_col=\"date\"\n", ").squeeze()\n", "evap = pd.read_csv(\n", " \"data/evap_nb1.csv\", parse_dates=[\"date\"], index_col=\"date\"\n", ").squeeze()\n", "\n", "# Create the time series model\n", "ml = ps.Model(obs, name=\"head\")\n", "\n", "sm = ps.RechargeModel(prec=rain, evap=evap, rfunc=ps.Exponential(), name=\"recharge\")\n", "ml.add_stressmodel(sm)\n", "ml.solve(noise=True, report=\"basic\")\n", "\n", "ml.plot(figsize=(10, 3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Use the EMCEE Hammer\n", "Apart from the default solver (ps.LeastSquares()), Pastas also contains the option to use the LmFit package to estimate the parameters. This package wraps multiple optimization techniques, one of which is [Emcee](https://lmfit.github.io/lmfit-py/fitting.html#lmfit.minimizer.Minimizer.emcee). The code bock below shows how to use this method to estimate the parameters of Pastas models.\n", "\n", "Emcee takes a number of keyword arguments that determine how the optimization is done. The most important is the `steps` argument, that determines how many steps each of the walkers takes. The argument `nwalkers` can be used to set the number of walkers (default is 100). The `burn` argument determines how many samples from the start of the walkers are removed. The argument `thin` finally determines how many samples are accepted (1 in thin samples)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml.set_parameter(\"noise_alpha\", vary=True)\n", "\n", "ml.solve(\n", " tmin=\"2002\",\n", " noise=True,\n", " initial=False,\n", " fit_constant=True,\n", " solver=ps.LmfitSolve(),\n", " method=\"emcee\",\n", " nwalkers=10,\n", " steps=20,\n", " burn=2,\n", " thin=2,\n", " is_weighted=True,\n", " nan_policy=\"omit\",\n", ");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Visualize the results\n", "The results are stored in the `result` object, accessible through `ml.fit.result`. 
The object `ml.fit.result.flatchain` contains a Pandas DataFrame with the $n$ parameter samples, where $n$ is calculated as follows:\n", "\n", "$n = \frac{\left(\text{steps}-\text{burn}\right)\cdot\text{nwalkers}}{\text{thin}} $\n", "\n", "With the settings used above (`steps=20`, `burn=2`, `thin=2`, `nwalkers=10`), this gives $n = (20-2)\cdot 10 / 2 = 90$ samples.\n", "\n", "## Corner.py\n", "Corner is a simple but great Python package that makes it easy to create corner plots. One line of code suffices to plot the parameter distributions and the covariances between the parameters. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "corner.corner(\n", " ml.fit.result.flatchain,\n", " truths=list(ml.parameters[ml.parameters.vary == True].optimal),\n", ");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. What happens to the walkers at each step?\n", "The walkers move in a different direction at each step. It is expected that after a number of steps the direction of these steps becomes random, a sign that an optimum has been found. This can be checked by looking at the autocorrelation, which should become insignificant after a number of steps. However, with the small number of steps used here, the parameters do not appear to have converged to an optimum yet, even for this simple linear model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "labels = ml.fit.result.flatchain.columns\n", "\n", "fig, axes = plt.subplots(labels.size, figsize=(10, 7), sharex=True)\n", "samples = ml.fit.result.chain\n", "for i in range(labels.size):\n", " ax = axes[i]\n", " ax.plot(samples[:, :, i], \"k\", alpha=0.3)\n", " ax.set_xlim(0, len(samples))\n", " ax.set_ylabel(labels[i])\n", " ax.yaxis.set_label_coords(-0.1, 0.5)\n", "\n", "axes[-1].set_xlabel(\"step number\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Plot simulated time series to display the uncertainty\n", "Finally, a random subset of the parameter samples can be used to simulate the heads and visualize the uncertainty in the simulated time series." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ax = ml.plot(figsize=(10, 3))\n", "\n", "inds = np.random.randint(len(ml.fit.result.flatchain), size=100)\n", "for ind in inds:\n", " params = ml.fit.result.flatchain.iloc[ind].values\n", " ml.simulate(params).plot(c=\"k\", alpha=0.1, zorder=0)" ] } ], "metadata": { "kernelspec": { "display_name": "pastas_dev", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:41:22) [MSC v.1929 64 bit (AMD64)]" }, "vscode": { "interpreter": { "hash": "29475f8be425919747d373d827cb41e481e140756dd3c75aa328bf3399a0138e" } } }, "nbformat": 4, "nbformat_minor": 4 }