{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "4eeSInBQfqyU" }, "source": [ "## Curve fitting and the $\\chi^2$ error surface\n", "#### Material to accompany Hughes and Hase Section 6.5, this time with a nonlinear fit (Gaussian).\n", "\n", "Jackie Villadsen, March 2024, adapted into a worksheet. Tom Solomon, March 2021, modified to Gaussian curve fit; based on curve_fit_w_contour (Marty Ligare, August 2020) that does a linear fit." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hzCtWaolfqyV" }, "outputs": [], "source": [ "import numpy as np\n", "from scipy import optimize\n", "from scipy import stats\n", "\n", "import urllib # for importing from a URL\n", "\n", "import matplotlib as mpl\n", "from mpl_toolkits.mplot3d import Axes3D\n", "import matplotlib.pyplot as plt\n", "from matplotlib import cm\n", "from matplotlib.ticker import LinearLocator, FormatStrFormatter" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kekI1U_WfqyV" }, "outputs": [], "source": [ "# Following is an Ipython magic command that makes figures interactive for zooming etc.\n", "# (Commented out because doesn't work in Colab.)\n", "# %matplotlib notebook\n", "\n", "# Modification of matplotlib defaults to make plot labels easier to read\n", "mpl.style.use('classic')\n", "plt.rc('figure', figsize = (6, 4.5)) # Reduces overall size of figures\n", "plt.rc('axes', labelsize=16, titlesize=14)\n", "plt.rc('figure', autolayout = True) # Adjusts supblot params for new size" ] }, { "cell_type": "markdown", "metadata": { "id": "tsjbkpRQfqyW" }, "source": [ "#### Define functions" ] }, { "cell_type": "markdown", "source": [ "A Gaussian function has the form:\n", "\n", "$$\n", "f(x) = A e^{-(x-x_0)^2/(2b^2)} + c\n", "$$\n", "\n", "In the code below, modify the multi-line comment (inside 3 ''') to explain what the input parameters are." ], "metadata": { "id": "Qeinj4CVgbSK" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "t2VwAS1pfqyW" }, "outputs": [], "source": [ "def f(x,A,x0,b,c):\n", " '''y = f(x,A,x0,b,c) - Add a comment briefly describing what this function is\n", "\n", " Add text here to briefly describe what A, x0, b, and c are'''\n", " return A*np.exp(-1.0*(x-x0)**2/ (2*b**2) ) + c\n" ] }, { "cell_type": "markdown", "source": [ "Now, try running help(f) and see what it prints. Any time you put a comment on the line IMMEDIATELY after the def line, you are defining the help text. From now on, try to practice doing this in your data analysis code." ], "metadata": { "id": "ZgY2uuS_hdoT" } }, { "cell_type": "code", "source": [ "# add your help command here\n", "help(...)\n" ], "metadata": { "id": "JOrKn8BXhiMw" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "The second function you need to define is $\\chi^2$, in order to assess goodness of fit. The function for $\\chi^2$ is:\n", "\n", "$$\n", "\\chi^2 = \\sum_{i=0}^{N_{data}} \\frac{\\left(y_i - f(x_i)\\right)^2}{\\sigma_{y,i}^2}\n", "$$\n", "\n", "where $(x_i,y_i)$ are the data points, $\\sigma_{y,i}$ are the uncertainties on the data points, and $f(x_i)$ are the model-fit values of $y$ for the x-positions in the data.\n", "\n", "Below, complete the code for the function to calculate chi2. Your function should include a call to the f function that you already defined." 
], "metadata": { "id": "J6QB97s2hpym" } }, { "cell_type": "code", "source": [ "def chi2(x, y, u, A, x0, b, c):\n", " '''Chisquare as a function of data (x, y, and yerr=u), and\n", " Gaussian model parameters A, x0, b, and c'''\n", " return # replace me with code" ], "metadata": { "id": "zR9nXrPshoon" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "Qf4_1W1UfqyW" }, "source": [ "### Linear fit to data for $A$, $x_0$, $b$ and $c$" ] }, { "cell_type": "markdown", "metadata": { "id": "efain_7_fqyW" }, "source": [ "#### Data to be fit:" ] }, { "cell_type": "markdown", "source": [ "First, go to this url: [data link](https://www.eg.bucknell.edu/~phys310/skills/data_analysis/GaussianData.dat) and view the data set. The three columns are x, y, and u (where u is the uncertainty on y).\n", "\n", "\n", "Run the code below to load data from a URL. (I recommend using this in the future when assignments provide online data sets!) Print the data to see what is in each variable." ], "metadata": { "id": "7qSSL3HJjVj7" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GhoBEWQCfqyW" }, "outputs": [], "source": [ "g = urllib.request.urlopen('https://www.eg.bucknell.edu/~phys310/skills/data_analysis/GaussianData.dat')\n", "data = np.loadtxt(g)\n", "# Format: [[x1,y1,u1], [x2,y2,u2], ... ] where u1 is uncertainty in y1\n", "\n", "x, y, u = data.T" ] }, { "cell_type": "code", "source": [ "# try printing the following 5 things: data, data.T, x, y, u.\n", "# Compare their shapes and how they relate to the data file.\n", "\n", "data" ], "metadata": { "id": "g4FBA5Zljj5X" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Now, make a plot of the data, including error bars. The command you need for this is plt.errorbar. In the code below, add a line to call plt.errorbar." ], "metadata": { "id": "bN8cjpqXklBr" } }, { "cell_type": "code", "source": [ "# first, run help to read about plt.errorbar\n", "plt.errorbar?" ], "metadata": { "id": "CZnuxGf8k1Ii" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "crDvEChMfqyW" }, "outputs": [], "source": [ "plt.figure()\n", "plt.title(\"data\",fontsize=14)\n", "plt.xlabel('$x$')\n", "plt.ylabel('$y$')\n", "plt.axhline(0,color='gray') # highlighting zero often helps the viewer interpret the data\n", "plt.xlim(0,4)\n", "plt.ylim(-0.5,3.5)\n", "# Add a line here to plot the data with error bars" ] }, { "cell_type": "markdown", "source": [ "Wait! Your data points are currently connected by line segments, which is bad style. Discuss with the class: why is connecting your data with line segments considered bad style, or even misleading, in physics research?" ], "metadata": { "id": "oKHSn_QulNdW" } }, { "cell_type": "markdown", "source": [ "BEFORE MOVING ON: Try out some alternatives: go back and add fmt='.' or fmt='o' to your plt.errorbar command, notice how they look and which one you like." ], "metadata": { "id": "Sy_l0uyJlnP7" } }, { "cell_type": "markdown", "metadata": { "id": "brIIk_7PfqyX" }, "source": [ "#### Initial estimates for parameters" ] }, { "cell_type": "markdown", "source": [ "Sometimes, curve fitting will fail if you start with a parameter estimate that is too far off. Looking at the graph of your data, make guesses for the values of each of the parameters and add them below. It's fine if they're sort of off - typically anything within a factor of a few is fine." 
], "metadata": { "id": "P6AFRfr-mVzL" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "A2Qp6bjZfqyX" }, "outputs": [], "source": [ "# I am purposefully picking some so-so values because as long as we are\n", "# close, the fitting should be able to make it work.\n", "A =\n", "x0 =\n", "b =\n", "c=\n", "p0 = A, x0, b, c\n", "print(p0) # this is a variable with all the params together" ] }, { "cell_type": "markdown", "source": [ "Now, plot the model with your initial guess parameters on top of the data, to make sure your guesses are decent. If they are wayyyyy off (like ten times taller, etc) then revise them. However, a bad fit anywhere close to the data is totally fine, no need to spend time fixing it by hand." ], "metadata": { "id": "Cp5_Ii_am0AJ" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "taAiyuLXfqyX" }, "outputs": [], "source": [ "plt.figure()\n", "xc = np.linspace(0,4,201)\n", "yc = f(xc, *p0)\n", "plt.plot(xc, yc)\n", "plt.errorbar(x,y,u,fmt='o')\n", "plt.xlabel('$x$')\n", "plt.ylabel('$y$')\n", "plt.title(\"Preliminary graph with estimated parameters\");" ] }, { "cell_type": "markdown", "metadata": { "id": "Pc1kealZfqyX" }, "source": [ "#### Perform fit" ] }, { "cell_type": "markdown", "source": [ "The code below runs the $\\chi^2$ minimization routine, finding the set of model parameters that give you the lowest possible $\\chi^2$ between the data and the fit. The variable popt contains a 4-element list of these optimized parameters. The variable pcov is a 4x4 matrix, where the diagonal elements are the uncertainty squared for each parameter.\n", "\n", "Complete the lines below that have ..." ], "metadata": { "id": "LD1F2wt5nasE" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VDpEWv0BfqyX" }, "outputs": [], "source": [ "popt, pcov = optimize.curve_fit(f, x, y, p0, sigma=u, absolute_sigma=True)\n", "A = popt[0]\n", "x0 = ...\n", "b = ...\n", "c = ...\n", "\n", "αA = np.sqrt(pcov[0,0])\n", "αx0 = ...\n", "αb = ...\n", "αc = ...\n", "\n", "print(\"A =\", A,\"+/-\", αA,\"\\n\")\n", "print(\"x0 =\", x0,\"+/-\", αx0,\"\\n\")\n", "print(\"b =\", b,\"+/-\", αb,\"\\n\")\n", "print(\"c =\", c,\"+/-\", αc,\"\\n\")\n", "\n", "print(\"covariance matrix =\",\"\\n\",pcov,\"\\n\")\n", "pcov_data = pcov\n", "\n", "a = chi2(x,y,u,*popt)\n", "print(\"chi2 =\", a)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "09RZ69D2fqyX" }, "source": [ "#### Written in standard notation\n", "\n", "Now, report your fit results in standard notation. The first is shown as an example.\n", "\n", "### A = 2.43 $\\pm$ 0.10\n", "\n", "### $x_0$ =\n", "\n", "### b =\n", "\n", "### c =\n", "\n", "Note: I generated this data with A = 2.50, $x_0$ = 1.75, b = 0.27 and c = 0.65, with some additive noise with standard deviation of 0.14. Do all the best-fit parameters agree with the true values to within 2 uncertainties?" ] }, { "cell_type": "markdown", "source": [ "Now, plot the best fit against your data, to make sure it looks reasonable." ], "metadata": { "id": "o3QKMWk6onCF" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xYnedGp2fqyX" }, "outputs": [], "source": [ "xc = np.linspace(0,4,201) # quasi-continuous set of x's function plot\n", "yc = ... 
 "plt.figure()\n", "plt.title(\"data with best fit line\",fontsize=14)\n", "plt.xlabel('$x$')\n", "plt.ylabel('$y$')\n", "plt.axhline(0, color='magenta')\n", "plt.xlim(0,4) # Pad x-range on plot\n", "plt.errorbar(x, y, yerr=u, fmt='o');\n", "plt.plot(xc, yc);" ] }, { "cell_type": "markdown", "metadata": { "id": "V0xdW1Q2fqyY" }, "source": [ "#### Residuals:" ] }, { "cell_type": "code", "source": [ "# calculate the normalized residuals: (y_data - y_fit)/(sigma_y)\n", "\n", "res_norm = ..." ], "metadata": { "id": "K4nFQQBGo9Xr" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5FFoHneyfqyY" }, "outputs": [], "source": [ "plt.figure()\n", "plt.axhline(0,color='magenta')\n", "plt.title('normalized residuals')\n", "plt.xlabel('$x$')\n", "plt.ylabel('normalized residuals')\n", "plt.grid(True)\n", "plt.errorbar(x,res_norm,1,fmt='o') # Discuss: why are we putting an error bar of 1?\n", "plt.xlim(-0.5,4.5);" ] }, { "cell_type": "markdown", "metadata": { "id": "-RGLEN21fqyY" }, "source": [ "The normalized residuals look fine. First, most (roughly two-thirds, as expected for Gaussian uncertainties) but not all are between -1 and +1. Second, there are no patterns in the residuals; rather, they fluctuate randomly between negative and positive." ] }, { "cell_type": "markdown", "source": [ "## Test your $\\chi^2$ function" ], "metadata": { "id": "2rTHxg6IcLaC" } }, { "cell_type": "markdown", "source": [ "$\\chi^2$ is equal to the sum of the squares of the normalized residuals. Let's test your chi2 function that you defined at the top by comparing it to the normalized residuals.\n", "\n", "First, print the sum of the squares of the normalized residuals:" ], "metadata": { "id": "JXqned7CcXQe" } }, { "cell_type": "code", "source": [ "# square res_norm, then sum it, then print\n", "..." ], "metadata": { "id": "FeXOt09pcKEG" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Now, test your chi2 function:" ], "metadata": { "id": "vAOj0DV6cuAY" } }, { "cell_type": "code", "source": [ "# run a function call to chi2 with the best-fit parameters\n", "chi2(x, y, u, A, x0, b, c)" ], "metadata": { "id": "dNIwpQ4_cwLL" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Does your chi2 function agree with the sum of squares? It's important to test this function before the next part.\n", "\n", "Finally, assess the goodness of fit: $\\chi^2$ should be roughly equal to the number of degrees of freedom:\n", "\n", "(# of d.o.f.) = (# of data points) - (# of model fit parameters)\n", "\n", "DISCUSS: How many degrees of freedom do we have here? Is $\\chi^2$ within roughly a factor of 2 of that? (Next lesson, we'll see how to more quantitatively decide what is a \"healthy\" range for $\\chi^2$, depending on the number of data points.)" ], "metadata": { "id": "S3ZYlF97dAJ3" } }, { "cell_type": "markdown", "metadata": { "id": "W7S3HoM4fqyY" }, "source": [ "#### Make \"data\" for contour plot\n", "\n", "We have four parameters, so we should plot two at a time. Let's pair up A with c and x0 with b. Start with A and c.\n", "\n", "+ Choose ranges of $A$ and $c$ for the contour plot. Each range should extend from below to above the best-fit value. If the range ends up crappy, you can always change the range and re-do the calculations, or you can zoom in and out of the contour plot.\n", "+ Calculate values of $\\chi^2$ at grid points" ] },
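 { "cell_type": "markdown", "metadata": {}, "source": [ "Why focus on where $\\chi^2 - \\chi^2_{min} = 1$? For data with Gaussian uncertainties, moving one parameter away from its best-fit value by one standard error raises $\\chi^2$ by 1 above its minimum (this is the rule behind the treatment in Hughes and Hase Section 6.5). So the extent of the $\\Delta\\chi^2 = 1$ contour along each axis gives an estimate of that parameter's uncertainty." ] },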
 { "cell_type": "code", "execution_count": null, "metadata": { "id": "v1QVd-xBfqyY" }, "outputs": [], "source": [ "AA = np.linspace(2, 3.0, 201)\n", "cc = np.linspace(0, 1, 201)\n", "\n", "Z = np.zeros((len(cc),len(AA))) # rows index cc, columns index AA\n", "\n", "for i in range(len(AA)):\n", "    for j in range(len(cc)):\n", "        Z[j,i] = chi2(x, y, u, AA[i], x0, b, cc[j]) - chi2(x, y, u, *popt)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NRrUMVP3fqyY" }, "outputs": [], "source": [ "plt.figure()\n", "AA, cc = np.meshgrid(AA, cc)\n", "CS = plt.contour(AA, cc, Z, levels=[1,2,5,10,20,50])\n", "plt.xlabel('$A$')\n", "plt.ylabel('$c$')\n", "plt.grid()\n", "plt.axhline(c)\n", "plt.axvline(A)\n", "plt.clabel(CS, inline=1, fontsize=10);" ] }, { "cell_type": "markdown", "metadata": { "id": "U6aJPmYAfqyY" }, "source": [ "Let's focus on the $\\chi^2 - \\chi^2_{min} = 1$ contour. That's where we can determine uncertainties for A and c." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sPhlRXGxfqyY" }, "outputs": [], "source": [ "AA = np.linspace(2.2, 2.6, 201)\n", "cc = np.linspace(0.6, 0.8, 201)\n", "\n", "Z = np.zeros((len(cc),len(AA))) # rows index cc, columns index AA\n", "\n", "for i in range(len(AA)):\n", "    for j in range(len(cc)):\n", "        Z[j,i] = chi2(x, y, u, AA[i], x0, b, cc[j]) - chi2(x, y, u, *popt)\n", "\n", "plt.figure()\n", "AA, cc = np.meshgrid(AA, cc)\n", "CS = plt.contour(AA, cc, Z, levels=[1])\n", "plt.xlabel('$A$')\n", "plt.ylabel('$c$')\n", "plt.grid()\n", "plt.axhline(c)\n", "plt.axvline(A)\n", "plt.clabel(CS, inline=1, fontsize=10);" ] }, { "cell_type": "markdown", "metadata": { "id": "jaIjASv3fqyY" }, "source": [ "Okay, so the contour $\\chi^2 - \\chi^2_{min} = 1$ curve goes from a minimum in A of around 2.33 to a maximum of around 2.52, so if we take half of that range, that would get us $\\alpha_A \\approx 0.10$, which is consistent with what the curve fitting routine returned for the uncertainty. And the same contour curve goes from a minimum in c of around 0.67 to a maximum of around 0.73, so half of that range would get us $\\alpha_c \\approx 0.03$, which isn't exactly what the curve fitting returned, but frankly an uncertainty of 0.03 vs 0.04 isn't worth losing sleep over." ] },
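 { "cell_type": "markdown", "metadata": {}, "source": [ "(Optional aside, not part of the worksheet: you can also read an uncertainty off the $\\Delta\\chi^2 = 1$ condition numerically instead of by eye. The sketch below scans $\\chi^2$ along $A$ alone, holding the other parameters at their best-fit values -- the same approximation the contour plots make -- and finds the width of the region where $\\chi^2 - \\chi^2_{min} \\leq 1$. The scan range of $\\pm 0.3$ around the best-fit $A$ is an arbitrary choice.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: 1-D scan of chi2 along A, with the other parameters held at their best-fit values\n", "A_scan = np.linspace(A - 0.3, A + 0.3, 2001)\n", "dchi2 = np.array([chi2(x, y, u, Ai, x0, b, c) for Ai in A_scan]) - chi2(x, y, u, *popt)\n", "inside = A_scan[dchi2 <= 1] # A values inside the dchi2 = 1 region\n", "print('alpha_A is roughly', (inside.max() - inside.min())/2)" ] },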
 { "cell_type": "markdown", "source": [ "Now let's look at the other two parameters $x_0$ and $b$. Generate an array with $x_0$ going from below to above the best fit value of 1.79 and $b$ going from below to above the best fit value of 0.29." ], "metadata": { "id": "JDqApnQkd-zU" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "j1QLc2vufqyY" }, "outputs": [], "source": [ "x0b = np.linspace(1.5, 2.0, 201)\n", "bb = np.linspace(0.1, 0.5, 201)\n", "\n", "Z = np.zeros((len(bb),len(x0b))) # rows index bb, columns index x0b\n", "\n", "for i in range(len(x0b)):\n", "    for j in range(len(bb)):\n", "        Z[j,i] = chi2(x, y, u, A, x0b[i], bb[j], c) - chi2(x, y, u, *popt)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "i3jkCnC2fqyY" }, "outputs": [], "source": [ "plt.figure()\n", "x0b, bb = np.meshgrid(x0b, bb)\n", "CS = plt.contour(x0b, bb, Z, levels=[1,2,5,10,20,50])\n", "plt.xlabel('$x_0$')\n", "plt.ylabel('$b$')\n", "plt.grid()\n", "plt.axhline(b)\n", "plt.axvline(x0)\n", "plt.clabel(CS, inline=1, fontsize=10);" ] }, { "cell_type": "markdown", "metadata": { "id": "IbK9BtSEfqyY" }, "source": [ "Okay, now zoom in on the $\\chi^2 - \\chi^2_{min} = 1$ contour. That's where we can determine uncertainties for $x_0$ and $b$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cVAJSvzVfqyY" }, "outputs": [], "source": [ "x0b = np.linspace(1.7, 1.8, 201)\n", "bb = np.linspace(0.25, 0.35, 201)\n", "\n", "Z = np.zeros((len(bb),len(x0b))) # rows index bb, columns index x0b\n", "\n", "for i in range(len(x0b)):\n", "    for j in range(len(bb)):\n", "        Z[j,i] = chi2(x, y, u, A, x0b[i], bb[j], c) - chi2(x, y, u, *popt)\n", "\n", "plt.figure()\n", "x0b, bb = np.meshgrid(x0b, bb)\n", "CS = plt.contour(x0b, bb, Z, levels=[1])\n", "plt.xlabel('$x_0$')\n", "plt.ylabel('$b$')\n", "plt.grid()\n", "plt.axhline(b)\n", "plt.axvline(x0)\n", "plt.clabel(CS, inline=1, fontsize=10);\n", "\n", "# I then used the zoom feature, i.e., hold \"ctrl\" key, and right-click and drag to zoom in.\n", "# You could also simply narrow the range to zoom in.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "y7lVlk-JfqyZ" }, "source": [ "Okay, so the contour $\\chi^2 - \\chi^2_{min} = 1$ curve spans a range of about 0.034 in $x_0$, so if we take half of that range, that would get us $\\alpha_{x_0} \\approx 0.017$, which is consistent with what the curve fitting routine returned for the uncertainty. And the same contour curve goes from a minimum in b of around 0.27 to a maximum of around 0.31, so half of that range would get us $\\alpha_b \\approx 0.02$, which isn't exactly what the curve fitting returned, but frankly an uncertainty of 0.02 vs 0.03 isn't worth losing sleep over.\n", "\n", "So, yeah. That's basically where the uncertainties in the curve-fitting come from." ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.8" }, "colab": { "provenance": [] } }, "nbformat": 4, "nbformat_minor": 0 }