Exploring the functional approach to error propagation

In [1]:
import numpy as np
In [2]:
def g(l,t):
    '''Function giving "little g" from the measured length and 
    period of a simple pendulum'''
    return 4*np.pi**2*l/t**2

Data

In [3]:
l = 0.96   # length of pendulum in meters
t = 1.970  # period of pendulum in seconds
delta_t = 0.004  # uncertainty of the period in seconds

Two determinations of uncertainty in $g$ due to uncertainty in $T$:

In [4]:
g_best = g(l,t)
g_plus = g(l, t + delta_t)
g_minus = g(l, t - delta_t)
In [5]:
print('g_best = ',g_best)
print('uncertainty with plus sign:', g_plus - g_best)
print('uncertainty with minus sign:', g_minus - g_best)
g_best =  9.765590687774262
uncertainty with plus sign: -0.03953676381878424
uncertainty with minus sign: 0.03977833230749894

Since I'm only going to use one significant figure in my uncertainty, these two values are equivalent, and my result is

$$ g = 9.77 \pm 0.04\, \mbox{m/s$^2$}. $$
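As a quick cross-check (an addition, not part of the calculation above), the first-order derivative formula $\delta g \simeq |\partial g/\partial T|\,\Delta T$, with $\partial g/\partial T = -8\pi^2 L/T^3$ as worked out later in this notebook, gives the same answer at this precision. A minimal sketch reusing the l, t, and delta_t defined above:

# First-order (derivative-based) estimate of the uncertainty in g,
# using dg/dT = -8*pi^2*L/T^3 (derived later in this notebook).
dg_dT = -8*np.pi**2*l/t**3
print('first-order estimate:', abs(dg_dT)*delta_t)

Both functional estimates and this derivative-based value round to the same single significant figure, $0.04\, \mbox{m/s$^2$}$.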

If $\Delta T$ is large enough that the first order linear approximation

$$ g(L,T\pm \Delta T)\simeq g(L,T) \pm \frac{\partial g}{\partial T}\, \Delta T $$

breaks down, then the two uncertainties will not be the same. For example, if $\Delta T = 0.3\,\mbox{s}$, the two uncertainties are not equal:

In [6]:
alpha_t = 0.3
g_best = g(l,t)
g_plus = g(l, t + alpha_t)
g_minus = g(l, t - alpha_t)

print('g_best = ',g_best)
print('uncertainty with plus sign:', g_plus - g_best)
print('uncertainty with minus sign:', g_minus - g_best)
g_best =  9.765590687774262
uncertainty with plus sign: -2.41064863569036
uncertainty with minus sign: 3.8237387611780616
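For comparison (again an addition, reusing the variables defined above), the single first-order number $|\partial g/\partial T|\,\Delta T$ no longer describes either deviation well at $\Delta T = 0.3\,\mbox{s}$:

# First-order estimate at Delta T = 0.3 s, for comparison with the
# two (unequal) functional deviations printed above.
dg_dT = -8*np.pi**2*l/t**3
print('first-order estimate:', abs(dg_dT)*alpha_t)

The linear estimate falls between the magnitudes of the two functional deviations; the quadratic term considered next accounts for most of the asymmetry.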

We can be a little more quantitative about this by considering higher-order terms in our Taylor series expansions of $g(L,T)$:

\begin{eqnarray*} g(L,T+\Delta T) &\simeq& g(L,T) + \frac{\partial g}{\partial T} \, \Delta T + \frac{1}{2}\frac{\partial^2 g}{\partial T^2} \Delta T^2\\ g(L,T-\Delta T) &\simeq& g(L,T) - \frac{\partial g}{\partial T}\, \Delta T + \frac{1}{2}\frac{\partial^2 g}{\partial T^2}\, \Delta T^2. \end{eqnarray*}

The two deviations from $g(L,T)$ will have the same magnitude when the terms quadratic in $\Delta T$ are small compared to the linear terms, i.e.,

$$ \left|\frac{1}{2}\frac{\partial^2 g}{\partial T^2}\, \Delta T^2\right| \ll \left|\frac{\partial g}{\partial T}\, \Delta T\right|. $$

Rearranging this gives the condition on $\Delta T$:

$$ \Delta T \ll 2\left|\frac{\partial g/\partial T}{\partial^2 g/\partial T^2}\right|. $$

In this problem we have

$$ \frac{\partial g}{\partial T} = -\frac{8\pi^2 L}{T^3}\quad\mbox{and}{\quad} \frac{\partial^2 g}{\partial T^2} = \frac{24\pi^2L}{T^4} $$

so our condition on the size of $\Delta T$ becomes

$$ \Delta T \ll \frac{2}{3}T. $$
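(Optional check, an addition assuming sympy is available: the derivatives and the $2T/3$ ratio can be confirmed symbolically.)

import sympy as sp

Ls, Ts = sp.symbols('L T', positive=True)
g_sym = 4*sp.pi**2*Ls/Ts**2

dg  = sp.diff(g_sym, Ts)      # expect -8*pi**2*L/T**3
d2g = sp.diff(g_sym, Ts, 2)   # expect 24*pi**2*L/T**4
print(dg, d2g, sp.simplify(2*sp.Abs(dg/d2g)), sep='\n')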

This means that in this problem the two uncertainties will be essentially the same when $\Delta T\ll 4/3\, \mbox{s}$, or when $\Delta T$ is, say, less than a tenth of $4/3\,\mbox{s}$ ($\simeq 0.13\,\mbox{s}$).
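As a final numerical check (an addition reusing the g, l, t, and g_best defined above), we can watch the asymmetry between the two deviations grow as $\Delta T$ approaches $2T/3$:

# Plus/minus deviations in g for several Delta T values (in seconds).
# The asymmetry should be negligible for Delta T well below 2T/3 ~ 1.3 s.
for dt in (0.004, 0.13, 0.3, 0.6):
    dev_plus = abs(g(l, t + dt) - g_best)
    dev_minus = abs(g(l, t - dt) - g_best)
    print('dt = {:.3f}:  {:.4f}  {:.4f}  ratio = {:.3f}'.format(
        dt, dev_plus, dev_minus, dev_minus/dev_plus))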

Version information

version_information is from J.R. Johansson (jrjohansson at gmail.com); see Introduction to scientific computing with Python for more information and instructions for package installation.

version_information is installed on the Linux network at Bucknell.

In [7]:
%load_ext version_information
In [8]:
%version_information
Out[8]:
Software    Version
Python      3.7.7 64bit [GCC 7.3.0]
IPython     7.16.1
OS          Linux 3.10.0 1062.9.1.el7.x86_64 x86_64 with centos 7.7.1908 Core
Fri Aug 07 10:10:43 2020 EDT