Hughes and Hase Problem 4.10

In [1]:
import numpy as np

A group of six students make the following measurements of the speed of light (all $\times 10^8\, \mbox{m/s}$): $3.03 \pm 0.04$, $2.99 \pm 0.03$, $2.99 \pm 0.02$, $3.00 \pm 0.05$, $3.05 \pm 0.04$, and $2.97 \pm 0.02$. What should the cohort report as their combined result?

The best estimate for the combined speed is the weighted mean, with each measurement weighted by the inverse of its variance:
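
For measurements $x_i$ with uncertainties $\alpha_i$, the standard formulas (these match what the code below computes) are

$$\bar{x}_{\rm w} = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\alpha_i^2}, \qquad \alpha_{\bar{x}} = \frac{1}{\sqrt{\sum_i w_i}}.$$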

In [2]:
speeds = np.array([3.03, 2.99, 2.99, 3.00, 3.05, 2.97])
uncertainties = np.array([0.04, 0.03, 0.02, 0.05, 0.04, 0.02])
weights = 1/uncertainties**2                   # inverse-variance weights
weightedAvg = np.dot(speeds, weights)/np.sum(weights)
# OR: weightedAvg = speeds@weights/np.sum(weights)
alpha_speeds = np.sqrt(1/np.sum(weights))      # uncertainty in the weighted mean
weightedAvg, alpha_speeds
Out[2]:
(2.9921259842519685, 0.011351102608219766)

So, the combined result should be reported as $(2.992 \pm 0.011) \times 10^8\, \mbox{m/s}$.

To check this, make all the uncertainties the same and see if we end up with the familiar $\alpha/\sqrt{N}$. If I make all the uncertainties 0.04, I get 0.0163299 for $\alpha_{\rm speeds}$. And $0.04/\sqrt{6}$ is also 0.0163299. So, yes, it works in the proper limit (see the cell below).
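
Here is a minimal sketch of that check as code (it reuses the `speeds` array from above; `np.full` just builds an array of identical uncertainties):

In [ ]:
# Sanity check: with equal uncertainties, the weighted-mean uncertainty
# should reduce to alpha/sqrt(N).
equal_uncertainties = np.full(len(speeds), 0.04)
w = 1/equal_uncertainties**2
np.sqrt(1/np.sum(w)), 0.04/np.sqrt(len(speeds))   # both give 0.0163299...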

Continuing question 4.10: If another student then reports $c = (3.0 \pm 0.3) \times 10^8\, \mbox{m/s}$, is there any change to the cohort's combined measurement?

We don't really need to do the calculation -- I can tell just by looking that this new, much cruder measurement (look at how large its uncertainty is compared with the others!) isn't going to change our final result. But we'll do the calculation to show that this is the case.
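
As a quick sketch of that intuition, using the inverse-variance weighting defined above, compare the weight of the new measurement with that of a typical $\pm 0.04$ measurement:

In [ ]:
# Relative weight of the new (0.3) measurement vs. a typical (0.04) one:
(0.04/0.3)**2   # about 0.018 -- less than 2% of a typical point's weight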

In [3]:
# I just copied the previous cell, and added the new data point
speeds = np.array([3.03, 2.99, 2.99, 3.00, 3.05, 2.97, 3.0])
uncertainties = np.array([0.04, 0.03, 0.02, 0.05, 0.04, 0.02, 0.3])
weights = 1/uncertainties**2
weightedAvg = np.dot(speeds, weights)/np.sum(weights)
alpha_speeds = np.sqrt(1/np.sum(weights))
weightedAvg, alpha_speeds
Out[3]:
(2.992137240886347, 0.011342985980361566)

So, yes, we end up with $(2.992 \pm 0.011) \times 10^8\, \mbox{m/s}$, the same as before to the precision we quote. The new measurement barely matters: its uncertainty is so large (0.3, vs. 0.02--0.05 for the others) that it carries less than 0.2% of the total weight, so the result is dominated by the other (better) measurements.

Version information

version_information is from J.R. Johansson (jrjohansson at gmail.com); see Introduction to scientific computing with Python for more information and instructions for package installation.

version_information is installed on the Linux network at Bucknell.

In [4]:
%load_ext version_information
In [5]:
%version_information numpy
Out[5]:
Software    Version
--------    -------
Python      3.7.7 64bit [GCC 7.3.0]
IPython     7.16.1
OS          Linux 3.10.0-1062.9.1.el7.x86_64 x86_64 with centos 7.7.1908 Core
numpy       1.18.5

Fri Aug 07 09:50:14 2020 EDT