Calculating exclusion limits for a new theory in hardcore science


In many fields of science, it is important to understand the relevance of new theories or hypotheses in describing experimental data, assuming that such data are already well represented by the predictions of some well-accepted theory. A popular statistical method for setting upper limits (also called exclusion limits) on the model parameters of a new theory is the CLs method.

Let us show how to set an exclusion limit in the case of a counting experiment. We will use particle physics, as an example of hardcore science, where a new theory is considered to be excluded at the 95% confidence level (CL) if CLs = 0.05, and at more than the 95% CL if CLs < 0.05, assuming the presence of a signal that represents the new theory.
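For reference, CLs is the ratio of the confidence level in the signal+background hypothesis to that in the background-only hypothesis. A minimal sketch of the decision rule (the function names here are ours, for illustration, and are not part of DataMelt):

```python
def cls_ratio(cl_sb, cl_b):
    # CLs = CL(s+b) / CL(b); guard against a vanishing background p-value
    if cl_b <= 0.0:
        return float("inf")
    return cl_sb / cl_b

def excluded_at_95(cl_sb, cl_b):
    # the signal hypothesis is excluded at (at least) the 95% CL when CLs <= 0.05
    return cls_ratio(cl_sb, cl_b) <= 0.05
```

For example, CL(s+b) = 0.02 with CL(b) = 0.5 gives CLs = 0.04, i.e. exclusion at the 96% CL, which matches the `exc=int((1-valCLs)*100)` line in the code below.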

In this small tutorial, we will set exclusion limits at the 95% CL on empirical distributions ("histograms"). We will use three histograms: (1) the first represents counts of actual measurements, (2) the second is the prediction of a well-established theory, and (3) the third is a prediction ("signal") from some new theory. We will use the DataMelt program for this task, which allows one to use the Python language together with Java scientific libraries.

Let us generate these histograms using one-tailed Gaussian distributions. The first histogram includes 10,000 events distributed according to a one-tailed Gaussian distribution with a standard deviation of 1. The second histogram represents our expectation for the known theory ("background"); it has a similar shape and number of events. The third histogram ("signal") corresponds to the prediction of the new theory. We assume that the new theory contributes to the tail of the experimental data (near x=2). We want to find the maximum number of signal events (which is a parameter of this new theory) that can be reliably excluded, assuming that the given experimental data are already well described by the well-established theory.
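A one-tailed Gaussian sample is obtained by taking the absolute value of a standard normal variate. A minimal pure-Python sketch of the histogram filling (using the standard `random` module instead of Java's `java.util.Random`, for illustration):

```python
import random

random.seed(2)  # reproducible, mirroring Random(2L) in the Jython code
nbins, xmin, xmax = 30, 0.0, 4.0
width = (xmax - xmin) / nbins
counts = [0] * nbins

for _ in range(10000):
    x = abs(random.gauss(0.0, 1.0))  # one-tailed Gaussian, sigma = 1
    if xmin <= x < xmax:
        counts[int((x - xmin) / width)] += 1

# most events fall in the first bins; the tail near x = 2 is sparse,
# which is exactly where the hypothetical signal will be injected
```

The counts fall off quickly toward large x, so a signal placed near x = 2 sits on a small background.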

The Python/Jython code below creates all the required histograms, scans through the signal strength, and calculates the CLs exclusion limit at each step:

from java.awt import Color
from java.util import Random
from jhplot import *
from jhpro.stat import *

data=H1D("Fake data",30,0.0,4.0)
data.setColor(Color.black); data.setStyle("p")

backg=H1D("Backg",30,0.0,4.0)
backg.setColor(Color.green); backg.setFill(1); backg.setErrAll(0)
backg.setFillColor(Color.green)

signal= H1D("Signal",30,0.0,4.0)
signal.setFill(1); signal.setFillColor(Color.red)
signal.setColor(Color.red)

r=Random(2L) # make random numbers reproducible
Max=10000    # events in empirical data
strength=10  # initial signal strength at start of the scan

for i in xrange(Max): # fill data and prediction
    data.fill(abs(r.nextGaussian()))
    backg.fill(abs(r.nextGaussian()))

for ev in range(100):  # scan through the parameter strength
   strength=strength+5 # increment Nr of signal events by 5
   signal.clear()
   for i in xrange(strength):
        signal.fill(abs(2+0.3*r.nextGaussian()))
   events=signal.sumAllBinHeights()
   datasource = DataSource(signal, backg, data)
   print "Calculating ..  Nr of signal events:",events
   climit = CLimits(datasource,50000)
   conf = climit.getLimit()
   valCLs=conf.getCLs(); print "  CLs = ",valCLs
   if  valCLs == float("inf"): continue
   exc=int((1-valCLs)*100)
   print "  Signal is excluded at level ",exc,"%"  
   del climit
   if (valCLs<0.05):
       print "Stop! Nr ",events," needed  for ",exc,"% CL"
       break

c1 = HPlot()  # show the results
c1.visible(); c1.setRange(0,3,0.0,1500)
sigback=backg.oper(signal,"Signal+Backg","+")
sigback.setColor(Color.blue)
sigback.setErrAll(0)
c1.draw([backg,signal,sigback,data])
c1.export("limit.pdf")

The above code determines how many signal events should be allowed (in terms of event counts) so that the signal can be excluded at the 95% confidence level, making this new theory irrelevant to the empirical data. We start from 10 initial events for the signal, calculate the exclusion limit using the class CLimits, which uses a Monte Carlo method for setting statistical limits, and then increment the signal event count by 5. We stop the calculation when the signal is excluded at the 95% CL; this corresponds to about 100 signal events. One can decrease the step size and increase the number of Monte Carlo experiments to obtain better precision.
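To illustrate what such a limit calculation does internally, here is a simplified single-bin toy (our own sketch, not DataMelt's CLimits implementation, which works bin-by-bin on the histograms): pseudo-experiments are thrown under the signal+background and background-only hypotheses, and CLs is the ratio of the two p-values for the observed count.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method for sampling a Poisson-distributed count
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def toy_cls(n_obs, b, s, ntoys=20000, seed=1):
    # single-bin counting experiment: the test statistic is the event count itself
    rng = random.Random(seed)
    p_sb = sum(poisson(s + b, rng) <= n_obs for _ in range(ntoys)) / float(ntoys)
    p_b = sum(poisson(b, rng) <= n_obs for _ in range(ntoys)) / float(ntoys)
    return p_sb / p_b if p_b > 0 else float("inf")
```

Scanning the signal strength s upward until `toy_cls` drops below 0.05 mirrors the `strength` loop in the code above, only without the histogram shapes.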

The created figure ("limit.pdf") shows the hypothetical "experimental" data overlaid with the blue histogram that includes both the well-established theory and the new theory (excluded at the 95% CL). The signal distribution excluded at the 95% CL is shown in red.

Example of exclusion limits at 95% confidence level


Published by jworkorg, User of DataMelt