Age | Commit message | Author
|
Also, fix a few memory leaks in test.c.
|
This commit updates the fit to use the fit_event2() function which can fit for
multi-vertex hypotheses. It also uses the QUAD fitter and the Hough transform
of the event to seed the fit, so the results for single-particle fits will be
slightly different from before.
I also fixed a small bug in combinations_with_replacement().
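For reference, here is a minimal sketch of how a combinations-with-replacement
enumerator can be written in C. The function name, signature, and index-array
interface are assumptions for illustration; the actual
combinations_with_replacement() in the fitter may differ.

    #include <stddef.h>

    /* Advance idx (a non-decreasing array of r indices into n items) to the
     * next combination with replacement. Start from an all-zero array.
     * Returns 0 once every combination has been produced. */
    int combinations_with_replacement_next(size_t *idx, size_t n, size_t r)
    {
        size_t i;

        for (i = r; i-- > 0; ) {
            if (idx[i] + 1 < n) {
                size_t j, v = idx[i] + 1;
                /* bump this position and reset everything to its right to the
                 * same value so the indices stay non-decreasing */
                for (j = i; j < r; j++)
                    idx[j] = v;
                return 1;
            }
        }

        return 0;
    }

Looping over these multisets is what lets the seeding stage try each unique
particle combination only once.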
|
This commit adds a new function fit_event2() to fit multiple vertices. To seed
the fit, fit_event2() does the following:
- use the QUAD fitter to find the position and initial time of the event
- call find_peaks() to find possible directions for the particles
- loop over all possible unique combinations of the particles and direction
vectors and do a "fast" minimization
The best minimum found from the "fast" minimizations is then used to start the fit.
This commit has a few other updates:
- add a hit_only parameter to the nll() function. This was necessary since
previously PMTs which weren't hit were always skipped for the fast
minimization, but when fitting for multiple vertices we need to include PMTs
which aren't hit since we float the energy.
- add the function guess_energy() to guess the energy of a particle given a
position and direction. It estimates the energy by summing up the QHS for all
PMTs hit within the Cerenkov cone and dividing by 6 (see the sketch after
this list).
- fix a bug which caused the fit to freeze when hitting ctrl-c during the
fast minimization phase.
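A rough sketch of the guess_energy() logic described above. The PMT structure,
the cosine cut, and the unit conventions are assumptions; only the rule (sum
the QHS of hit PMTs inside the Cerenkov cone and divide by 6) comes from the
commit.

    #include <stddef.h>
    #include <math.h>

    #define COS_THETA_CERENKOV 0.75 /* assumed cone opening, ~41 degrees in water */

    typedef struct {
        double pos[3]; /* PMT position */
        int hit;       /* was the PMT hit? */
        double qhs;    /* QHS charge */
    } pmt_t;

    double guess_energy(const double *pos, const double *dir, const pmt_t *pmts, size_t n)
    {
        size_t i, j;
        double qhs_sum = 0.0;

        for (i = 0; i < n; i++) {
            double r[3], norm = 0.0, cos_theta;

            if (!pmts[i].hit) continue;

            /* unit vector from the vertex to the PMT */
            for (j = 0; j < 3; j++) {
                r[j] = pmts[i].pos[j] - pos[j];
                norm += r[j]*r[j];
            }
            norm = sqrt(norm);

            cos_theta = (r[0]*dir[0] + r[1]*dir[1] + r[2]*dir[2])/norm;

            /* only PMTs inside the Cerenkov cone contribute */
            if (cos_theta > COS_THETA_CERENKOV)
                qhs_sum += pmts[i].qhs;
        }

        return qhs_sum/6.0; /* divide by 6, per the commit message */
    }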
|
This commit adds a parameter to stop the fit if it takes longer than a given
number of seconds. The parameter can be set on the command line. For example,
to limit fits to 10 minutes:
$ ./fit FILENAME --max-time 600.0
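The commit does not show how the cutoff is implemented; one plausible way,
sketched here with assumed names, is to record the wall-clock time when the
fit starts and check the elapsed time from inside the minimization loop:

    #include <time.h>

    static double max_time = 600.0; /* seconds, set from --max-time */
    static time_t fit_start;

    /* call once when the fit begins */
    void fit_timer_start(void)
    {
        fit_start = time(NULL);
    }

    /* call periodically, e.g. from the objective function; returns 1 once the
     * time budget has been used up so the caller can stop the minimization */
    int fit_timed_out(void)
    {
        return difftime(time(NULL), fit_start) > max_time;
    }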
|
This commit adds lots of comments to sno_charge.c and makes a couple of other
changes:
- use interp1d() instead of the GSL interpolation routines
- increase MAX_PE to 100
I increased MAX_PE because I determined that it had a rather large impact on
the likelihood function for 500 MeV electrons. This unfortunately slows down
the initialization by a lot. I think I could speed this up by convolving the
single PE charge distribution with a gaussian *before* convolving the charge
distributions to compute the charge distributions for multiple PE.
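The slow part of the initialization is building the charge PDFs for up to
MAX_PE photoelectrons, which amounts to repeatedly convolving the single PE
charge distribution with itself. A sketch of that step (array layout, binning,
and names are assumptions):

    #include <stddef.h>

    #define MAX_PE 100

    /* pq is a MAX_PE x nbins array: pq[n*nbins + i] is the charge PDF in bin i
     * for n+1 photoelectrons. single_pe is the single PE charge PDF. */
    void init_charge_pdfs(const double *single_pe, size_t nbins, double bin_width, double *pq)
    {
        size_t n, i, j;

        for (i = 0; i < nbins; i++)
            pq[i] = single_pe[i];

        for (n = 1; n < MAX_PE; n++) {
            for (i = 0; i < nbins; i++) {
                double sum = 0.0;
                /* discrete convolution of the (n-1) PE PDF with the single PE PDF */
                for (j = 0; j <= i; j++)
                    sum += pq[(n-1)*nbins + j]*single_pe[i-j]*bin_width;
                pq[n*nbins + i] = sum;
            }
        }
    }

The cost grows linearly with MAX_PE (and quadratically with the number of
charge bins), which is consistent with the initialization getting noticeably
slower after raising MAX_PE to 100.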
|
See Bryce Moffat's thesis page 64.
|
This commit adds Rayleigh scattering to the likelihood function. The Rayleigh
scattering lengths come from rsp_rayleigh.dat from SNOMAN, which only includes
photons scattered within +/- 10 ns of the prompt peak. The fraction of light
which scatters is treated in the likelihood the same way as reflected light,
i.e. it is spread uniformly across all the PMTs in the detector and its time
PDF is assumed to be constant for a fixed period after the prompt peak.
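In code this treatment is just a box PDF in time; a minimal sketch with assumed
names (the window length is whatever the likelihood already uses for reflected
light):

    /* constant time PDF for scattered/reflected light: flat for a fixed window
     * after the prompt peak and zero elsewhere */
    double indirect_time_pdf(double t, double t_prompt, double window)
    {
        if (t < t_prompt || t > t_prompt + window)
            return 0.0;
        return 1.0/window;
    }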
|
This commit also adds a script to calculate the CSDA range for electrons from a
table of the stopping power as a function of energy. We need this script since
the NIST ESTAR website will only output the stopping power above a certain
energy threshold (1 GeV for electrons).
See https://physics.nist.gov/PhysRefData/Star/Text/ESTAR.html.
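The calculation the script has to do is just the CSDA definition, R(E) =
integral from 0 to E of dE'/S(E'), evaluated numerically over the tabulated
stopping power. A sketch using the trapezoid rule (the real script's language
and I/O are not shown here; array names are assumptions):

    #include <stddef.h>

    /* energy[] and stopping_power[] are the tabulated points in ascending
     * energy order; the result has units matching 1/(stopping power),
     * e.g. g/cm^2 if the stopping power is in MeV cm^2/g. */
    double csda_range(const double *energy, const double *stopping_power, size_t n)
    {
        size_t i;
        double range = 0.0;

        for (i = 1; i < n; i++) {
            double de = energy[i] - energy[i-1];
            range += 0.5*de*(1.0/stopping_power[i] + 1.0/stopping_power[i-1]);
        }

        return range;
    }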
|
integral
|
This commit speeds up the fast likelihood calculation by only computing the
time PDF for a single photon. Since the majority of the time in the fast
likelihood calculation is spent computing the time PDF this should speed things
up by quite a bit. I suspect this won't have a big effect on the likelihood
value, but I should do some more testing.
|
This function is only used when the expected number of photons reaching a PMT
is *very* small. In this case, we still need to estimate the PMT hit time PDF
for indirect light which is modelled as a flat distribution starting at the
time when the PMT is most likely to be hit by direct light. Since that most
likely time is computed as the integral of the expected charge times the time
divided by the total expected charge, it can carry a large error when the
total expected charge is very small.
Note that this calculation already existed, but it was done inline in the
likelihood function. This commit just moves it into its own function to make
things look nicer.
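The quantity in question is a charge-weighted mean time; a minimal sketch
(array names are assumptions) makes the numerical problem clear:

    #include <stddef.h>

    /* t[i] and mu[i] are the arrival time and expected charge for each point
     * along the track; the result is the most likely direct-light hit time */
    double mean_direct_hit_time(const double *t, const double *mu, size_t n)
    {
        size_t i;
        double num = 0.0, den = 0.0;

        for (i = 0; i < n; i++) {
            num += mu[i]*t[i];
            den += mu[i];
        }

        /* when den (the total expected charge) is tiny, this ratio is noisy,
         * which is the case the new function handles */
        return num/den;
    }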
|
This commit speeds up the likelihood function by roughly 20% by using the
precomputed track positions, directions, times, etc. instead of interpolating
them on the fly.
It also switches to computing the number of points to integrate along the track
by dividing the track length by a specified distance, currently set to 1 cm.
This should hopefully speed things up for lower energies and result in more
stable fits at high energies.
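The point count is now derived from the track length; a one-line sketch with
assumed names:

    #include <stddef.h>

    /* step is the integration spacing, currently 1 cm per the commit */
    size_t npoints_for_track(double track_length, double step)
    {
        size_t n = (size_t) (track_length/step) + 1;
        return n < 2 ? 2 : n; /* always keep at least two points */
    }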
|
particle id
|
This commit speeds up the likelihood calculation by returning zero early if the
angle between the PMT and the track is far from the Cerenkov angle.
Specifically, we check whether the angle is more than 5 "standard deviations"
away, where the standard deviation is taken to be the RMS width of the angular
distribution.
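A sketch of that cut with assumed names; theta is the angle between the track
direction and the PMT, and rms is the RMS width of the angular distribution:

    #include <math.h>

    /* returns 1 when the PMT is far enough from the Cerenkov cone that the
     * likelihood code can return zero without doing the full calculation */
    int far_from_cerenkov_cone(double theta, double theta_cerenkov, double rms)
    {
        return fabs(theta - theta_cerenkov) > 5.0*rms;
    }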
|
To characterize the angular distribution of photons from an electromagnetic
shower I came up with the following functional form:
f(cos_theta) ~ exp(-abs(cos_theta-mu)^alpha/beta)
and fit this to data simulated using RAT-PAC at several different energies. I
then fit the alpha and beta coefficients as a function of energy to the
functional form:
alpha = c0 + c1/log(c2*T0 + c3)
beta = c0 + c1/log(c2*T0 + c3).
where T0 is the initial energy of the electron in MeV and c0, c1, c2, and c3
are parameters which I fit.
The longitudinal distribution of the photons generated from an electromagnetic
shower is described by a gamma distribution:
f(x) = x**(a-1)*exp(-x/b)/(Gamma(a)*b**a).
This parameterization comes from the PDG "Passage of particles through matter"
section 32.5. I also fit the data from my RAT-PAC simulation, but currently I
am not using that fit; instead I calculate the coefficients with a simpler
form from the PDG (although I estimated the b parameter from the RAT-PAC
data).
I also sped up the calculation of the solid angle by making a lookup table
since it was taking a significant fraction of the time to compute the
likelihood function.
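For reference, the two profiles above are straightforward to evaluate; the
function names are assumptions and the fitted coefficients (mu, alpha, beta,
a, b, c0..c3) are not reproduced here:

    #include <math.h>

    /* angular profile, up to normalization:
     * f(cos_theta) ~ exp(-|cos_theta - mu|^alpha/beta) */
    double shower_angular_pdf(double cos_theta, double mu, double alpha, double beta)
    {
        return exp(-pow(fabs(cos_theta - mu), alpha)/beta);
    }

    /* alpha(T0) = c0 + c1/log(c2*T0 + c3) with T0 in MeV; beta uses the same
     * functional form with its own coefficients */
    double shower_shape_coefficient(double T0, const double c[4])
    {
        return c[0] + c[1]/log(c[2]*T0 + c[3]);
    }

    /* longitudinal profile: gamma distribution
     * f(x) = x^(a-1) exp(-x/b)/(Gamma(a) b^a) */
    double shower_longitudinal_pdf(double x, double a, double b)
    {
        return pow(x, a - 1.0)*exp(-x/b)/(tgamma(a)*pow(b, a));
    }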
|