In the processed zdab files (the SNOCR_* files), the first logical record just
has a run header bank and no EV bank.
This commit updates the zebra library files zebra.{c,h} so that it's now
possible to traverse the data structure using links! This was originally
motivated by wanting to figure out which MC particles were generated from the
MCGN bank (from which it's only possible to access the tracks and vertices
using structural links).
I've also added a new test to test-zebra which checks the consistency of all of
the next/up/orig, structural, and reference links in a zebra file.
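As a purely hypothetical illustration of what link traversal enables (the
actual structures in zebra.{c,h} are certainly laid out differently), a
depth-first walk over the structural links of a bank tree might look like:

    #include <stdio.h>

    /* Hypothetical in-memory bank; the real representation in zebra.h
     * differs. */
    typedef struct bank {
        struct bank *next;   /* next bank in the same linear chain */
        struct bank *up;     /* parent bank */
        struct bank *orig;   /* bank whose link points at this one */
        struct bank **links; /* structural links to daughter banks */
        int nlinks;
        char name[5];        /* 4 character bank name, e.g. "MCGN" */
    } bank;

    /* Print every bank hanging structurally below `b`, depth first. Only
     * structural links are followed; blindly following reference links
     * could visit the same bank twice or loop forever. */
    static void walk(const bank *b, int depth)
    {
        for (; b; b = b->next) {
            printf("%*s%s\n", 2*depth, "", b->name);
            for (int i = 0; i < b->nlinks; i++)
                walk(b->links[i], depth + 1);
        }
    }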
Previously, the algorithm used to find peaks was to search for all peaks in the
Hough transform above some constant fraction of the highest peak. This
algorithm could have issues finding smaller peaks away from the highest peak.
The new algorithm instead finds the highest peak in the Hough transform and
then recomputes the Hough transform ignoring all PMT hits within the Cerenkov
cone of the first peak. The next peak is found from this transform and the
process is iteratively repeated until a certain number of peaks are found.
One disadvantage of this new method is that it will *always* find the same
number of peaks, which will usually be greater than the actual number of rings
in the event. This is not a problem, though: when fitting the event we loop
over all possible peaks and do a quick fit to determine the starting point, so
false positives are OK because the real peaks will fit better during this
quick fit.
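A minimal sketch of this iterative search; the signature and the helpers
compute_hough(), find_max(), and in_cone() are hypothetical stand-ins for
whatever the fitter actually uses:

    #include <stddef.h>
    #include <string.h>

    #define MAX_HITS   10000 /* illustrative bounds */
    #define HOUGH_BINS 10000
    #define NPEAKS     5     /* number of peaks always returned */

    /* Hypothetical helpers: fill the Hough transform from the unmasked
     * hits, return the index of its maximum bin, and test whether hit
     * `ihit` lies within the Cerenkov cone of the direction for bin
     * `ibin`. */
    void   compute_hough(const int *mask, size_t nhits, double *hough);
    size_t find_max(const double *hough, size_t nbins);
    int    in_cone(size_t ihit, size_t ibin);

    void find_peaks(size_t nhits, size_t *peaks)
    {
        static int mask[MAX_HITS];      /* 1 = assigned to an earlier peak */
        static double hough[HOUGH_BINS];

        memset(mask, 0, sizeof mask);

        for (int n = 0; n < NPEAKS; n++) {
            /* Recompute the transform using only unmasked hits and take
             * its highest bin as the next peak. */
            compute_hough(mask, nhits, hough);
            peaks[n] = find_max(hough, HOUGH_BINS);

            /* Ignore all hits inside this peak's Cerenkov cone from now
             * on so that smaller peaks elsewhere can surface. */
            for (size_t i = 0; i < nhits; i++)
                if (in_cone(i, peaks[n]))
                    mask[i] = 1;
        }
    }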
Another potential issue with this new method is that by rejecting all PMT hits
within the Cerenkov cone of the first peak we could miss a second peak very
close to the first peak. This is partially mitigated by the fact that when we
loop over all possible combinations of the particle ids and directions we allow
each peak to be used more than once. For example, when fitting for the
hypothesis that an event is caused by two electrons and one muon and given two
possible directions 1 and 2, we will fit for the following possible direction
combinations:
1 1 1
1 1 2
1 2 1
1 2 2
2 2 1
2 2 2
Therefore, if there is a second ring close to the first, it is still possible
to fit it correctly, since we will seed the quick fit with two particles
pointing in the same direction.
This commit also adds a few tests for new functions and changes the energy step
size during the quick fit to 10% of the starting energy value.
This commit updates the fit to use the fit_event2() function, which can fit
multi-vertex hypotheses. It also uses the QUAD fitter and the Hough transform
of the event to seed the fit, so the results for single-particle fits will be
slightly different from before.
I also fixed a small bug in combinations_with_replacement().
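For reference, a combinations-with-replacement generator can be written with
the same index-bumping algorithm as Python's
itertools.combinations_with_replacement; this sketch is not necessarily how
combinations_with_replacement() is implemented here:

    #include <stdio.h>

    /* Advance `indices` (length r, values in 0..n-1, kept sorted) to the
     * next multiset; returns 0 once all combinations have been produced. */
    static int next_combination(int *indices, int n, int r)
    {
        for (int i = r - 1; i >= 0; i--) {
            if (indices[i] != n - 1) {
                int value = indices[i] + 1;
                for (int j = i; j < r; j++)
                    indices[j] = value;
                return 1;
            }
        }
        return 0;
    }

    int main(void)
    {
        int idx[3] = {0, 0, 0}; /* 3 particles drawn from 2 directions */
        do {
            printf("%d %d %d\n", idx[0] + 1, idx[1] + 1, idx[2] + 1);
        } while (next_combination(idx, 2, 3));
        return 0;
    }

This prints the four multisets 1 1 1, 1 1 2, 1 2 2, and 2 2 2; the six
combinations listed in the peak-finding commit above arise because only the
two electrons there are interchangeable, so the muon's direction varies
independently.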
This commit adds a new function fit_event2() to fit multiple vertices. To seed
the fit, fit_event2() does the following:
- use the QUAD fitter to find the position and initial time of the event
- call find_peaks() to find possible directions for the particles
- loop over all possible unique combinations of the particles and direction
vectors and do a "fast" minimization
The best minimum found from the "fast" minimizations is then used to start the fit.
This commit has a few other updates:
- adds a hit_only parameter to the nll() function. This was necessary since
  previously PMTs which weren't hit were always skipped during the fast
  minimization, but when fitting for multiple vertices we need to include PMTs
  which aren't hit since we float the energy.
- adds the function guess_energy() to guess the energy of a particle given a
  position and direction. This function estimates the energy by summing up the
  QHS for all PMTs hit within the Cerenkov cone and dividing by 6 (see the
  sketch after this list).
- fixes a bug which caused the fit to freeze when hitting ctrl-c during the
  fast minimization phase.
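A sketch of the energy guess just described; the globals and the exact cone
test are hypothetical stand-ins for whatever the fitter actually uses, and
0.743 is roughly cos(42 degrees), the Cerenkov angle in water:

    #include <math.h>

    #define NPMTS 10000
    #define COS_THETA_C 0.743 /* ~cos(42 deg) */

    /* Hypothetical globals: PMT positions, QHS charges, and hit flags. */
    extern double pmt_pos[NPMTS][3];
    extern double qhs[NPMTS];
    extern int    hit[NPMTS];

    /* Guess the energy of a particle at `pos` travelling along `dir` by
     * summing QHS over all hit PMTs inside the Cerenkov cone and
     * dividing by 6. */
    double guess_energy(const double pos[3], const double dir[3])
    {
        double qsum = 0.0;

        for (int i = 0; i < NPMTS; i++) {
            if (!hit[i]) continue;

            double r[3] = {pmt_pos[i][0] - pos[0],
                           pmt_pos[i][1] - pos[1],
                           pmt_pos[i][2] - pos[2]};
            double norm = sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
            double cos_theta = (r[0]*dir[0] + r[1]*dir[1] + r[2]*dir[2])/norm;

            if (cos_theta > COS_THETA_C) /* inside the cone */
                qsum += qhs[i];
        }
        return qsum/6.0;
    }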
This commit adds a parameter to stop the fit if it takes longer than a given
number of seconds. The parameter can be set on the command line. For example,
to limit fits to 10 minutes:
$ ./fit FILENAME --max-time 600.0
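One simple way to implement such a cutoff (a sketch, not necessarily how the
fit program actually does it) is to record the wall clock time when the fit
starts and have the minimizer's objective function check the budget on every
call:

    #include <time.h>

    static time_t fit_start;        /* set to time(NULL) when the fit starts */
    static double max_time = 600.0; /* seconds, from --max-time */

    /* When this returns true, the objective function can tell the
     * minimizer to stop and return the best point found so far. */
    static int out_of_time(void)
    {
        return difftime(time(NULL), fit_start) > max_time;
    }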
This commit adds lots of comments to sno_charge.c and makes a couple of other
changes:
- use interp1d() instead of the GSL interpolation routines
- increase MAX_PE to 100
I increased MAX_PE because I determined that it had a rather large impact on
the likelihood function for 500 MeV electrons. This unfortunately slows down
the initialization by a lot. I think I could speed this up by convolving the
single PE charge distribution with a gaussian *before* convolving the charge
distributions to compute the charge distributions for multiple PE.
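The multi-PE charge distributions are built by repeatedly convolving the
single PE distribution with itself; a minimal sketch of that initialization,
assuming all distributions are tabulated on a common uniform charge grid:

    #define MAX_PE 100
    #define NBINS  1000 /* illustrative number of charge bins */

    /* pq[n][i]: charge PDF in bin i given n+1 photoelectrons. pq[0] is
     * assumed to already hold the normalized single PE distribution. */
    static double pq[MAX_PE][NBINS];

    /* One discrete convolution pass per additional PE; dq is the bin
     * width. This O(MAX_PE*NBINS^2) loop is the initialization that
     * slows down as MAX_PE grows. */
    static void init_charge_pdfs(double dq)
    {
        for (int n = 1; n < MAX_PE; n++)
            for (int i = 0; i < NBINS; i++) {
                double sum = 0.0;
                for (int j = 0; j <= i; j++)
                    sum += pq[n-1][j]*pq[0][i-j];
                pq[n][i] = sum*dq;
            }
    }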
This commit adds Rayleigh scattering to the likelihood function. The Rayleigh
scattering lengths come from rsp_rayleigh.dat from SNOMAN which only includes
photons which scattered +/- 10 ns around the prompt peak. The fraction of light
which scatters is treated the same in the likelihood as reflected light, i.e.
it is uniform across all the PMTs in the detector and the time PDF is assumed
to be a constant for a fixed amount of time after the prompt peak.
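A sketch of how such a flat component can enter a PMT's hit time PDF; the
names, the window length, and the exact normalization are illustrative, not
the fitter's actual ones:

    /* Direct light plus a flat component for reflected and Rayleigh
     * scattered light, spread uniformly over a window of length T_FLAT
     * after the prompt peak. */
    #define T_FLAT 100.0 /* ns, illustrative */

    double time_pdf(double t, double t_prompt, double f_late,
                    double (*direct_pdf)(double))
    {
        double p = (1.0 - f_late)*direct_pdf(t - t_prompt);
        if (t >= t_prompt && t < t_prompt + T_FLAT)
            p += f_late/T_FLAT;
        return p;
    }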
integral
This commit speeds up the likelihood function by roughly 20% by using the
precomputed track positions, directions, times, etc. instead of interpolating
them on the fly.
It also switches to computing the number of points to integrate along the track
by dividing the track length by a specified distance, currently set to 1 cm.
This should hopefully speed things up for lower energies and result in more
stable fits at high energies.
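In other words, something along these lines (whether the endpoint is rounded
up or an extra point is added is a detail of the real code):

    #define STEP 1.0 /* cm between integration points along the track */

    static int npoints(double track_length)
    {
        return (int)(track_length/STEP) + 1;
    }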
particle id
To characterize the angular distribution of photons from an electromagnetic
shower I came up with the following functional form:
f(cos_theta) ~ exp(-abs(cos_theta-mu)^alpha/beta)
and fit this to data simulated using RAT-PAC at several different energies. I
then fit the alpha and beta coefficients as a function of energy to the
functional form:
alpha = c0 + c1/log(c2*T0 + c3)
beta = c0 + c1/log(c2*T0 + c3)
where T0 is the initial energy of the electron in MeV and c0, c1, c2, and c3
are parameters which I fit separately for alpha and beta.
The longitudinal distribution of the photons generated from an electromagnetic
shower is described by a gamma distribution:
f(x) = x**(a-1)*exp(-x/b)/(Gamma(a)*b**a).
This parameterization comes from the PDG "Passage of particles through matter"
section 32.5. I also fit this form to the data from my RAT-PAC simulation, but
I am not currently using that fit; instead I calculate the coefficients using
the simpler form from the PDG (although I estimated the b parameter from the
RAT-PAC data).
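A sketch of evaluating the two profiles above; the coefficients passed in are
placeholders for the fitted values:

    #include <math.h>

    /* Energy dependence of the angular shape parameters alpha and beta;
     * called with the separately fitted c0..c3 for each. */
    static double shape_coeff(double T0, double c0, double c1, double c2,
                              double c3)
    {
        return c0 + c1/log(c2*T0 + c3);
    }

    /* Unnormalized angular distribution of the shower photons. */
    static double angular_pdf(double cos_theta, double mu, double alpha,
                              double beta)
    {
        return exp(-pow(fabs(cos_theta - mu), alpha)/beta);
    }

    /* Longitudinal profile: gamma distribution in the shower depth x. */
    static double longitudinal_pdf(double x, double a, double b)
    {
        return pow(x, a - 1.0)*exp(-x/b)/(tgamma(a)*pow(b, a));
    }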
I also sped up the calculation of the solid angle by making a lookup table
since it was taking a significant fraction of the time to compute the
likelihood function.
This commit updates the PMT response bank and absorption lengths for H2O and
D2O based on the values in mcprod which were used for the LETA analysis.
Doh! I was previously using the tube number variable!
When testing out the fitter on 500 MeV muons, there was at least one event for
which the fit started out at a position very far from the true position. This
event had a secondary electron-like ring, which is what I suspect caused the
fit to start out so far from the true position.
This fix correctly starts the minimization close to the true position. In the
future I should look at updating get_direction() so that it finds the
direction of the largest ring instead of just doing a weighted average of all
the vectors from the position to the PMTs.
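For reference, the current weighted average looks roughly like this (charge
weights assumed here; the real get_direction() may weight the vectors
differently):

    #include <math.h>

    /* Seed direction: weighted average of the unit vectors from the
     * fitted position to each hit PMT, here weighted by charge `q`. */
    void get_direction(double dir[3], const double pos[3],
                       const double (*pmt_pos)[3], const double *q,
                       const int *hit, int npmts)
    {
        dir[0] = dir[1] = dir[2] = 0.0;

        for (int i = 0; i < npmts; i++) {
            if (!hit[i]) continue;
            double r[3] = {pmt_pos[i][0] - pos[0],
                           pmt_pos[i][1] - pos[1],
                           pmt_pos[i][2] - pos[2]};
            double norm = sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
            for (int j = 0; j < 3; j++)
                dir[j] += q[i]*r[j]/norm;
        }

        double norm = sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
        for (int j = 0; j < 3; j++)
            dir[j] /= norm;
    }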
This commit adds the ability to run the fit program with the
--skip-second-event flag to only fit the first event after a MAST bank. This
way we avoid fitting secondaries like Michel electrons when fitting MC events.
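For example:
$ ./fit FILENAME --skip-second-event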
the quantum efficiency
minimization phase