This commit updates the code that calculates the number of Cerenkov photons
produced by secondary particles in an electromagnetic shower initiated by an
electron. It now uses an energy-dependent formula that I fit to data simulated
with RAT-PAC.
This commit updates the charge likelihood calculation to compute:

P(hit,q|n) = P(q|hit,n)*P(hit|n)

This has almost no effect on the fit results, but it is technically correct.
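As a minimal sketch of what this factorization could look like per PMT (the
Poisson form of P(hit|n) and all of the names here are my assumptions, not the
actual code):

    #include <math.h>

    /* Hypothetical charge PDF given that the tube was hit; stands in for
     * whatever P(q|hit,n) the fitter actually uses. */
    double p_q_given_hit(double q, double n);

    /* P(hit|n) assuming the number of PE is Poisson with mean n. */
    static double p_hit(double n)
    {
        return 1.0 - exp(-n);
    }

    /* Per-PMT log likelihood term using P(hit,q|n) = P(q|hit,n)*P(hit|n). */
    double ln_p_hit_and_q(double q, double n)
    {
        return log(p_q_given_hit(q, n)) + log(p_hit(n));
    }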
This commit updates the optics code to calculate the Rayleigh scattering length
using the Einstein-Smoluchowski formula instead of the effective Rayleigh
scattering lengths from the RSPR bank.
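For reference, a sketch of the Einstein-Smoluchowski formula as I understand it
from the standard literature form (the depolarization term and all parameter
choices here are assumptions, not taken from this code):

    #include <math.h>

    #define KB 1.380649e-23 /* Boltzmann constant (J/K) */

    /* Einstein-Smoluchowski Rayleigh scattering length:
     *
     *   1/l = (8 pi^3)/(3 lambda^4) kT beta_T (rho deps/drho)^2 (6+3d)/(6-7d)
     *
     * wavelength in m, T in K, beta_T the isothermal compressibility (1/Pa),
     * rho_deps_drho the (dimensionless) density derivative of the dielectric
     * constant, and d the depolarization ratio. Returns l in meters. */
    double rayleigh_scattering_length(double wavelength, double T,
                                      double beta_T, double rho_deps_drho,
                                      double d)
    {
        double l_inv = 8*pow(M_PI,3)/(3*pow(wavelength,4))*KB*T*beta_T*
                       pow(rho_deps_drho,2)*(6 + 3*d)/(6 - 7*d);
        return 1.0/l_inv;
    }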
Thanks clang!
Previously I was calculating the expected number of delta ray photons while
integrating over the shower path, but since the delta rays are produced along
the particle path, and not further out like the shower photons, this wasn't
correct. The normalization of the probability distribution for the photons
produced along the path was also not handled correctly.

This commit adds a new function called integrate_path_delta_ray() to compute
the expected number of photons from delta rays hitting each PMT. Currently this
means that the likelihood function for muons will be significantly slower than
before, but hopefully I can speed it up again in the future (for example, by
skipping the shower calculation, which is negligible for lower energy muons).
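The idea behind the new function, as a rough sketch (a simple trapezoid rule
along the track; none of these names are the actual API):

    #include <stddef.h>

    /* Expected number of delta ray photons hitting a PMT: integrate the
     * photon yield along the track, weighting each point by the
     * (hypothetical) probability that a photon emitted there is detected.
     * pos holds n points of 3 coordinates each, spaced `step` apart. */
    double integrate_path_delta_ray(const double *pos, size_t n, double step,
                                    double photons_per_cm,
                                    double (*p_detect)(const double *pt))
    {
        double total = 0.0;
        for (size_t i = 0; i < n; i++) {
            double w = (i == 0 || i == n - 1) ? 0.5 : 1.0; /* trapezoid */
            total += w*photons_per_cm*p_detect(pos + 3*i);
        }
        return total*step;
    }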
This commit speeds up the likelihood function by integrating the charge along
the track inline instead of creating an array and then calling trapz(). It also
introduces two global variables, avg_index_d2o and avg_index_h2o, which are the
average indices of refraction for D2O and H2O weighted by the PMT quantum
efficiency and the Cerenkov spectrum.
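The change amounts to the difference between the two patterns below
(illustrative names; only trapz() is the project's own helper, and its exact
signature here is a guess):

    #include <stddef.h>

    /* Before (sketch): fill an array, then integrate it.
     *
     *   for (i = 0; i < n; i++) q[i] = get_charge(a + i*h);
     *   total = trapz(q, h, n);
     *
     * After: accumulate the trapezoid rule inline, with no temporary array
     * and no extra pass over the data. */
    double integrate_charge_inline(double (*get_charge)(double), double a,
                                   double b, size_t n)
    {
        double h = (b - a)/(n - 1);
        double sum = 0.5*(get_charge(a) + get_charge(b));
        for (size_t i = 1; i < n - 1; i++)
            sum += get_charge(a + i*h);
        return sum*h;
    }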
This commit speeds up the likelihood calculation by eliminating most calls to
acos(). This is done by updating the PMT response lookup tables to be a
function of the cosine of the angle between the photon and the PMT normal
instead of the angle itself.
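Roughly, the lookup now works like this (binning and names are illustrative):

    #define N_RESPONSE 1000

    /* PMT response tabulated in cos(theta) over [-1,1]. */
    static double pmt_response[N_RESPONSE];

    /* Since cos(theta) is computed directly from a dot product of unit
     * vectors, indexing by it avoids calling acos() entirely. */
    double get_pmt_response(double cos_theta)
    {
        int i = (int) ((cos_theta + 1.0)/2.0*(N_RESPONSE - 1));
        return pmt_response[i];
    }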
Previously I was computing the fraction of light absorbed and scattered by
calculating an average absorption and scattering length weighted by the
Cerenkov spectrum and the PMT quantum efficiency. That isn't correct: we should
be averaging the absorption and scattering probabilities, not the absorption
and scattering lengths.

This commit fixes this by instead computing the average probability that a
photon is absorbed or scattered as a function of the distance travelled,
integrating the absorption and scattering probabilities over all wavelengths
weighted by the PMT quantum efficiency and the Cerenkov spectrum.
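A sketch of the corrected average for absorption (the wavelength range, step,
and lookup names are my assumptions; scattering is handled the same way):

    #include <math.h>

    /* Hypothetical lookups as a function of wavelength in nm. */
    double qe(double wavelength);
    double absorption_length(double wavelength);

    /* Average probability that a photon is absorbed after travelling a
     * distance d, weighted by the PMT quantum efficiency and the Cerenkov
     * spectrum (d2N/dx dlambda ~ 1/lambda^2). */
    double get_fabs(double d)
    {
        double num = 0.0, den = 0.0;
        for (double wl = 200.0; wl <= 800.0; wl += 10.0) {
            double w = qe(wl)/(wl*wl); /* Cerenkov spectrum weight */
            num += w*exp(-d/absorption_length(wl)); /* survival prob. */
            den += w;
        }
        return 1.0 - num/den;
    }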
Based on some initial testing it seems that the subplex minimization algorithm
performs *much* better than BOBYQA for multi-particle fits. It is also a bit
slower, so I will probably have to figure out how to speed things up.
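Assuming the minimizers come from NLopt (where both subplex and BOBYQA are
available as NLOPT_LN_SBPLX and NLOPT_LN_BOBYQA), switching between them is a
one-line change; the objective below is just a placeholder:

    #include <nlopt.h>

    /* Placeholder objective; the real one is the event likelihood. */
    static double objective(unsigned n, const double *x, double *grad,
                            void *data)
    {
        double sum = 0.0;
        (void) grad; (void) data; /* derivative-free: grad is NULL */
        for (unsigned i = 0; i < n; i++)
            sum += x[i]*x[i];
        return sum;
    }

    int minimize(double *x, unsigned n, double *minf)
    {
        /* Swap in NLOPT_LN_BOBYQA here to compare the two algorithms. */
        nlopt_opt opt = nlopt_create(NLOPT_LN_SBPLX, n);
        nlopt_set_min_objective(opt, objective, NULL);
        nlopt_set_xtol_rel(opt, 1e-4);
        int rv = nlopt_optimize(opt, x, minf);
        nlopt_destroy(opt);
        return rv;
    }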
To enable the fitter to run outside of the src directory, I created a new
function open_file(), which works exactly like fopen() except that it searches
for the file in both the current working directory and the path specified by an
environment variable.
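A minimal sketch of the behaviour (the environment variable name is a
placeholder, not necessarily the one the code uses):

    #include <stdio.h>
    #include <stdlib.h>

    /* Like fopen(), but if the file isn't found in the current working
     * directory, retry in the directory named by an environment variable. */
    FILE *open_file(const char *filename, const char *mode)
    {
        FILE *f = fopen(filename, mode);
        if (f) return f;

        const char *dir = getenv("FITTER_PATH"); /* hypothetical name */
        if (!dir) return NULL;

        char path[4096];
        snprintf(path, sizeof(path), "%s/%s", dir, filename);
        return fopen(path, mode);
    }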
In the processed zdab files (the SNOCR_* files), the first logical record just
has a run header bank and no EV bank.
This is so that, in the future, if we only integrate over the path inside the
PSUP, we don't overestimate the Cerenkov light from delta rays.
This seems to speed things up a little bit.
Previously, the find peaks algorithm would ignore any PMT hits within the
Cerenkov ring of previously found rings. This had the problem that occasionally
the algorithm would repeatedly find the same direction due to hits outside of
the Cerenkov cone. The new update, inspired by how SuperK does this, instead
"subtracts" off the previous rings by subtracting the average qhs times
e^(-cos(theta-1/n)/0.1) from each PMT for each previous ring.

Based on some quick testing this seems a lot better than the previous
algorithm, but it still probably needs some tweaking.
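A sketch of the subtraction step. I read the weight as a Gaussian-like falloff
in cos(theta) - 1/n, peaked on the Cerenkov cone; that reading, and every name
below, is an assumption:

    #include <math.h>
    #include <stddef.h>

    #define AVG_INDEX 1.34 /* illustrative index of refraction */

    /* Subtract a previously found ring from the PMT charges. cos_theta[i]
     * is the cosine of the angle between PMT i and the ring direction, and
     * avg_qhs is the average charge of the hits attributed to the ring. */
    void subtract_ring(double *qhs, const double *cos_theta, size_t n,
                       double avg_qhs)
    {
        for (size_t i = 0; i < n; i++) {
            double x = cos_theta[i] - 1.0/AVG_INDEX;
            qhs[i] -= avg_qhs*exp(-x*x/0.1); /* width 0.1 from the text */
            if (qhs[i] < 0) qhs[i] = 0;
        }
    }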
This commit updates the likelihood function to take into account the Cerenkov
light produced by delta rays from muons. The angular distribution of this light
is currently assumed to be constant along the track and is parameterized in the
same way as the Cerenkov light from an electromagnetic shower. Currently I
assume the light is produced uniformly along the track, which isn't exactly
correct but should be good enough.
This commit updates the zebra library files zebra.{c,h} so that it's now
possible to traverse the data structure using links! This was originally
motivated by wanting to figure out which MC particles were generated from the
MCGN bank (from which it's only possible to access the tracks and vertices
using structural links).

I've also added a new test to test-zebra which checks the consistency of all of
the next/up/orig, structural, and reference links in a zebra file.
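Purely for illustration, a hypothetical in-memory bank layout and a depth-first
walk over structural links (the real zebra.h API differs; reference links are
skipped here because they can form cycles):

    #include <stddef.h>

    /* Hypothetical bank; not the actual zebra.h structure. */
    typedef struct bank {
        struct bank *next;   /* next bank in the linear chain */
        struct bank *up;     /* supporting (parent) bank */
        struct bank *orig;   /* origin link */
        struct bank **link;  /* structural links, then reference links */
        int nstruct;         /* number of structural links */
    } bank;

    /* Visit a bank and everything hanging off its structural links. */
    void walk_bank(bank *b, void (*visit)(bank *))
    {
        if (b == NULL)
            return;
        visit(b);
        for (int i = 0; i < b->nstruct; i++)
            walk_bank(b->link[i], visit);
        walk_bank(b->next, visit);
    }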
Previously, the algorithm used to find peaks was to search for all peaks in the
Hough transform above some constant fraction of the highest peak. This
algorithm could have issues finding smaller peaks away from the highest peak.
The new algorithm instead finds the highest peak in the Hough transform and
then recomputes the Hough transform ignoring all PMT hits within the Cerenkov
cone of the first peak. The next peak is found from this transform and the
process is iteratively repeated until a certain number of peaks are found.
One disadvantage of this new approach is that it will *always* find the same
number of peaks, and this number will usually be greater than the actual number
of rings in the event. This is not a problem, though: when fitting the event we
loop over all possible peaks and do a quick fit to determine the starting
point, so false positives are OK because the real peaks will fit better during
this quick fit.
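A sketch of the iterative search just described (all helper names are
hypothetical):

    #include <stddef.h>

    typedef struct { double theta, phi; } direction;

    /* Hypothetical stand-ins for the real Hough transform code. */
    direction find_hough_maximum(const int *hit_mask, size_t n_pmts);
    void mask_cerenkov_cone(int *hit_mask, size_t n_pmts, direction dir);

    /* Find the biggest peak, ignore the hits in its Cerenkov cone, and
     * repeat until a fixed number of peaks has been found. */
    void find_peaks(int *hit_mask, size_t n_pmts, direction *peaks,
                    size_t n_peaks)
    {
        for (size_t i = 0; i < n_peaks; i++) {
            peaks[i] = find_hough_maximum(hit_mask, n_pmts);
            mask_cerenkov_cone(hit_mask, n_pmts, peaks[i]);
        }
    }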
Another potential issue with this new method is that, by rejecting all PMT hits
within the Cerenkov cone of the first peak, we could miss a second peak very
close to the first. This is partially mitigated by the fact that, when we loop
over all possible combinations of the particle ids and directions, we allow
each peak to be used more than once. For example, when fitting the hypothesis
that an event is caused by two electrons and one muon, given two possible
directions 1 and 2, we will fit the following direction combinations:
1 1 1
1 1 2
1 2 1
1 2 2
2 2 1
2 2 2
Therefore, if there is a second ring close to the first, it is possible to fit
it correctly, since we will seed the quick fit with two particles pointing in
the same direction.

This commit also adds a few tests for new functions and changes the energy step
size during the quick fit to 10% of the starting energy value.
Also, fix a few memory leaks in test.c.
This commit updates the fit to use the fit_event2() function, which can fit
multi-vertex hypotheses. It also uses the QUAD fitter and the Hough transform
of the event to seed the fit, so the results for 1 particle fits will be
slightly different than before.

I also fixed a small bug in combinations_with_replacement().
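For reference, the standard way to iterate combinations with replacement (my
sketch, not necessarily how the project implements it):

    #include <stddef.h>

    /* Advance c (a non-decreasing array of r indices in [0,n)) to the next
     * combination with replacement, in lexicographic order. Start from
     * c = {0,...,0}; returns 0 once the combinations are exhausted. */
    int combinations_with_replacement_next(size_t *c, size_t r, size_t n)
    {
        for (size_t i = r; i-- > 0; ) {
            if (c[i] < n - 1) {
                size_t v = c[i] + 1;
                for (size_t j = i; j < r; j++)
                    c[j] = v;
                return 1;
            }
        }
        return 0;
    }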