This commit updates the likelihood function to take into account Cerenkov
light produced by delta rays from muons. The angular distribution of this
light is currently assumed to be constant along the track and is
parameterized in the same way as the Cerenkov light from an
electromagnetic shower. For now I assume the light is produced uniformly
along the track, which isn't exactly correct but should be good enough.
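As a rough illustration of the uniform-emission assumption (a minimal
sketch only; the function and array names are hypothetical, not the
actual likelihood code):

    #include <stddef.h>

    /* Sketch: expected number of delta-ray PE at one PMT, assuming the
     * light is emitted uniformly along the track (a constant number of
     * photons per cm) with a shower-like angular distribution. The
     * arrays hold values precomputed at each track point. */
    double delta_ray_expected_pe(size_t n, double step, double photons_per_cm,
                                 const double *f_angular,  /* angular PDF at each point */
                                 const double *pmt_weight) /* solid angle, QE, etc. */
    {
        double total = 0.0;
        for (size_t i = 0; i < n; i++)
            total += photons_per_cm*f_angular[i]*pmt_weight[i]*step;
        return total;
    }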
|
This commit updates the fit to use the fit_event2() function, which can
fit multi-vertex hypotheses. It also uses the QUAD fitter and the Hough
transform of the event to seed the fit, so the results for
single-particle fits will be slightly different from before.
I also fixed a small bug in combinations_with_replacement().
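For reference, a minimal sketch of the kind of iteration
combinations_with_replacement() performs (an illustrative odometer-style
implementation, not the repository's actual code):

    #include <stddef.h>

    /* Advance `idx`, a non-decreasing array of r indices into n items,
     * to the next combination with replacement. Start from an all-zero
     * array; returns 0 once all combinations have been produced. */
    int next_combination_with_replacement(size_t *idx, size_t r, size_t n)
    {
        size_t i = r;
        while (i-- > 0) {
            if (idx[i] < n - 1) {
                /* bump this position and reset every later position to
                 * the same value to keep the indices non-decreasing */
                size_t v = idx[i] + 1;
                for (size_t j = i; j < r; j++)
                    idx[j] = v;
                return 1;
            }
        }
        return 0;
    }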
|
This commit adds a new function fit_event2() to fit multiple vertices. To seed
the fit, fit_event2() does the following:
- use the QUAD fitter to find the position and initial time of the event
- call find_peaks() to find possible directions for the particles
- loop over all possible unique combinations of the particles and direction
vectors and do a "fast" minimization
The best minimum found from the "fast" minimizations is then used to start the fit.
This commit has a few other updates:
- adds a hit_only parameter to the nll() function. This was necessary
since previously PMTs which weren't hit were always skipped during the
fast minimization, but when fitting for multiple vertices we need to
include PMTs which aren't hit since we float the energy.
- adds the function guess_energy() to guess the energy of a particle
given a position and direction. This function estimates the energy by
summing up the QHS for all PMTs hit within the Cerenkov cone and
dividing by 6 (see the sketch after this list).
- fixes a bug which caused the fit to freeze when hitting ctrl-c during
the fast minimization phase.
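A minimal sketch of the energy guess described above (hypothetical
names; the real guess_energy() presumably handles detector geometry
details not shown here):

    #include <stddef.h>
    #include <math.h>

    #define QHS_PER_MEV 6.0 /* approximate detected charge per MeV */

    /* Sum the charge (QHS) of every hit PMT whose direction from the
     * guessed vertex lies inside the Cerenkov cone around `dir`, then
     * divide by roughly 6 QHS per MeV to estimate the energy. */
    double guess_energy_sketch(const double pos[3], const double dir[3],
                               size_t nhit, const double (*pmt_pos)[3],
                               const double *qhs, double cos_cerenkov)
    {
        double qsum = 0.0;
        for (size_t i = 0; i < nhit; i++) {
            double d[3] = {pmt_pos[i][0] - pos[0],
                           pmt_pos[i][1] - pos[1],
                           pmt_pos[i][2] - pos[2]};
            double norm = sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            double cos_theta = (d[0]*dir[0] + d[1]*dir[1] + d[2]*dir[2])/norm;
            if (cos_theta > cos_cerenkov)
                qsum += qhs[i];
        }
        return qsum/QHS_PER_MEV;
    }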
|
See Bryce Moffat's thesis page 64.
|
This commit adds Rayleigh scattering to the likelihood function. The
Rayleigh scattering lengths come from rsp_rayleigh.dat from SNOMAN, which
only includes photons which scattered within +/- 10 ns of the prompt
peak. The fraction of light which scatters is treated the same way in the
likelihood as reflected light, i.e. it is uniform across all the PMTs in
the detector and the time PDF is assumed to be constant for a fixed
window after the prompt peak.
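Concretely, the time PDF for this late light looks like the following
sketch (the window length and names are illustrative):

    /* Time PDF for scattered/reflected light: constant over a fixed
     * window after the prompt peak and zero elsewhere. */
    double late_light_time_pdf(double t, double t_prompt, double window)
    {
        if (t >= t_prompt && t < t_prompt + window)
            return 1.0/window; /* normalized flat PDF over the window */
        return 0.0;
    }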
|
integral
|
This commit speeds up the likelihood function by roughly 20% by using the
precomputed track positions, directions, times, etc. instead of
interpolating them on the fly.
It also switches to computing the number of points to integrate along the
track by dividing the track length by a fixed spacing, currently set to
1 cm. This should hopefully speed things up at lower energies and result
in more stable fits at high energies.
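In other words (a sketch; the constant names and the minimum point count
are illustrative):

    #include <math.h>

    #define INTEGRATION_STEP 1.0 /* cm between integration points */
    #define MIN_NPOINTS 2

    /* Number of points used to integrate along a track: one point per
     * centimeter of track length, with a floor so that very short
     * tracks still get a valid integral. */
    static int npoints_for_track(double range)
    {
        int n = (int)ceil(range/INTEGRATION_STEP) + 1;
        return n < MIN_NPOINTS ? MIN_NPOINTS : n;
    }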
|
To characterize the angular distribution of photons from an electromagnetic
shower I came up with the following functional form:
f(cos_theta) ~ exp(-abs(cos_theta-mu)^alpha/beta)
and fit this to data simulated using RAT-PAC at several different
energies. I then fit the alpha and beta coefficients as a function of
energy to the same functional form:
alpha = c0 + c1/log(c2*T0 + c3)
beta = c0 + c1/log(c2*T0 + c3)
where T0 is the initial energy of the electron in MeV and c0, c1, c2, and
c3 are coefficients fit separately for alpha and beta.
The longitudinal distribution of the photons generated from an electromagnetic
shower is described by a gamma distribution:
f(x) = x**(a-1)*exp(-x/b)/(Gamma(a)*b**a).
This parameterization comes from the PDG "Passage of particles through matter"
section 32.5. I also fit this form to the data from my RAT-PAC
simulation, but currently I am not using those fits; instead I calculate
the coefficients from the simpler form given in the PDG (although I
estimated the b parameter from the RAT-PAC data).
I also sped up the calculation of the solid angle by making a lookup table
since it was taking a significant fraction of the time to compute the
likelihood function.
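A sketch of evaluating the angular distribution described above (the
numerical coefficients below are placeholders, not the fitted values):

    #include <math.h>

    /* Both shape parameters have the form c0 + c1/log(c2*T0 + c3) with
     * coefficients fit separately; the values here are placeholders. */
    static double alpha_param(double T0) { return 1.0 + 1.0/log(2.0*T0 + 3.0); }
    static double beta_param(double T0)  { return 0.1 + 0.1/log(2.0*T0 + 3.0); }

    /* Unnormalized angular distribution of shower photons:
     * f(cos_theta) ~ exp(-|cos_theta - mu|^alpha/beta). */
    double shower_angular_pdf(double cos_theta, double mu, double T0)
    {
        double alpha = alpha_param(T0);
        double beta = beta_param(T0);
        return exp(-pow(fabs(cos_theta - mu), alpha)/beta);
    }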
|
I noticed when fitting electrons that the cquad integration routine was
not very stable, i.e. it would return different results for *very* small
changes in the fit parameters, which would cause the fit to stall.
Since it's very important for the minimizer that the likelihood function not
jump around, I am switching to integrating over the path by just using a fixed
number of points and using the trapezoidal rule. This seems to be a lot more
stable, and as a bonus I was able to combine the three integrals (direct
charge, indirect charge, and time) so that we only have to do a single loop.
This should hopefully keep the speed comparable, since the cquad routine
was fairly effective at using only as many function evaluations as
needed.
Another benefit to this approach is that if needed, it will be easier to port
to a GPU.
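A sketch of the combined loop (the f_* arrays stand in for the direct
charge, indirect charge, and time integrands evaluated at the
precomputed track points):

    #include <stddef.h>

    /* Integrate all three integrands along the track in a single pass
     * using the trapezoidal rule on a fixed grid of n points spaced
     * `step` apart. */
    void integrate_track(size_t n, double step,
                         const double *f_direct, const double *f_indirect,
                         const double *f_time,
                         double *direct, double *indirect, double *time)
    {
        *direct = *indirect = *time = 0.0;
        for (size_t i = 0; i + 1 < n; i++) {
            /* trapezoidal rule: average of the endpoints times the spacing */
            *direct   += 0.5*(f_direct[i] + f_direct[i+1])*step;
            *indirect += 0.5*(f_indirect[i] + f_indirect[i+1])*step;
            *time     += 0.5*(f_time[i] + f_time[i+1])*step;
        }
    }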
|
Since we only have the range and dE/dx tables in light water for
electrons and protons, it's not correct to use the heavy water density.
Also, even though we have both tables for muons, currently we only load
the heavy water table, so we hardcode the density to that of heavy water.
In the future, it would be nice to load both tables and use the correct
one depending on whether we are fitting in the heavy or light water.
|
To calculate the expected number of photons from reflected light we now
integrate over the track and use the PMT response table to calculate what
fraction of the light is reflected. Previously we were just using a
constant fraction of the total detected light, which was faster since we
only had to integrate over the track once, but the new approach should be
more accurate.
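Schematically, the new calculation looks like the sketch below
(hypothetical names; pmt_response_reflected() stands in for the PMT
response table lookup):

    #include <stddef.h>

    double pmt_response_reflected(double cos_theta); /* table lookup, defined elsewhere */

    /* Expected reflected light at one PMT: integrate the photon flux
     * along the track, weighting each point by the fraction of light
     * the PMT response table says is reflected at that incidence angle. */
    double reflected_expectation(size_t n, double step, const double *flux,
                                 const double *cos_theta_pmt)
    {
        double total = 0.0;
        for (size_t i = 0; i < n; i++)
            total += flux[i]*pmt_response_reflected(cos_theta_pmt[i])*step;
        return total;
    }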
|
This commit updates the CHARGE_FRACTION value to represent,
approximately, the fraction of light reflected from each PMT. It also
tunes the value to be closer to the true value based on a couple of fits.
|
Previously, to avoid computing P(q,t|n)*P(n|mu) for large n when those
terms were very unlikely, I was using a precomputed maximum n based only
on the expected number of PE. However, this didn't take into account
P(q|n).
This commit updates the likelihood function to dynamically decide when to
stop computing these probabilities: we quit when the probability for a
given n divided by the most likely probability drops below some
threshold.
This threshold is currently set to 10**(-10), which means we quit
calculating these probabilities once they are 10 billion times less
likely than the most probable value.
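Schematically (a sketch; p_qt_given_n() and p_n_given_mu() stand in for
the actual probability functions):

    #define RATIO_CUTOFF 1e-10

    double p_qt_given_n(double q, double t, int n); /* defined elsewhere */
    double p_n_given_mu(int n, double mu);          /* defined elsewhere */

    /* Sum P(q,t|n)*P(n|mu) over n, quitting once the terms drop below
     * RATIO_CUTOFF times the largest term seen so far. */
    double hit_probability(double q, double t, double mu, int nmax)
    {
        double total = 0.0, best = 0.0;
        for (int n = 1; n <= nmax; n++) {
            double p = p_qt_given_n(q, t, n)*p_n_given_mu(n, mu);
            total += p;
            if (p > best)
                best = p;
            else if (best > 0 && p < RATIO_CUTOFF*best)
                break; /* remaining terms are negligible */
        }
        return total;
    }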
|
This commit adds a fast function to calculate the expected number of PE
at a PMT without numerically integrating over the track. This calculation
is *much* faster than integrating over the track (~30 ms compared to
several seconds), so we use it during the "quick" minimization phase of
the fit to rapidly find the best position.
|
This commit updates the likelihood fit to use the KL (Karhunen-Loeve)
path expansion. Currently, I'm just using one coefficient each for the
path in x and y.
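Schematically, with one coefficient per transverse direction, the track
displacement looks like the sketch below (the eigenfunction shown is the
leading sinusoidal mode of a Wiener-process KL expansion, which may
differ from the exact mode used here):

    #include <math.h>

    /* Leading eigenfunction of the Karhunen-Loeve expansion of a Wiener
     * process on [0, L]; the exact mode and scale depend on the
     * multiple-scattering model. */
    static double phi1(double s, double L)
    {
        return sqrt(2.0/L)*sin(0.5*M_PI*s/L);
    }

    /* Transverse displacement of the track at path length s, truncated
     * to a single KL coefficient in each of x and y. */
    void kl_path_displacement(double s, double L, double scale,
                              double theta_x, double theta_y,
                              double *dx, double *dy)
    {
        *dx = theta_x*scale*phi1(s, L);
        *dy = theta_y*scale*phi1(s, L);
    }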
|
The RMS scattering angle calculation comes from Equation 33.15 in the PDG
article on the passage of particles through matter. It's not entirely
obvious whether this equation is correct for a long track. It seems like
it should be
integrated along the track to add up the contributions at different energies,
but it's not obvious how to do that with the log term.
In any case, the way I was previously calculating it (by using the momentum and
velocity at each point along the track) was definitely wrong.
I will try this out and perhaps try to integrate it later.
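For reference, a sketch of one common PDG form of that equation (the
Highland/Lynch-Dahl formula; the exact form and equation number vary
between PDG editions):

    #include <math.h>

    /* RMS projected multiple-scattering angle in radians. p is the
     * momentum in MeV/c, beta the velocity in units of c, z the charge
     * number, and x_over_X0 the path length in radiation lengths. The
     * log term is what makes a naive point-by-point evaluation along
     * the track ambiguous. */
    double highland_theta0(double p, double beta, double z, double x_over_X0)
    {
        return (13.6/(beta*p))*z*sqrt(x_over_X0)
               *(1.0 + 0.038*log(x_over_X0));
    }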