This commit speeds up the likelihood calculation by returning zero early if the
angle between the PMT and the track is far from the Cerenkov angle.
Specifically, we check whether the angle is more than 5 "standard deviations"
away, where the standard deviation is taken to be the RMS width of the angular
distribution.
To characterize the angular distribution of photons from an electromagnetic
shower I came up with the following functional form:
f(cos_theta) ~ exp(-abs(cos_theta-mu)^alpha/beta)
and fit this to data simulated using RAT-PAC at several different energies. I
then fit the alpha and beta coefficients as functions of energy to the
functional forms:
alpha = c0 + c1/log(c2*T0 + c3)
beta = c0 + c1/log(c2*T0 + c3).
where T0 is the initial energy of the electron in MeV and c0, c1, c2, and c3
are parameters which I fit.
The longitudinal distribution of the photons generated from an electromagnetic
shower is described by a gamma distribution:
f(x) = x**(a-1)*exp(-x/b)/(Gamma(a)*b**a).
This parameterization comes from the PDG "Passage of particles through matter"
section 32.5. I also fit the data from my RAT-PAC simulation, but currently I
am not using that fit; instead I calculate the coefficients with a simpler form
from the PDG (although I estimated the b parameter from the RAT-PAC data).
I also sped up the calculation of the solid angle by making a lookup table
since it was taking a significant fraction of the time to compute the
likelihood function.
things up
sqrt(1-pow(cos_theta,2))
I noticed when fitting electrons that the cquad integration routine was not
very stable, i.e. it would return different results for *very* small changes in
the fit parameters which would cause the fit to stall.
Since it's very important for the minimizer that the likelihood function not
jump around, I am switching to integrating over the path by just using a fixed
number of points and using the trapezoidal rule. This seems to be a lot more
stable, and as a bonus I was able to combine the three integrals (direct
charge, indirect charge, and time) so that we only have to do a single loop.
This should hopefully make the speed comparable since the cquad routine was
fairly effective at only using as many function evaluations as needed.
Another benefit to this approach is that if needed, it will be easier to port
to a GPU.
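
The combined fixed-grid integration can be sketched as below. The integrand signatures and the number of points are illustrative assumptions; the point is that one loop accumulates all three trapezoidal sums:

```c
#define N_POINTS 100  /* fixed number of evaluation points, illustrative */

/* Replace the adaptive cquad integration with a fixed-grid trapezoidal
 * rule, accumulating the three integrands (direct charge, indirect
 * charge, and time) in a single loop over the track. */
static void integrate_track(double a, double b,
                            double (*f_direct)(double),
                            double (*f_indirect)(double),
                            double (*f_time)(double),
                            double *direct, double *indirect, double *time)
{
    int i;
    double x, w, h = (b - a)/(N_POINTS - 1);

    *direct = *indirect = *time = 0.0;

    for (i = 0; i < N_POINTS; i++) {
        x = a + i*h;
        /* trapezoidal weights: endpoints count half */
        w = (i == 0 || i == N_POINTS - 1) ? 0.5*h : h;
        *direct   += w*f_direct(x);
        *indirect += w*f_indirect(x);
        *time     += w*f_time(x);
    }
}

/* stand-in integrands, for demonstration only */
static double sq(double x)  { return x*x; }
static double lin(double x) { return x; }
static double one(double x) { (void) x; return 1.0; }
```

Because every evaluation uses the same fixed grid, tiny changes in the fit parameters produce correspondingly tiny changes in the integrals, which is exactly the smoothness the minimizer needs.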
This commit fixes a bug which was double counting the PMT response when
computing the direct charge and incorrectly multiplying the reflected charge by
the PMT response. I think this was just a typo left in when I added the
reflected charge.
This commit updates path_init() to check that beta > 0 before dividing by it to
compute the time. Previously when fitting electrons it would occasionally
divide by zero which would cause the inf to propagate all the way through the
likelihood function.
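
The fix amounts to a guard like the following sketch (the function name and the cm/ns units are illustrative, not the actual path_init() code):

```c
#define SPEED_OF_LIGHT 29.9792458 /* cm/ns, illustrative unit choice */

/* Only accumulate the time step when beta is strictly positive, so a
 * stopped particle (beta == 0) doesn't produce an inf that propagates
 * through the likelihood. */
static double step_time(double t, double ds, double beta)
{
    if (beta > 0)
        t += ds/(beta*SPEED_OF_LIGHT);
    return t;
}
```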
Occasionally when fitting electrons the kinetic energy at the last step would
be high enough that the electron never crossed the BETA_MIN threshold which
would cause the GSL routine to throw an error.
This commit updates particle_init() to set the kinetic energy at the last
step to zero to make sure that we can bisect the point along the track where
the speed drops to BETA_MIN.
Since we only have the range and dE/dx tables for light water for electrons and
protons it's not correct to use the heavy water density. Also, even though we
have both tables for muons, currently we only load the heavy water table, so we
hardcode the density to that of heavy water.
In the future, it would be nice to load both tables and use the correct one
depending on whether we are fitting in the heavy or light water.
path coefficients
Previously I was adding the log likelihood of the path coefficients instead of
the *negative* log likelihood! When fitting electrons this would sometimes
cause the fit to become unstable and continue increasing the path coefficients
without bound since the gain in the likelihood caused by increasing the
coefficients was more than the loss caused by a worse fit to the PMT data.
Doh!
Previously I was using the radiation length in light water but scaling it by
the density of heavy water, which isn't correct. Since the radiation length in
heavy and light water is almost identical, we just use the radiation length in
light water.
This commit fixes a bug in the calculation of the average rms width of the
angular distribution for a path with a KL expansion. I also made a lot of
updates to the test-path program:
- plot the distribution of the KL expansion coefficients
- plot the standard deviation of the angular distribution as a function of
distance along with the prediction
- plot the simulated and reconstructed path in 3D
This speeds up the "boot up" time from ~30 seconds to ~12 seconds.
This commit updates the PMT response bank and absorption lengths for H2O and
D2O based on the values in mcprod which were used for the LETA analysis.
Doh! I was previously using the tube number variable!
Previously we ignored PMTs which were flagged when computing the expected
number of PE for each PMT, but since we calculate the amount of reflected light
here we need to include even PMTs which are offline (since they still reflect
light).
To calculate the expected number of photons from reflected light we now
integrate over the track and use the PMT response table to calculate what
fraction of the light is reflected. Previously we were just using a constant
fraction of the total detected light which was faster since we only had to
integrate over the track once, but this should be more accurate.
Previously I was interpolating the absorption lengths using interp1d() but that
only works when the x array is uniform. Since the wavelengths are not spaced
uniformly, we have to use the GSL interpolation routines.
This commit updates the fast likelihood calculation to use the identity
sin(a-b) = sin(a)*cos(b) - cos(a)*sin(b)
to speed it up.
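
The identity lets the inner loop reuse precomputed sines and cosines instead of calling sin() on each angle difference, e.g.:

```c
/* sin(a-b) from precomputed sin/cos of a and b, avoiding a sin() call
 * per PMT in the inner loop. */
static double sin_diff(double sin_a, double cos_a, double sin_b, double cos_b)
{
    return sin_a*cos_b - cos_a*sin_b;
}
```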
This commit speeds up the fast likelihood calculation by avoiding calls to
trigonometric functions where possible. Specifically we calculate
sin(a) = sqrt(1-pow(cos(a),2));
instead of
sin(a) = sin(acos(cos(a)));
Currently the PDF for scattered light is modelled as a flat distribution
starting at some time t. Previously I was using the mean hit time for all PMTs,
however this should really be a flat distribution in the time *residual* after
the main peak. Therefore, the PDF now starts at the estimated time for direct
photons.
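
In sketch form (the function name and the window length are illustrative; the source only specifies that the distribution is flat and starts at the estimated direct-photon time):

```c
/* Scattered-light time PDF: flat in the time residual after the
 * estimated direct-photon arrival t_direct, over a window of length
 * `window`, and normalized to integrate to 1. */
static double scattered_pdf(double t, double t_direct, double window)
{
    if (t < t_direct || t > t_direct + window)
        return 0.0;
    return 1.0/window;
}
```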
I accidentally hardcoded the single PE TTS to 1.5 ns in the likelihood
calculation.
This commit updates the bounds of the track integration in the likelihood
function to integrate up to 1 meter around the point at which the PMT is at the
Cerenkov angle from the track.
This fixes an issue I was seeing where a *very* small change in the fit
parameters would cause the likelihood to jump by a large amount. I eventually
tracked it down to the same issue I was seeing before which I solved by
splitting up the integration into two intervals. However that fix did not seem
to completely fix the issue. Based on initial tests with 500 MeV muons, this
fix seems to do a much better job.