I noticed when fitting electrons that the cquad integration routine was not
very stable, i.e., it would return different results for *very* small changes
in the fit parameters, which would cause the fit to stall.
Since it's very important for the minimizer that the likelihood function not
jump around, I am switching to integrating over the path by just using a fixed
number of points and using the trapezoidal rule. This seems to be a lot more
stable, and as a bonus I was able to combine the three integrals (direct
charge, indirect charge, and time) so that we only have to do a single loop.
This should hopefully make the speed comparable since the cquad routine was
fairly effective at only using as many function evaluations as needed.
Another benefit to this approach is that if needed, it will be easier to port
to a GPU.
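The fixed-grid scheme can be sketched as follows; `trapz` and the `square` integrand are hypothetical names for illustration, and the real code evaluates all three integrands (direct charge, indirect charge, and time) inside the same loop:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Trapezoidal rule over a fixed grid of `n` points on [a,b]. */
static double trapz(double (*f)(double), double a, double b, size_t n)
{
    double h = (b - a) / (double)(n - 1);
    double sum = 0.5 * (f(a) + f(b));

    for (size_t i = 1; i < n - 1; i++)
        sum += f(a + h * (double)i);

    return h * sum;
}

/* Example integrand, for illustration only. */
static double square(double x)
{
    return x * x;
}
```

Unlike cquad, the cost here is fixed and the result varies smoothly with the endpoints, which is what the minimizer needs.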
|
|
This commit fixes a bug which was double counting the PMT response when
computing the direct charge and incorrectly multiplying the reflected charge by
the PMT response. I think this was just a typo left in when I added the
reflected charge.
|
|
Occasionally when fitting electrons, the kinetic energy at the last step would
be high enough that the electron never crossed the BETA_MIN threshold, which
would cause the GSL routine to throw an error.
This commit updates particle_init() to set the kinetic energy at the last
step to zero to make sure that we can bisect the point along the track where
the speed drops to BETA_MIN.
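The bisection this enables can be sketched generically; the `BETA_MIN` value and the linear `beta_linear` profile below are made up for illustration (the real code presumably hands the actual beta(s) to a GSL root-bracketing routine):

```c
#include <assert.h>
#include <math.h>

#define BETA_MIN 0.8 /* hypothetical threshold, for illustration */

/* Bisect for the point s in [lo,hi] where beta(s) crosses BETA_MIN,
 * assuming beta(lo) >= BETA_MIN >= beta(hi) -- which is guaranteed
 * once the kinetic energy (and hence beta) is forced to zero at the
 * last step. */
static double bisect_beta(double (*beta)(double), double lo, double hi,
                          int iters)
{
    for (int i = 0; i < iters; i++) {
        double mid = 0.5 * (lo + hi);
        if (beta(mid) >= BETA_MIN)
            lo = mid;
        else
            hi = mid;
    }
    return 0.5 * (lo + hi);
}

/* Example beta profile, for illustration only: crosses 0.8 at s = 0.2. */
static double beta_linear(double s)
{
    return 1.0 - s;
}
```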
|
|
Since we only have the range and dE/dx tables for light water for electrons and
protons, it's not correct to use the heavy water density. Also, even though we
have both tables for muons, currently we only load the heavy water table, so we
hardcode the density to that of heavy water.
In the future, it would be nice to load both tables and use the correct one
depending on whether we are fitting in the heavy or light water.
|
|
path coefficients
Previously I was adding the log likelihood of the path coefficients instead of
the *negative* log likelihood! When fitting electrons this would sometimes
cause the fit to become unstable and continue increasing the path coefficients
without bound since the gain in the likelihood caused by increasing the
coefficients was more than the loss caused by a worse fit to the PMT data.
Doh!
|
|
Previously I was using the radiation length in light water but scaling it by
the density of heavy water, which isn't correct. Since the radiation length in
heavy and light water is almost identical, we just use the radiation length in
light water.
|
|
Previously we ignored PMTs which were flagged when computing the expected
number of PE for each PMT, but since we calculate the amount of reflected light
here we need to include even PMTs which are offline (since they still reflect
light).
|
|
To calculate the expected number of photons from reflected light we now
integrate over the track and use the PMT response table to calculate what
fraction of the light is reflected. Previously we were just using a constant
fraction of the total detected light which was faster since we only had to
integrate over the track once, but this should be more accurate.
|
|
This commit speeds up the fast likelihood calculation by using the identity
sin(a-b) = sin(a)*cos(b) - cos(a)*sin(b)
|
|
This commit speeds up the fast likelihood calculation by avoiding calls to
trigonometric functions where possible. Specifically we calculate
sin(a) = sqrt(1-pow(cos(a),2));
instead of
sin(a) = sin(acos(cos(a)));
|
|
Currently the PDF for scattered light is modelled as a flat distribution
starting at some time t. Previously I was using the mean hit time for all PMTs,
however this should really be a flat distribution in the time *residual* after
the main peak. Therefore, the PDF now starts at the estimated time for direct
photons.
|
|
I accidentally hardcoded the single PE TTS to 1.5 ns in the likelihood
calculation.
|
|
This commit updates the bounds of the track integration in the likelihood
function to integrate up to 1 meter around the point at which the PMT is at the
Cerenkov angle from the track.
This fixes an issue I was seeing where a *very* small change in the fit
parameters would cause the likelihood to jump by a large amount. I eventually
tracked it down to the same issue I was seeing before which I solved by
splitting up the integration into two intervals. However that fix did not seem
to completely fix the issue. Based on initial tests with 500 MeV muons, this
fix seems to do a much better job.
|
|
This commit updates the likelihood calculation to split up the track integral
into two intervals in some cases. I noticed when fitting some events that the
likelihood value would change drastically for a very small change in the fit
parameters. I eventually tracked it down to the fact that the track integral
was occasionally returning a very small charge for a PMT which should have a
very high charge. This was happening because the region of the track which was
hitting the PMT was very small and the cquad integration routine was completely
skipping it.
The solution to this problem is a bit of a hack, but it seems to work. I first
calculate where along the track (for a straight track) the PMT would be at the
Cerenkov angle from the track. If this point is somewhere along the track then
we split up the integral into two intervals: one going from the start of the
track to this point and the other from the point to the end of the track. Since
the cquad routine always samples points near the end of the intervals this
should prevent it from completely skipping over the point in the track where
the integrand is non-zero.
|
|
For some reason the OWL tubes have 9999.00 for the x, y, and z coordinates of
the normal vector in the PMT file. For now, I'm just going to remove them from
the likelihood calculation.
|
|
This commit updates the CHARGE_FRACTION value to now represent approximately
the fraction of light reflected from each PMT. It also updates the value to be
closer to the true value based on a couple of fits.
|
|
get_path_length()
|
|
This commit updates the calculation of the muon kinetic energy as a function of
distance along the track. Previously I was using an approximation from the PDG,
but it doesn't seem to be very accurate and won't generalize to the case of
electrons. The kinetic energy is now calculated using the tabulated values of
dE/dx as a function of energy.
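The table-driven scheme can be sketched with a simple forward-Euler step; `kinetic_energy_at` is a hypothetical name, and `dedx_const` stands in for interpolation into the tabulated dE/dx values:

```c
#include <assert.h>
#include <math.h>

/* Kinetic energy (MeV) after travelling a distance `s` (cm) along the
 * track, starting from T0, given a stopping power dEdx(T) in MeV/cm.
 * Clamped at zero once the particle has stopped. */
static double kinetic_energy_at(double T0, double s, double step,
                                double (*dEdx)(double))
{
    double T = T0;

    for (double x = 0.0; x < s && T > 0.0; x += step)
        T -= dEdx(T) * step;

    return T > 0.0 ? T : 0.0;
}

/* Constant stopping power, for illustration only. */
static double dedx_const(double T)
{
    (void)T;
    return 2.0; /* MeV/cm */
}
```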
|
|
This commit adds the function ln() to compute log(n) for integer n. It uses a
lookup table for n < 100 to speed things up.
|
|
Previously, to avoid computing P(q,t|n)*P(n|mu) for large n when they were very
unlikely, I was using a precomputed maximum n value based only on the expected
number of PE. However, this didn't take into account P(q|n).
This commit updates the likelihood function to dynamically decide when to quit
computing these probabilities when the probability for a given n divided by the
most likely probability is less than some threshold.
This threshold is currently set to 10**(-10), which means we quit calculating
these probabilities when the probability is 10 billion times less likely than
the most probable value.
|
|
This commit adds a fast function to calculate the expected number of PE at a
PMT without numerically integrating over the track. This calculation is *much*
faster than integrating over the track (~30 ms compared to several seconds) and
so we use it during the "quick" minimization phase of the fit to quickly find
the best position.
|
|
For some reason the fit seems to have trouble with the kinetic energy.
Basically, it seems to "converge" even though when you run the minimization
again it finds a better minimum with a lower energy. I think this is likely due
to the fact that for muons the kinetic energy only really affects the range of
the muon and this is subject to error in the numerical integration.
I also thought that maybe it could be due to roundoff error in the likelihood
calculation, so I implemented Kahan summation to try and reduce that. No
idea if it's actually improving things, but I should benchmark it later to see.
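For reference, Kahan (compensated) summation carries a running correction for the low-order bits lost when each term is added; a minimal sketch:

```c
#include <assert.h>
#include <math.h>

/* Kahan compensated summation of x[0..n-1]. */
static double kahan_sum(const double *x, int n)
{
    double sum = 0.0, c = 0.0;

    for (int i = 0; i < n; i++) {
        double y = x[i] - c;  /* next term, corrected              */
        double t = sum + y;   /* low-order bits of y are lost...   */
        c = (t - sum) - y;    /* ...and recovered into c here      */
        sum = t;
    }
    return sum;
}
```

Note that aggressive floating-point optimization flags (e.g. -ffast-math) can optimize the compensation away, so the file doing the summation must be compiled without them.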
|
|
I found when simulating high energy muons that the expected charge for some
PMTs which should be getting hit was zero. The reason for this is that the
integrand was very sharply peaked at the Cerenkov angle which makes it
difficult to integrate for numerical integration routines like cquad. To solve
this I split up the integral at the point when the track was at the Cerenkov
angle from the PMT to make sure that cquad didn't miss the peak. However,
calling cquad twice takes a lot of time so it's not necessarily good to do this
for all fits. Also, it's not obvious if it is necessary any more now that the
angular distribution calculation was fixed.
I think the real reason that cquad was missing those integrals was that for a
high energy muon the range is going to be very large (approximately 40 meters
for a 10 GeV muon). In this case, I should really only integrate up to the edge
of the cavity or PSUP and hopefully cquad picks enough points in there to get a
non-zero value.
I also added a check to only compute tmean when at least one PMT has a valid
time. This prevents a divide by zero which causes the likelihood function to
return nan.
|
|
This commit updates the likelihood fit to use the KL (Karhunen-Loeve) path
expansion. Currently, I'm just using one coefficient for the path in both x and y.
|
|
refraction
|
|
The RMS scattering angle calculation comes from Equation 33.15 in the PDG
article on the passage of particles through matter. It's not entirely obvious
if this equation is correct for a long track. It seems like it should be
integrated along the track to add up the contributions at different energies,
but it's not obvious how to do that with the log term.
In any case, the way I was previously calculating it (by using the momentum and
velocity at each point along the track) was definitely wrong.
I will try this out and perhaps try to integrate it later.