path coefficients
Previously I was adding the log likelihood of the path coefficients instead of
the *negative* log likelihood! When fitting electrons this would sometimes
cause the fit to become unstable and keep increasing the path coefficients
without bound, since the gain in the likelihood from increasing the
coefficients outweighed the loss from a worse fit to the PMT data.
Doh!
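The fix just flips the sign of that term in the objective. A minimal sketch,
assuming the fit minimizes a total negative log likelihood and the KL path
coefficients have independent standard normal priors (the function and
variable names here are illustrative, not the actual code):

    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Negative log likelihood contribution of the path coefficients
     * z[0..n-1], assuming independent standard normal priors. Adding this
     * to the total (instead of its negative) penalizes large coefficients
     * rather than rewarding them. */
    static double nll_path_coefficients(const double *z, size_t n)
    {
        double nll = 0.0;
        size_t i;

        for (i = 0; i < n; i++)
            nll += 0.5*z[i]*z[i] + 0.5*log(2*M_PI);

        return nll;
    }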
|
|
This commit fixes a bug in the calculation of the average rms width of the
angular distribution for a path with a KL expansion. I also made a lot of
updates to the test-path program:
- plot the distribution of the KL expansion coefficients
- plot the standard deviation of the angular distribution as a function of
distance along with the prediction
- plot the simulated and reconstructed path in 3D
|
|
This commit adds a function called get_path_length() which computes the path
length inside and outside a sphere for a line segment between two points. This
will be useful for calculating the photon absorption for paths which cross the
AV and for computing the time of flight of photons from a track to a PMT.
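One way to compute this is to intersect the infinite line with the sphere and
clip the intersection interval to the segment. A minimal sketch, assuming a
sphere of radius R centered at the origin (the argument names and signature
are illustrative, not necessarily those of get_path_length()):

    #include <math.h>

    /* Path length of the segment pos1 -> pos2 inside and outside a sphere
     * of radius R centered at the origin. The two lengths sum to the
     * segment length. */
    static void path_length_sphere(const double pos1[3], const double pos2[3],
                                   double R, double *inside, double *outside)
    {
        double d[3] = {pos2[0]-pos1[0], pos2[1]-pos1[1], pos2[2]-pos1[2]};
        double len2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        double len = sqrt(len2);

        /* Solve |pos1 + t*d|^2 = R^2 as a quadratic a*t^2 + b*t + c = 0. */
        double a = len2;
        double b = 2*(pos1[0]*d[0] + pos1[1]*d[1] + pos1[2]*d[2]);
        double c = pos1[0]*pos1[0] + pos1[1]*pos1[1] + pos1[2]*pos1[2] - R*R;
        double disc = b*b - 4*a*c;

        if (disc <= 0 || a == 0) {
            /* Line misses (or just grazes) the sphere. */
            *inside = 0.0;
            *outside = len;
            return;
        }

        double t1 = (-b - sqrt(disc))/(2*a);
        double t2 = (-b + sqrt(disc))/(2*a);

        /* Clip the intersection interval to the segment [0,1]. */
        if (t1 < 0) t1 = 0;
        if (t2 > 1) t2 = 1;

        *inside = (t2 > t1) ? (t2 - t1)*len : 0.0;
        *outside = len - *inside;
    }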
|
|
This commit adds the function ln() to compute log(n) for integer n. It uses a
lookup table for n < 100 to speed things up.
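A minimal sketch of that kind of table-based ln() (the fill and fallback
details may differ from the actual code):

    #include <math.h>

    #define LN_TABLE_SIZE 100

    /* log(n) for integer n, using a table for n < 100 and falling back to
     * log() otherwise. The table is filled on the first call. */
    double ln(unsigned int n)
    {
        static double table[LN_TABLE_SIZE];
        static int initialized = 0;
        int i;

        if (!initialized) {
            table[0] = -INFINITY;   /* log(0) is undefined; flag it */
            for (i = 1; i < LN_TABLE_SIZE; i++)
                table[i] = log(i);
            initialized = 1;
        }

        if (n < LN_TABLE_SIZE)
            return table[n];

        return log(n);
    }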
|
|
This commit adds a fast function to calculate the expected number of PE at a
PMT without numerically integrating over the track. This calculation is *much*
faster than integrating over the track (~30 ms compared to several seconds) and
so we use it during the "quick" minimization phase of the fit to quickly find
the best position.
|
|
For some reason the fit seems to have trouble with the kinetic energy.
Basically, it seems to "converge" even though running the minimization again
finds a better minimum with a lower energy. I think this is likely because for
muons the kinetic energy only really affects the range of the muon, and the
range is subject to error in the numerical integration.
I also thought that it might be due to roundoff error in the likelihood
calculation, so I implemented Kahan summation to try and reduce that. No idea
if it's actually improving things, but I should benchmark it later to see.
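For reference, Kahan (compensated) summation carries a correction term that
recovers the low-order bits lost when a small term is added to a large running
sum. A minimal sketch (the actual likelihood code presumably accumulates terms
inside its own loop rather than from an array):

    #include <stddef.h>

    /* Compensated (Kahan) sum of n doubles. c accumulates the rounding
     * error lost in each addition and feeds it back into the next term. */
    double kahan_sum(const double *x, size_t n)
    {
        double sum = 0.0;
        double c = 0.0;
        size_t i;

        for (i = 0; i < n; i++) {
            double y = x[i] - c;
            double t = sum + y;
            c = (t - sum) - y;  /* algebraically zero; holds the rounding error */
            sum = t;
        }

        return sum;
    }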
|
|
spaced
|