This commit updates the time estimate in nll_best() to take into account the
fact that for PMTs hit by many photons, the expected time distribution is the
first order statistic of the overall photon hit time distribution.
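For reference: if a PMT is hit by n photons whose individual hit times are
i.i.d. with PDF f(t) and CDF F(t), the earliest hit time has PDF
n*f(t)*(1 - F(t))^(n-1). A minimal sketch in C, where the pdf and cdf
callbacks are placeholders rather than functions from this codebase:

    #include <math.h>

    /* PDF of the first order statistic (earliest hit time) of n i.i.d.
     * hit times with PDF pdf() and CDF cdf(). Both callbacks are
     * hypothetical placeholders. */
    static double first_order_statistic(double t, int n,
        double (*pdf)(double), double (*cdf)(double))
    {
        return n*pdf(t)*pow(1 - cdf(t), n - 1);
    }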
Also, call this function when computing the psi parameter in nll_best().
No reason to use fast_sqrt() in get_theta0_min() since the argument isn't
guaranteed to be between 0 and 1.
This commit makes a few small changes to try to reduce the number of divisions
and multiplications done in get_expected_charge(), to speed up the likelihood
function.
Since we already calculate sin(theta) in get_expected_charge() there's no
reason to calculate it again in get_probability(). This *may* already be
optimized out by the compiler.
Also, delete a check on cos_theta_pmt which isn't necessary.
Since we now calculate the expected charge from shower photons for muons we
need to initialize the angular distribution and a few other things in
particle_init().
As with electrons, I fit an analytic form to the ratio of the number of
photons produced by shower particles to the radiative energy loss. In this
case, I chose the functional form:
    ratio = a*(1-exp(-T/b))
since the ratio seemed to reach a constant value above a certain energy. I
then simulated a 10 GeV muon, and it appears that the ratio might actually
decrease beyond that, so for higher energies I may have to come up with a
different fit function.
delta rays
This commit introduces a new method for integrating over the particle track to
calculate the number of shower and delta ray photons expected at each PMT. The
reason for introducing a new method was that the previous approach of just
using the trapezoidal rule was both inaccurate and unstable. By inaccurate I
mean that the trapezoidal rule was not producing a good estimate of the true
integral, and by unstable I mean that small changes in the fit parameters
(like theta and phi) could produce wildly different results. This made the
likelihood function very noisy and prevented the minimizers from finding the
global minimum.
The new integration method works *much* better than the trapezoidal rule for
the specific functions we are dealing with. The problem is essentially to
integrate the product of two functions over some interval, one of which is
very "peaky", i.e. we want to find:
    \int f(x) g(x) dx
where f(x) is peaked around some region and g(x) is relatively smooth. In our
case, f(x) represents the angular distribution of the Cerenkov light and g(x)
represents factors like solid angle, absorption, etc.
The technique I discovered is that you can approximate this integral with the
discrete sum:
    constant*\sum_i g(x_i)
where the x_i are chosen to be equally spaced in the cumulative integral of
f(x), i.e.
    x_i = F^(-1)(i*constant)
where F(x) is the cumulative integral of f(x) and the constant is the total
integral of f(x) divided by the number of points. This new method produces
likelihood functions which are *much* smoother and more accurate than before.
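To make the recipe concrete, here is a minimal sketch of this quadrature,
assuming the cumulative integral F and its inverse are available (all names
are illustrative, not functions from this codebase):

    /* Approximate \int_a^b f(x) g(x) dx by sampling g at points which
     * are equally spaced in the cumulative integral F of f. F and
     * F_inv are hypothetical callbacks for F and its inverse. */
    static double integrate_product(double a, double b, int n,
        double (*g)(double), double (*F)(double),
        double (*F_inv)(double))
    {
        /* split the total weight of f over [a,b] into n equal slices */
        double constant = (F(b) - F(a))/n;
        double sum = 0.0;
        int i;

        for (i = 0; i < n; i++)
            /* midpoint of the i-th equal-weight slice */
            sum += g(F_inv(F(a) + (i + 0.5)*constant));

        return constant*sum;
    }

This is just the midpoint rule applied in the transformed variable u = F(x),
in which the integrand becomes g(F^(-1)(u)) and the peakiness of f disappears.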
In addition, there are a few other fixes in this commit:
- switch from specifying a step size for the shower integration to a number of
points, i.e. dx_shower -> number of shower points
- only integrate to the PSUP
I realized that previously we were integrating to the end of the track even
if the particle left the PSUP, and that there was no code to deal with the
fact that light emitted beyond the PSUP can't make it back to the PMTs.
- only integrate to the Cerenkov threshold
  When integrating over the particle track to calculate the expected number
  of direct Cerenkov photons, we now only integrate up to the point where
  the particle's speed (in units of c) drops to 1/index. This should make
  the likelihood smoother, because previously the estimate depended on
  whether the points at which we sampled the track fell above or below this
  threshold.
- add a minimum theta0 value based on the angular width of the PMT
When calculating the expected number of Cerenkov photons we assumed that
the angular distribution was constant over the whole PMT. This is a bad
assumption when the particle is very close to the PMT. Really we should
average the function over all the angles of the PMT, but that would be too
computationally expensive so instead we just calculate a minimum theta0
value which depends on the distance and angle to the PMT. This seems to
make the likelihood much smoother for particles near the PSUP.
- add a factor of sin(theta) when checking if we can skip calculating the
charge in get_expected_charge()
- fix a nan in beta_root() when the momentum is negative
- update PSUP_RADIUS from 800 cm -> 840 cm
This commit updates the code that calculates the number of Cerenkov photons
from secondary particles produced in electromagnetic showers from electrons to
use an energy-dependent formula I fit to data simulated with RAT-PAC.
This commit updates the charge likelihood calculation to calculate:
P(hit,q|n) = P(q|hit,n)*P(hit|n)
This has almost no effect on the fit results, but is technically correct.
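In log space the factorization is just a sum; a minimal sketch with
placeholder callbacks for however P(q|hit,n) and P(hit|n) are actually
computed:

    /* log P(hit,q|n) = log P(q|hit,n) + log P(hit|n). Both callbacks
     * are hypothetical placeholders. */
    static double log_p_hit_and_q(double q, int n,
        double (*log_p_q_given_hit)(double, int),
        double (*log_p_hit)(int))
    {
        return log_p_q_given_hit(q, n) + log_p_hit(n);
    }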
This commit updates the optics code to calculate the Rayleigh scattering
length using the Einstein-Smoluchowski formula instead of the effective
Rayleigh scattering lengths from the RSPR bank.
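For reference, a sketch of one common form of the Einstein-Smoluchowski
result, assuming the Lorentz-Lorenz relation for the density derivative of the
dielectric constant and the Cabannes depolarization factor; the exact form
used here may differ:

    #include <math.h>

    #define KB 1.380649e-23 /* Boltzmann constant (J/K) */

    /* Rayleigh scattering length (m) for a photon of the given
     * wavelength (m) in a medium with index of refraction n,
     * temperature T (K), isothermal compressibility kappa (1/Pa), and
     * depolarization ratio delta. Illustrative only. */
    static double rayleigh_scattering_length(double wavelength, double n,
        double T, double kappa, double delta)
    {
        double lorentz = (n*n - 1)*(n*n + 2)/3;
        double beta = 8*pow(M_PI, 3)/(3*pow(wavelength, 4))*KB*T*kappa*
                      lorentz*lorentz*(6 + 3*delta)/(6 - 7*delta);

        return 1/beta;
    }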
Thanks clang!
Previously I was calculating the expected number of delta ray photons when
integrating over the shower path, but since the delta rays are produced along
the particle path and not further out like the shower photons, this wasn't
correct. The normalization of the probability distribution for the photons
produced along the path was also not handled correctly.
This commit adds a new function called integrate_path_delta_ray() to compute
the expected number of photons from delta rays hitting each PMT. Currently this
means that the likelihood function for muons will be significantly slower than
previously, but hopefully I can speed it up again in the future (for example by
skipping the shower calculation which is negligible for lower energy muons).
This commit speeds up the likelihood function by integrating the charge along
the track inline instead of creating an array and then calling trapz(). It also
introduces two global variables avg_index_d2o and avg_index_h2o which are the
average indices of refraction for D2O and H2O weighted by the PMT quantum
efficiency and the Cerenkov spectrum.
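A minimal sketch of how such an average might be computed, assuming the
quantum efficiency and index of refraction are tabulated on a common
wavelength grid and using the Frank-Tamm 1/wavelength^2 dependence for the
Cerenkov spectrum (names are illustrative):

    #include <stddef.h>

    /* Average index of refraction weighted by the PMT quantum
     * efficiency qe[] and the Cerenkov spectrum (~ 1/wavelength^2).
     * All arrays are tabulated on the same wavelength grid. */
    static double avg_index(size_t n, const double *wavelength,
        const double *qe, const double *index_of_refraction)
    {
        double num = 0.0, den = 0.0;
        size_t i;

        for (i = 0; i < n; i++) {
            double w = qe[i]/(wavelength[i]*wavelength[i]);
            num += w*index_of_refraction[i];
            den += w;
        }

        return num/den;
    }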
This commit speeds up the likelihood calculation by eliminating most calls to
acos(). This is done by updating the PMT response lookup tables to be a
function of the cosine of the angle between the photon and the PMT normal
instead of the angle itself.
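A minimal sketch of the idea with a table binned uniformly in cos(theta)
(illustrative, not the actual lookup code):

    #include <stddef.h>

    /* Look up the PMT response given the cosine of the angle between
     * the photon direction and the PMT normal. Since the table is
     * binned in cos(theta), the dot product of two unit vectors can be
     * used directly with no acos() call. */
    static double pmt_response(const double *table, size_t n,
        double cos_theta)
    {
        /* map cos_theta in [-1,1] onto bins 0..n-1 */
        size_t i = (size_t) ((cos_theta + 1)/2*(n - 1) + 0.5);

        return table[i];
    }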
Previously I was computing the fraction of light absorbed and scattered by
calculating an average absorption and scattering length weighted by the
Cerenkov spectrum and the PMT quantum efficiency, which isn't correct since we
should be averaging the absorption and scattering probabilities, not the
absorption and scattering lengths.
This commit fixes this by instead computing the average probability that a
photon is absorbed or scattered as a function of the distance travelled by
integrating the absorption and scattering probabilities over all wavelengths
weighted by the PMT quantum efficiency and the Cerenkov spectrum.
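A minimal sketch of the corrected average, with illustrative names; the key
point is that the exponentials are averaged, not the lengths:

    #include <math.h>
    #include <stddef.h>

    /* Spectrum-weighted probability that a photon travels a distance d
     * without being absorbed. w[] is the product of the quantum
     * efficiency and the Cerenkov spectrum on some wavelength grid, and
     * l[] is the absorption length at each wavelength. Note that this
     * is *not* the same as exp(-d/average_length). */
    static double survival_probability(double d, size_t n,
        const double *w, const double *l)
    {
        double num = 0.0, den = 0.0;
        size_t i;

        for (i = 0; i < n; i++) {
            num += w[i]*exp(-d/l[i]);
            den += w[i];
        }

        return num/den;
    }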
This is so that in the future, if we only integrate over the path inside the
PSUP, we don't overestimate the Cerenkov light from delta rays.
This commit updates the likelihood function to take into account Cerenkov light
produced from delta rays produced by muons. The angular distribution of this
light is currently assumed to be constant along the track and parameterized in
the same way as the Cerenkov light from an electromagnetic shower. Currently I
assume the light is produced uniformly along the track which isn't exactly
correct, but should be good enough.
This commit adds a new function fit_event2() to fit multiple vertices. To seed
the fit, fit_event2() does the following:
- use the QUAD fitter to find the position and initial time of the event
- call find_peaks() to find possible directions for the particles
- loop over all possible unique combinations of the particles and direction
vectors and do a "fast" minimization
The best minimum found from the "fast" minimizations is then used to start the fit.
This commit has a few other updates:
- adds a hit_only parameter to the nll() function. This was necessary since
previously PMTs which weren't hit were always skipped for the fast
minimization, but when fitting for multiple vertices we need to include PMTs
which aren't hit since we float the energy.
- add the function guess_energy() to guess the energy of a particle given a
  position and direction. This function estimates the energy by summing the
  QHS for all hit PMTs within the Cerenkov cone and dividing by 6 (see the
  sketch after this list).
- fixed a bug which caused the fit to freeze when hitting ctrl-c during the
fast minimization phase.
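A minimal sketch of the guess_energy() idea described in the list above, with
illustrative data structures and an assumed ~42 degree Cerenkov opening angle:

    #include <math.h>
    #include <stddef.h>

    /* Guess the energy by summing the QHS charge of all hit PMTs
     * within the Cerenkov cone around dir and dividing by 6. The data
     * layout and the 42 degree opening angle are assumptions. */
    static double guess_energy_sketch(size_t n, const double pmt_pos[][3],
        const double *qhs, const int *hit, const double pos[3],
        const double dir[3])
    {
        double cos_cone = cos(42*M_PI/180);
        double sum = 0.0;
        size_t i;

        for (i = 0; i < n; i++) {
            double r[3], norm;

            if (!hit[i]) continue;

            r[0] = pmt_pos[i][0] - pos[0];
            r[1] = pmt_pos[i][1] - pos[1];
            r[2] = pmt_pos[i][2] - pos[2];
            norm = sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);

            /* keep PMTs within the Cerenkov cone around dir */
            if ((r[0]*dir[0] + r[1]*dir[1] + r[2]*dir[2])/norm > cos_cone)
                sum += qhs[i];
        }

        return sum/6.0;
    }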
This commit adds lots of comments to sno_charge.c and makes a couple of other
changes:
- use interp1d() instead of the GSL interpolation routines
- increase MAX_PE to 100
I increased MAX_PE because I determined that it had a rather large impact on
the likelihood function for 500 MeV electrons. This unfortunately slows down
the initialization by a lot. I think I could speed this up by convolving the
single PE charge distribution with a gaussian *before* convolving the charge
distributions to compute the charge distributions for multiple PE.
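For reference, a minimal sketch of building the multi-PE charge PDFs by
repeated discrete convolution of the single PE charge PDF on a uniform charge
grid (illustrative layout, not the actual sno_charge.c code):

    #include <stddef.h>

    /* Fill pdf[k] for k = 2..max_pe with the k-PE charge PDF, where
     * pdf[1] already holds the single PE charge PDF tabulated on a
     * uniform grid starting at zero with bin width dq. */
    static void build_pe_pdfs(double **pdf, size_t nbins, double dq,
        int max_pe)
    {
        int k;
        size_t i, j;

        for (k = 2; k <= max_pe; k++) {
            for (i = 0; i < nbins; i++) {
                double sum = 0.0;

                for (j = 0; j <= i; j++)
                    sum += pdf[k-1][j]*pdf[1][i-j];

                pdf[k][i] = sum*dq;
            }
        }
    }

Since the gaussian smearing of the electronics is the same for every PE count,
convolving it into pdf[1] first (as suggested above) would avoid smearing each
pdf[k] separately.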
This commit adds Rayleigh scattering to the likelihood function. The Rayleigh
scattering lengths come from rsp_rayleigh.dat from SNOMAN which only includes
photons which scattered +/- 10 ns around the prompt peak. The fraction of light
which scatters is treated the same in the likelihood as reflected light, i.e.
it is uniform across all the PMTs in the detector and the time PDF is assumed
to be a constant for a fixed amount of time after the prompt peak.
integral
This commit speeds up the fast likelihood calculation by only computing the
time PDF for a single photon. Since the majority of the time in the fast
likelihood calculation is spent computing the time PDF this should speed things
up by quite a bit. I suspect this won't have a big effect on the likelihood
value, but I should do some more testing.
This function is only used when the expected number of photons reaching a PMT
is *very* small. In this case, we still need to estimate the PMT hit time PDF
for indirect light, which is modelled as a flat distribution starting at the
time when the PMT is most likely to be hit by direct light. Since we compute
that most likely time as the integral of the expected charge times the time,
divided by the total charge, it can carry large errors when the total charge
is very small.
Note that this code already existed, but it was computed in the likelihood
function. This commit just moves it into its own function to make things look
nicer.
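A minimal sketch of the quantity being factored out, with illustrative names:

    #include <stddef.h>

    /* Charge-weighted mean arrival time of the expected direct light:
     * the integral of charge times time divided by the total charge.
     * When the total charge is tiny this ratio carries large errors,
     * which is why it is handled carefully for barely-hit PMTs. */
    static double mean_direct_time(size_t n, const double *t,
        const double *q)
    {
        double num = 0.0, den = 0.0;
        size_t i;

        for (i = 0; i < n; i++) {
            num += q[i]*t[i];
            den += q[i];
        }

        return num/den;
    }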
This commit speeds up the likelihood function by about 20% by using the
precomputed track positions, directions, times, etc. instead of interpolating
them on the fly.
It also switches to computing the number of points to integrate along the track
by dividing the track length by a specified distance, currently set to 1 cm.
This should hopefully speed things up for lower energies and result in more
stable fits at high energies.