author     tlatorre <tlatorre@uchicago.edu>   2019-05-13 10:04:05 -0500
committer  tlatorre <tlatorre@uchicago.edu>   2019-05-13 10:04:05 -0500
commit     ae2156d64e57ce4c976587d2ecab239c836ac8f0 (patch)
tree       ee1811d809f09a1cca277db94837bb9a30881eaa /src/misc.c
parent     119af4ffbfc5814394c97d9ee35e0caff6a90927 (diff)
update method for calculating expected number of photons from shower and delta rays
This commit introduces a new method for integrating over the particle track to calculate the number of shower and delta ray photons expected at each PMT. The reason for introducing a new method was that the previous method of just using the trapezoidal rule was both inaccurate and unstable. By inaccurate I mean that the trapezoidal rule was not producing a very good estimate of the true integral, and by unstable I mean that small changes in the fit parameters (like theta and phi) could produce wildly different results. This meant that the likelihood function was very noisy and was preventing the minimizers from finding the global minimum.

The new integration method works *much* better than the trapezoidal rule for the specific functions we are dealing with. The problem is essentially to integrate the product of two functions over some interval, one of which is very "peaky", i.e. we want to find:

    \int f(x) g(x) dx

where f(x) is peaked around some region and g(x) is relatively smooth. For our case, f(x) represents the angular distribution of the Cerenkov light and g(x) represents factors like solid angle, absorption, etc. The technique I discovered is that you can approximate this integral via a discrete sum:

    constant*\sum_i g(x_i)

where the x_i are chosen to have equal spacing along the range of the integral of f(x), i.e.

    x_i = F^(-1)(i*constant)

where F is the cumulative integral of f. This new method produces likelihood functions which are *much* smoother and more accurate than before (a small standalone sketch of the idea follows this commit message).

In addition, there are a few other fixes in this commit:

- switch from specifying a step size for the shower integration to a number of points, i.e. dx_shower -> number of shower points

- only integrate to the PSUP

  I realized that previously we were integrating to the end of the track even if the particle left the PSUP, and that there was no code to deal with the fact that light emitted beyond the PSUP can't make it back to the PMTs.

- only integrate up to the Cerenkov threshold

  When integrating over the particle track to calculate the expected number of direct Cerenkov photons, we now only integrate the track up to the point where the particle's velocity drops to 1/index. This should make the likelihood smoother, because previously the estimate depended on exactly whether the points at which we sampled the track were above or below this threshold.

- add a minimum theta0 value based on the angular width of the PMT

  When calculating the expected number of Cerenkov photons we assumed that the angular distribution was constant over the whole PMT. This is a bad assumption when the particle is very close to the PMT. Really we should average the function over all the angles subtended by the PMT, but that would be too computationally expensive, so instead we just calculate a minimum theta0 value which depends on the distance and angle to the PMT. This seems to make the likelihood much smoother for particles near the PSUP.

- add a factor of sin(theta) when checking if we can skip calculating the charge in get_expected_charge()

- fix a NaN in beta_root() when the momentum is negative

- update PSUP_RADIUS from 800 cm -> 840 cm
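Below is a small, self-contained sketch of the quadrature trick described above. Everything in it is a toy stand-in rather than code from this repository: f() is a made-up Lorentzian peak playing the role of the Cerenkov angular distribution, g() is cos(x) playing the role of the solid-angle/absorption factors, and quad_cdf()/quad_trap() are hypothetical names.

#include <stdio.h>
#include <stddef.h>
#include <math.h>

#define EPS 0.01 /* width of the toy peak */

/* Toy peaked function (Lorentzian centered at x = 0), standing in for the
 * Cerenkov angular distribution. */
static double f(double x) { return 1.0/(x*x + EPS*EPS); }

/* Cumulative integral of f and its inverse. */
static double F(double x)    { return atan(x/EPS)/EPS; }
static double Finv(double u) { return EPS*tan(EPS*u); }

/* Toy smooth function, standing in for the solid angle/absorption factors. */
static double g(double x) { return cos(x); }

/* Approximate \int_a^b f(x) g(x) dx using n points equally spaced in the
 * cumulative integral F, i.e. x_i = F^(-1)(F(a) + (i+0.5)*du). */
static double quad_cdf(double a, double b, size_t n)
{
    size_t i;
    double du = (F(b) - F(a))/n;
    double sum = 0.0;

    for (i = 0; i < n; i++)
        sum += g(Finv(F(a) + (i + 0.5)*du));

    return du*sum;
}

/* Ordinary trapezoidal rule on a uniform grid, for comparison. */
static double quad_trap(double a, double b, size_t n)
{
    size_t i;
    double dx = (b - a)/n;
    double sum = 0.5*(f(a)*g(a) + f(b)*g(b));

    for (i = 1; i < n; i++)
        sum += f(a + i*dx)*g(a + i*dx);

    return dx*sum;
}

int main(void)
{
    /* The CDF-spaced sum lands much closer to the true value than the
     * trapezoidal rule with the same number of points, and (unlike the
     * trapezoidal rule) barely changes if the peak shifts relative to
     * the sample points. */
    printf("cdf-spaced:  %.4f\n", quad_cdf(-1.0, 1.0, 100));
    printf("trapezoidal: %.4f\n", quad_trap(-1.0, 1.0, 100));

    return 0;
}

Because the sample points are equally spaced in the cumulative integral F, they automatically crowd into the peak of f and the sum only ever evaluates the smooth factor g, which is why the estimate is both more accurate and far less sensitive to small parameter changes than a fixed uniform grid.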
Diffstat (limited to 'src/misc.c')
-rw-r--r--  src/misc.c  19
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/src/misc.c b/src/misc.c
index e6939f7..f705c74 100644
--- a/src/misc.c
+++ b/src/misc.c
@@ -862,3 +862,22 @@ void get_dir(double *dir, double theta, double phi)
     dir[1] = sin_theta*sin_phi;
     dir[2] = cos_theta;
 }
+
+/* Fast version of acos() which uses a lookup table computed on the first call. */
+double fast_acos(double x)
+{
+    size_t i;
+    static int initialized = 0;
+    static double xs[N_ACOS];
+    static double ys[N_ACOS];
+
+    if (!initialized) {
+        for (i = 0; i < LEN(xs); i++) {
+            xs[i] = -1.0 + 2.0*i/(LEN(xs)-1);
+            ys[i] = acos(xs[i]);
+        }
+        initialized = 1;
+    }
+
+    return interp1d(x,xs,ys,LEN(xs));
+}
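Note that fast_acos() relies on N_ACOS, LEN() and interp1d(), which are defined elsewhere in misc.c/misc.h and are not part of this hunk. For readers without the rest of the file handy, a compatible sketch is shown below, assuming interp1d() does plain linear interpolation on a sorted, uniformly spaced table; the table size of 10000 and the clamping behaviour are assumptions, not the repository's actual definitions.

#include <stddef.h> /* for size_t */

/* Hypothetical stand-ins for the helpers fast_acos() uses; the real N_ACOS,
 * LEN() and interp1d() live elsewhere in misc.c/misc.h and may differ. */
#define N_ACOS 10000
#define LEN(x) (sizeof(x)/sizeof((x)[0]))

/* Linearly interpolate the tabulated function (xs, ys) at the point x,
 * assuming xs is sorted, uniformly spaced, and has n >= 2 entries. */
static double interp1d(double x, double *xs, double *ys, size_t n)
{
    size_t i;
    double dx = xs[1] - xs[0];

    /* Clamp to the table range so out-of-range inputs don't read past the ends. */
    if (x <= xs[0]) return ys[0];
    if (x >= xs[n-1]) return ys[n-1];

    i = (size_t) ((x - xs[0])/dx);
    if (i > n - 2) i = n - 2;

    return ys[i] + (ys[i+1] - ys[i])*(x - xs[i])/dx;
}

One caveat when substituting fast_acos() for acos(): linear interpolation of acos() is least accurate near x = +/-1, where the slope of the true function diverges, so callers that need precise angles for nearly parallel or antiparallel vectors may still want the libm version.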