path: root/src/misc.h
Age  Commit message  Author
2019-07-29  don't need N_ACOS anymore  (tlatorre)
2019-07-29  fast_sqrt -> sqrt  (tlatorre)
After some testing, I realized that the fast_sqrt() function wasn't really faster than the native sqrt function.
2019-06-02  add a fast sqrt function for values between 0 and 1  (tlatorre)
2019-05-13  update method for calculating expected number of photons from shower and delta rays  (tlatorre)

This commit introduces a new method for integrating over the particle track to calculate the number of shower and delta ray photons expected at each PMT. The reason for introducing a new method was that the previous approach of just using the trapezoidal rule was both inaccurate and unstable. By inaccurate I mean that the trapezoidal rule was not producing a very good estimate of the true integral, and by unstable I mean that small changes in the fit parameters (like theta and phi) could produce wildly different results. This made the likelihood function very noisy and prevented the minimizers from finding the global minimum.

The new integration method works *much* better than the trapezoidal rule for the specific functions we are dealing with. The problem is essentially to integrate the product of two functions over some interval, one of which is very "peaky", i.e. we want to find

    \int f(x) g(x) dx

where f(x) is peaked around some region and g(x) is relatively smooth. For our case, f(x) represents the angular distribution of the Cerenkov light and g(x) represents the factors like solid angle, absorption, etc. The technique I discovered is that you can approximate this integral via a discrete sum

    constant * \sum_i g(x_i)

where the x_i are chosen to have equal spacing along the range of the integral of f(x), i.e.

    x_i = F^(-1)(i*constant)

where F is the antiderivative of f. This produces likelihood functions which are *much* smoother and more accurate than before (see the sketch after this entry).

In addition, there are a few other fixes in this commit:

- switch from specifying a step size for the shower integration to a number of points, i.e. dx_shower -> number of shower points
- only integrate to the PSUP. I realized that previously we were integrating to the end of the track even if the particle left the PSUP, and that there was no code to deal with the fact that light emitted beyond the PSUP can't make it back to the PMTs.
- only integrate to the Cerenkov threshold. When integrating over the particle track to calculate the expected number of direct Cerenkov photons, we now only integrate up to the point where the particle's velocity drops to 1/index. This should make the likelihood smoother, because previously the estimate depended on exactly whether the points at which we sampled the track fell above or below this point.
- add a minimum theta0 value based on the angular width of the PMT. When calculating the expected number of Cerenkov photons we assumed that the angular distribution was constant over the whole PMT. This is a bad assumption when the particle is very close to the PMT. Really we should average the function over all the angles subtended by the PMT, but that would be too computationally expensive, so instead we calculate a minimum theta0 value which depends on the distance and angle to the PMT. This seems to make the likelihood much smoother for particles near the PSUP.
- add a factor of sin(theta) when checking if we can skip calculating the charge in get_expected_charge()
- fix a nan in beta_root() when the momentum is negative
- update PSUP_RADIUS from 800 cm -> 840 cm
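As a rough illustration of this quadrature scheme, here is a minimal, self-contained C sketch. It assumes a toy peaked function f with a closed-form CDF and inverse (an exponential stands in for the Cerenkov angular distribution); all names and constants are illustrative and not taken from the actual source.

    /* Approximate \int_a^b f(x) g(x) dx by summing g at points equally
     * spaced in the CDF of f. Here f(x) = LAMBDA*exp(-LAMBDA*x) stands in
     * for the peaked angular factor and g(x) for the smooth geometric
     * factors. */
    #include <stdio.h>
    #include <math.h>

    #define LAMBDA 20.0

    static double cdf(double x)     { return 1.0 - exp(-LAMBDA*x); }
    static double cdf_inv(double u) { return -log(1.0 - u)/LAMBDA; }
    static double g(double x)       { return 1.0/(1.0 + x*x); } /* smooth factor */

    static double integrate(double a, double b, int n)
    {
        double fa = cdf(a), fb = cdf(b);
        double sum = 0.0;
        int i;

        for (i = 0; i < n; i++) {
            /* points with equal spacing in the CDF of f */
            double x = cdf_inv(fa + (i + 0.5)*(fb - fa)/n);
            sum += g(x);
        }

        /* the "constant" in the text is the CDF mass per point */
        return sum*(fb - fa)/n;
    }

    int main(void)
    {
        printf("%.6f\n", integrate(0.0, 1.0, 32));
        return 0;
    }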
2019-03-31  update test-find-peaks to plot cerenkov rings  (tlatorre)
This commit updates the test-find-peaks script to plot Cerenkov rings for each of the peaks. It also updates the script to use quad to find the position instead of using the MC information. Finally, I added a -n argument to the script to specify how many peaks to draw.
2019-03-16  add GPLv3 license  (tlatorre)
2019-01-10  update find_peaks algorithm  (tlatorre)

Previously, the algorithm used to find peaks was to search for all peaks in the Hough transform above some constant fraction of the highest peak. This algorithm could have issues finding smaller peaks away from the highest peak. The new algorithm instead finds the highest peak in the Hough transform and then recomputes the Hough transform ignoring all PMT hits within the Cerenkov cone of the first peak. The next peak is found from this transform, and the process is repeated iteratively until a certain number of peaks are found.

One disadvantage of this new system is that it will *always* find the same number of peaks, and this will usually be greater than the actual number of rings in the event. This is not a problem, though, since when fitting the event we loop over all possible peaks and do a quick fit to determine the starting point, so false positives are OK: the real peaks will fit better during this quick fit.

Another potential issue with this new method is that by rejecting all PMT hits within the Cerenkov cone of the first peak we could miss a second peak very close to the first peak. This is partially mitigated by the fact that when we loop over all possible combinations of the particle ids and directions we allow each peak to be used more than once. For example, when fitting for the hypothesis that an event is caused by two electrons and one muon and given two possible directions 1 and 2, we will fit for the following possible direction combinations:

    1 1 1
    1 1 2
    1 2 1
    1 2 2
    2 2 1
    2 2 2

Therefore, if there is a second ring close to the first it is possible to fit it correctly, since we will seed the quick fit with two particles pointing in the same direction.

This commit also adds a few tests for new functions and changes the energy step size during the quick fit to 10% of the starting energy value.
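The mask-and-repeat idea can be seen in a much simplified, one-dimensional analogue. A real implementation would recompute the Hough transform over the remaining PMT hits rather than zero out histogram bins; all names and the window size here are illustrative.

    /* Find the highest bin, zero a window around it (standing in for
     * masking hits inside the Cerenkov cone), and repeat until the
     * requested number of peaks is found. */
    #include <stdio.h>

    #define NBINS  100
    #define WINDOW 5

    static void find_peaks_1d(double *h, int nbins, int npeaks, int *peaks)
    {
        int i, j, k;

        for (k = 0; k < npeaks; k++) {
            int imax = 0;
            for (i = 1; i < nbins; i++)
                if (h[i] > h[imax]) imax = i;
            peaks[k] = imax;
            /* mask everything near the peak before searching again */
            for (j = imax - WINDOW; j <= imax + WINDOW; j++)
                if (j >= 0 && j < nbins) h[j] = 0.0;
        }
    }

    int main(void)
    {
        double h[NBINS];
        int peaks[3], i;

        /* toy histogram with a large peak near bin 20 and a smaller one near bin 70 */
        for (i = 0; i < NBINS; i++)
            h[i] = 100.0/(1 + (i - 20)*(i - 20)) + 40.0/(1 + (i - 70)*(i - 70));

        find_peaks_1d(h, NBINS, 3, peaks);
        for (i = 0; i < 3; i++)
            printf("peak %d at bin %d\n", i, peaks[i]);
        return 0;
    }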
2018-12-14  add a function to compute combinations with replacement  (tlatorre)
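For reference, a minimal sketch of one way to generate combinations with replacement; the actual function in this repository may use a different interface and algorithm.

    /* Choose k indices from {0,...,n-1} with repetition allowed, emitted
     * in non-decreasing order. */
    #include <stdio.h>

    static void combinations_with_replacement(int n, int k, int *c, int pos)
    {
        int i, start = (pos == 0) ? 0 : c[pos - 1];

        if (pos == k) {
            for (i = 0; i < k; i++) printf("%d ", c[i]);
            printf("\n");
            return;
        }

        for (i = start; i < n; i++) {
            c[pos] = i;
            combinations_with_replacement(n, k, c, pos + 1);
        }
    }

    int main(void)
    {
        int c[3];
        /* e.g. 2 candidate directions, 3 particles */
        combinations_with_replacement(2, 3, c, 0);
        return 0;
    }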
2018-12-13  add function to compute unique direction vectors for a multi particle fit  (tlatorre)
2018-11-11  update likelihood function to fit electrons!  (tlatorre)

To characterize the angular distribution of photons from an electromagnetic shower I came up with the following functional form:

    f(cos_theta) ~ exp(-abs(cos_theta - mu)^alpha/beta)

and fit this to data simulated using RAT-PAC at several different energies. I then fit the alpha and beta coefficients as a function of energy to the functional form:

    alpha = c0 + c1/log(c2*T0 + c3)
    beta  = c0 + c1/log(c2*T0 + c3)

where T0 is the initial energy of the electron in MeV and c0, c1, c2, and c3 are parameters which I fit.

The longitudinal distribution of the photons generated from an electromagnetic shower is described by a gamma distribution:

    f(x) = x**(a-1)*exp(-x/b)/(Gamma(a)*b**a)

This parameterization comes from the PDG "Passage of particles through matter" section 32.5. I also fit the data from my RAT-PAC simulation, but currently I am not using it, and am instead using a simpler form to calculate the coefficients from the PDG (although I estimated the b parameter from the RAT-PAC data).

I also sped up the calculation of the solid angle by making a lookup table, since it was taking a significant fraction of the time to compute the likelihood function.
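A small C sketch of evaluating these two shower distributions follows. The coefficients in alpha_of_T(), beta_of_T(), and the gamma-distribution parameters are placeholders chosen for illustration, not the fitted values from this commit.

    #include <stdio.h>
    #include <math.h>

    /* energy dependence of the angular-distribution parameters, with
     * made-up coefficients purely for illustration */
    static double alpha_of_T(double T0) { return 1.0 + 2.0/log(0.5*T0 + 10.0); }
    static double beta_of_T(double T0)  { return 0.1 + 0.5/log(0.5*T0 + 10.0); }

    /* unnormalized angular distribution of shower photons */
    static double angular(double cos_theta, double mu, double alpha, double beta)
    {
        return exp(-pow(fabs(cos_theta - mu), alpha)/beta);
    }

    /* longitudinal profile: gamma distribution in the depth x */
    static double longitudinal(double x, double a, double b)
    {
        return pow(x, a - 1.0)*exp(-x/b)/(tgamma(a)*pow(b, a));
    }

    int main(void)
    {
        double T0 = 100.0; /* MeV */
        double alpha = alpha_of_T(T0), beta = beta_of_T(T0);

        printf("angular(0.75) = %g\n", angular(0.75, 0.75, alpha, beta));
        printf("longitudinal(5) = %g\n", longitudinal(5.0, 2.0, 1.5));
        return 0;
    }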
2018-10-19  add interp2d() for fast bilinear 2D interpolation  (tlatorre)
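A minimal sketch of bilinear interpolation on an evenly spaced grid, in the spirit of the interp2d() mentioned above; the real declaration in misc.h may differ.

    #include <stdio.h>

    /* z is an nx*ny array in row-major order; the grid starts at (x0, y0)
     * with steps dx and dy */
    static double interp2d(double x, double y, double x0, double y0,
                           double dx, double dy, const double *z,
                           int nx, int ny)
    {
        int i = (int)((x - x0)/dx);
        int j = (int)((y - y0)/dy);
        double u, v;

        if (i < 0) i = 0;
        if (j < 0) j = 0;
        if (i > nx - 2) i = nx - 2;
        if (j > ny - 2) j = ny - 2;

        u = (x - (x0 + i*dx))/dx;
        v = (y - (y0 + j*dy))/dy;

        return (1-u)*(1-v)*z[i*ny + j]     + u*(1-v)*z[(i+1)*ny + j] +
               (1-u)*v    *z[i*ny + j + 1] + u*v    *z[(i+1)*ny + j + 1];
    }

    int main(void)
    {
        /* 2x2 grid with corner values 0, 1, 2, 3 */
        double z[4] = {0.0, 1.0, 2.0, 3.0};
        printf("%g\n", interp2d(0.5, 0.5, 0.0, 0.0, 1.0, 1.0, z, 2, 2));
        return 0;
    }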
2018-10-19  update path integral to use a fixed number of points  (tlatorre)

I noticed when fitting electrons that the cquad integration routine was not very stable, i.e. it would return different results for *very* small changes in the fit parameters, which would cause the fit to stall. Since it's very important for the minimizer that the likelihood function not jump around, I am switching to integrating over the path by just using a fixed number of points and the trapezoidal rule.

This seems to be a lot more stable, and as a bonus I was able to combine the three integrals (direct charge, indirect charge, and time) so that we only have to do a single loop. This should hopefully keep the speed comparable, since the cquad routine was fairly effective at only using as many function evaluations as needed. Another benefit of this approach is that, if needed, it will be easier to port to a GPU.
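The structure of such a fixed-point trapezoidal pass can be sketched as follows; the integrands here are toys standing in for the direct charge, indirect charge, and time integrals, and all names are illustrative.

    #include <stdio.h>
    #include <math.h>

    #define NPOINTS 100

    int main(void)
    {
        double a = 0.0, b = 10.0;            /* track parameterized by s in [a,b] */
        double ds = (b - a)/(NPOINTS - 1);
        double direct = 0.0, indirect = 0.0;
        int i;

        for (i = 0; i < NPOINTS; i++) {
            double s = a + i*ds;
            /* trapezoidal rule: half weight at the endpoints */
            double w = (i == 0 || i == NPOINTS - 1) ? 0.5*ds : ds;

            /* several integrands accumulated in a single walk along the track */
            direct   += w*exp(-s/5.0);       /* toy integrand 1 */
            indirect += w/(1.0 + s*s);       /* toy integrand 2 */
        }

        printf("direct = %g, indirect = %g\n", direct, indirect);
        return 0;
    }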
2018-10-18  fix the likelihood function to return the *negative* log likelihood of the path coefficients  (tlatorre)

Previously I was adding the log likelihood of the path coefficients instead of the *negative* log likelihood! When fitting electrons this would sometimes cause the fit to become unstable and continue increasing the path coefficients without bound, since the gain in the likelihood caused by increasing the coefficients was more than the loss caused by a worse fit to the PMT data. Doh!
2018-10-17  fix a bug in the theta0 calculation for a path  (tlatorre)

This commit fixes a bug in the calculation of the average rms width of the angular distribution for a path with a KL expansion. I also made a lot of updates to the test-path program:

- plot the distribution of the KL expansion coefficients
- plot the standard deviation of the angular distribution as a function of distance along with the prediction
- plot the simulated and reconstructed path in 3D
2018-09-17  add get_path_length()  (tlatorre)
This commit adds a function called get_path_length() which computes the path length inside and outside a sphere for a line segment between two points. This will be useful for calculating the photon absorption for paths which cross the AV and for computing the time of flight of photons from a track to a PMT.
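The geometry behind such a calculation can be sketched as follows: for a segment from p1 to p2 and a sphere of radius R centered at the origin, solve the quadratic |p1 + t*(p2-p1)|^2 = R^2 and clamp the roots to [0,1]. The interface and names here are illustrative, not the actual declaration in misc.h.

    #include <stdio.h>
    #include <math.h>

    static void path_length(const double p1[3], const double p2[3], double R,
                            double *inside, double *outside)
    {
        double d[3], a = 0.0, b = 0.0, c = -R*R, disc, t1, t2, len;
        int i;

        for (i = 0; i < 3; i++) {
            d[i] = p2[i] - p1[i];
            a += d[i]*d[i];
            b += 2.0*p1[i]*d[i];
            c += p1[i]*p1[i];
        }
        len = sqrt(a);

        disc = b*b - 4.0*a*c;
        if (disc <= 0.0) {
            /* the segment's line misses the sphere entirely */
            *inside = 0.0;
            *outside = len;
            return;
        }

        /* entry and exit parameters, clamped to the segment */
        t1 = (-b - sqrt(disc))/(2.0*a);
        t2 = (-b + sqrt(disc))/(2.0*a);
        if (t1 < 0.0) t1 = 0.0;
        if (t1 > 1.0) t1 = 1.0;
        if (t2 < 0.0) t2 = 0.0;
        if (t2 > 1.0) t2 = 1.0;

        *inside = (t2 - t1)*len;
        *outside = len - *inside;
    }

    int main(void)
    {
        double p1[3] = {-2.0, 0.0, 0.0}, p2[3] = {2.0, 0.0, 0.0};
        double in, out;

        path_length(p1, p2, 1.0, &in, &out);
        printf("inside = %g, outside = %g\n", in, out); /* expect 2 and 2 */
        return 0;
    }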
2018-09-13  add a function to compute log(n) for integer n  (tlatorre)
This commit adds the function ln() to compute log(n) for integer n. It uses a lookup table for n < 100 to speed things up.
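A minimal sketch of this kind of lookup is below; the initialization scheme and cutoff handling are guesses for illustration and need not match the real ln() in this repository.

    #include <stdio.h>
    #include <math.h>

    #define LN_TABLE_SIZE 100

    static double ln_table[LN_TABLE_SIZE];
    static int ln_table_init = 0;

    /* log(n) for integer n, using a precomputed table for small n */
    static double ln(unsigned int n)
    {
        unsigned int i;

        if (!ln_table_init) {
            for (i = 1; i < LN_TABLE_SIZE; i++)
                ln_table[i] = log((double) i);
            ln_table_init = 1;
        }

        if (n > 0 && n < LN_TABLE_SIZE)
            return ln_table[n];

        return log((double) n);
    }

    int main(void)
    {
        printf("ln(10) = %g\n", ln(10));
        printf("ln(1000) = %g\n", ln(1000));
        return 0;
    }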
2018-09-10  add a fast likelihood function  (tlatorre)
This commit adds a fast function to calculate the expected number of PE at a PMT without numerically integrating over the track. This calculation is *much* faster than integrating over the track (~30 ms compared to several seconds) and so we use it during the "quick" minimization phase of the fit to quickly find the best position.
2018-09-04  add a function to return the kahan sum of an array  (tlatorre)
For some reason the fit seems to have trouble with the kinetic energy. Basically, it seems to "converge" even though when you run the minimization again it finds a better minimum with a lower energy. I think this is likely due to the fact that for muons the kinetic energy only really affects the range of the muon and this is subject to error in the numerical integration. I also thought that maybe it could be due to roundoff error in the likelihood calculation, so I implemented the Kahan summation to try and reduce that. No idea if it's actually improving things, but I should benchmark it later to see.
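For reference, a minimal sketch of Kahan (compensated) summation, the roundoff-reduction technique mentioned above; the function name and interface are illustrative.

    #include <stdio.h>
    #include <stddef.h>

    static double kahan_sum(const double *x, size_t n)
    {
        double sum = 0.0, c = 0.0;
        size_t i;

        for (i = 0; i < n; i++) {
            double y = x[i] - c;     /* correct the next term by the running error */
            double t = sum + y;      /* low-order bits of y may be lost here... */
            c = (t - sum) - y;       /* ...so recover them into the compensation */
            sum = t;
        }

        return sum;
    }

    int main(void)
    {
        static double x[1000000];
        double naive = 0.0;
        size_t i;

        for (i = 0; i < 1000000; i++) x[i] = 0.1;
        for (i = 0; i < 1000000; i++) naive += x[i];

        /* the compensated sum should be much closer to 100000 */
        printf("naive: %.10f\n", naive);
        printf("kahan: %.10f\n", kahan_sum(x, 1000000));
        return 0;
    }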
2018-08-31  add interp1d function to do fast interpolation when the x values are evenly spaced  (tlatorre)
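The point of the evenly spaced assumption is that the bracketing index can be found with a single division instead of a binary search. A minimal sketch follows; the real interp1d() in misc.h may have a different interface.

    #include <stdio.h>

    /* y holds the tabulated values at x0, x0 + dx, ..., x0 + (n-1)*dx */
    static double interp1d(double x, double x0, double dx, const double *y, int n)
    {
        int i = (int)((x - x0)/dx);
        double u;

        if (i < 0) i = 0;
        if (i > n - 2) i = n - 2;

        u = (x - (x0 + i*dx))/dx;

        return (1.0 - u)*y[i] + u*y[i+1];
    }

    int main(void)
    {
        double y[5] = {0.0, 1.0, 4.0, 9.0, 16.0}; /* x^2 at x = 0,1,2,3,4 */
        printf("%g\n", interp1d(2.5, 0.0, 1.0, y, 5)); /* expect 6.5 */
        return 0;
    }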
2018-08-31  rotate and translate the path in path_init to speed things up  (tlatorre)
2018-08-14  move everything to src directory  (tlatorre)