path: root/src/likelihood.c
Age         Commit message                                          Author
2018-11-14  speed things up by skipping zero values  (tlatorre)
2018-11-14  initialize static arrays  (tlatorre)
2018-11-11  update likelihood function to fit electrons!  (tlatorre)
To characterize the angular distribution of photons from an electromagnetic shower I came up with the following functional form:

    f(cos_theta) ~ exp(-abs(cos_theta - mu)^alpha/beta)

and fit this to data simulated using RAT-PAC at several different energies. I then fit the alpha and beta coefficients as a function of energy, each to the functional form

    c0 + c1/log(c2*T0 + c3)

where T0 is the initial energy of the electron in MeV and c0, c1, c2, and c3 are parameters which I fit (separately for alpha and beta).

The longitudinal distribution of the photons generated from an electromagnetic shower is described by a gamma distribution:

    f(x) = x**(a-1)*exp(-x/b)/(Gamma(a)*b**a)

This parameterization comes from the PDG "Passage of particles through matter" section 32.5. I also fit the data from my RAT-PAC simulation, but currently I am not using it; instead I use a simpler form from the PDG to calculate the coefficients (although I estimated the b parameter from the RAT-PAC data).

I also sped up the calculation of the solid angle by making a lookup table, since it was taking a significant fraction of the time to compute the likelihood function.
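As a rough illustration of how these two distributions could be evaluated (the function names, the use of GSL for the gamma function, and the omitted normalization are my own choices, not necessarily what likelihood.c does):

    #include <math.h>
    #include <gsl/gsl_sf_gamma.h>

    /* Angular distribution of shower photons,
     * f(cos_theta) ~ exp(-|cos_theta - mu|^alpha/beta).
     * The normalization constant is omitted here. */
    static double shower_angular_pdf(double cos_theta, double mu, double alpha, double beta)
    {
        return exp(-pow(fabs(cos_theta - mu), alpha)/beta);
    }

    /* Longitudinal shower profile: gamma distribution
     * f(x) = x^(a-1)*exp(-x/b)/(Gamma(a)*b^a), with x the depth along the
     * shower axis (PDG "Passage of particles through matter"). */
    static double shower_longitudinal_pdf(double x, double a, double b)
    {
        return pow(x, a - 1.0)*exp(-x/b)/(gsl_sf_gamma(a)*pow(b, a));
    }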
2018-11-04  delete solid_angle_fast since it wasn't working  (tlatorre)
2018-10-21  add a fast solid angle approximation to speed up the fast likelihood calculation  (tlatorre)
2018-10-21  speed up get_total_charge_approx() by precomputing some variables  (tlatorre)
2018-10-21  fix use of uninitialized variables  (tlatorre)
2018-10-19  don't call path_init() when doing the fast likelihood calculation to speed things up  (tlatorre)
2018-10-19  add MIN_RATIO_FAST to speed up the "fast" likelihood calculation  (tlatorre)
2018-10-19  speed up get_total_charge_approx()  (tlatorre)
2018-10-19  epsrel -> npoints  (tlatorre)
2018-10-19  update path integral to use a fixed number of points  (tlatorre)
I noticed when fitting electrons that the cquad integration routine was not very stable, i.e. it would return different results for *very* small changes in the fit parameters which would cause the fit to stall. Since it's very important for the minimizer that the likelihood function not jump around, I am switching to integrating over the path by just using a fixed number of points and using the trapezoidal rule. This seems to be a lot more stable, and as a bonus I was able to combine the three integrals (direct charge, indirect charge, and time) so that we only have to do a single loop. This should hopefully make the speed comparable since the cquad routine was fairly effective at only using as many function evaluations as needed. Another benefit to this approach is that if needed, it will be easier to port to a GPU.
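As a sketch of the fixed-point approach (the function name, signature, and uniform point spacing here are illustrative, not necessarily what the code does):

    #include <stddef.h>

    /* Integrate f(s, params) over [0, range] with the trapezoidal rule using a
     * fixed number of evenly spaced points. Unlike an adaptive routine such as
     * cquad, the sample positions do not jump around when the fit parameters
     * change slightly, so the result varies smoothly with the parameters. */
    static double trapezoid(double (*f)(double s, void *params), void *params,
                            double range, size_t npoints)
    {
        size_t i;
        double ds = range/(npoints - 1);
        double sum = 0.5*(f(0.0, params) + f(range, params));

        for (i = 1; i + 1 < npoints; i++)
            sum += f(i*ds, params);

        return sum*ds;
    }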
2018-10-18  fix a bug in get_total_charge_approx()  (tlatorre)
This commit fixes a bug which was double counting the pmt response when computing the direct charge and incorrectly multiplying the reflected charge by the pmt response. I think this was just a typo left in when I added the reflected charge.
2018-10-18  make sure that the kinetic energy is zero at the last step  (tlatorre)
Occasionally when fitting electrons the kinetic energy at the last step would be high enough that the electron never crossed the BETA_MIN threshold which would cause the gsl routine to throw an error. This commit updates particle_init() to set the kinetic energy at the last step to zero to make sure that we can bisect the point along the track where the speed drops to BETA_MIN.
2018-10-18  hardcode the density when computing dE/dx  (tlatorre)
Since we only have the range and dE/dx tables for light water for electrons and protons it's not correct to use the heavy water density. Also, even though we have both tables for muons, currently we only load the heavy water table, so we hardcode the density to that of heavy water. In the future, it would be nice to load both tables and use the correct one depending on if we are fitting in the heavy or light water.
2018-10-18  fix the likelihood function to return the *negative* log likelihood of the path coefficients  (tlatorre)
Previously I was adding the log likelihood of the path coefficients instead of the *negative* log likelihood! When fitting electrons this would sometimes cause the fit to become unstable and continue increasing the path coefficients without bound, since the gain in the likelihood caused by increasing the coefficients was more than the loss caused by a worse fit to the PMT data. Doh!
2018-10-18  update theta0 calculation to use the radiation length in light water  (tlatorre)
Previously I was using the radiation length in light water but scaling it by the density of heavy water, which isn't correct. Since the radiation length in heavy and light water is almost identical, we just use the radiation length in light water.
2018-10-18  update fit to fit for electrons and protons  (tlatorre)
2018-10-12  skip PMTs which weren't hit for the fast likelihood calculation  (tlatorre)
2018-10-01  update negative log likelihood for path coefficients  (tlatorre)
2018-10-01  loop over all normal PMTs when calculating the expected number of photons  (tlatorre)
Previously we ignored PMTs which were flagged when computing the expected number of PE for each PMT, but since we calculate the amount of reflected light here we need to include even PMTs which are offline (since they still reflect light).
2018-10-01  use the PMT response table to calculate the amount of reflected light  (tlatorre)
To calculate the expected number of photons from reflected light we now integrate over the track and use the PMT response table to calculate what fraction of the light is reflected. Previously we were just using a constant fraction of the total detected light which was faster since we only had to integrate over the track once, but this should be more accurate.
2018-10-01  add absorption length for acrylic  (tlatorre)
2018-09-26  speed up fast likelihood calculation  (tlatorre)
This commit updates the fast likelihood calculation to use the identity sin(a-b) = sin(a)*cos(b) - cos(a)*sin(b) to speed up the fast likelihood calculation.
2018-09-26  speed up fast likelihood calculation  (tlatorre)
This commit speeds up the fast likelihood calculation by avoiding calls to trigonometric functions where possible. Specifically we calculate sin(a) = sqrt(1-pow(cos(a),2)); instead of sin(a) = sin(acos(cos(a)));
2018-09-25  update indirect scattering PDF start time  (tlatorre)
Currently the PDF for scattered light is modelled as a flat distribution starting at some time t. Previously I was using the mean hit time for all PMTs, however this should really be a flat distribution in the time *residual* after the main peak. Therefore, the PDF now starts at the estimated time for direct photons.
2018-09-25  update likelihood calculation to use PMT_TTS macro  (tlatorre)
I accidentally hardcoded the single PE TTS to 1.5 ns in the likelihood calculation.
2018-09-25  update integration bounds in likelihood calculation  (tlatorre)
This commit updates the bounds of the track integration in the likelihood function to integrate up to 1 meter around the point at which the PMT is at the Cerenkov angle from the track. This fixes an issue I was seeing where a *very* small change in the fit parameters would cause the likelihood to jump by a large amount. I eventually tracked it down to the same issue I was seeing before, which I solved by splitting up the integration into two intervals; however, that fix did not seem to completely solve the problem. Based on initial tests with 500 MeV muons, this fix seems to do a much better job.
2018-09-21  update likelihood function to include the probability of the path coefficients  (tlatorre)
2018-09-21  split up the track integral into two intervals  (tlatorre)
This commit updates the likelihood calculation to split up the track integral into two intervals in some cases. I noticed when fitting some events that the likelihood value would change drastically for a very small change in the fit parameters. I eventually tracked it down to the fact that the track integral was occasionally returning a very small charge for a PMT which should have a very high charge. This was happening because the region of the track which was hitting the PMT was very small and the cquad integration routine was completely skipping it. The solution to this problem is a bit of a hack, but it seems to work. I first calculate where along the track (for a straight track) the PMT would be at the Cerenkov angle from the track. If this point is somewhere along the track then we split up the integral into two intervals: one going from the start of the track to this point and the other from the point to the end of the track. Since the cquad routine always samples points near the end of the intervals this should prevent it from completely skipping over the point in the track where the integrand is non-zero.
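A sketch of how the split point might be computed for a straight track (the helper name and signature are hypothetical, not the actual code):

    #include <math.h>

    /* For a straight track starting at pos0 with unit direction dir, return the
     * distance s along the track at which the line from the track point to the
     * PMT makes the Cerenkov angle theta_c with the track direction. The track
     * integral can then be split at this point so that the integrator cannot
     * skip the sharply peaked region. */
    static double cerenkov_split_point(const double pos0[3], const double dir[3],
                                       const double pmt[3], double theta_c)
    {
        double d[3], l_par, l_perp2;
        int i;

        for (i = 0; i < 3; i++)
            d[i] = pmt[i] - pos0[i];

        /* longitudinal and squared perpendicular distance of the PMT from the track */
        l_par = d[0]*dir[0] + d[1]*dir[1] + d[2]*dir[2];
        l_perp2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2] - l_par*l_par;

        /* tan(theta_c) = l_perp/(l_par - s)  =>  s = l_par - l_perp/tan(theta_c) */
        return l_par - sqrt(l_perp2 > 0 ? l_perp2 : 0)/tan(theta_c);
    }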
2018-09-20  don't include the OWL PMTs in the likelihood calculation  (tlatorre)
For some reason the OWL tubes have 9999.00 for the x, y, and z coordinates of the normal vector in the PMT file. For now, I'm just going to remove them from the likelihood calculation.
2018-09-18  speed likelihood calculation up a bit  (tlatorre)
2018-09-18  update CHARGE_FRACTION  (tlatorre)
This commit updates the CHARGE_FRACTION value to now represent approximately the fraction of light reflected from each PMT. It also updates the value to be closer to the true value based on a couple of fits.
2018-09-18  free memory from muon_energy struct  (tlatorre)
2018-09-17  fix bug in fast likelihood calculation  (tlatorre)
2018-09-17  update likelihood function to calculate time of flight of photons using get_path_length()  (tlatorre)
2018-09-17  update likelihood to calculate absorption length correctly  (tlatorre)
2018-09-17  update muon kinetic energy calculation  (tlatorre)
This commit updates the calculation of the muon kinetic energy as a function of distance along the track. Previously I was using an approximation from the PDG, but it doesn't seem to be very accurate and won't generalize to the case of electrons. The kinetic energy is now calculated using the tabulated values of dE/dx as a function of energy.
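A sketch of how the kinetic energy along the track could be stepped from the tabulated dE/dx values (the simple forward stepping and the helper names are assumptions, not necessarily how the code does it):

    #include <stddef.h>

    /* Placeholder for interpolation into the tabulated stopping power (MeV per unit length). */
    double dEdx(double T);

    /* Step the kinetic energy forward along the track using tabulated dE/dx. */
    static double kinetic_energy_after(double T0, double distance, size_t nsteps)
    {
        size_t i;
        double T = T0, ds = distance/nsteps;

        for (i = 0; i < nsteps && T > 0; i++)
            T -= dEdx(T)*ds;

        return T > 0 ? T : 0.0;
    }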
2018-09-13  add a function to compute log(n) for integer n  (tlatorre)
This commit adds the function ln() to compute log(n) for integer n. It uses a lookup table for n < 100 to speed things up.
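A minimal sketch of such a lookup-table ln() (the table size of 100 matches the commit message; the rest is illustrative):

    #include <math.h>

    /* log(n) for integer n, with a lookup table for n < 100 to avoid
     * repeated calls to log(). */
    static double ln(unsigned int n)
    {
        static double table[100];
        static int initialized = 0;

        if (!initialized) {
            unsigned int i;
            for (i = 1; i < 100; i++)
                table[i] = log(i);
            initialized = 1;
        }

        if (n > 0 && n < 100)
            return table[n];

        return log(n);
    }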
2018-09-13  speed things up by introducing a minimum ratio between probabilities  (tlatorre)
Previously, to avoid computing P(q,t|n)*P(n|mu) for large n when they were very unlikely, I was using a precomputed maximum n value based only on the expected number of PE. However, this didn't take into account P(q|n). This commit updates the likelihood function to dynamically decide when to quit computing these probabilities: we stop when the probability for a given n divided by the most likely probability is less than some threshold. This threshold is currently set to 10**(-10), which means we quit calculating these probabilities when the probability is 10 billion times less likely than the most probable value.
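A sketch of the early-termination loop (pq(), pn(), nmax, and MIN_RATIO are placeholders for the real charge probability, Poisson term, upper bound, and threshold):

    #define MIN_RATIO 1e-10

    double pq(double q, double t, int n);  /* placeholder for P(q,t|n) */
    double pn(int n, double mu);           /* placeholder for P(n|mu)  */

    static double charge_time_pdf(double q, double t, double mu, int nmax)
    {
        double p, pmax = 0.0, total = 0.0;
        int n;

        for (n = 1; n <= nmax; n++) {
            p = pq(q, t, n)*pn(n, mu);

            /* The terms rise to a peak and then fall off. Once a term is
             * MIN_RATIO times smaller than the largest term seen so far,
             * the remaining terms are negligible and we can stop. */
            if (pmax > 0 && p < MIN_RATIO*pmax)
                break;

            total += p;

            if (p > pmax)
                pmax = p;
        }

        return total;
    }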
2018-09-12  small updates to speed things up  (tlatorre)
2018-09-11  update fast likelihood function to include the pmt response and absorption  (tlatorre)
2018-09-10  add a fast likelihood function  (tlatorre)
This commit adds a fast function to calculate the expected number of PE at a PMT without numerically integrating over the track. This calculation is *much* faster than integrating over the track (~30 ms compared to several seconds) and so we use it during the "quick" minimization phase of the fit to quickly find the best position.
2018-09-04  add a function to return the kahan sum of an array  (tlatorre)
For some reason the fit seems to have trouble with the kinetic energy. Basically, it seems to "converge" even though when you run the minimization again it finds a better minimum with a lower energy. I think this is likely due to the fact that for muons the kinetic energy only really affects the range of the muon and this is subject to error in the numerical integration. I also thought that maybe it could be due to roundoff error in the likelihood calculation, so I implemented the Kahan summation to try and reduce that. No idea if it's actually improving things, but I should benchmark it later to see.
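For reference, Kahan summation of an array looks like this (the function name and signature are assumed):

    #include <stddef.h>

    /* Kahan (compensated) summation: carry a running correction for the
     * low-order bits lost in each addition. */
    static double kahan_sum(const double *x, size_t n)
    {
        size_t i;
        double sum = 0.0, c = 0.0;

        for (i = 0; i < n; i++) {
            double y = x[i] - c;
            double t = sum + y;
            c = (t - sum) - y;
            sum = t;
        }

        return sum;
    }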
2018-08-31  add epsrel argument to likelihood function  (tlatorre)
2018-08-31  switch back to calling cquad just once to speed things up  (tlatorre)
I found when simulating high energy muons that the expected charge for some PMTs which should be getting hit was zero. The reason for this is that the integrand was very sharply peaked at the Cerenkov angle which makes it difficult to integrate for numerical integration routines like cquad. To solve this I split up the integral at the point when the track was at the Cerenkov angle from the PMT to make sure that cquad didn't miss the peak. However, calling cquad twice takes a lot of time so it's not necessarily good to do this for all fits. Also, it's not obvious if it is necessary any more now that the angular distribution calculation was fixed. I think the real reason that cquad was missing those integrals was that for a high energy muon the range is going to be very large (approximately 40 meters for a 10 GeV muon). In this case, I should really only integrate up to the edge of the cavity or PSUP and hopefully cquad picks enough points in there to get a non zero value. I also added a check to only compute tmean when at least one PMT has a valid time. This prevents a divide by zero which causes the likelihood function to return nan.
2018-08-28  add path to the likelihood fit  (tlatorre)
This commit updates the likelihood fit to use the KL path expansion. Currently, I'm just using one coefficient for the path in both x and y.
2018-08-27  update code to use get_index_snoman* functions to calculate the index of refraction  (tlatorre)
2018-08-14  fix how the RMS scattering angle is calculated  (tlatorre)
The RMS scattering angle calculation comes from Equation 33.15 in the PDG article on the passage of particles through matter. It's not entirely obvious if this equation is correct for a long track. It seems like it should be integrated along the track to add up the contributions at different energies, but it's not obvious how to do that with the log term. In any case, the way I was previously calculating it (by using the momentum and velocity at each point along the track) was definitely wrong. I will try this out and perhaps try to integrate it later.
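For reference, the Highland form of the RMS scattering angle reads roughly as below; the exact argument of the logarithmic correction varies between PDG editions, so this is only a sketch:

    #include <math.h>

    /* RMS multiple scattering angle (Highland formula, cf. PDG "Passage of
     * particles through matter"). p is the momentum in MeV, beta the velocity
     * in units of c, z the particle charge, and x_over_X0 the path length in
     * radiation lengths. Check the logarithmic term against the PDG text. */
    static double theta0_rms(double p, double beta, double z, double x_over_X0)
    {
        return 13.6/(beta*p)*z*sqrt(x_over_X0)*(1.0 + 0.038*log(x_over_X0));
    }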
2018-08-14  move everything to src directory  (tlatorre)