path: root/src
2019-03-25  update rayleigh scattering calculation  (tlatorre)
This commit updates the optics code to calculate the Rayleigh scattering length using the Einstein-Smoluchowski formula instead of using the effective Rayleigh scattering lengths from the RSPR bank.
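For reference, one common form of the Einstein-Smoluchowski result for the scattering length is the following (a sketch only; beta_T is the isothermal compressibility, epsilon = n^2 the dielectric constant, and delta the depolarization ratio; the exact convention for the density-derivative term may differ from what the code uses):

    \frac{1}{\ell_{\mathrm{scat}}(\lambda)} =
        \frac{8\pi^3}{3\lambda^4}\, k_B T\, \beta_T
        \left(\rho\frac{\partial\varepsilon}{\partial\rho}\right)_T^{2}
        \frac{6+3\delta}{6-7\delta}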
2019-03-25  fix some more warnings pointed out by clang  (tlatorre)
2019-03-25  fix uninitialized variable  (tlatorre)
Thanks clang!
2019-03-25  fix delta ray charge calculation  (tlatorre)
Previously I was calculating the expected number of delta ray photons when integrating over the shower path, but since the delta rays are produced along the particle path and not further out like the shower photons, this wasn't correct. The normalization of the probability distribution for the photons produced along the path was also not handled correctly. This commit adds a new function called integrate_path_delta_ray() to compute the expected number of photons from delta rays hitting each PMT. Currently this means that the likelihood function for muons will be significantly slower than previously, but hopefully I can speed it up again in the future (for example by skipping the shower calculation which is negligible for lower energy muons).
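Roughly, the integral has the following shape (a minimal sketch, not the actual integrate_path_delta_ray(); the helper pmt_acceptance() is hypothetical and stands in for the solid-angle and PMT-response factors at a point along the track):

    #include <stddef.h>

    /* Sketch: integrate the expected delta-ray charge along the particle
     * track. The per-unit-length yield is normalized over the particle
     * range (delta rays are produced along the track, not along the
     * shower profile), which is the fix described above. */
    double delta_ray_charge_sketch(double range, size_t nsteps,
                                   double total_delta_ray_photons,
                                   double (*pmt_acceptance)(double s))
    {
        double ds = range/nsteps;
        double yield_per_cm = total_delta_ray_photons/range; /* uniform along the track */
        double q = 0.0;
        for (size_t i = 0; i <= nsteps; i++) {
            double s = i*ds;
            double w = (i == 0 || i == nsteps) ? 0.5 : 1.0; /* trapezoid weights */
            q += w*yield_per_cm*pmt_acceptance(s)*ds;
        }
        return q;
    }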
2019-03-25  speed up likelihood function by not calling trapz()  (tlatorre)
This commit speeds up the likelihood function by integrating the charge along the track inline instead of creating an array and then calling trapz(). It also introduces two global variables avg_index_d2o and avg_index_h2o which are the average indices of refraction for D2O and H2O weighted by the PMT quantum efficiency and the Cerenkov spectrum.
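A weighted average index of refraction of this kind could be computed as in the sketch below (an assumption about the details, not the actual code that fills avg_index_d2o/avg_index_h2o; the Cerenkov spectrum is taken as proportional to 1/lambda^2 and qe[], n[] sampled on wavelength[] are assumed inputs):

    #include <stddef.h>

    /* Sketch: average index of refraction weighted by the PMT quantum
     * efficiency and the Cerenkov spectrum (~1/lambda^2). */
    double average_index(const double *wavelength, const double *n,
                         const double *qe, size_t len)
    {
        double num = 0.0, den = 0.0;
        for (size_t i = 1; i < len; i++) {
            double dl = wavelength[i] - wavelength[i-1];
            double w0 = qe[i-1]/(wavelength[i-1]*wavelength[i-1]);
            double w1 = qe[i]/(wavelength[i]*wavelength[i]);
            num += 0.5*(w0*n[i-1] + w1*n[i])*dl; /* trapezoid rule */
            den += 0.5*(w0 + w1)*dl;
        }
        return num/den;
    }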
2019-03-23  speed up the likelihood calculation by avoiding calls to acos()  (tlatorre)
This commit speeds up the likelihood calculation by eliminating most calls to acos(). This is done by updating the PMT response lookup tables to be a function of the cosine of the angle between the photon and the PMT normal instead of the angle itself.
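The reason this removes the acos() calls: the cosine comes straight from the dot product of unit vectors, so the lookup index can be computed without ever converting back to an angle. A minimal sketch (table size and layout are assumptions, not the actual tables):

    #define N_COS_BINS 1000

    /* Sketch: PMT response tabulated against cos(theta) in [-1, 1]. */
    extern double pmt_response_table[N_COS_BINS];

    double pmt_response_cos(const double dir[3], const double pmt_normal[3])
    {
        /* cos(theta) between the photon direction and the PMT normal */
        double cos_theta = dir[0]*pmt_normal[0] + dir[1]*pmt_normal[1] + dir[2]*pmt_normal[2];
        int i = (int)((cos_theta + 1.0)/2.0*(N_COS_BINS - 1));
        return pmt_response_table[i];
    }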
2019-03-23  fix a bug in the absorption and scattering probabilities  (tlatorre)
Previously I was computing the fraction of light absorbed and scattered by calculating an average absorption and scattering length weighted by the Cerenkov spectrum and the PMT quantum efficiency. This isn't correct since we should be averaging the absorption and scattering probabilities, not the absorption and scattering lengths. This commit fixes this by instead computing the average probability that a photon is absorbed or scattered as a function of the distance travelled, by integrating the absorption and scattering probabilities over all wavelengths weighted by the PMT quantum efficiency and the Cerenkov spectrum.
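In other words, a hedged restatement of the fix for the absorption case (with QE(lambda) the PMT quantum efficiency, C(lambda) the Cerenkov spectrum, and ell_abs(lambda) the absorption length; the scattering probability is handled analogously):

    P_{\mathrm{abs}}(d) =
        \frac{\int d\lambda\, \mathrm{QE}(\lambda)\, C(\lambda)\,
              \left(1 - e^{-d/\ell_{\mathrm{abs}}(\lambda)}\right)}
             {\int d\lambda\, \mathrm{QE}(\lambda)\, C(\lambda)}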
2019-03-23  set CHARGE_FRACTION to 0.4 for both electrons and muons  (tlatorre)
2019-03-17  set a relative tolerance of 1e-2 on the optimization parameters in the fast fit  (tlatorre)
2019-03-17  update MIN_NPOINTS to 10 to speed things up  (tlatorre)
2019-03-17  add indirect light for fast likelihood calculation  (tlatorre)
2019-03-16  delete license header from pack2b.{c,h}  (tlatorre)
2019-03-16  add GPLv3 license  (tlatorre)
2019-03-16  switch to using SBPLX for the minimization  (tlatorre)
Based on some initial testing it seems that the subplex minimization algorithm performs *much* better than BOBYQA for multi-particle fits. It is also a bit slower, so I will probably have to figure out how to speed things up.
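For context, SBPLX and BOBYQA are local derivative-free algorithms, and switching between them is a one-line change if the minimization goes through NLopt (an assumption here); a minimal sketch, with my_nll() and the tolerance as placeholders:

    #include <nlopt.h>

    /* Placeholder objective (the real one is the event negative log likelihood). */
    static double my_nll(unsigned n, const double *x, double *grad, void *data)
    {
        (void) grad; (void) data;
        double f = 0.0;
        for (unsigned i = 0; i < n; i++) f += x[i]*x[i];
        return f;
    }

    int minimize(double *x, unsigned n)
    {
        nlopt_opt opt = nlopt_create(NLOPT_LN_SBPLX, n); /* was NLOPT_LN_BOBYQA */
        nlopt_set_min_objective(opt, my_nll, NULL);
        nlopt_set_xtol_rel(opt, 1e-4);
        double minf;
        nlopt_result rc = nlopt_optimize(opt, x, &minf);
        nlopt_destroy(opt);
        return rc;
    }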
2019-03-08  remove -fsanitize=address from Makefile  (tlatorre)
2019-03-08  fix some int -> floats in the PMT bank  (tlatorre)
2019-03-07  update fit to automatically load DQXX file based on run number  (tlatorre)
2019-03-07  update code to allow you to run the fit outside of the src directory  (tlatorre)
To enable the fitter to run outside of the src directory, I created a new function open_file() which works exactly like fopen() except that it searches for the file in both the current working directory and the path specified by an environment variable.
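A minimal sketch of such a wrapper (not the actual open_file(); the environment variable name SDDM_DATA_DIR is a placeholder):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch: like fopen(), but fall back to a directory named by an
     * environment variable if the file isn't in the current directory. */
    FILE *open_file_sketch(const char *filename, const char *mode)
    {
        FILE *f = fopen(filename, mode);
        if (f) return f;

        const char *dir = getenv("SDDM_DATA_DIR");
        if (!dir) return NULL;

        char path[4096];
        snprintf(path, sizeof(path), "%s/%s", dir, filename);
        return fopen(path, mode);
    }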
2019-03-07  update comment in test_path()  (tlatorre)
2019-03-07  fix a bug in path_init() when the direction was equal to (0,0,1)  (tlatorre)
2019-03-07  don't fix the position during the fast fit  (tlatorre)
2019-03-05  update quad() to not abort if the matrix is singular  (tlatorre)
2019-03-04  add a function to tag flasher events  (tlatorre)
2019-03-04  update fit to print gtid and nhit even if we skip the event  (tlatorre)
2019-03-04  add a --min-nhit command line argument  (tlatorre)
2019-03-04  update get_event() to handle events without a pmt bank link  (tlatorre)
2019-03-04  fix a bug in zebra_read_next_logical_record() when the size is zero  (tlatorre)
2019-03-04  speed up get_solid_angle_fast()  (tlatorre)
2019-03-04  skip logical record if there is no EV bank  (tlatorre)
In the processed zdab files (the SNOCR_* files), the first logical record just has a run header bank and no EV bank.
2019-03-04  skip reading in mcgn banks if there is no mc bank  (tlatorre)
2019-03-04  update run-fit so that you can ctrl-c it  (tlatorre)
2019-03-04  log(norm(...)) -> log_norm(...)  (tlatorre)
2019-03-04  check that all links are nonzero  (tlatorre)
2019-01-31  small updates to make sure we don't calculate nans  (tlatorre)
2019-01-29  normalize delta ray charge by total range  (tlatorre)
This is so that in the future if we only integrate over the path in the PSUP we don't overestimate the Cerenkov light from delta rays.
2019-01-29  add a function to compute the angular distribution normalization  (tlatorre)
This seems to speed things up a little bit.
2019-01-27  update find_peaks algorithm to subtract previous rings  (tlatorre)
Previously the find peaks algorithm would ignore any PMT hits within the Cerenkov ring of previously found rings. This had the problem that occasionally the algorithm would repeatedly find the same direction due to hits outside of the Cerenkov cone. The new update was inspired by how SuperK does this: instead we "subtract" off the previous rings by subtracting the average qhs times e^(-(cos(theta)-1/n)/0.1) from each PMT for each previous ring. Based on some quick testing this seems a lot better than the previous algorithm, but still probably needs some tweaking.
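The per-PMT down-weighting could look like the sketch below (an illustration only, not the actual code; the symmetric |.| falloff and the clamping at zero are assumptions, whereas the commit message quotes the weight as e^(-(cos(theta)-1/n)/0.1)):

    #include <math.h>

    /* Sketch: down-weight a PMT hit near an already-found ring before
     * re-running the Hough transform. cos_theta is the cosine of the angle
     * between the PMT (seen from the vertex) and the previous ring
     * direction, avg_qhs the mean QHS of that ring, and 1/n the Cerenkov
     * cone cosine. */
    double subtract_ring_sketch(double qhs, double avg_qhs, double cos_theta, double n)
    {
        double q = qhs - avg_qhs*exp(-fabs(cos_theta - 1.0/n)/0.1);
        return q > 0.0 ? q : 0.0; /* don't let the subtracted charge go negative */
    }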
2019-01-27  add photons from delta rays to likelihood calculation  (tlatorre)
This commit updates the likelihood function to take into account Cerenkov light produced from delta rays produced by muons. The angular distribution of this light is currently assumed to be constant along the track and parameterized in the same way as the Cerenkov light from an electromagnetic shower. Currently I assume the light is produced uniformly along the track which isn't exactly correct, but should be good enough.
2019-01-17  update test-find-peaks to test the first event  (tlatorre)
2019-01-15  fix a bug with getting the first MCTK bank  (tlatorre)
2019-01-15  update zebra library to be able to use links  (tlatorre)
This commit updates the zebra library files zebra.{c,h} so that it's now possible to traverse the data structure using links! This was originally motivated by wanting to figure out which MC particles were generated from the MCGN bank (from which it's only possible to access the tracks and vertices using structural links). I've also added a new test to test-zebra which checks the consistency of all of the next/up/orig, structural, and reference links in a zebra file.
2019-01-10  update find_peaks algorithm  (tlatorre)
Previously, the algorithm used to find peaks was to search for all peaks in the Hough transform above some constant fraction of the highest peak. This algorithm could have issues finding smaller peaks away from the highest peak.

The new algorithm instead finds the highest peak in the Hough transform and then recomputes the Hough transform ignoring all PMT hits within the Cerenkov cone of the first peak. The next peak is found from this transform and the process is repeated until a certain number of peaks are found.

One disadvantage of this new system is that it will *always* find the same number of peaks, and this will usually be greater than the actual number of rings in the event. This is not a problem though, since when fitting the event we loop over all possible peaks and do a quick fit to determine the starting point, so false positives are OK: the real peaks will fit better during this quick fit.

Another potential issue with this new method is that by rejecting all PMT hits within the Cerenkov cone of the first peak we could miss a second peak very close to the first peak. This is partially mitigated by the fact that when we loop over all possible combinations of the particle ids and directions we allow each peak to be used more than once. For example, when fitting for the hypothesis that an event is caused by two electrons and one muon, and given two possible directions 1 and 2, we will fit for the following direction combinations:

    1 1 1
    1 1 2
    1 2 1
    1 2 2
    2 2 1
    2 2 2

Therefore if there is a second ring close to the first it is possible to fit it correctly, since we will seed the quick fit with two particles pointing in the same direction.

This commit also adds a few tests for new functions and changes the energy step size during the quick fit to 10% of the starting energy value.
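For reference, enumerating direction assignments while allowing repeats can be done with a standard combinations-with-replacement recursion (a minimal sketch, not the project's combinations_with_replacement(); it lists the non-decreasing tuples used when the particles are identical, whereas the six combinations above additionally account for the muon being distinguishable from the two electrons):

    #include <stdio.h>

    /* Print all non-decreasing k-tuples drawn from {0, ..., n-1}. */
    static void combos(int *buf, int k, int pos, int start, int n)
    {
        if (pos == k) {
            for (int i = 0; i < k; i++) printf("%d ", buf[i]);
            printf("\n");
            return;
        }
        for (int i = start; i < n; i++) {
            buf[pos] = i;
            combos(buf, k, pos + 1, i, n); /* allow the same index again */
        }
    }

    int main(void)
    {
        int buf[3];
        combos(buf, 3, 0, 0, 2); /* 3 particles, 2 candidate directions */
        return 0;
    }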
2018-12-14  fix another bug in combinations_with_replacement()  (tlatorre)
Also, fix a few memory leaks in test.c.
2018-12-14  switch to using fit_event2() by default  (tlatorre)
This commit updates the fit to use the fit_event2() function which can fit for multi vertex hypotheses. It also uses the QUAD fitter and the Hough transform of the event to seed the fit so the results for 1 particle fits will be slightly different than before. I also fixed a small bug in combinations_with_replacement().
2018-12-14  add a function to compute combinations with replacement  (tlatorre)
2018-12-14  fix help string  (tlatorre)
2018-12-13  add some more comments and fix a memory leak  (tlatorre)
2018-12-13  add -fdiagnostics-color to Makefile  (tlatorre)
2018-12-13  add some comments  (tlatorre)
2018-12-13  update fit.c to fit multiple vertices  (tlatorre)
This commit adds a new function fit_event2() to fit multiple vertices. To seed the fit, fit_event2() does the following:

- use the QUAD fitter to find the position and initial time of the event
- call find_peaks() to find possible directions for the particles
- loop over all possible unique combinations of the particles and direction vectors and do a "fast" minimization

The best minimum found from the "fast" minimizations is then used to start the fit.

This commit has a few other updates:

- adds a hit_only parameter to the nll() function. This was necessary since previously PMTs which weren't hit were always skipped for the fast minimization, but when fitting for multiple vertices we need to include PMTs which aren't hit since we float the energy.
- add the function guess_energy() to guess the energy of a particle given a position and direction. This function estimates the energy by summing up the QHS for all PMTs hit within the Cerenkov cone and dividing by 6.
- fixed a bug which caused the fit to freeze when hitting ctrl-c during the fast minimization phase.
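The energy guess described above could be sketched as follows (not the actual guess_energy(); pmt_dir[] holding unit vectors from the vertex to each hit PMT and qhs[] are assumed inputs, and the divisor 6 is the heuristic quoted in the list above):

    #include <stddef.h>

    /* Sketch: estimate the energy by summing the QHS of all hit PMTs inside
     * the Cerenkov cone of the seed direction and dividing by 6. */
    double guess_energy_sketch(const double dir[3], const double (*pmt_dir)[3],
                               const double *qhs, size_t nhit, double cos_cerenkov)
    {
        double sum = 0.0;
        for (size_t i = 0; i < nhit; i++) {
            double cos_theta = dir[0]*pmt_dir[i][0] + dir[1]*pmt_dir[i][1] + dir[2]*pmt_dir[i][2];
            if (cos_theta > cos_cerenkov)
                sum += qhs[i]; /* hit is inside the Cerenkov cone */
        }
        return sum/6.0; /* heuristic charge-to-energy conversion */
    }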