This commit updates get_event() to assume that the PMT type in the pmt.txt
file is correct instead of assuming that the SNOMAN bank is correct. The
reason for this is that according to the email from Stan, 3/15/9 should be a
neck PMT (at least in the NCD phase), and that is what the pmt.txt file says
it should be. In addition, the changes that Stan said were needed in
ccc_type.for were never made.

This commit updates is_muon() to use the EHS charge for the OWL tubes, since
based on looking at the charges for the OWL tubes, the QHS values appear to
be completely random.

This commit updates get_event() to fill in the uncalibrated charge and time
info from the calibrated charge and time. The ECA calibrated time and the
ECA + PCA time without the walk correction (ept and pt1) are just set to the
calibrated time. The uncalibrated charges are set by multiplying the
calibrated charge by the mean high-half point and adding a constant offset,
and the ECA calibrated charges are set by taking the uncalibrated charges
and subtracting the offset. The reason for filling these in is so that we
can test the data cleaning cuts on MC. Although it would be better if these
were filled in by the MC, this is better than nothing.

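Schematically, the filling looks something like the following sketch (the
struct, field, and constant names here are illustrative stand-ins, not the
actual ones in get_event()):

    #define QHS_OFFSET 600.0 /* hypothetical constant charge offset */

    /* Illustrative stand-in for the event's PMT hit record. */
    typedef struct {
        double t;   /* calibrated time */
        double q;   /* calibrated charge */
        double ept; /* ECA calibrated time */
        double pt1; /* ECA + PCA time without walk correction */
        double qhs; /* uncalibrated charge */
        double ehs; /* ECA calibrated charge */
    } pmt_hit;

    static void fill_uncalibrated(pmt_hit *hit, double mean_hhp)
    {
        /* Both "uncalibrated" times are just set to the calibrated time. */
        hit->ept = hit->t;
        hit->pt1 = hit->t;
        /* Uncalibrated charge: calibrated charge scaled by the mean
         * high-half point, plus a constant offset. */
        hit->qhs = hit->q*mean_hhp + QHS_OFFSET;
        /* ECA calibrated charge: uncalibrated charge minus the offset. */
        hit->ehs = hit->qhs - QHS_OFFSET;
    }
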
This commit updates the zebra code to properly handle all the errors from
get_bytes(). I also updated fit and cat-zdab so that by default they don't
display the errors about the FTX banks unless you run them with the -v
command line option.

This commit fixes the FTS cut so that it returns 1 when the event is flagged
as failing the cut. Previously, the function returned the following:

    not enough PMT pairs: 0 (fail)
    median time > 6.8 ns: 0 (fail)
    otherwise:            1 (pass)

This had two issues: the return value wasn't consistent with the rest of the
data cleaning cuts, and the cut should pass if there aren't enough PMT
pairs. I fixed the not-enough-PMT-pairs case and made the return value
consistent with the rest of the data cleaning cuts:

    not enough PMT pairs: 0 (pass)
    median time > 6.8 ns: 1 (fail)
    otherwise:            0 (pass)

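In code, the fixed convention amounts to something like this sketch (the
function name and signature are my own, not the actual ones):

    /* Return 1 only when the event is flagged as failing the cut, like
     * the rest of the data cleaning cuts. */
    static int fts_cut(size_t npairs, size_t min_pairs, double median_time)
    {
        if (npairs < min_pairs)
            return 0; /* not enough PMT pairs: pass */
        return median_time > 6.8; /* fail only if median time > 6.8 ns */
    }
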
This commit updates the second requirement of the neck event tag to flag
events in which either 50% of the normal PMT hits are at the bottom of the
detector *or* 50% of the ECA calibrated charge is at the bottom of the
detector. This update was added after testing the cut on run 10058, which is
not in the golden run list, and noticing that there were 4 events which were
very clearly neck events but which didn't get tagged.

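A minimal sketch of the updated requirement (all names here are
assumptions; "bottom" means z <= -425.0, as in the original is_neck_event()
requirement):

    /* Flag the event if at least 50% of the normal PMT hits *or* at
     * least 50% of the ECA calibrated charge is at the bottom of the
     * detector. */
    static int neck_second_requirement(size_t nhit_bottom, size_t nhit,
                                       double q_bottom, double q_total)
    {
        return nhit_bottom >= 0.5*nhit || q_bottom >= 0.5*q_total;
    }
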
This commit fixes a bug in get_shower_weights() and get_delta_ray_weights()
which was causing an inf value to propagate and cause the fitter to crash.
The problem was that, due to floating point roundoff, the cdf value at the
end of the loop was slightly greater than the last cdf value we wanted,
which was causing it to get mapped to cos(theta) = -1 (I think?) and then
subsequently get interpolated to an infinite value for xcdf. The fix is just
to make sure that the x coordinate is always between x1 and x2.

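The essence of the fix is just a clamp before interpolating (variable names
assumed):

    /* Clamp x to [x1,x2] so floating point roundoff in the accumulated
     * cdf can never push the interpolation point outside the table,
     * which is what produced the infinite xcdf. */
    static double clamp(double x, double x1, double x2)
    {
        if (x < x1) return x1;
        if (x > x2) return x2;
        return x;
    }
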
This commit updates the charge spectrum initialization to convolve the
charge distribution over a wider range to make sure that we have a nonzero
probability of observing charges all the way down to qlo. The reason for
this is that previously, for some negative charges, the probability of
observing that charge was zero, which was causing the likelihood to return
nan.

This commit updates the fit program to accept a particle combo from the
command line so you can fit for a single particle combination hypothesis.
For example, running:

    $ ./fit ~/zdabs/mu_minus_700_1000.hdf5 -p 2020

would just fit for the 2 electron hypothesis. The reason for adding this
ability is that my grid jobs were getting evicted when fitting muons in run
10,000, since it takes tens of hours to fit for all the particle
hypotheses. With this change, and a small update to the submit-grid-jobs
script, we now submit a single grid job per particle combination hypothesis,
which should make each grid job run approximately 4 times faster.

This commit updates get_event() to flag PMT charges below qlo, which is the
minimum charge for which we compute the charge PDFs. This is to prevent the
likelihood from returning nan.

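Schematically (the flag bit and helper names are stand-ins):

    #define PMT_FLAG_CHARGE_BELOW_QLO 0x01 /* hypothetical flag bit */

    /* Mark hits whose charge is below qlo, the minimum charge covered
     * by the charge PDFs, so the likelihood never evaluates a PDF at a
     * charge where it has zero probability. */
    static unsigned int flag_charge(double q, double qlo, unsigned int flags)
    {
        if (q < qlo)
            flags |= PMT_FLAG_CHARGE_BELOW_QLO;
        return flags;
    }
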
This commit updates fast_acos() to use an algorithm I found here:
https://web.archive.org/web/20161223122122/http://http.developer.nvidia.com:80/Cg/acos.html
which is faster than using a lookup table.

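For reference, the routine on that page is a polynomial approximation from
Abramowitz & Stegun (equation 4.4.45) rather than a table; translated to C
it looks roughly like this (whether the repo's fast_acos() matches it
exactly is an assumption):

    #include <math.h>

    /* Polynomial acos() approximation from the linked Cg page; the
     * maximum absolute error is of order 1e-4. */
    static float fast_acos(float x)
    {
        float negate = (x < 0) ? 1.0f : 0.0f;
        float ret;

        x = fabsf(x);
        ret = -0.0187293f;
        ret = ret*x + 0.0742610f;
        ret = ret*x - 0.2121144f;
        ret = ret*x + 1.5707288f;
        ret *= sqrtf(1.0f - x);
        /* acos(-x) = pi - acos(x) */
        ret -= 2.0f*negate*ret;
        return negate*3.14159265f + ret;
    }
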
After some testing, I realized that the fast_sqrt() function wasn't really
faster than the native sqrt() function.

This commit updates the likelihood to use the multiphoton PCA time instead
of the usual pt time. When looking at the reconstruction of muons in run
10,000, I noticed that the PMT hit times for the PMTs with really high
charge were all over the place. There were PMTs that were very close to
each other with hit times differing by ~ 20 ns. I'm still not entirely sure
what causes this (is it some hardware issue with the discriminator, or is
it a problem with the charge walk correction, which always assumes a single
PE?), but the multiphoton PCA times looked a lot more reasonable.
Eventually I need to look into the ptms variable, which is the multiphoton
PCA transit time RMS.

Previously, I was accidentally passing the absolute position of the
particle instead of the distance to the PMT to get_theta0_min().

This commit fixes a bug introduced in a previous commit when I moved all
the code that adds the run, gtid, etc. to the event object into
get_event(). We actually need the run number before then in order to load
the DQXX file for the right run.

This commit updates the is_neck_event() function to include the requirement
that 50% of the normal PMTs in the event must have z <= -425.0.

This commit updates the flasher cut to use the median hit time for all hits
in the paddle card instead of just the time of the channel with the highest
charge.

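A sketch of the median computation (helper names are mine):

    #include <stdlib.h>

    static int compare_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* Median of the n hit times on a paddle card; sorts in place. */
    static double median_time(double *t, size_t n)
    {
        qsort(t, n, sizeof t[0], compare_double);
        return n % 2 ? t[n/2] : (t[n/2-1] + t[n/2])/2;
    }
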
This commit updates zdab-cat to output each event as an individual YAML
document. The advantage of this is that we can then iterate over the events
without loading the entire YAML document in submit-grid-jobs, which means
that we won't use gigabytes of memory on the grid submission node.