|
This commit updates the ./fit program to add a ctrl-z handler that lets you
skip events. This is really handy when testing new things. Currently, if you
press ctrl-z after at least one of the initial fits has completed, it will
skip the remaining initial fits and move on to the final minimization stage. If
you press ctrl-z during the final minimization, it will skip fitting the event.
Currently this will *not* save the result to the file, but I may change that in
the future.
|
|
This commit updates get_expected_photons() to check whether there are any
shower photons or delta ray photons before adding them, since the
corresponding PDF isn't constructed when there are none.
|
|
PSUP
|
|
This commit updates submit-grid-jobs so that it keeps a database of jobs. This
allows the script to make sure that we only have a certain number of jobs in
the job queue at any one time and to automatically resubmit failed jobs. The
idea is that it can now be run once to add jobs to the database:
$ submit-grid-jobs ~/zdabs/SNOCR_0000010000_000_p4_reduced.xzdab.gz
and then be run periodically via crontab:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * submit-grid-jobs --auto --logfile ~/submit.log
Similarly I updated cat-grid-jobs so that it uses the same database and can
also be run via a cron job:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * cat-grid-jobs --logfile cat.log --output-dir $HOME/fit_results
I also updated fit so that it keeps track of the total time elapsed including
the initial fits instead of just counting the final fits.
|
|
|
|
is large
Previously, to achieve a large speedup in the likelihood calculation I added a
line to skip calculating the charge if:
abs((cos(theta)-cos_theta_cerenkov)/(sin_theta*theta0)) > 5
However, I noticed that this was causing discontinuities in the likelihood
function when fitting low energy muons, so I'm putting it behind a compile time
flag for now.
|
|
This commit again updates how we handle PMTs whose type differs between the
snoman.ratdb file and the SNOMAN bank. In particular, we now trust the
snoman.ratdb type *only* for the NCD runs and mark the PMT as invalid for the
D2O and salt phases.
This was spurred by noticing that with the current code GTID 9228 in run 10000
was being marked as a neck event even though it was clearly a muon and XSNOED
only showed one neck hit. It was marked as a neck event because there were 2
neck PMT hits in the event: 3/15/9 and 13/15/0. After reading Stan's email more
carefully I realized that 3/15/9 was only installed as a neck PMT in the NCD
phase. I don't really know what type of PMT it was in the D2O and salt phases
(maybe an OWL), but in any case since I don't know the PMT position I don't
think we can use this PMT for these phases.
|
|
This commit updates get_expected_charge() to always use the index of refraction
for d2o instead of choosing the index of d2o or h2o based on the position of
the particle. The reason for this is that selecting the index based on the
position was causing discontinuities in the likelihood function for muon tracks
which crossed the AV.
|
|
This commit adds a new test to test the quad fitter when the t0 quantile
argument is less than 1.
|
|
This commit updates get_event() to clear any PMT flags except for PMT_FLAG_DQXX
from all PMT hits before loading the event. Although I *was* previously
clearing the other flags for hit PMTs, I was not clearing flags for PMTs which
were *not* hit. This was causing non-deterministic behaviour, i.e. I was
getting different results depending on whether I ran the fitter over a whole
file or just a single event.
|
|
This commit updates the likelihood function to use the PMT hit time without the
time walk correction applied (when the charge is greater than 1.5 PE) instead
of the multiphoton PCA time. The reason is that after talking with Chris Kyba I
realized that the multiphoton PCA time was calibrated to give the mean PMT hit
time when multiple photons hit at the same time, instead of the time when the
first photon hits, which is what I assume in my likelihood function.
Therefore I now use the regular PMT hit time without time walk correction
applied which should be closer to the first order statistic.
|
|
|
|
This commit updates the likelihood function to initialize mu_indirect to 0.0
since it's a static array. This can have an impact when the fit position is
outside of the PSUP and we skip calculating the charges.
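A toy illustration of the issue (MAX_PMTS and the function name are tiny
stand-ins, not the real code): a static array keeps its values between calls,
so if the charge loop is skipped the previous event's values would be reused
unless the array is zeroed on entry.

```c
#include <string.h>

#define MAX_PMTS 3 /* tiny stand-in for the real PMT count */

static double mu_indirect[MAX_PMTS];

double sum_indirect(int skip_charges)
{
    double total = 0.0;
    int i;

    /* The fix: always reinitialize the static array on entry. Without this
     * memset, a call with skip_charges set would sum stale values left over
     * from the previous call. */
    memset(mu_indirect, 0, sizeof(mu_indirect));

    if (!skip_charges) {
        for (i = 0; i < MAX_PMTS; i++)
            mu_indirect[i] = 1.0; /* stand-in for the real charge calculation */
    }

    for (i = 0; i < MAX_PMTS; i++)
        total += mu_indirect[i];

    return total;
}
```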
|
|
|
|
|
|
This commit updates the crate, card, and channel variables in is_slot_early()
to be ints instead of size_ts. The reason is that I saw a warning when building
with clang and realized that the abs(card - flasher_card) == 1 check wasn't
working when flasher_card was 1 greater than card, because the unsigned
subtraction wrapped around.
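A small demonstration of the wraparound (the function names are illustrative):
with size_t operands, card - flasher_card wraps to a huge unsigned value when
flasher_card > card, so the adjacency check only fires in one direction.

```c
#include <stdlib.h>

/* Buggy version: with size_t operands, 2 - 3 wraps to SIZE_MAX, not -1,
 * so the check only works when card > flasher_card. */
int is_adjacent_buggy(size_t card, size_t flasher_card)
{
    return card - flasher_card == 1;
}

/* Fixed version: do the arithmetic on signed ints, as the commit does. */
int is_adjacent_fixed(int card, int flasher_card)
{
    return abs(card - flasher_card) == 1;
}
```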
|
|
|
|
get_hough_transform()
This commit adds two improvements to the quad fitter:
1. I updated quad to weight the random PMT hit selection by the probability
that the PMT hit is a multiphoton hit. The idea here is that we really only
want to sample direct light and for high energy events the reflected and
scattered light is usually single photon.
2. I added an option to quad to only use points in the quad cloud which are
below a given quantile of t0. The idea here is that for particles like muons
which travel more than a few centimeters in the detector the quad cloud usually
looks like the whole track. Since we want the quad fitter to find the position
of the *start* of the track, we select only those quad cloud points with an
early time, so the fitted position is closer to the start of the track.
Also, I fixed a major bug in get_hough_transform() in which I was using the
wrong index variable when checking if a PMT was not flagged, a normal PMT, and
was hit. This was causing the algorithm to completely miss finding more than
one ring while I was testing it.
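The quantile selection in improvement 2 can be sketched as follows, using a
simple nearest-rank convention (the names and the exact indexing are
illustrative, not necessarily what quad does):

```c
#include <stdlib.h>
#include <string.h>

static int compare_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Return the t0 value below which a fraction `quantile` of the n cloud
 * points lie; cloud points with t0 above this threshold would be dropped
 * before estimating the event position. */
double t0_threshold(const double *t0, size_t n, double quantile)
{
    double *sorted = malloc(n*sizeof(double));
    double result;

    memcpy(sorted, t0, n*sizeof(double));
    qsort(sorted, n, sizeof(double), compare_double);
    result = sorted[(size_t)(quantile*(n - 1))];
    free(sorted);

    return result;
}
```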
|
|
This commit updates guess_energy() which is used to seed the energy for the
likelihood fit. Previously we estimated the energy by summing up the charge in
a 42 degree cone around the proposed direction and then dividing that by 6
(since electrons in SNO and SNO+ produce approximately 6 hits/MeV). Now,
guess_energy() estimates the energy by calculating the expected number of
photons produced from Cerenkov light, EM showers, and delta rays for a given
particle at a given energy. The most likely energy is found by bisecting the
difference between the expected number of photons and the observed charge to
find when they are equal.
This improves things dramatically for cosmic muons which have energies of ~200
GeV. Previously the initial guess was always very low (~1 GeV) and the fit
could take > 1 hour to increase the energy.
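The bisection step can be sketched as below. The linear expected_photons()
model here is a toy stand-in for the real calculation, which sums Cerenkov,
shower, and delta ray photons; the function names and bounds are illustrative.

```c
/* Toy model: photons roughly proportional to energy (stand-in only). */
static double expected_photons(double energy)
{
    return 400.0*energy;
}

/* Bisect for the energy at which the expected number of photons equals the
 * observed charge. Assumes expected_photons() is monotonically increasing
 * on [lo, hi]. */
double bisect_energy(double observed_photons, double lo, double hi)
{
    int i;

    for (i = 0; i < 100; i++) {
        double mid = (lo + hi)/2;

        if (expected_photons(mid) < observed_photons)
            lo = mid;
        else
            hi = mid;
    }

    return (lo + hi)/2;
}
```

Because the search brackets the answer from both sides, a ~200 GeV muon seeds
at roughly the right energy immediately instead of starting at ~1 GeV.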
|
|
|
|
This commit updates the flasher cut to flag events in which the PMT with the
highest pedestal subtracted QLX charge is 80 counts above the next highest QLX
charge, has at least 4 hits in the same slot, and passes the final check in
the flasher cut (70% of the normal PMT hits must be 50 ns after the high charge
channel and 70% of the normal PMT hits must be at least 12 meters away from the
high charge channel).
This update was motivated by run 20062 GTID 818162. This was a flasher event,
but it had only 3 hits in the PC and so passed the previous version of the
cut. This new update was inspired by the SNO QvT cut.
|
|
|
|
This commit updates the flasher cut with the following changes:
- no longer require nhit to be less than 1000
- update charge criteria to be that the flasher channel must have a QHS or QHL
1000 counts above the next highest QHS or QHL value in the PC or a QLX value
80 counts above the next highest QLX value
- only check is_slot_early() for missing hits in the PC
These updates were inspired by looking at how to tag flashers in runs 20062 -
20370 which didn't fail the original cut. In particular, the following flashers
were previously not tagged:
Run GTID Comments
--- ---- --------
20062 818162 flasher with only 3 hits in PC
reconstructs at PSUP
ESUMH triggered
20083 120836 high charge missing (in next couple of events)
probably picked wrong flasher PMT ID
20089 454156 nhit > 1000
After this commit the last two are properly tagged.
|
|
This commit updates the QvNHIT cut to not require PMT hits to have a good
calibration to be included in the charge sum. The reason for this is that many
electrical pickup events have lots of hits which are pickup and thus have small
or negative charges. When the charge is low like this the PMT hits get flagged
with the bad calibration bit (I'm not sure if it's because of the PMT charge
walk calibration or what). Therefore, we now include all hit PMTs in the charge
sum if their ECA calibrated QHL value is above -100.
|
|
This commit updates the zebra code to store a pointer to the first MAST bank in
the zebraFile struct so that we can jump to it when iterating over the logical
records. I had naively assumed based on the documentation in the SNOMAN
companion that the first bank in a logical record was guaranteed to be a MAST
bank, but that doesn't seem to be the case. This also explains why I was
sometimes seeing RHDR and ZDAB banks as the first bank in a logical record.
|
|
This commit updates the breakdown cut to flag any event in which less than 70%
of the PMT hits have a good TAC value.
|
|
This commit updates the a and b parameters for the gamma distribution used to
describe the position distribution of shower photons produced along the
direction of the muon. Previously I had been assuming b was equal to the
radiation length and using a formula from the PDG to calculate a from that.
However, this formula doesn't seem to be valid for muons (the formula comes
from a section describing the shower profile of electrons and gammas, so it's
not surprising). Therefore, now we don't assume any relationship between a and
b.
Now, the value of a is approximated by a constant, since I couldn't find a
functional form in energy that described a very well (and it's approximately
constant), and b is approximated by a first degree polynomial fit to the
values I got from simulating muons in RAT-PAC as a function of energy.
Note that looking at the simulation data it seems like the position
distribution of shower photons from muons isn't even very well described by a
gamma distribution, so in the future it might be a good idea to come up with a
better parameterization.
Even if I stick with the gamma distribution, it would be good to revisit this
in the future and fit for a and b over a wider range of energies.
|
|
This commit adds the sub_run variable to the ev array in the HDF5 output file
and updates plot-energy to order the events using the run and sub_run
variables. This fixes a potential issue where I was sorting by GTID before, but
the GTID can wrap around and so isn't guaranteed to put the events in the right
order.
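The fix amounts to sorting on the (run, sub_run) pair instead of a single
counter that can wrap. A sketch of such a comparator (the struct layout is
illustrative; the real ev array lives in the HDF5 file):

```c
#include <stdlib.h>

struct event {
    int run;
    int sub_run;
    unsigned int gtid; /* can wrap around, so not safe as a sort key */
};

/* Order events by run, then sub_run. */
static int compare_event(const void *a, const void *b)
{
    const struct event *x = a, *y = b;

    if (x->run != y->run)
        return (x->run > y->run) - (x->run < y->run);
    return (x->sub_run > y->sub_run) - (x->sub_run < y->sub_run);
}
```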
|
|
This commit updates the ITC cut to use the pt1 time which is the ECA + PCA
without charge walk calibration time. The reason is that an event which is
mostly electronics noise may have all low charges which can't be calibrated
with the PCA walk calibration.
|
|
This commit updates the ev.nhit variable to represent the total number of
normal PMTs hit in the event, regardless of whether the calibration failed. I
added a new variable, ev.nhit_cal, which stores the total number of normal
PMTs hit without any flags.
|
|
This commit adds a field to the pmt_hit struct called best_uncal_q which
represents the best ECA calibrated charge (in units of QHS counts above
pedestals). This is then used in the muon data cleaning cut.
|
|
|
|
This commit updates the muon cut to require at least 1 OWL hit which has a high
charge and is early relative to nearby normal PMTs. This replaces the previous
cut criteria, which required 2 OWL hits to be early relative to the 10th
percentile of all the normal PMT hit times.
This new cut should target external muons better although I still need to do
some testing. In the future I'd like to optimize the distance at which PMTs are
considered nearby. Currently I just use 3 meters which seemed like a good value
based on some initial testing.
|
|
|
|
|
|
This commit updates is_slot_early() to do the following:
- use median time when checking to see if the time of the PMT hits in the slot
is early relative to nearby PMTs
- return True if the number of nearby PMTs is less than or equal to the number
of hit PMTs in the potential flasher slot
- skip potential cross talk hits when finding the nearby hit PMTs by skipping
hits in the adjacent slots with the same paddle card as the potential flasher
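Using the median makes the slot-timing comparison robust to a single outlying
hit. A sketch of the median, using the usual even/odd convention (not
necessarily the exact one in is_slot_early()):

```c
#include <stdlib.h>
#include <string.h>

static int compare_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Median of the n hit times in t: middle element for odd n, average of the
 * two middle elements for even n. */
double median_time(const double *t, size_t n)
{
    double *sorted = malloc(n*sizeof(double));
    double result;

    memcpy(sorted, t, n*sizeof(double));
    qsort(sorted, n, sizeof(double), compare_double);

    if (n % 2)
        result = sorted[n/2];
    else
        result = (sorted[n/2 - 1] + sorted[n/2])/2;

    free(sorted);

    return result;
}
```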
|
|
|
|
|
|
This commit updates get_event() to assume the PMT type is correct in the
pmt.txt file instead of assuming the SNOMAN bank is correct. The reason for
this is that according to the email from Stan 3/15/9 should be a neck PMT (at
least in the NCD phase) and that is what the pmt.txt file says it should be. In
addition, the changes that Stan said were needed in ccc_type.for never got
made.
|
|
This commit updates is_muon() to use the EHS charge for the OWL tubes since
based on looking at the charges for the OWL tubes it looks like the QHS values
are completely random.
|
|
|
|
|
|
|
|
|
|
This commit updates get_event() to fill in the uncalibrated charge and time
info from the calibrated charge and time. The ECA calibrated time and ECA + PCA
without walk correction times (ept and pt1) are just set to the calibrated
time. The uncalibrated charges are set by multiplying the calibrated charge by
the mean high-half point and adding a constant offset, and the ECA calibrated
charges are set by taking the uncalibrated charges and subtracting the offset.
The reason for filling these in is so that we can test the data cleaning cuts
on MC. Although it would be better if these were filled in by the MC, this is
better than nothing.
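As a sketch of the arithmetic (the constants below are invented placeholders,
not the actual calibration values):

```c
#define MEAN_HHP 30.0   /* hypothetical mean high-half point, counts/PE */
#define QOFFSET  600.0  /* hypothetical pedestal offset, counts */

/* Uncalibrated charge filled in from the MC calibrated charge (in PE). */
double uncal_charge(double qcal)
{
    return qcal*MEAN_HHP + QOFFSET;
}

/* ECA calibrated charge: the uncalibrated charge minus the offset, which
 * recovers qcal*MEAN_HHP. */
double eca_charge(double qcal)
{
    return uncal_charge(qcal) - QOFFSET;
}
```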
|
|
|
|
|
|
|
|
This commit updates the zebra code to properly handle all the errors from
get_bytes(). I also updated fit and cat-zdab to not display the errors about
the FTX banks by default unless you run them with the -v command line option.
|
|
This commit fixes the FTS cut so that it returns 1 when the event is flagged as
failing the cut. Previously, the function returned the following:
not enough PMT pairs: 0 (fail)
median time > 6.8 ns: 0 (fail)
otherwise: 1 (pass)
This had two issues: the return value wasn't consistent with the rest of the
data cleaning cuts and it should pass if there aren't enough PMT pairs.
Now, I fixed the not enough PMT pairs case and made the return value consistent
with the rest of the data cleaning cuts:
not enough PMT pairs: 0 (pass)
median time > 6.8 ns: 1 (fail)
otherwise: 0 (pass)
|