This commit updates submit-grid-jobs so that it keeps a database of jobs. This
allows the script to ensure that we only have a certain number of jobs in the
job queue at a time and to automatically resubmit failed jobs. The idea is that
it can now be run once to add jobs to the database:
$ submit-grid-jobs ~/zdabs/SNOCR_0000010000_000_p4_reduced.xzdab.gz
and then be run periodically via crontab:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * submit-grid-jobs --auto --logfile ~/submit.log
Similarly I updated cat-grid-jobs so that it uses the same database and can
also be run via a cron job:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * cat-grid-jobs --logfile cat.log --output-dir $HOME/fit_results
I also updated fit so that it keeps track of the total time elapsed including
the initial fits instead of just counting the final fits.
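As a rough illustration of the bookkeeping (a sketch only: the table layout, job states, and MAX_RUNNING limit below are made up, not the actual submit-grid-jobs schema), the job database can be as simple as an SQLite table:
    import sqlite3

    MAX_RUNNING = 100  # hypothetical cap on jobs in the queue at once

    conn = sqlite3.connect("jobs.sqlite")
    conn.execute("""CREATE TABLE IF NOT EXISTS jobs (
        id INTEGER PRIMARY KEY,
        filename TEXT,
        state TEXT DEFAULT 'NEW',   -- e.g. NEW, RUNNING, DONE, FAILED
        retries INTEGER DEFAULT 0)""")

    def submit_pending(conn):
        # Top the queue up to MAX_RUNNING jobs and resubmit failed ones.
        running = conn.execute(
            "SELECT COUNT(*) FROM jobs WHERE state = 'RUNNING'").fetchone()[0]
        rows = conn.execute(
            "SELECT id FROM jobs WHERE state IN ('NEW', 'FAILED') LIMIT ?",
            (max(MAX_RUNNING - running, 0),)).fetchall()
        for (job_id,) in rows:
            # condor_submit would be called here; we just record the new state.
            conn.execute("UPDATE jobs SET state = 'RUNNING', retries = retries + 1 "
                         "WHERE id = ?", (job_id,))
        conn.commit()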
Previously, to achieve a large speedup in the likelihood calculation, I added a
line to skip calculating the charge if:
abs((cos(theta)-cos_theta_cerenkov)/(sin_theta*theta0)) > 5
However, I noticed that this was causing discontinuities in the likelihood
function when fitting low energy muons, so I'm putting it behind a compile-time
flag for now.
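For reference, a minimal sketch of the criterion (not the actual C code; theta0 here is the angular width used in the likelihood):
    import numpy as np

    def skip_charge(cos_theta, cos_theta_cerenkov, theta0, cutoff=5.0):
        # Skip the charge calculation when the PMT lies more than `cutoff`
        # widths (theta0) away from the Cherenkov cone.
        sin_theta = np.sqrt(1 - cos_theta**2)
        return abs((cos_theta - cos_theta_cerenkov)/(sin_theta*theta0)) > cutoff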
This commit updates plot-fit-results to use the median when plotting the energy
and position bias and the interquartile range (times 1.35) when plotting the
energy and position resolution. The reason is that single large outliers for
higher energy muons were causing the energy bias and resolution to no longer
represent the central part of the distribution well.
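As a sketch of the per-bin calculation (assuming the 1.35 factor is the usual Gaussian conversion between the interquartile range and sigma):
    import numpy as np

    def robust_bias_and_resolution(residuals):
        # The median and IQR are insensitive to a single large outlier.
        residuals = np.asarray(residuals)
        bias = np.median(residuals)
        q25, q75 = np.percentile(residuals, [25, 75])
        resolution = (q75 - q25)/1.35  # IQR of a Gaussian is ~1.35 sigma
        return bias, resolution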
This commit again updates how we handle PMTs whose type differs between the
snoman.ratdb file and the SNOMAN bank. In particular, we now trust the
snoman.ratdb type *only* for the NCD runs and mark the PMT as invalid for the
D2O and salt phases.
This was spurred by noticing that with the current code GTID 9228 in run 10000
was being marked as a neck event even though it was clearly a muon and XSNOED
only showed one neck hit. It was marked as a neck event because there were 2
neck PMT hits in the event: 3/15/9 and 13/15/0. After reading Stan's email more
carefully I realized that 3/15/9 was only installed as a neck PMT in the NCD
phase. I don't really know what type of PMT it was in the D2O and salt phases
(maybe an OWL), but in any case since I don't know the PMT position I don't
think we can use this PMT for these phases.
This commit fixes a small bug in cat-grid-jobs which was causing it to print
the wrong filename when there was no git_sha1 attribute in the HDF5 file.
I noticed that many of my jobs were failing with the following error:
module: command not found
My submit description files *should* only be selecting nodes with modules because of this line:
requirements = (HAS_MODULES =?= true) && (OSGVO_OS_STRING == "RHEL 7") && (OpSys == "LINUX")
which I think I got from
https://support.opensciencegrid.org/support/solutions/articles/12000048518-accessing-software-using-distributed-environment-modules.
I looked up what the =?= operator does and it's a case-sensitive comparison. I
also found another site
(https://support.opensciencegrid.org/support/solutions/articles/5000633467-steer-your-jobs-with-htcondor-job-requirements)
which uses the normal == operator. Therefore, I'm going to switch to the ==
operator and hope that fixes the issue.
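Presumably the updated requirements line then reads:
requirements = (HAS_MODULES == true) && (OSGVO_OS_STRING == "RHEL 7") && (OpSys == "LINUX")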
This commit updates get_expected_charge() to always use the index of refraction
for d2o instead of choosing the index of d2o or h2o based on the position of
the particle. The reason for this is that selecting the index based on the
position was causing discontinuities in the likelihood function for muon tracks
which crossed the AV.
This commit fixes a bug in gen-dark-matter which was causing the positions to
all be generated with positive x, y, and z values. Doh!
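As a hedged sketch (not the actual gen-dark-matter code), sampling a position uniformly inside a sphere needs a 2*u - 1 style mapping so that all three coordinates can come out negative as well as positive:
    import numpy as np

    def random_position(rmax):
        # Rejection-sample a point uniformly inside a sphere of radius rmax.
        # Using np.random.random(3) directly (range [0, 1)) is exactly the kind
        # of bug that yields only positive x, y, and z.
        while True:
            pos = rmax*(2*np.random.random(3) - 1)
            if np.linalg.norm(pos) < rmax:
                return pos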
This commit adds a new test to test the quad fitter when the t0 quantile
argument is less than 1.
This commit updates get_event() to clear any PMT flags except for PMT_FLAG_DQXX
from all PMT hits before loading the event. Although I *was* previously
clearing the other flags for hit PMTs, I was not clearing flags for PMTs which
were *not* hit. This was causing non-deterministic behaviour, i.e. I was
getting different results depending on whether I ran the fitter over a whole
file or just a single event.
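Schematically (a Python rendering of the idea; the real code is C and the flag value below is a placeholder), get_event() now does something like this for every PMT, hit or not:
    PMT_FLAG_DQXX = 0x1  # placeholder bit value for illustration

    def clear_flags(pmts):
        # Keep only the DQXX bit on *all* PMTs so flags set while processing a
        # previous event can't leak into this one.
        for pmt in pmts:
            pmt.flags &= PMT_FLAG_DQXX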
This commit updates the likelihood function to use the PMT hit time without the
time walk correction applied (when the charge is greater than 1.5 PE) instead
of the multiphoton PCA time. The reason is that after talking with Chris Kyba I
realized that the multiphoton PCA time was calibrated to give the mean PMT hit
time when multiple photons hit at the same time, instead of the time when the
first photon hits, which is what I assume in my likelihood function.
Therefore, I now use the regular PMT hit time without the time walk correction
applied, which should be closer to the first order statistic.
This commit updates the likelihood function to initialize mu_indirect to 0.0
since it's a static array. This can have an impact when the fit position is
outside of the PSUP and we skip calculating the charges.
This commit updates the crate, card, and channel variables in is_slot_early()
to be ints instead of size_ts. The reason is that I saw a warning when building
with clang and realized that the abs(card - flasher_card) == 1 check wasn't
working when flasher_card was 1 greater than card, because the subtraction was
done with unsigned integers and so wrapped around instead of going negative.
This commit adds two improvements to the quad fitter:
1. I updated quad to weight the random PMT hit selection by the probability
that the PMT hit is a multiphoton hit. The idea here is that we really only
want to sample direct light and for high energy events the reflected and
scattered light is usually single photon.
2. I added an option to quad to only use points in the quad cloud which are
below a given quantile of t0. The idea here is that for particles like muons,
which travel more than a few centimeters in the detector, the quad cloud
usually looks like the whole track. Since we want the quad fitter to find the
position of the *start* of the track, we select only those quad cloud points
with an early time so the position is closer to the start of the track (see the
sketch below).
Also, I fixed a major bug in get_hough_transform() in which I was using the
wrong index variable when checking whether a PMT was not flagged, was a normal
PMT, and was hit. This was causing the algorithm to completely miss finding
more than one ring while I was testing it.
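A rough numpy sketch of the two quad changes (the charge-based weight is only a stand-in for the multiphoton probability, and the names are illustrative):
    import numpy as np

    def choose_hits(charge, nhits=4):
        # Randomly pick the PMT hits used to build one quad cloud point,
        # weighted by a proxy for the probability that the hit is multiphoton.
        w = np.asarray(charge, dtype=float)
        w /= w.sum()
        return np.random.choice(len(w), size=nhits, replace=False, p=w)

    def cut_cloud(t0, quantile):
        # Keep only quad cloud points whose t0 is below the given quantile so
        # the fitted position is closer to the start of the track.
        t0 = np.asarray(t0)
        return t0[t0 <= np.quantile(t0, quantile)]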
This commit updates guess_energy() which is used to seed the energy for the
likelihood fit. Previously we estimated the energy by summing up the charge in
a 42 degree cone around the proposed direction and then dividing that by 6
(since electrons in SNO and SNO+ produce approximately 6 hits/MeV). Now,
guess_energy() estimates the energy by calculating the expected number of
photons produced from Cerenkov light, EM showers, and delta rays for a given
particle at a given energy. The most likely energy is then found by bisection:
we search for the energy at which the expected number of photons equals the
observed charge.
This improves things dramatically for cosmic muons which have energies of ~200
GeV. Previously the initial guess was always very low (~1 GeV) and the fit
could take > 1 hour to increase the energy.
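Schematically the new seed looks like this, where expected_photons() is a stand-in for the actual calculation of the expected number of photons from Cerenkov light, EM showers, and delta rays:
    def guess_energy(observed_charge, expected_photons, lo=1.0, hi=1e6, tol=1e-3):
        # Bisect for the energy at which the expected number of photons matches
        # the observed charge. expected_photons(E) must be increasing in E.
        for _ in range(100):
            mid = 0.5*(lo + hi)
            if expected_photons(mid) < observed_charge:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol*mid:
                break
        return 0.5*(lo + hi)
For example, guess_energy(qsum, lambda E: 6*E) just reproduces the old charge/6 estimate.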
This commit updates the flasher cut to flag events in which the PMT with the
highest pedestal-subtracted QLX charge is 80 counts above the next highest QLX
charge, there are at least 4 hits in the same slot as that PMT, and the event
passes the final check in the flasher cut (70% of the normal PMT hits must be
50 ns after the high charge channel and 70% of the normal PMT hits must be at
least 12 meters away from the high charge channel).
This update was motivated by run 20062 GTID 818162. This was a flasher event,
but it had only 3 hits in the PC and so passed the previous version of the cut.
This new update was inspired by the SNO QvT cut.
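Spelled out as a sketch (pseudo-Python; the inputs are assumed to be computed elsewhere and the names are illustrative), the new branch of the cut is roughly:
    def qlx_flasher(qlx_sorted, slot_hits, late_frac, far_frac):
        # qlx_sorted: pedestal-subtracted QLX charges, highest first
        # slot_hits:  number of hits in the same slot as the highest-QLX PMT
        # late_frac:  fraction of normal PMT hits 50 ns after the high charge channel
        # far_frac:   fraction of normal PMT hits at least 12 m from it
        return (qlx_sorted[0] - qlx_sorted[1] >= 80 and
                slot_hits >= 4 and
                late_frac >= 0.7 and
                far_frac >= 0.7)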
This commit updates the flasher cut with the following changes:
- no longer require nhit to be less than 1000
- update charge criteria to be that the flasher channel must have a QHS or QHL
1000 counts above the next highest QHS or QHL value in the PC or a QLX value
80 counts above the next highest QLX value
- only check is_slot_early() for missing hits in the PC
These updates were inspired by looking at how to tag flashers in runs 20062 -
20370 which didn't fail the original cut. In particular, the following flashers
were previously not tagged:
Run    GTID    Comments
---    ----    --------
20062  818162  flasher with only 3 hits in PC;
               reconstructs at PSUP; ESUMH triggered
20083  120836  high charge missing (in next couple of events);
               probably picked wrong flasher PMT ID
20089  454156  nhit > 1000
After this commit the last two are properly tagged.
This commit updates the QvNHIT cut to not require PMT hits to have a good
calibration to be included in the charge sum. The reason for this is that many
electrical pickup events have lots of hits which are pickup and thus have small
or negative charges. When the charge is low like this the PMT hits get flagged
with the bad calibration bit (I'm not sure if it's because of the PMT charge
walk calibration or what). Therefore, now we include all hit PMTs in the charge
sum if their ECA-calibrated QHL value is above -100.
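In other words, something like this sketch (field name illustrative):
    def charge_sum(hit_pmts):
        # Include every hit PMT whose ECA-calibrated QHL is above -100 counts,
        # whether or not it also has a good PCA calibration.
        return sum(pmt.eca_qhl for pmt in hit_pmts if pmt.eca_qhl > -100)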
This commit updates the zdab-reprocess script with the following changes:
- don't run RAA
- don't run a bunch of the other fitters like FTI, etc.
- run the path and RSP fitters
- add --lower-nhit and --upper-nhit command line arguments to control the
primary nhit cut range
This commit updates the zebra code to store a pointer to the first MAST bank in
the zebraFile struct so that we can jump to it when iterating over the logical
records. I had naively assumed based on the documentation in the SNOMAN
companion that the first bank in a logical record was guaranteed to be a MAST
bank, but that doesn't seem to be the case. This also explains why I was
sometimes seeing RHDR and ZDAB banks as the first bank in a logical record.
This commit updates the breakdown cut to flag any event in which less than 70%
of the PMT hits have a good TAC value.
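That is, roughly (a sketch; the good-TAC flag name is illustrative):
    def is_breakdown(hits):
        # Flag the event if fewer than 70% of the PMT hits have a good TAC value.
        good = sum(1 for hit in hits if hit.good_tac)
        return good < 0.7*len(hits)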
This commit updates the a and b parameters for the gamma distribution used to
describe the position distribution of shower photons produced along the
direction of the muon. Previously I had been assuming b was equal to the
radiation length and using a formula from the PDG to calculate a from that.
However, this formula doesn't seem to be valid for muons (the formula comes
from a section describing the shower profile of electrons and gammas, so it's
not surprising). Therefore, now we don't assume any relationship between a and
b.
Now, the value of a is approximated by a constant, since I couldn't find any
functional relationship in energy that describes a very well (and it's
approximately constant), and b is approximated by a first degree polynomial
fit, as a function of energy, to the values I got from simulating muons in
RAT-PAC.
Note that looking at the simulation data it seems like the position
distribution of shower photons from muons isn't even very well described by a
gamma distribution, so in the future it might be a good idea to come up with a
better parameterization.
Even if I stick with the gamma distribution, it would be good to revisit this
in the future and fit for a and b over a wider range of energies.
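For concreteness, the profile being parameterized is a gamma distribution in the distance along the track; a sketch with placeholder numbers (the constant for a and the linear coefficients for b come from the RAT-PAC fits, not the values below):
    from scipy.stats import gamma

    A_MUON = 1.5            # placeholder: a is taken to be roughly constant
    B_COEFFS = (10.0, 0.1)  # placeholder coefficients for b(E) = b0 + b1*E

    def shower_photon_pdf(x, energy):
        # Probability density for the distance x along the muon track at which
        # a shower photon is produced, for a muon of the given energy.
        b = B_COEFFS[0] + B_COEFFS[1]*energy
        return gamma.pdf(x, A_MUON, scale=b)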
This commit updates plot-energy to select prompt events before applying the
data cleaning cuts. This fixes an issue where we might accidentally classify an
event as a prompt event even if it came after an event that was flagged by data
cleaning. For example, suppose there was a breakdown but for whatever reason
the event immediately after the breakdown wasn't tagged (ignoring the fact that
we apply a breakdown follower cut). If we apply the data cleaning first and
then the prompt event selection, that event would be a part of the prompt
events.
There are several other small updates to plot-energy:
- fix bug in 00-orphan cut
- make Michel event selection a separate function
- make atmospheric tag into a separate function
This commit adds the sub_run variable to the ev array in the HDF5 output file
and updates plot-energy to order the events using the run and sub_run
variables. This fixes a potential issue where I was sorting by GTID before, but
the GTID can wrap around and so isn't guaranteed to put the events in the right
order.
This commit updates the ITC cut to use the pt1 time, which is the ECA + PCA
calibrated time without the charge walk correction. The reason is that an event
which is
mostly electronics noise may have all low charges which can't be calibrated
with the PCA walk calibration.