|
This commit adds four scripts:
1. calculate-atmospheric-oscillations
This script uses an independent Python package called nucraft to calculate the
neutrino oscillation probabilities for atmospheric neutrinos. These
probabilities are calculated as a function of energy and cosine of the zenith
angle and stored in text files.
2. apply-atmospheric-oscillations
This script combines the high energy 2D atmospheric neutrino flux from Barr and
the low energy 1D flux from Battistoni and then applies neutrino oscillations
to them. The results are then stored in new flux files that can be used with a
modified version of GENIE to simulate the oscillated atmospheric neutrino flux
from 10 MeV to 10 GeV.
3. plot-atmospheric-fluxes
This is a simple script to plot the atmospheric flux files produced by
apply-atmospheric-oscillations.
4. plot-atmospheric-oscillations
This is a simple script to plot the 2D neutrino oscillation probabilities.
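To give a feel for what these probability tables contain, here is a minimal sketch using only a two-flavor vacuum-oscillation approximation as a stand-in (the real script uses nucraft, which handles the full three-flavor treatment and matter effects); the grid ranges, oscillation parameters, and output format below are assumptions:
import numpy as np

R_EARTH = 6371.0      # km
H_PROD = 15.0         # km, assumed mean neutrino production height
SIN2_2THETA = 0.99    # assumed mixing
DM2 = 2.5e-3          # eV^2, assumed mass splitting

def baseline(cos_zenith):
    # Path length in km from production in the atmosphere to the detector.
    sin2 = 1 - cos_zenith**2
    return np.sqrt((R_EARTH + H_PROD)**2 - R_EARTH**2*sin2) - R_EARTH*cos_zenith

def survival_probability(energy, cos_zenith):
    # Two-flavor P(numu -> numu) in vacuum; energy in GeV.
    L = baseline(cos_zenith)
    return 1 - SIN2_2THETA*np.sin(1.27*DM2*L/energy)**2

if __name__ == '__main__':
    energies = np.logspace(-2, 1, 100)      # 10 MeV to 10 GeV
    cos_zeniths = np.linspace(-1, 1, 100)
    with open('numu_survival.txt', 'w') as f:
        for e in energies:
            for cz in cos_zeniths:
                f.write("%.6g %.6g %.6g\n" % (e, cz, survival_probability(e, cz)))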
|
|
|
|
This commit adds the --save command line argument to plot-energy to save either
the corner plots or the energy distribution plots. It also updates the code to
make plots similar to plot-fit-results.
In addition there are a bunch of other small changes:
- plot the theoretical Michel spectrum for free muons
- energy plots now assume at most 2 particles are fit for each event
- display particle IDs as letters instead of numbers, e.g. 2022 -> eu (see the sketch below)
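For reference, a sketch of the free muon Michel spectrum and of the particle ID letter mapping; the mapping table is inferred from the 2022 -> eu example (20 = e, 22 = u) and may not match the script exactly:
import numpy as np

MUON_MASS = 105.658   # MeV

def michel_spectrum(T):
    # Normalized Michel spectrum dN/dT for electrons from free muon decay
    # (electron mass neglected); T is the electron kinetic energy in MeV.
    x = 2*T/MUON_MASS
    return np.where((x >= 0) & (x <= 1), (4/MUON_MASS)*x**2*(3 - 2*x), 0)

# Assumed letter codes based on the example above.
PARTICLE_LETTERS = {20: 'e', 22: 'u'}

def particle_id_string(particle_id):
    # Convert a particle combo ID like 2022 into a letter string like 'eu'.
    digits = str(particle_id)
    codes = [int(digits[i:i+2]) for i in range(0, len(digits), 2)]
    return ''.join(PARTICLE_LETTERS[code] for code in codes)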
|
|
|
|
|
|
the database
Also add a -r or --reprocess command line option to reprocess runs which are
already in the database.
|
|
results
|
|
|
|
This commit updates the plot-energy script to plot the energy bias and
resolution for stopping muons by computing the expected kinetic energy from the
muon range.
To select stopping muons we simply look for the muons before a Michel event.
The muon distance is calculated by first projecting the muon fit back to the
PSUP along the fitted direction and then taking the distance between this point
and the fitted position of the Michel event. I then calculate the expected
kinetic energy of the muon by using the muon CSDA range lookup tables to
convert the distance to an energy.
I also changed a few other things like changing as_index=False ->
group_keys=False when grouping events. The reason for this is that with
group_keys=False we don't have to reset the index and drop the new index after
calling apply().
I also fixed a small bug I had introduced recently where I selected only prompt
events before finding the Michel events and atmospheric events.
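A minimal sketch of the distance and energy calculation described above, assuming a PSUP radius of about 840 cm and tabulated CSDA range/kinetic energy columns (the constant and the argument names are placeholders):
import numpy as np

PSUP_RADIUS = 840.0   # cm, assumed

def psup_entry_point(muon_pos, muon_dir):
    # Project the fitted muon position back along the fitted direction to the
    # PSUP sphere; muon_pos in cm, muon_dir a unit vector.
    b = np.dot(muon_pos, muon_dir)
    c = np.dot(muon_pos, muon_pos) - PSUP_RADIUS**2
    t = b + np.sqrt(b**2 - c)      # positive root = intersection behind the muon
    return muon_pos - t*muon_dir

def stopping_muon_energy(muon_pos, muon_dir, michel_pos, csda_range, csda_ke):
    # Expected kinetic energy from the distance between the PSUP entry point
    # and the fitted Michel position; csda_range (cm) and csda_ke (MeV) are the
    # tabulated CSDA lookup table columns, with csda_range increasing.
    entry = psup_entry_point(muon_pos, muon_dir)
    distance = np.linalg.norm(entry - michel_pos)
    return np.interp(distance, csda_range, csda_ke)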
|
|
This commit updates the dc script, which calculates the instrumental
contamination, so that it now treats all 4 high level variables as correlated
for muons. Previously I
had assumed that the reconstructed radius was independent from udotr, z, and
psi, but based on the corner plots it seems like the radius is strongly
correlated with udotr.
I also updated the plotting code when using the save command line argument to
be similar to plot-fit-results.
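A sketch of the change in assumption, using Gaussian KDEs as a stand-in for however the dc script actually models these distributions; `tagged` is assumed to be an array of shape (4, n) with rows (r, udotr, z, psi) for the tagged muon events:
import numpy as np
from scipy.stats import gaussian_kde

def correlated_pdf(tagged):
    # Joint 4D KDE: (r, udotr, z, psi) treated as fully correlated.
    return gaussian_kde(tagged)

def independent_pdf(tagged):
    # Old assumption: product of 1D KDEs, i.e. the radius is independent of
    # udotr, z, and psi.
    kdes = [gaussian_kde(row) for row in tagged]
    return lambda x: np.prod([k(xi) for k, xi in zip(kdes, x)], axis=0)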
|
|
This commit adds a script to calculate the background contamination using a
method inspired by the bifurcated analysis method used in SNO. The method works
by looking at the distribution of several high level variables (radius, udotr,
psi, and reconstructed z position) for events tagged by the different data
cleaning cuts and assuming that any background events which sneak past the data
cleaning cuts will have a similar distribution (for certain backgrounds this is
assumed and for others I will actually test this assumption. For more details
see the unidoc). Then, by looking at the distribution of these high level
variables for all the untagged events we can use a maximum likelihood fit to
determine the residual contamination.
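A minimal sketch of such a fit, using a binned extended maximum likelihood with normalized template histograms for each background; the binning, the template construction, and all names here are assumptions:
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def make_nll(counts, templates):
    # Extended binned Poisson negative log likelihood. `counts` is the
    # histogram of untagged events and `templates` has shape
    # (n_backgrounds, n_bins) with each row normalized to unit area.
    def nll(n_events):
        expected = np.dot(n_events, templates)
        # small epsilon avoids log(0) in empty bins
        return np.sum(expected - counts*np.log(expected + 1e-10) + gammaln(counts + 1))
    return nll

def fit_contamination(counts, templates, guess):
    # Fit the number of events from each background in the untagged sample.
    result = minimize(make_nll(counts, templates), guess,
                      bounds=[(0, None)]*len(guess))
    return result.x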
There are also a few other updates to the plot-energy script:
- add a --dc command line argument to plot corner plots for the high level
variables used in the contamination analysis
- add a fudge factor of 100 per extra particle to the Ockham factor
- fix a bug by correctly setting the final kinetic energy to the sum of the
individual kinetic energies instead of just the first particle
- fix calculation of prompt events by applying at the run level
|
|
This commit updates submit-grid-jobs so that it keeps a database of jobs. This
allows the script to make sure that we only have a certain number of jobs in
the job queue at a single time and to automatically resubmit failed jobs. The
idea is that it can now be run once to add jobs to the database:
$ submit-grid-jobs ~/zdabs/SNOCR_0000010000_000_p4_reduced.xzdab.gz
and then be run periodically via crontab:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * submit-grid-jobs --auto --logfile ~/submit.log
Similarly I updated cat-grid-jobs so that it uses the same database and can
also be run via a cron job:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * cat-grid-jobs --logfile cat.log --output-dir $HOME/fit_results
I also updated fit so that it keeps track of the total time elapsed including
the initial fits instead of just counting the final fits.
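The job database itself is defined in submit-grid-jobs; purely as an illustration of the idea (the real schema, states, and queries will differ), something along these lines:
import sqlite3

conn = sqlite3.connect('grid_jobs.db')
conn.execute("""CREATE TABLE IF NOT EXISTS jobs (
                    filename TEXT,
                    state TEXT,               -- e.g. NEW, RUNNING, FAILED, DONE
                    retries INTEGER DEFAULT 0)""")

def njobs_in_queue(conn):
    # Used to cap how many jobs are in the HTCondor queue at a single time.
    return conn.execute("SELECT COUNT(*) FROM jobs "
                        "WHERE state = 'RUNNING'").fetchone()[0]

def jobs_to_resubmit(conn, max_retries=2):
    # Failed jobs picked up and resubmitted on the next cron invocation.
    return conn.execute("SELECT filename FROM jobs WHERE state = 'FAILED' "
                        "AND retries < ?", (max_retries,)).fetchall()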
|
|
|
|
|
|
|
|
This commit updates plot-fit-results to use the median when plotting the energy
and position bias and the interquartile range (times 1.35) when plotting the
energy and position resolution. The reason is that single large outliers for
higher energy muons were causing the energy bias and resolution to no longer
represent the central part of the distribution well.
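Assuming the 1.35 mentioned above refers to the usual conversion between the interquartile range and a Gaussian sigma (IQR is about 1.349 sigma), the robust estimators look roughly like this:
import numpy as np

def bias_and_resolution(true_energy, fit_energy):
    # Median of the relative error as the bias, and the interquartile range
    # divided by 1.349 (the Gaussian IQR-to-sigma factor) as the resolution.
    error = (fit_energy - true_energy)/true_energy
    q75, q25 = np.percentile(error, [75, 25])
    return np.median(error), (q75 - q25)/1.349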
|
|
This commit fixes a small bug in cat-grid-jobs which was causing it to print
the wrong filename when there was no git_sha1 attribute in the HDF5 file.
|
|
I noticed that many of my jobs were failing with the following error:
module: command not found
My submit description files *should* only be selecting nodes with modules because of this line:
requirements = (HAS_MODULES =?= true) && (OSGVO_OS_STRING == "RHEL 7") && (OpSys == "LINUX")
which I think I got from
https://support.opensciencegrid.org/support/solutions/articles/12000048518-accessing-software-using-distributed-environment-modules.
I looked up what the =?= operator does and it's a case-sensitive comparison. I also
found another site
(https://support.opensciencegrid.org/support/solutions/articles/5000633467-steer-your-jobs-with-htcondor-job-requirements)
which uses the normal == operator. Therefore, I'm going to switch to the ==
operator and hope that fixes the issue.
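With that change the requirements line should become:
requirements = (HAS_MODULES == true) && (OSGVO_OS_STRING == "RHEL 7") && (OpSys == "LINUX")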
|
|
This commit fixes a bug in gen-dark-matter which was causing the positions to
all be generated with positive x, y, and z values. Doh!
|
|
|
|
|
|
|
|
This commit updates the zdab-reprocess script with the following changes:
- don't run RAA
- don't run a bunch of the other fitters like FTI, etc.
- run the path and RSP fitters
- add --lower-nhit and --upper-nhit command line arguments to control the
primary nhit cut range
|
|
|
|
|
|
|
|
This commit updates plot-energy to select prompt events before applying the
data cleaning cuts. This fixes an issue where we might accidentally classify an
event as a prompt event even if it came after an event that was flagged by data
cleaning. For example, suppose there was a breakdown but for whatever reason
the event immediately after the breakdown wasn't tagged (ignoring the fact that
we apply a breakdown follower cut). If we apply the data cleaning first and
then the prompt event selection, that event would be a part of the prompt
events (see the sketch below).
There are several other small updates to plot-energy:
- fix bug in 00-orphan cut
- make michel event selection a separate function
- make atmospheric tag into a separate function
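A minimal sketch of the new order of operations, with the column names (nhit, gtr in ns, dc) and the data cleaning bitmask as assumptions:
import numpy as np

def select_prompt_then_clean(ev, dc_mask):
    # The prompt selection sees *all* events, so an event right after a flagged
    # (e.g. breakdown) event is still rejected by the 250 ms isolation
    # requirement; the data cleaning cuts are only applied afterwards.
    prompt = ev[ev.nhit >= 100]
    prompt = prompt[prompt.gtr.diff().fillna(np.inf) > 250e6]
    return prompt[(prompt.dc & dc_mask) == 0]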
|
|
This commit adds the sub_run variable to the ev array in the HDF5 output file
and updates plot-energy to order the events using the run and sub_run
variables. This fixes a potential issue where I was sorting by GTID before, but
the GTID can wrap around and so isn't guaranteed to put the events in the right
order.
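For example (column names assumed, with the GTID only used as a tiebreaker within a sub run):
ev = ev.sort_values(['run', 'sub_run', 'gtid'])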
|
|
|
|
|
|
|
|
This commit updates cat-grid-jobs to only warn once about a mismatch between
the SHA1 of the current zdab-cat program and the grid results, and also cleans
up some of the output.
|
|
This commit updates the submit-grid-jobs script to use my version of splitext()
which removes the full extension from the filename. This fixes an issue where
the output HDF5 files had xzdab in the name whenever the input file had the
file extension .xzdab.gz.
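A sketch of the behavior of such a splitext() (the actual helper lives in the repository):
import os

def splitext(path):
    # Like os.path.splitext() but strips the *full* extension, e.g.
    # 'SNOCR_0000010000_000_p4_reduced.xzdab.gz'
    # -> ('SNOCR_0000010000_000_p4_reduced', '.xzdab.gz')
    root, full_ext = os.path.splitext(path)
    ext = full_ext
    while ext:
        root, ext = os.path.splitext(root)
        full_ext = ext + full_ext
    return root, full_ext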
|
|
|
|
This commit fixes the FTS cut so that it returns 1 when the event is flagged as
failing the cut. Previously, the function returned the following:
not enough PMT pairs: 0 (fail)
median time > 6.8 ns: 0 (fail)
otherwise: 1 (pass)
This had two issues: the return value wasn't consistent with the rest of the
data cleaning cuts and it should pass if there aren't enough PMT pairs.
Now, I fixed the not enough PMT pairs case and made the return value consistent
with the rest of the data cleaning cuts:
not enough PMT pairs: 0 (pass)
median time > 6.8 ns: 1 (fail)
otherwise: 0 (pass)
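The fixed logic, sketched here in Python with the minimum number of PMT pairs and the variable names as assumptions (1 = flagged/fail, 0 = pass, consistent with the other cuts):
import numpy as np

def fts_cut(pmt_pair_times, min_pairs=10):
    if len(pmt_pair_times) < min_pairs:
        return 0                              # not enough PMT pairs: pass
    if np.median(pmt_pair_times) > 6.8:
        return 1                              # median time > 6.8 ns: fail
    return 0                                  # otherwise: pass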
|
|
|
|
This commit updates the fit program to accept a particle combo from the command
line so you can fit for a single particle combination hypothesis. For example
running:
$ ./fit ~/zdabs/mu_minus_700_1000.hdf5 -p 2020
would just fit for the 2 electron hypothesis.
The reason for adding this ability is that my grid jobs were getting evicted
when fitting muons in run 10,000 since it takes 10s of hours to fit for all the
particle hypotheses. With this change, and a small update to the
submit-grid-jobs script, we now submit a single grid job per particle
combination hypothesis, which should make each grid job run approximately 4
times faster.
|
|
|
|
|
|
|
|
This commit fixes two small bugs in the plotting scripts. First, after the HDF5
commit I was no longer correctly computing the particle ID string that I had
been using before, which was needed in order to plot things correctly. Second,
I realized that the dataframe groupby function first() actually selects the
first non-null value in each column from each group! What I really wanted was
the first row from each group, so all instances of .first() were updated to
.nth(0).
See https://stackoverflow.com/questions/20067636/pandas-dataframe-get-first-row-of-each-group.
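A small example of the difference:
import numpy as np
import pandas as pd

df = pd.DataFrame({'run':   [1, 1, 2, 2],
                   'id':    [np.nan, 20, 22, np.nan],
                   'theta': [0.1, 0.2, np.nan, 0.4]})

# first() takes the first non-null value of each column independently, so the
# result can mix values from different rows of the same group:
print(df.groupby('run').first())

# nth(0) returns the actual first row of each group, which is what we want:
print(df.groupby('run').nth(0))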
|
|
|
|
|
|
- apply muon follower and muon cuts to atmospheric sample
- print warnings in red
- fix how events with fmin = nan are counted
|
|
Tidy up the code in the script by creating a function to plot the energy
distributions for each particle combo. Also fixed a bug which was causing the
neutron follower cut not to be applied.
|
|
This commit is a major update to the plot-energy script. The most significant
changes are:
- new prompt event selection
We now define prompt events as any event >= 100 nhit which is at least 250
ms away from the last 100 nhit event.
- add Michel electron event selection.
Michel electrons are now selected as any event between 800 ns and 20
microseconds after a muon or prompt event. Additionally they are required
to be >= 100 nhit and pass an extra set of data cleaning cuts (compared to
the prompt events).
- add an atmospheric "sideband"
I also updated the script to plot the energy distribution and particle ID
for all events that *do* have a neutron follower as a kind of sideband
which should contain only atmospheric events.
- plot muon energy spectrum and angular distribution
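A minimal sketch of the prompt and Michel selections (the column names nhit and gtr, with gtr in ns, are assumptions, and the extra data cleaning cuts and the neutron follower tag are left out):
import numpy as np

def prompt_event_selection(ev):
    # Prompt events: >= 100 nhit and at least 250 ms since the last 100 nhit event.
    ev = ev[ev.nhit >= 100]
    return ev[ev.gtr.diff().fillna(np.inf) > 250e6]

def michel_event_selection(ev, parents):
    # Michel candidates: >= 100 nhit events between 800 ns and 20 microseconds
    # after a muon or prompt event (`parents`).
    followers = ev[ev.nhit >= 100]
    dt = followers.gtr.values[:, np.newaxis] - parents.gtr.values
    return followers[((dt > 800) & (dt < 20e3)).any(axis=1)]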
|
|
|
|
|
|
|
|
|