This commit contains the following updates:
- remove the hack to get rid of low energy events in plot-energy, since while
  writing the unidoc I realized it's no longer necessary now that we add +100 to
  multi-particle fits
- update the Ockham factor to use an energy resolution of 5%
- update submit-grid-jobs to submit jobs according to the following criteria:
  - always submit prompt events with no data cleaning cuts
  - submit 10% of prompt flasher events
  - submit all other prompt events
  - submit followers only if they have no data cleaning cuts
- update submit-grid-jobs to place the nhit cut of 100 on the calibrated nhit
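The submission criteria above can be sketched as a single predicate. This is a hypothetical illustration, not the actual submit-grid-jobs code; the event fields (`prompt`, `flasher`, `clean`, `nhit_cal`) are assumed names:

```python
import random

def should_submit(ev):
    """Decide whether to submit an event for fitting.

    `ev` is assumed to be a dict with fields: prompt, flasher,
    clean (True if no data cleaning cuts), and nhit_cal.
    """
    # nhit cut of 100 on the *calibrated* nhit
    if ev['nhit_cal'] < 100:
        return False
    if ev['prompt']:
        if ev['clean']:
            return True                    # always submit clean prompt events
        if ev['flasher']:
            return random.random() < 0.10  # submit 10% of prompt flashers
        return True                        # submit all other prompt events
    # followers: only if they have no data cleaning cuts
    return ev['clean']
```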
This commit contains the following small updates:
- create a setup_matplotlib() function to set up matplotlib correctly depending
  on whether we are saving the plots or just displaying them
- change the default font size to 12 when displaying plots
- switch to using logarithmic bins in plot-energy
- fix the despine() function when the x axis is logarithmic
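A minimal sketch of what such a setup_matplotlib() function might look like (this is an illustration of the idea, not the actual function from the repository):

```python
def setup_matplotlib(save=False):
    """Configure matplotlib depending on whether we are saving the
    plots or just displaying them."""
    import matplotlib
    if save:
        # only writing files: use a non-interactive backend
        # (must be selected before pyplot is imported)
        matplotlib.use('Agg')
    import matplotlib.pyplot as plt
    if not save:
        # bigger default font when displaying plots interactively
        plt.rc('font', size=12)
    return plt
```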
This commit updates utils/sddm/__init__.py to not import everything by default.
The reason is that the Open Science Grid login machine doesn't have the
scipy.stats module installed by default.
This commit adds an sddm python package to the utils/ folder. This allows me to
consolidate code used across all the various scripts. This package is now
installed by default to /home/tlatorre/local/lib/python2.7/site-packages so you
should add the following to your .bashrc file:
export PYTHONPATH=$HOME/local/lib/python2.7/site-packages/:$PYTHONPATH
before using the scripts installed to ~/local/bin.
This commit updates submit-grid-jobs so that jobs are now submitted first in
order of priority and then by timestamp: jobs with a higher priority are
submitted first, and within each priority level jobs are submitted in the
order they were added to the database.
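That ordering maps directly onto a single SQL query. A sketch against a toy in-memory database (the table and column names here are assumptions, not the actual submit-grid-jobs schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("create table state (id integer, priority integer, "
             "timestamp real, state text)")
# job 1: low priority, oldest; jobs 2 and 3: high priority
conn.executemany("insert into state values (?, ?, ?, 'NEW')",
                 [(1, 0, 100.0), (2, 1, 200.0), (3, 1, 150.0)])

# higher priority first, then oldest first within each priority level
rows = conn.execute("select id from state where state = 'NEW' "
                    "order by priority desc, timestamp asc").fetchall()
```

Job 3 comes before job 2 because, at the same priority, the earlier timestamp wins.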
This commit updates submit-grid-jobs to look for jobs in the RETRY state and to
retry them if the number of retries is less than --max-retries. This way, if you
ever want to retry a bunch of jobs, you can update the database:
$ sqlite3 ~/state.db
sqlite> update state set state = 'RETRY' where state == 'FAILED';
and then rerun submit-grid-jobs with more retries:
$ submit-grid-jobs --max-retries 10 --auto
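The retry selection can be sketched with the same kind of query (again using a toy in-memory database with an assumed schema, including a hypothetical `nretry` column):

```python
import sqlite3

MAX_RETRIES = 2  # stand-in for the --max-retries option

conn = sqlite3.connect(':memory:')
conn.execute("create table state (id integer, state text, nretry integer)")
conn.executemany("insert into state values (?, ?, ?)",
                 [(1, 'RETRY', 0), (2, 'RETRY', 5), (3, 'FAILED', 0)])

# mimic the manual sqlite3 step from the commit message
conn.execute("update state set state = 'RETRY' where state == 'FAILED'")

# only retry jobs that haven't exhausted --max-retries
rows = conn.execute("select id from state where state = 'RETRY' "
                    "and nretry < ?", (MAX_RETRIES,)).fetchall()
```

Job 2 is skipped because it has already been retried more than MAX_RETRIES times.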
multi-particle fits
more than one fit per particle combo
This commit updates fit.c to start with 5 peaks for the direction seeds. I
chose this number because I did some testing with the test-find-peaks program
on the atmospheric MC and it looks like 5 peaks were necessary to capture the
majority of the peaks.
This commit updates the fit program to fit each event and particle hypothesis
twice: once using the normal quad implementation, and once cutting on the 10%
quantile of times. The first way is much better when the event is fully
contained, since quad will return a really good starting point, and the second
is much better for muons, where we want to seed the fit near the entry point of
the muon.
Ideally we would only need a single call, and I have an idea of how to update
QUAD to return reasonable guesses in both cases. The idea is to take the cloud
of quad points and find the position and time that has the smallest time such
that it is only a certain Mahalanobis distance from the distribution. This (I
think) corresponds roughly to what I would do by eye: you look at the
distribution of quad points in the cloud, see that it forms a track, and pick a
point at the start of the track.
I started working on this second idea but haven't successfully tested it out
yet.
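The untested idea above can be sketched in a few lines of numpy: take the earliest-time quad point whose Mahalanobis distance from the cloud is below some cutoff. This is a hedged illustration of the idea, not the QUAD code, and the cutoff value is arbitrary:

```python
import numpy as np

def quad_seed(points, times, max_dist=2.0):
    """Pick a seed from a cloud of quad vertices: the earliest-time
    point whose Mahalanobis distance from the cloud mean is below
    `max_dist`. Sketch only; not the actual QUAD implementation."""
    points = np.asarray(points, dtype=float)
    times = np.asarray(times, dtype=float)
    mean = points.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(points.T))
    d = points - mean
    # per-point Mahalanobis distance from the cloud
    dist = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))
    ok = dist < max_dist
    # earliest time among the points that are inside the cloud
    i = int(np.argmin(np.where(ok, times, np.inf)))
    return points[i], times[i]
```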
This commit updates the find-peaks algorithm with several improvements which
together drastically improve its ability to find Cerenkov rings:
- when computing the Hough transform, we weight each PMT hit by the probability
  that it is a multi-photon hit instead of by its charge
- we no longer subtract off previously found rings (this makes the code simpler
  and I don't think it previously had a huge effect)
- ignore PMT hits which are within approximately 5 degrees of any previously
  found ring (previously we ignored all hits within the center of previously
  found rings)
- ignore PMT hits which have a time residual of more than 10 nanoseconds, to
  hopefully ignore more reflected and/or scattered light
- switch the Hough transform weighting from exp(-fabs(cos(theta)-1/n)/0.1) to
  exp(-pow(cos(theta)-1/n,2)/0.01). I'm still not sure if this has a huge
  effect, but the reason I switched is that the PDF for Cerenkov light looks
  closer to the second form.
- switch to calling quad with f = 1.0 in test-find-peaks (I still need to add
  this update to fit.c but will do that in a later commit).
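The two weightings in the last bullet compare as follows. Both peak at the Cerenkov angle cos(theta) = 1/n; the new Gaussian-like form falls off faster in the tails. The index of refraction used here (n = 1.33) is an assumption for illustration:

```python
import math

N = 1.33  # assumed index of refraction; cos(theta_Cerenkov) = 1/n

def weight_old(cos_theta):
    # exponential in |cos(theta) - 1/n|
    return math.exp(-abs(cos_theta - 1/N)/0.1)

def weight_new(cos_theta):
    # Gaussian-like in (cos(theta) - 1/n); closer to the Cerenkov PDF
    return math.exp(-(cos_theta - 1/N)**2/0.01)
```

Both weights equal 1 on the ring itself; 0.3 away from it in cos(theta), the old weight is exp(-3) while the new one is exp(-9), so off-ring hits contribute far less.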
This commit adds to the likelihood the probability that a channel is
miscalibrated and/or doesn't make it into the event. This was added because I
noticed, when looking at the likelihood for one very high energy event, that
there was a single PMT which should have been hit but wasn't in the event, and
which was not marked as bad in DQXX.
I did some testing, and the addition of this term does not seem to
significantly affect the atmospheric MC or the psi values for flashers. One
unexpected improvement is that external muons seem more likely to correctly
reconstruct at the PSUP with this change. I haven't determined the exact cause,
but I suspect there is some mismodelling of the likelihood for muons near the
edge of the detector when they exit, and adding this term allows the likelihood
to ignore these PMT hits.
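The effect of such a term can be sketched as a mixture: each PMT's likelihood is floored by a small probability that the channel is miscalibrated or missing, so a single "impossible" PMT cannot drive the total log likelihood to minus infinity. The prior value and the exact functional form here are assumptions, not the fitter's actual implementation:

```python
import math

P_MISCAL = 1e-3  # assumed prior that a channel is miscalibrated/missing

def log_likelihood(pmt_likelihoods, p=P_MISCAL):
    """Mixture sketch: floor each per-PMT likelihood with the
    miscalibration probability `p`, so one PMT with likelihood ~ 0
    (e.g. an unhit PMT that should have been hit) can't dominate."""
    return sum(math.log((1 - p)*l + p) for l in pmt_likelihoods)
```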
This commit adds four scripts:
1. calculate-atmospheric-oscillations
This script uses an independent python package called nucraft to calculate the
neutrino oscillation probabilities for atmospheric neutrinos. These
probabilities are calculated as a function of energy and cosine of the zenith
angle and stored in text files.
2. apply-atmospheric-oscillations
This script combines the high energy 2D atmospheric neutrino flux from Barr and
the low energy 1D flux from Battistoni and then applies neutrino oscillations
to them. The results are then stored in new flux files that can be used with a
modified version of GENIE to simulate the oscillated atmospheric neutrino flux
from 10 MeV to 10 GeV.
3. plot-atmospheric-fluxes
This is a simple script to plot the atmospheric flux files produced by
apply-atmospheric-oscillations.
4. plot-atmospheric-oscillations
This is a simple script to plot the 2D neutrino oscillation probabilities.
electron showers
Also update the a parameter based on a simple degree-0 polynomial fit to the
shower profiles above 100 MeV.
This commit adds the --save command line argument to plot-energy to save either
the corner plots or the energy distribution plots. It also updates the code to
make plots similar to plot-fit-results.
In addition there are a bunch of other small changes:
- plot the theoretical Michel spectrum for free muons
- energy plots now assume at most 2 particles are fit for each event
- display particle IDs as letters instead of numbers, i.e. 2022 -> eu
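The ID-to-letters display can be sketched as below. The mapping (20 -> 'e', 22 -> 'u') is inferred from the single example "2022 -> eu" in this commit message and may not match the full table used in the script:

```python
# assumed two-digit particle codes, inferred from "2022 -> eu"
LETTERS = {'20': 'e', '22': 'u'}

def particle_id_string(pid):
    """Turn a concatenated particle ID like 2022 into letters ('eu')
    by reading it two digits at a time."""
    s = str(pid)
    return ''.join(LETTERS[s[i:i+2]] for i in range(0, len(s), 2))
```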
the database
Also add a -r or --reprocess command line option to reprocess runs which are
already in the database.
results
To select stopping muons we simply look for the muons before a Michel event.
The muon distance is calculated by first projecting the muon fit back to the
PSUP along the fitted direction and then taking the distance between this point
and the fitted position of the Michel event. I then calculate the expected
kinetic energy of the muon by using the muon CSDA range lookup tables to
convert the distance to an energy.
I also changed a few other things, like changing as_index=False ->
group_keys=False when grouping events. The reason is just that this way we
don't have to reset the index and drop the new index after calling apply().
I also fixed a small bug I had introduced recently where I selected only prompt
events before finding the Michel events and atmospheric events.
This commit updates the plot-energy script to plot the energy bias and
resolution for stopping muons by computing the expected
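The projection step described above is a ray-sphere intersection: go backwards from the fitted muon position along the fitted direction until hitting the PSUP sphere, then measure the distance from that entry point to the Michel position. A sketch, with an assumed PSUP radius and hypothetical function names (the CSDA range table lookup that converts the distance to an energy is not included here):

```python
import numpy as np

PSUP_RADIUS = 840.0  # cm; approximate value, assumed for illustration

def stopping_muon_distance(muon_pos, muon_dir, michel_pos, r=PSUP_RADIUS):
    """Project the muon fit back along -dir to the PSUP sphere and
    return the distance from that entry point to the Michel position."""
    pos = np.asarray(muon_pos, dtype=float)
    d = np.asarray(muon_dir, dtype=float)
    d = d/np.linalg.norm(d)
    # solve |pos - t*d|^2 = r^2 for t >= 0 (backwards along the track):
    # t^2 - 2*(pos.d)*t + (|pos|^2 - r^2) = 0, take the positive root
    b = np.dot(pos, d)
    t = b + np.sqrt(b*b - np.dot(pos, pos) + r*r)
    entry = pos - t*d
    return np.linalg.norm(entry - np.asarray(michel_pos, dtype=float))
```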
This commit updates the dc script, which calculates the instrumental
contamination, to now treat all 4 high level variables as correlated for muons.
Previously I had assumed that the reconstructed radius was independent of
udotr, z, and psi, but based on the corner plots it seems the radius is
strongly correlated with udotr.
I also updated the plotting code when using the --save command line argument to
be similar to plot-fit-results.
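A quick numerical cross-check of the kind of correlation visible in the corner plots, using the Pearson correlation coefficient (the threshold is arbitrary and the check is an illustration, not part of the dc script):

```python
import numpy as np

def correlated(x, y, threshold=0.2):
    """Flag two variables as correlated if |Pearson r| exceeds the
    threshold; e.g. reconstructed radius vs udotr."""
    return abs(np.corrcoef(x, y)[0, 1]) > threshold
```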
This commit adds a script to calculate the background contamination using a
method inspired by the bifurcated analysis method used in SNO. The method works
by looking at the distribution of several high level variables (radius, udotr,
psi, and reconstructed z position) for events tagged by the different data
cleaning cuts, and assuming that any background events which sneak past the
data cleaning cuts will have a similar distribution (for certain backgrounds
this is assumed, and for others I will actually test this assumption; for more
details see the unidoc). Then, by looking at the distribution of these high
level variables for all the untagged events, we can use a maximum likelihood
fit to determine the residual contamination.
There are also a few other updates to the plot-energy script:
- add a --dc command line argument to plot corner plots for the high level
  variables used in the contamination analysis
- add a fudge factor of 100 per extra particle to the Ockham factor
- fix a bug by correctly setting the final kinetic energy to the sum of the
  individual kinetic energies instead of just the first particle's
- fix the calculation of prompt events by applying the selection at the run
  level
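The maximum likelihood fit for the residual contamination can be sketched in one dimension: given PDFs for the signal-like and background-like shapes (which the dc script builds from the tagged events), fit the background fraction f in the untagged sample by maximizing the log likelihood of the mixture. This is a simplified 1D sketch with a grid search, not the actual multi-variable fit:

```python
import numpy as np

def fit_contamination(untagged, pdf_signal, pdf_background):
    """Fit the background fraction f in the untagged sample by
    maximizing sum(log((1-f)*p_signal(x) + f*p_background(x)))
    over a grid of f values."""
    ps = pdf_signal(untagged)
    pb = pdf_background(untagged)
    fs = np.linspace(1e-3, 1 - 1e-3, 999)
    ll = [np.sum(np.log((1 - f)*ps + f*pb)) for f in fs]
    return fs[int(np.argmax(ll))]
```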
This commit updates the ./fit program to add a ctrl-z handler to allow you to
skip events. This is really handy when testing new things. Currently, if you
press ctrl-z and it has already done at least one of the initial fits, it will
skip ahead to the final minimization stage. If you press ctrl-z during the
final minimization, it will skip fitting the event. Currently this will *not*
save the result to the file, but I may change that in the future.
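The fit program itself is C, but the handler idea translates directly: catch SIGTSTP (ctrl-z) and set a flag that the fit loop checks between stages, instead of letting the process suspend. A sketch of the pattern in Python, not the fit.c code:

```python
import signal

skip_requested = [False]

def handle_tstp(signum, frame):
    # remember the skip request instead of suspending the process;
    # the fit loop would check and clear this flag between stages
    skip_requested[0] = True

def install_skip_handler():
    # replace the default ctrl-z (SIGTSTP) action with our handler
    signal.signal(signal.SIGTSTP, handle_tstp)
```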
This commit updates get_expected_photons() to check whether there are any
shower photons or delta ray photons before adding them, since if there aren't
any the PDF isn't constructed.
PSUP
This commit updates submit-grid-jobs so that it keeps a database of jobs. This
allows the script to make sure that we only have a certain number of jobs in
the job queue at a single time and to automatically resubmit failed jobs. The
idea is that it can now be run once to add jobs to the database:
$ submit-grid-jobs ~/zdabs/SNOCR_0000010000_000_p4_reduced.xzdab.gz
and then be run periodically via crontab:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * submit-grid-jobs --auto --logfile ~/submit.log
Similarly, I updated cat-grid-jobs so that it uses the same database and can
also be run via a cron job:
PATH=/usr/bin:$HOME/local/bin
SDDM_DATA=$HOME/sddm/src
DQXX_DIR=$HOME/dqxx
0 * * * * cat-grid-jobs --logfile cat.log --output-dir $HOME/fit_results
I also updated fit so that it keeps track of the total elapsed time including
the initial fits, instead of just counting the final fits.
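The "only a certain number of jobs in the queue at a time" behavior can be sketched with a small sqlite-backed state machine. The table layout, column names, and queue cap here are assumptions for illustration, not the actual submit-grid-jobs schema:

```python
import sqlite3

MAX_QUEUED = 2  # assumed cap on simultaneously queued jobs

conn = sqlite3.connect(':memory:')
conn.execute("""create table state (
    id integer primary key,
    filename text,
    state text default 'NEW',
    nretry integer default 0)""")
conn.executemany("insert into state (filename) values (?)",
                 [('run1.zdab',), ('run2.zdab',), ('run3.zdab',)])

def submit_pending(conn):
    """Move NEW jobs to QUEUED only while the queue is below the cap."""
    queued, = conn.execute(
        "select count(*) from state where state = 'QUEUED'").fetchone()
    new_jobs = conn.execute(
        "select id from state where state = 'NEW' order by id").fetchall()
    for (jobid,) in new_jobs:
        if queued >= MAX_QUEUED:
            break
        # a real script would call condor_submit (or similar) here
        conn.execute("update state set state = 'QUEUED' where id = ?",
                     (jobid,))
        queued += 1

submit_pending(conn)
```

Running it again later (e.g. from cron) picks up where it left off: as QUEUED jobs finish or fail, the count drops and the remaining NEW (or RETRY) jobs get submitted.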