This commit updates the contamination analysis scripts to account for the fact
that we only fit a fraction of some of the instrumental events. Based on the
recent rate at which my jobs have been running on the grid, fitting all the
events would take *way* too long, so I'm now planning to fit only 10% of muon,
flasher, and neck events. With this commit the contamination analysis correctly
accounts for the fact that we aren't fitting all the instrumental events.
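A rough sketch of how such a prescale can be handled: each fitted instrumental
event gets a weight equal to the inverse of the fraction of that event type
that was sent to the fitter, so sums over fitted events still estimate totals
for the full sample. The type names, the 10% fractions, and the helper
functions below are illustrative, not the actual analysis code.

    # Hypothetical prescale weighting for the contamination analysis.
    FIT_FRACTION = {'muon': 0.1, 'flasher': 0.1, 'neck': 0.1, 'other': 1.0}

    def event_weight(event_type):
        """Weight for a fitted event of the given instrumental type."""
        return 1.0/FIT_FRACTION.get(event_type, 1.0)

    def estimated_totals(fitted_events):
        """Estimate per-type totals for the full sample from the fitted subset."""
        totals = {}
        for ev in fitted_events:
            totals[ev['type']] = totals.get(ev['type'], 0.0) + event_weight(ev['type'])
        return totals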
- label instrumentals in get_events()
- tag michels and stopping muons in get_events()
- update submit-grid-jobs to use get_events()
- print out mean acceptance fraction and autocorrelation time
- overlay the standard normal distribution pdf on the pull plot histograms
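For illustration, these diagnostics might be produced roughly as below,
assuming an emcee-style sampler; the `sampler` and `pulls` arguments are
placeholders, and the autocorrelation call differs between emcee versions.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    def print_diagnostics(sampler):
        # `sampler` is assumed to be an already-run emcee.EnsembleSampler.
        print("mean acceptance fraction: %.3f" % np.mean(sampler.acceptance_fraction))
        # emcee 3 API; older versions expose `sampler.acor` instead.
        print("autocorrelation time: %s" % sampler.get_autocorr_time(quiet=True))

    def pull_plot(pulls, bins=np.linspace(-5, 5, 51)):
        # Histogram of (fit - true)/error with the standard normal pdf overlaid.
        # (older matplotlib versions use normed=True instead of density=True)
        plt.hist(pulls, bins=bins, density=True, histtype='step')
        x = np.linspace(bins[0], bins[-1], 200)
        plt.plot(x, norm.pdf(x))
        plt.xlabel("pull")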
This commit updates the step size used for the MCMC in the contamination
analysis to 0.5 times the error returned by scanning near the minimum. I ran
some tests and this seemed to be pretty efficient compared to either the full
error or 0.1 times the error. I also reduced the number of workers to 10.
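Whatever sampler the analysis actually uses, the role of the step size is
easiest to see in a plain Gaussian Metropolis sketch, where `errors` stands for
the uncertainties returned by the scan near the minimum; the function and
argument names here are illustrative only.

    import numpy as np

    def metropolis(log_prob, x0, errors, nsteps=10000, scale=0.5):
        """Gaussian Metropolis sampler whose step size is scale*errors."""
        x = np.asarray(x0, dtype=float)
        step = scale*np.asarray(errors, dtype=float)  # e.g. 0.5 times the scan errors
        logp = log_prob(x)
        chain = np.empty((nsteps, len(x)))
        naccept = 0
        for i in range(nsteps):
            y = x + step*np.random.randn(len(x))
            logp_y = log_prob(y)
            if np.log(np.random.rand()) < logp_y - logp:
                x, logp = y, logp_y
                naccept += 1
            chain[i] = x
        return chain, naccept/float(nsteps)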
This commit updates the plot_energy.py module to not import from plot at module
level, since that pulls matplotlib into submit-grid-jobs and causes errors on
the grid login nodes.
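The usual pattern for avoiding that is to defer the matplotlib import until a
plot is actually made. A generic sketch of the idea (not the actual
plot_energy.py code):

    def plot_hist(data, bins=100, save=None):
        """Only import matplotlib when a plot is actually made, so modules that
        merely import this one don't fail on machines without matplotlib."""
        import matplotlib
        if save:
            # Non-interactive backend when writing files.
            matplotlib.use('Agg')
        import matplotlib.pyplot as plt
        plt.hist(data, bins=bins)
        if save:
            plt.savefig(save)
        else:
            plt.show()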
- fix Constraint.renormalize_no_fix(), which could enter an infinite loop if
  the fixed parameter was greater than 1 - EPSILON (see the sketch below)
- don't divide by psi twice in get_events()
- only use prompt events and cut on nhit_cal < 100
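The Constraint class itself isn't shown in this log, but the gist of the
renormalization fix can be sketched as a standalone function. The names, the
EPSILON value, and the clamping strategy below are guesses for illustration
only, not the actual implementation.

    import numpy as np

    EPSILON = 1e-10  # illustrative value

    def renormalize_no_fix(params, fix):
        """Rescale all parameters except `fix` so the whole set sums to 1.

        Clamping the fixed parameter at 1 - EPSILON guarantees that the
        remaining parameters have a positive total to scale to, so the
        rescaling can't loop forever trying to squeeze them to zero."""
        params = np.asarray(params, dtype=float).copy()
        params[fix] = min(params[fix], 1 - EPSILON)
        others = [i for i in range(len(params)) if i != fix]
        total = params[others].sum()
        if total > 0:
            params[others] *= (1 - params[fix])/total
        elif others:
            params[others] = (1 - params[fix])/len(others)
        return params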
This commit contains the following updates:
- remove hack to get rid of low energy events in plot-energy since while
writing the unidoc I realized it's not necessary now that we add +100 to
multi-particle fits
- update Ockham factor to use an energy resolution of 5%
- update submit-grid-jobs to submit jobs according to the following criteria
  (see the sketch below):
- always submit prompt events with no data cleaning cuts
- submit 10% of prompt flasher events
- submit all other prompt events
- submit followers only if they have no data cleaning cuts
- update submit-grid-jobs to place the nhit cut of 100 on the calibrated nhit
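A sketch of what that selection might look like as a single predicate; the
event fields and the random 10% prescale below are illustrative stand-ins, not
the actual submit-grid-jobs code.

    import random

    def should_submit(ev):
        """Decide whether to submit a fit job for this event. `ev` is assumed
        to be a dict with 'prompt', 'flasher', 'dc_cuts' (data cleaning bitmask)
        and 'nhit_cal' fields; the names are illustrative."""
        if ev['nhit_cal'] < 100:       # nhit cut of 100 on the calibrated nhit
            return False
        if ev['prompt']:
            if ev['dc_cuts'] == 0:     # always submit prompt events with no DC cuts
                return True
            if ev['flasher']:          # prescale prompt flashers to 10%
                return random.random() < 0.1
            return True                # all other prompt events
        return ev['dc_cuts'] == 0      # followers only with no DC cuts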
This commit contains the following small updates:
- create a setup_matplotlib() function to set up matplotlib correctly depending
  on whether we are saving the plots or just displaying them (see the sketch
  below)
- change default font size to 12 when displaying plots
- switch to using logarithmic bins in plot-energy
- fix despine() function when x axis is logarithmic
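A minimal sketch of what such a helper could look like (the real function in
the repo may differ):

    def setup_matplotlib(save=False):
        """Configure matplotlib for saving figures or viewing them interactively."""
        import matplotlib
        if save:
            # Non-interactive backend for batch use.
            matplotlib.use('Agg')
        else:
            # Bigger default font is easier to read on screen.
            matplotlib.rcParams['font.size'] = 12
        return matplotlib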
This commit updates utils/sddm/__init__.py to not import everything by default.
The reason is that the Open Science Grid login machines don't have the
scipy.stats module available by default.
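Schematically, the idea is to keep the package __init__ nearly empty and let
callers import submodules explicitly. The sketch below is an illustration, not
the repo's actual file; the submodule name is just an example.

    # utils/sddm/__init__.py (illustrative sketch)
    #
    # Avoid package-level imports that drag in heavy optional dependencies
    # (scipy.stats, matplotlib, ...), so that `import sddm` works on the OSG
    # login machines. Callers import what they need explicitly, e.g.:
    #
    #     from sddm.plot_energy import get_events
    #
    # rather than relying on a package-level `from .plot_energy import *`.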
This commit adds an sddm python package to the utils/ folder. This allows me to
consolidate code used across all the various scripts. This package is now
installed by default to /home/tlatorre/local/lib/python2.7/site-packages so you
should add the following to your .bashrc file:
export PYTHONPATH=$HOME/local/lib/python2.7/site-packages/:$PYTHONPATH
before using the scripts installed to ~/local/bin.
This commit updates submit-grid-jobs so that jobs are now submitted in order of
priority first and then timestamp: jobs with a higher priority are submitted
first, and within each priority level jobs are submitted in the order they were
added to the database.
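In terms of the state database, that ordering amounts to a query along the
lines of the sketch below. The 'priority', 'timestamp', and 'id' columns and
the 'NEW' state value are guesses for illustration; only the 'state' table and
column appear in the neighboring commit's sqlite example.

    import sqlite3

    def jobs_to_submit(db_path):
        """Return queued job ids ordered by priority, then submission time."""
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT id FROM state WHERE state = 'NEW' "
            "ORDER BY priority DESC, timestamp ASC").fetchall()
        conn.close()
        return [row[0] for row in rows]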
This commit updates submit-grid-jobs to look for jobs in the RETRY state and to
retry them if the number of retries is less than --max-retries. This way if you
ever want to retry a bunch of jobs you can update the database:
$ sqlite3 ~/state.db
sqlite> update state set state = 'RETRY' where state == 'FAILED';
And then rerun submit-grid-jobs with more retries:
$ submit-grid-jobs --max-retries 10 --auto
multi-particle fits
more than one fit per particle combo
This commit updates fit.c to start with 5 peaks for the direction seeds. I
chose this number because I did some testing with the test-find-peaks program
on the atmospheric MC and it looks like 5 peaks were necessary to capture the
majority of the peaks.
This commit updates the fit program to fit each event and particle hypothesis
twice, once using the normal quad implementation and once cutting on the 10%
quantile of times. The first way is much, much better when the event is fully
contained, since quad will return a really good starting point, and the second
is much better for muons, where we want to seed the fit near the entry point of
the muon.
Ideally we would only need a single call, and I have an idea of how to update
QUAD to maybe return reasonable guesses in both cases. The idea is to take the
cloud of quad points and find the point with the earliest time that is still
within a certain Mahalanobis distance of the distribution. This (I think)
corresponds roughly to what I would do by eye: look at the distribution of quad
points in the cloud, see that it forms a track, and pick a point at the start
of the track. I started working on this second idea but haven't successfully
tested it out yet.
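A sketch of that idea on the cloud of quad points, purely for illustration
(this is not code from the repo, and the distance cut is arbitrary):

    import numpy as np

    def quad_seed(points):
        """Pick a seed from a cloud of quad points, each a row of (x, y, z, t).

        Returns the point with the earliest time among those within a given
        Mahalanobis distance of the cloud, which for a track-like cloud tends
        to land near the start of the track."""
        points = np.asarray(points, dtype=float)
        mean = points.mean(axis=0)
        cov = np.cov(points, rowvar=False)
        inv_cov = np.linalg.pinv(cov)
        diff = points - mean
        # Squared Mahalanobis distance of every point from the cloud.
        d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        inside = points[d2 < 2.0**2]   # keep points within ~2 sigma
        if len(inside) == 0:
            inside = points
        return inside[np.argmin(inside[:, -1])]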