This commit updates the criteria for selecting stopping muons from:
- calibrated nhit < 4000
- udotr < -0.5
to
- reconstructed kinetic energy < 10 GeV
The previous criteria were intended to remove through-going atmospheric
events, but the nhit cut produced a strong bias in the comparison as well
as an energy bias in the data relative to the Monte Carlo. The new cut
does a good job of removing through-going muons without producing the
same bias.
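A minimal sketch of what the change amounts to in pandas; the column names (`ke` in MeV, `nhit_cal`, `udotr`) are assumptions for illustration, not taken from the actual analysis code:

```python
import pandas as pd

def select_stopping_muons(ev):
    """New criterion: reconstructed kinetic energy < 10 GeV (ke in MeV)."""
    return ev[ev.ke < 10e3]

def select_stopping_muons_old(ev):
    """Old criteria, kept for comparison: nhit and udotr cuts."""
    return ev[(ev.nhit_cal < 4000) & (ev.udotr < -0.5)]

# Toy events: one stopping-muon candidate, one through-going candidate
ev = pd.DataFrame({"ke":       [5e3, 20e3],
                   "nhit_cal": [3000, 5000],
                   "udotr":    [-0.9, -0.2]})
print(select_stopping_muons(ev))
```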
Previously the function to tag atmospherics looked only at the *first*
event to come after a prompt event and checked whether it was a neutron.
However, this has a big problem: for high energy events there are often
secondary events caused by afterpulsing.
I've now updated the algorithm to tag an event if *any* follower event
passes the neutron criteria.
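A sketch of the updated logic, assuming a hypothetical `is_neutron()` predicate (the energy window here is a placeholder, not the analysis' real neutron criteria):

```python
def is_neutron(follower):
    # Placeholder neutron criteria (assumed): a loose energy window in MeV
    return 4.0 < follower["energy"] < 10.0

def tag_atmospheric(prompt, followers):
    # Old behavior: followers and is_neutron(followers[0])
    # New behavior: scan all followers, so an afterpulsing event that
    # arrives first no longer masks a genuine neutron follower.
    return any(is_neutron(f) for f in followers)

followers = [{"energy": 0.5},   # afterpulsing junk arrives first
             {"energy": 6.0}]   # neutron-like follower
print(tag_atmospheric({"energy": 500.0}, followers))  # True
```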
ev.r -> ev_single_particle.r
- added a cos(theta) cut
- plot the energy and angular distribution of stopping muons
- fix a bug in calculating the Michel normalization constant
- add a legend to the energy resolution plot
I found a really simple form for the log likelihood ratio of the Poisson
and multinomial likelihoods.
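The commit doesn't show the form itself, but the textbook -2 ln(lambda) statistics for binned Poisson and multinomial likelihoods are short to write down; this sketch uses my own function names and the standard formulas, which may differ from the exact form in the commit:

```python
import numpy as np

def poisson_llr(n, e):
    """-2 ln(lambda) for independent Poisson bins vs the saturated model."""
    n = np.asarray(n, dtype=float)
    e = np.asarray(e, dtype=float)
    # n*log(n/e) with the convention 0*log(0) = 0
    term = np.where(n > 0, n * np.log(np.where(n > 0, n / e, 1.0)), 0.0)
    return 2 * np.sum(e - n + term)

def multinomial_llr(n, p):
    """-2 ln(lambda) for a multinomial with fixed total N, probabilities p."""
    n = np.asarray(n, dtype=float)
    N = n.sum()
    p = np.asarray(p, dtype=float)
    term = np.where(n > 0, n * np.log(np.where(n > 0, n / (N * p), 1.0)), 0.0)
    return 2 * np.sum(term)

n = np.array([10, 20, 30])
e = np.array([12, 18, 30])
print(poisson_llr(n, e))
```

Both statistics vanish when the observed counts exactly match the expectation, which is a handy sanity check.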
memory
This commit updates get_events() to only merge fit info for events with
at least 10 fit results. The reason for this is that when analyzing recent
data where not all the fits have completed, we don't want to plot the
data for events which haven't completely finished being fit.
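A sketch of this kind of merge guard in pandas; the column names (`gtid`, `fit_id`) and the threshold of 10 fits per event are assumptions for illustration:

```python
import pandas as pd

def merge_complete_fits(ev, fits, min_fits=10):
    """Merge fit info only for events with enough completed fit results."""
    # Count fit results per event and keep only sufficiently fit events
    counts = fits.groupby("gtid").size()
    complete = counts[counts >= min_fits].index
    return ev.merge(fits[fits.gtid.isin(complete)], on="gtid")

ev = pd.DataFrame({"gtid": [1, 2]})
fits = pd.DataFrame({"gtid": [1] * 10 + [2] * 3,
                     "fit_id": list(range(10)) + list(range(3))})
merged = merge_complete_fits(ev, fits)
print(merged.gtid.unique())  # only event 1 has all 10 fits
```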
stopping muons
- use pd.Series.where() instead of DataFrame.loc[] to speed things up in
tag_michels
- don't set y limits when plotting bias and resolution for stopping
muons
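A sketch of the substitution mentioned in the first bullet; the column names are made up for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ke": [50.0, 80.0, 500.0],
                   "michel": [True, True, False]})

# Slower pattern: assignment through .loc on a boolean mask
tagged_loc = df.ke.copy()
tagged_loc.loc[~df.michel] = np.nan

# Equivalent vectorized pattern: keep ke where michel is True, else NaN
tagged_where = df.ke.where(df.michel)

print(tagged_where.tolist())
```

Both produce the same Series; `where()` avoids the mask-and-assign round trip, which is what speeds up tag_michels.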
- add get_multinomial_prob() function to stats.py
- add plot_hist2_data_mc() function to do the normal particle id plot
but also print p values
- other small bug fixes
This commit adds the new file sddm/stats.py and adds a function to
correctly sample a Monte Carlo histogram when computing p-values. In
particular, I now take into account the uncertainty on the total number
of expected events by drawing from a gamma distribution, which is the
posterior of the Poisson likelihood function with a prior of 1/lambda.
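A sketch of that sampling scheme, assuming my own function and variable names: with a 1/lambda prior, the posterior for the Poisson mean given a total of N events is Gamma(N, 1), so the total expectation is drawn from a gamma before fluctuating the bins:

```python
import numpy as np

def sample_mc_histogram(mc_hist, rng):
    """Sample a histogram including the uncertainty on the total."""
    mc_hist = np.asarray(mc_hist, dtype=float)
    N = mc_hist.sum()
    # Draw the total expectation from the Gamma(N, 1) posterior...
    mu = rng.gamma(N)
    # ...then distribute it over bins and draw Poisson counts per bin
    return rng.poisson(mu * mc_hist / N)

rng = np.random.default_rng(0)
sampled = sample_mc_histogram([100, 50, 25], rng)
print(sampled)
```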
- only look at muons with nhit < 4000 and udotr < -0.5
- switch from energy1 -> ke
This commit adds a first draft of a script to plot the Michel energy
distribution and particle ID histograms for data and Monte Carlo, and to
plot the energy bias and resolution for stopping muons.
This commit adds a first draft of a script called chi2. This script calculates
a chi2 statistic for the null hypothesis test of whether the events in the
energy range 20 MeV - 10 GeV match what we expect from atmospheric neutrino
events.
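A minimal sketch of such a test: compare observed counts against the atmospheric-neutrino expectation and estimate a p-value by sampling the null distribution. The binning and numbers are illustrative, not from the analysis:

```python
import numpy as np

def chi2_stat(obs, exp):
    """Pearson chi2 statistic for binned counts vs expectation."""
    obs = np.asarray(obs, dtype=float)
    exp = np.asarray(exp, dtype=float)
    return np.sum((obs - exp) ** 2 / exp)

def chi2_pvalue(obs, exp, n_samples=10000, seed=0):
    """p-value from Poisson fluctuations around the expectation."""
    rng = np.random.default_rng(seed)
    stat = chi2_stat(obs, exp)
    samples = rng.poisson(exp, size=(n_samples, len(exp)))
    stats = np.sum((samples - exp) ** 2 / exp, axis=1)
    return np.mean(stats >= stat)

obs = [18, 25, 9]
exp = [20.0, 22.0, 10.0]
print(chi2_stat(obs, exp), chi2_pvalue(obs, exp))
```

Sampling the null distribution rather than using the asymptotic chi2 curve keeps the p-value honest when some bins have low expected counts.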
This commit updates get_events() to require at least 1 NHIT trigger to fire.
The reason for this is that after looking at a small fraction of the data I
noticed a bunch of instrumental events that weren't getting tagged in run
10141. They looked sort of like neck events and were surrounded by hundreds of
orphaned PMT hits. My best guess is that they really were neck events, but the
neck PMT hits and the hits in the lower hemisphere were erroneously not getting
built into the events.
Luckily, all of these events failed the psi cut, but it's not great to rely on
such a high-level cut to remove them. One other thing I noticed was that
these events were triggered mostly by MISS, OWLEL, and OWLEH. Therefore I
thought it might be a good idea to require events to have at least 1 NHIT
trigger. To test whether the NHIT triggers were reliably firing before the
other triggers, I looked at all muon events which *didn't* have an NHIT trigger
fire. All of them appeared to be falsely tagged neck events, so I'm fairly
confident that the NHIT triggers do reliably fire before the other triggers for
physics events.
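A sketch of how such a requirement could be expressed as a trigger-word mask; the bit positions below are placeholders, and the real SNO trigger bit layout would need to be checked against the detector documentation:

```python
# Assumed bit positions, for illustration only
TRIG_NHIT_100_LO  = 1 << 0
TRIG_NHIT_100_MED = 1 << 1
TRIG_NHIT_100_HI  = 1 << 2
TRIG_NHIT_20      = 1 << 3
TRIG_NHIT_20_LB   = 1 << 4

NHIT_TRIGGER_MASK = (TRIG_NHIT_100_LO | TRIG_NHIT_100_MED |
                     TRIG_NHIT_100_HI | TRIG_NHIT_20 | TRIG_NHIT_20_LB)

def has_nhit_trigger(trg_word):
    """Return True if any NHIT trigger bit is set in the trigger word."""
    return (trg_word & NHIT_TRIGGER_MASK) != 0

print(has_nhit_trigger(TRIG_NHIT_100_MED))  # True
print(has_nhit_trigger(1 << 10))            # a non-NHIT-only word: False
```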
This commit updates cat-grid-jobs to just add all the fits at once at the end
instead of continuously resizing the fits dataset. The reason for this is that
I noticed that several fit results files would occasionally have a large block
of the fits be set to all zeros. I have no idea how this happened, but I
suspect it might have been a bug with resizing the dataset so many times.
This commit updates cat-grid-jobs to change the reprocessed attribute to be 1
instead of True, since the previous value was giving the following warning when
opening the files with pandas' read_hdf() function:
/usr/lib64/python2.7/site-packages/tables/attributeset.py:298: DataTypeWarning: Unsupported type for attribute 'reprocessed' in node '/'. Offending HDF5 class: 8
value = self._g_getattr(self._v_node, name)
instrumentals
This commit updates the contamination analysis scripts to take into account the
fact that we only fit a fraction of some of the instrumental events.
Based on the recent rate at which my jobs have been running on the grid,
fitting all the events would take *way* too long. Therefore, I'm now planning
to only fit 10% of muon, flasher, and neck events. With this commit the
contamination analysis correctly accounts for the instrumental events we
aren't fitting.
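The correction amounts to weighting each fitted event by the inverse of the fitted fraction, so counts scale back up to the full instrumental sample. The 10% figure matches the commit text; the function and variable names are my own:

```python
def estimate_contamination(n_fitted_passing, fit_fraction):
    """Scale the number of fitted instrumental events that leak past
    the cuts up to the full (unfitted) sample."""
    return n_fitted_passing / fit_fraction

# e.g. 3 muon events passing the cuts out of a 10% fitted sample
print(estimate_contamination(3, 0.10))
```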