Age | Commit message | Author |
|
|
scatter" and a "forced no-scatter" pass.
Since we want to include contributions from two populations of weighted
photons, we have to break up the DAQ simulation into three functions:
* begin_acquire()
* acquire()
* end_acquire()
The first function resets the channel states, the second function
accumulates photoelectrons (and can be called multiple times), and the
last function returns the hit information. A global weight has also
been added to the DAQ simulation in case a particular set of weighted
photons needs to carry an overall penalty.
The forced scattering pass can be repeated many times on the same
photons (with the photons individually deweighted to compensate).
This reduces the variance on the final likelihoods quite a bit.
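A minimal sketch of how the three-step interface fits together (the class,
the array arguments, and the global_weight keyword are illustrative
assumptions; only the three method names come from this change):

    import numpy as np

    class SketchDAQ(object):
        def __init__(self, nchannels):
            self.nchannels = nchannels

        def begin_acquire(self):
            # reset the channel states
            self.npe = np.zeros(self.nchannels)

        def acquire(self, channel_ids, photon_weights, global_weight=1.0):
            # accumulate weighted photoelectrons; may be called several
            # times, once per population of weighted photons
            np.add.at(self.npe, channel_ids, global_weight * photon_weights)

        def end_acquire(self):
            # return the accumulated hit information
            return self.npe.copy()

The forced-scatter and forced-no-scatter populations would then each
contribute through their own acquire() call between begin_acquire() and
end_acquire().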
|
|
state (hit or not), rather than a floor on the hit probability.
A channel that is impossible to hit should have zero probability in
the likelihood and not be hit in the actual data. Before this change,
that channel would be forced to have a hit probability of 0.5 /
ntotal, which is wrong. We only need to ensure the probability of the
observed state of the channel is not zero so that the log() function
is defined.
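A minimal sketch of that rule, assuming a per-channel hit probability and an
observed hit flag (the function name, arguments, and floor value are
assumptions, not the actual code):

    import numpy as np

    def channel_log_prob(hit_prob, observed_hit, eps=1e-10):
        # probability of the channel's *observed* state (hit or not hit)
        p = hit_prob if observed_hit else 1.0 - hit_prob
        # clamp only to keep log() defined; no floor on the hit probability itself
        return np.log(max(p, eps))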
|
|
same angle for greater efficiency.
|
|
step during photon propagation.
|
|
For consistency, weights must be less than or equal to one at all
times. When weight calculation is enabled by the likelihood
calculator, photons are prevented from being absorbed, and instead
their weight is decreased to reflect their probability of survival.
Once the photon's weight drops below a given threshold (0.01 for now),
the weighting procedure is stopped and the photon can be extinguished.
With weighting enabled, the detection efficiency of surfaces is also
applied to the weight, and the photon is terminated with the DETECT bit
set in the history. This is not completely accurate, as a photon
could pass through the surface, reflect, and reintersect the surface
later (or some other PMT) and be detected. As a result, weighting
will slightly underestimate PMT efficiency compared to the true Monte
Carlo. This is not intrinsic to the weighting procedure, but only
comes about because of the inclusion of detection efficiency into the
weight.
Without the detection efficiency included, weighting cuts in half the
number of evaluations required to achieve a given likelihood
uncertainty (at least for the hit probabilities). Add in the
detection efficiency, and that factor becomes 1/5 or 1/6!
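A minimal sketch of the weighting step under these rules, assuming an
exponential survival probability per step (the names, the survival formula,
and the fallback path are assumptions, not the actual implementation):

    import numpy as np

    WEIGHT_CUTOFF = 0.01  # below this, stop weighting and allow absorption

    def weighted_step(weight, step_length, absorption_length,
                      detection_efficiency=None):
        if weight > WEIGHT_CUTOFF:
            # suppress absorption; carry the survival probability in the weight
            weight *= np.exp(-step_length / absorption_length)
        # else: the photon reverts to ordinary absorption and can be extinguished

        if detection_efficiency is not None:
            # at a detecting surface, fold the efficiency into the weight and
            # terminate the photon with the DETECT bit set (termination not shown)
            weight *= detection_efficiency

        return weight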
|
|
the mathematics convention for the angle names. Flipping theta and phi back to their correct meanings.
|
|
expecting.
|
|
you will be forking processes at multiple locations. This killed one of the unit tests.
|
|
remove requirement to use numpy >= 1.6
|
|
Press "s" and average time, charge and hit probability are computed
for all of the events in the input file. Then you can cycle through
the different values using the "." key, just as you can for single
events. You can flip between sum and regular modes, and the sum
calculation is only done the first time.
|
|
Now you can do:

    reader = RootReader('file.root')
    for ev in reader:
        # do stuff with each event
        pass
|
|
in event viewer
|
|
to speed up likelihood evaluation.
When generating a likelihood, the DAQ can be run many times in parallel
by the GPU, creating a large block of channel hit information in memory.
The PDF accumulator processes that entire block in two passes:
* First update the channel hit count, and the count of channel hits
falling into the bin around the channel hit time being evaluated.
Add any channel hits that should also be included in the n-th
nearest neighbor calculation to a channel-specific work queue.
* Process all the work queues for each channel and update the
list of nearest neighbors.
This is hugely faster than what we were doing before. Kernel
estimation (or some kind of orthogonal function expansion of the PDF)
should be better ultimately, but for now the nearest neighbor approach
to PDF estimation seems to be working the best.
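In outline, the two passes might look something like this on the CPU (the
array layout, work-queue depth, and nearest-neighbor reduction are
assumptions; the real accumulator runs on the GPU over the whole block of
events):

    import numpy as np

    def accumulate_pdf(event_times, eval_times, bin_width, queue_depth=32):
        # event_times: (nevents, nchannels) hit times, NaN where not hit
        nchannels = len(eval_times)
        hit_count = np.zeros(nchannels, dtype=int)
        bin_count = np.zeros(nchannels, dtype=int)
        queues = [[] for _ in range(nchannels)]

        # pass 1: per-channel counts plus work queues for the neighbor search
        for times in event_times:
            for ch, t in enumerate(times):
                if np.isnan(t):
                    continue
                hit_count[ch] += 1
                if abs(t - eval_times[ch]) < 0.5 * bin_width:
                    bin_count[ch] += 1
                if len(queues[ch]) < queue_depth:
                    queues[ch].append(t)

        # pass 2: reduce each work queue to sorted nearest-neighbor distances
        neighbors = [np.sort(np.abs(np.asarray(q) - eval_times[ch]))
                     for ch, q in enumerate(queues)]
        return hit_count, bin_count, neighbors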
|
|
that all have a particular interaction process code set. Handy for selecting just the detected photons from a large list of photons.
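For example, a selection by history bit with NumPy might look like this (the
flag value and array names are assumptions):

    import numpy as np

    DETECT = 1 << 2                       # assumed bit position, for illustration
    flags = np.array([0b000, 0b100, 0b110, 0b001])
    detected = (flags & DETECT) != 0      # boolean mask of photons with the bit set
    detected_indices = np.flatnonzero(detected)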
|
|
still look off, but this is an improvement.
|
|
separate function for easier debugging of channel-level likelihood behavior.
|
|
likelihood
|
|
to automatically pull in Tony's histogram classes when someone clones
the repository. Now the histogram code has been copied and committed as
part of chroma. Maybe someday we can drop this when histogram is an
installable python package.
|
|
Apologies for the lack of history, but Chroma's prehistory included
some very large files and possibly proprietary engineering data.
Rather than clutter up the public repository (and panic people), we
are starting fresh. All development happens here from now on.
|
|
the likelihood.
|
|
channels back to solid indices to set colors when displaying an event.
|
|
inventory of all the .stl/.stl.bz2 files present and creates building
functions for each of them in the models module.
This allows usage like the following:
chroma-cam chroma.models.lionsolid
chroma-cam chroma.models.tie_interceptor6
You don't need to worry about where the chroma package was actually
installed. Loading from STL files listed on the command line still
works, of course.
|
|
make plain meshes more opaque because the high transparency is visually confusing.
|
|
Vectorizing a lambda function is really slow, and it turns out that
advanced indexing already does what we want: remapping the triangle
vertices to their unique values.
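As a toy illustration of the difference (the arrays here are made up):

    import numpy as np

    # unique_map[i] gives the index of original vertex i in the deduplicated list
    unique_map = np.array([0, 1, 1, 2, 0])
    triangles = np.array([[0, 2, 3],
                          [4, 1, 3]])

    # slow: remap every entry through a vectorized lambda
    slow = np.vectorize(lambda i: unique_map[i])(triangles)

    # fast: advanced indexing does the same element-wise lookup directly
    fast = unique_map[triangles]

    assert (slow == fast).all()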
|