Age | Commit message | Author
extensions using it in Chroma.
remove requirement to use numpy >= 1.6
count_nonzero().
mastbaum for noticing the discrepancy.)
Press "s" to compute the average time, charge, and hit probability
for all of the events in the input file. Then you can cycle through
the different values using the "." key, just as you can for single
events. You can flip between sum and regular modes; the sum
calculation is only done the first time.
Now you can do:
    reader = RootReader('file.root')
    for ev in reader:
        # Do stuff
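As a sketch of the iteration protocol behind this usage (the class below is illustrative, not Chroma's actual RootReader implementation — a real reader would pull events from a ROOT file):

```python
# Minimal iterable event reader sketch.  Assumed names: EventReader,
# _events; in Chroma the events would come from a ROOT file instead
# of an in-memory list.
class EventReader:
    def __init__(self, events):
        self._events = events
        self._index = 0

    def __iter__(self):
        # Returning self makes the reader usable directly in a for loop.
        return self

    def __next__(self):
        if self._index >= len(self._events):
            raise StopIteration
        ev = self._events[self._index]
        self._index += 1
        return ev

reader = EventReader(['ev0', 'ev1', 'ev2'])
collected = [ev for ev in reader]
```

Implementing `__iter__`/`__next__` is all a reader needs for the `for ev in reader:` idiom to work.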
in event viewer
to speed up likelihood evaluation.
When generating a likelihood, the DAQ can be run many times in parallel
by the GPU, creating a large block of channel hit information in memory.
The PDF accumulator processes that entire block in two passes:
* First, update the channel hit count and the count of channel hits
  falling into the bin around the channel hit time being evaluated.
  Add any channel hits that should also be included in the n-th
  nearest neighbor calculation to a channel-specific work queue.
* Second, process the work queue for each channel and update the
  list of nearest neighbors.
This is hugely faster than what we were doing before. Kernel
estimation (or some kind of orthogonal function expansion of the PDF)
should be better ultimately, but for now the nearest neighbor approach
to PDF estimation seems to be working the best.
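A rough host-side sketch of the two passes for a single channel and a single evaluation time t0. The function and parameter names (accumulate, bin_half_width, n_neighbors) are made up for illustration; the real accumulator runs on the GPU over the whole block of hits at once:

```python
import numpy as np

def accumulate(hit_times, t0, bin_half_width, n_neighbors):
    """Two-pass sketch: count hits in the bin around t0, then find
    the n nearest neighbor hit times.  Illustrative only."""
    hit_times = np.asarray(hit_times, dtype=float)
    # Pass 1: update the channel hit count and the in-bin count, and
    # queue candidates for the nearest-neighbor calculation.
    hit_count = len(hit_times)
    in_bin = np.abs(hit_times - t0) <= bin_half_width
    bin_count = int(in_bin.sum())
    work_queue = hit_times  # here every hit is a neighbor candidate
    # Pass 2: drain the queue and keep the n nearest neighbors of t0.
    distances = np.sort(np.abs(work_queue - t0))
    neighbors = distances[:n_neighbors]
    return hit_count, bin_count, neighbors

hit_count, bin_count, neighbors = accumulate([1, 2, 3, 10], 2.0, 0.5, 2)
```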
that all have a particular interaction process code set. Handy for selecting just the detected photons from a large list of photons.
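The selection described above can be sketched with a numpy bitmask. The flag constants below are placeholders, not Chroma's real process codes:

```python
import numpy as np

# Hypothetical process bit flags, for illustration only.
NO_HIT, SURFACE_DETECT = 0x1, 0x2

# One flag word per photon; selecting photons with the detect bit set
# is a single vectorized bitwise-and plus nonzero.
flags = np.array([NO_HIT, SURFACE_DETECT, NO_HIT | SURFACE_DETECT, 0])
detected = np.nonzero(flags & SURFACE_DETECT)[0]
```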
still look off, but this is an improvement.
separate function for easier debugging of channel-level likelihood behavior.
likelihood
and stop the warning about _static
to automatically pull in Tony's histogram classes when someone clones
the repository. Now the histogram code has been copied and committed as
part of chroma. Maybe someday we can drop this when histogram is an
installable python package.
Apologies for the lack of history, but Chroma's prehistory included
some very large files and possibly proprietary engineering data.
Rather than clutter up the public repository (and panic people), we
are starting fresh. All development happens here from now on.
the likelihood.
channels back to solid indices to set colors when displaying an event.
inventory of all the .stl/.stl.bz2 files present and creates building
functions for each of them in the models module.
This allows usage like the following:
chroma-cam chroma.models.lionsolid
chroma-cam chroma.models.tie_interceptor6
You don't need to worry about where the chroma package was actually
installed. Loading from STL files listed on the command line still
works, of course.
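One possible shape for the inventory step. The helper below is illustrative, not Chroma's actual models code; the default-argument trick avoids the classic late-binding closure bug when generating one function per file:

```python
import os

def make_builders(filenames, namespace):
    """Create one building function per .stl/.stl.bz2 file and install
    it into namespace under the file's base name.  Illustrative sketch."""
    for fname in filenames:
        if fname.endswith('.stl') or fname.endswith('.stl.bz2'):
            name = os.path.basename(fname).split('.')[0]
            # Bind fname via a default argument so each generated
            # function remembers its own file.
            def builder(path=fname):
                return 'mesh loaded from ' + path  # stand-in for STL loading
            builder.__name__ = name
            namespace[name] = builder

models = {}
make_builders(['lionsolid.stl', 'tie_interceptor6.stl.bz2', 'README'], models)
```

With a module's `globals()` passed as the namespace, the generated functions become importable attributes, which is what lets `chroma-cam chroma.models.lionsolid` resolve by name.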
make plain meshes more opaque because the high transparency is visually confusing.
Vectorizing a lambda function is really slow, and it turns out that
advanced indexing already does what we want to remap the triangle
vertices to their unique values.
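The remap can be sketched with np.unique plus advanced indexing; the array contents below are illustrative:

```python
import numpy as np

# Four vertices, two of which are duplicates, and two triangles that
# index into them.
vertices = np.array([[0., 0.], [1., 0.], [0., 0.], [1., 1.]])
triangles = np.array([[0, 1, 2], [1, 2, 3]])

# np.unique returns the deduplicated vertices plus an inverse index
# mapping each old vertex to its unique row.
unique, inverse = np.unique(vertices, axis=0, return_inverse=True)
inverse = inverse.reshape(-1)  # keep 1-D across numpy versions

# Advanced indexing relabels every triangle in one shot -- no
# vectorized lambda required.
remapped = inverse[triangles]
```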
Most of the time required to build the LBNE geometry is spent on
mesh_grid() for the highly segmented cylinder. (67 seconds!) The
speed hit is caused by the use of zip to connect the vertices. The
same task can be done in several lines with slice notation, and goes
much faster.
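A toy comparison of the two patterns on a 1-D run of points (the real code pairs cylinder vertices, but the idea is the same): zip builds Python tuples one element at a time, while slice notation stays inside numpy.

```python
import numpy as np

points = np.arange(10)

# Slow pattern: pair each point with its neighbor via zip, forcing a
# pass through Python-level tuples.
pairs_zip = np.array(list(zip(points[:-1], points[1:])))

# Fast pattern: the same pairing using slice notation only.
pairs_slice = np.column_stack((points[:-1], points[1:]))
```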
A bunch of small tricks have been applied to reduce the amount of time
required to build an already cached geometry:
* Replace uses of fromiter() on long sequences with code that operates
on bigger arrays.
* Use memoization on the Solids to more efficiently map materials
  to material codes when a solid is repeated (as is the case in all
  our detectors).
* Use numpy.take() instead of fancy indexing on big arrays. I
  learned about this trick from:
  http://wesmckinney.com/blog/?p=215
Also, switched over to compressed npz files for storing cache information.
They are the same size as the gzipped pickle files, but load 30% faster.
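The numpy.take() and compressed-npz tricks, sketched on toy data:

```python
import numpy as np
import os
import tempfile

big = np.arange(100_000)
idx = np.random.randint(0, len(big), size=10_000)

# Fancy indexing and numpy.take() give identical results; take() is
# typically faster on large arrays.
a = big[idx]
b = np.take(big, idx)

# Cache storage as a compressed npz file, round-tripped through disk.
path = os.path.join(tempfile.mkdtemp(), 'cache.npz')
np.savez_compressed(path, vertices=big)
loaded = np.load(path)['vertices']
```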
geometry, like the mapping from solid IDs to channels, and the time
and charge distributions.
Detector is a subclass of Geometry, so that a Detector can be used
wherever a Geometry is used. Only code (like the DAQ stuff) that
needs to know how PMT solids map to channels should look for a
Detector object.
There is a corresponding GPUDetector class as well, with its own
device side struct to hold PMT channel information.
The GPU code now can sample an arbitrary time and charge PDF, but
on the host side, the only interface exposed right now creates
a Gaussian distribution.
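A structural sketch of the split, with assumed attribute names (Chroma's real classes carry much more state, and the sampling happens on the GPU):

```python
import random

class Geometry:
    """Knows only about solids -- no channel information."""
    def __init__(self):
        self.solids = []

class Detector(Geometry):
    """Adds the solid-id -> channel map and per-channel time/charge
    distributions.  Attribute names here are illustrative."""
    def __init__(self):
        super().__init__()
        self.solid_id_to_channel = {}
        self.time_sigma = 1.2    # ns, Gaussian spread (assumed value)
        self.charge_sigma = 0.1  # pe, Gaussian spread (assumed value)

    def add_pmt(self, solid_id, channel):
        self.solid_id_to_channel[solid_id] = channel

    def sample_hit(self, true_time, true_charge):
        # Host-side analogue of the GPU sampling: only a Gaussian
        # time/charge distribution is exposed.
        return (random.gauss(true_time, self.time_sigma),
                random.gauss(true_charge, self.charge_sigma))

det = Detector()
det.add_pmt(solid_id=42, channel=0)
```

Because `Detector` subclasses `Geometry`, any code expecting a `Geometry` accepts it unchanged; only channel-aware code (like the DAQ) needs to check for the subclass.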