Age | Commit message | Author
remove requirement to use numpy >= 1.6
Press "s" to compute the average time, charge, and hit probability
over all of the events in the input file. Then you can cycle through
the different values using the "." key, just as you can for single
events. You can flip between sum and regular modes; the sum
calculation is only done the first time.
Now you can do:

    reader = RootReader('file.root')
    for ev in reader:
        # Do stuff
in event viewer
to speed up likelihood evaluation.
When generating a likelihood, the DAQ can be run many times in parallel
by the GPU, creating a large block of channel hit information in memory.
The PDF accumulator processes that entire block in two passes:
* First update the channel hit count, and the count of channel hits
falling into the bin around the channel hit time being evaluated.
Add any channel hits that should also be included in the n-th
nearest neighbor calculation to a channel-specific work queue.
* Process all the work queues for each channel and update the
list of nearest neighbors.
This is hugely faster than what we were doing before. Kernel
estimation (or some kind of orthogonal function expansion of the PDF)
should be better ultimately, but for now the nearest neighbor approach
to PDF estimation seems to be working the best.
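The two passes described above can be sketched on the CPU with NumPy. Everything here (the function name, the array shapes, and the NaN-means-no-hit convention) is an illustrative assumption, not the actual chroma GPU code:

```python
import numpy as np

def accumulate_pdf(hit_times, eval_times, bin_width, k):
    """CPU sketch of the two-pass PDF accumulator.

    hit_times: (n_events, n_channels) channel hit times, NaN = no hit
    eval_times: (n_channels,) hit time being evaluated per channel
    bin_width: width of the bin centered on each eval time
    k: number of nearest neighbors to keep per channel
    """
    n_events, n_channels = hit_times.shape
    hit = ~np.isnan(hit_times)

    # Pass 1: channel hit counts, plus counts falling into the bin
    # around the channel hit time being evaluated.
    dt = np.where(hit, np.abs(hit_times - eval_times), np.inf)
    hit_count = hit.sum(axis=0)
    bin_count = (dt < bin_width / 2.0).sum(axis=0)

    # Pass 2: process each channel's "work queue" of candidate hits
    # and keep only the k nearest neighbors in time.
    neighbors = []
    for ch in range(n_channels):
        d = np.sort(dt[:, ch])
        neighbors.append(d[np.isfinite(d)][:k])
    return hit_count, bin_count, neighbors
```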
that all have a particular interaction process code set. Handy for selecting just the detected photons from a large list of photons.
still look off, but this is an improvement.
separate function for easier debugging of channel-level likelihood behavior.
likelihood
to automatically pull in Tony's histogram classes when someone clones
the repository. Now the histogram code has been copied and committed as
part of chroma. Maybe someday we can drop this when histogram is an
installable python package.
Apologies for the lack of history, but Chroma's prehistory included
some very large files and possibly proprietary engineering data.
Rather than clutter up the public repository (and panic people), we
are starting fresh. All development happens here from now on.
the likelihood.
channels back to solid indices to set colors when displaying an event.
inventory of all the .stl/.stl.bz2 files present and creates building
functions for each of them in the models module.
This allows usage like the following:

    chroma-cam chroma.models.lionsolid
    chroma-cam chroma.models.tie_interceptor6
You don't need to worry about where the chroma package was actually
installed. Loading from STL files listed on the command line still
works, of course.
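The inventory step might look roughly like this sketch (`register_models` and its arguments are hypothetical names, not chroma's actual code):

```python
import os

def register_models(module, model_dir, build):
    """Create one builder function per .stl/.stl.bz2 file found in
    model_dir and attach it to `module` under the file's base name."""
    for fname in os.listdir(model_dir):
        if fname.endswith('.stl'):
            name = fname[:-len('.stl')]
        elif fname.endswith('.stl.bz2'):
            name = fname[:-len('.stl.bz2')]
        else:
            continue
        path = os.path.join(model_dir, fname)
        # Default argument binds this file's path at definition time,
        # so every builder remembers its own file.
        def builder(path=path):
            return build(path)
        builder.__name__ = name
        setattr(module, name, builder)
```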
make plain meshes more opaque because the high transparency is visually confusing.
Vectorizing a lambda function is really slow, and it turns out that
advanced indexing already does what we want to remap the triangle
vertices to their unique values.
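The trick reads like this minimal NumPy sketch (the vertex data is made up for illustration):

```python
import numpy as np

# A vertex array that contains duplicate points, and triangles that
# index into it.
vertices = np.array([[0., 0., 0.],
                     [1., 0., 0.],
                     [0., 0., 0.],   # duplicate of vertex 0
                     [0., 1., 0.]])
triangles = np.array([[0, 1, 3],
                      [2, 1, 3]])    # second triangle uses the duplicate

# Deduplicate; `inverse` maps each old vertex index to its unique index.
unique_verts, inverse = np.unique(vertices, axis=0, return_inverse=True)
inverse = inverse.reshape(-1)  # guard against NumPy-version shape quirks

# Advanced indexing remaps every triangle index in one vectorized step,
# replacing a slow vectorized-lambda call.
remapped = inverse[triangles]
```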
Most of the time required to build the LBNE geometry is spent on
mesh_grid() for the highly segmented cylinder. (67 seconds!) The
speed hit is caused by the use of zip to connect the vertices. The
same task can be done in several lines with slice notation, and goes
much faster.
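The slice-notation idea can be sketched like this (`grid_triangles` is a hypothetical stand-in for the real mesh_grid()):

```python
import numpy as np

def grid_triangles(idx):
    """Connect a 2D grid of vertex indices (rows x cols) into triangles
    using slice notation instead of a Python-level zip loop."""
    a = idx[:-1, :-1].ravel()   # upper-left corner of each grid cell
    b = idx[1:, :-1].ravel()    # lower-left
    c = idx[1:, 1:].ravel()     # lower-right
    d = idx[:-1, 1:].ravel()    # upper-right
    # Two triangles per cell, assembled in two bulk operations.
    return np.concatenate([np.column_stack([a, b, c]),
                           np.column_stack([a, c, d])])

idx = np.arange(6).reshape(2, 3)   # a tiny 2 x 3 grid of vertex indices
tris = grid_triangles(idx)
```

Every triangle is produced by whole-array slicing, so the cost no longer scales with a Python loop over segments.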
A bunch of small tricks have been applied to reduce the amount of time
required to build an already cached geometry:
* Replace uses of fromiter() on long sequences with code that operates
on bigger arrays.
* Use memoization on the Solids to more efficiently map materials
to material codes when a solid is repeated (as is the case in all
our detectors).
* Use numpy.take() instead of fancy indexing on big arrays. I
learned about this trick from:
http://wesmckinney.com/blog/?p=215
Also, switched over to compressed npz files for storing cache information.
They are the same size as the gzipped pickle files, but load 30% faster.
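The numpy.take() trick and the npz cache format can be demonstrated in a few lines (file and array names are illustrative):

```python
import os
import tempfile
import numpy as np

big = np.random.rand(100_000).astype(np.float32)
idx = np.random.randint(0, big.size, size=50_000)

# numpy.take() computes the same gather as fancy indexing (big[idx]),
# but is typically faster on large arrays.
gathered = np.take(big, idx)

# Compressed npz for the cache: about the size of a gzipped pickle,
# but arrays deserialize directly, so loading is faster.
path = os.path.join(tempfile.mkdtemp(), 'geometry_cache.npz')
np.savez_compressed(path, vertices=big, indices=idx)
with np.load(path) as cache:
    restored = cache['vertices']
```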
geometry, like the mapping from solid IDs to channels, and the time
and charge distributions.
Detector is a subclass of Geometry, so that a Detector can be used
wherever a Geometry is used. Only code (like the DAQ stuff) that
needs to know how PMT solids map to channels should look for a
Detector object.
There is a corresponding GPUDetector class as well, with its own
device side struct to hold PMT channel information.
The GPU code now can sample an arbitrary time and charge PDF, but
on the host side, the only interface exposed right now creates
a Gaussian distribution.
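The class relationship described above, as a bare sketch (attribute names are assumptions, not the actual chroma API):

```python
class Geometry:
    """Pure geometry: solids, meshes, materials."""
    def __init__(self):
        self.solids = []

class Detector(Geometry):
    """A Geometry that additionally knows which solids are PMT channels
    and carries per-channel time and charge distributions."""
    def __init__(self):
        super().__init__()
        self.solid_id_to_channel = {}   # solid ID -> channel number
        self.time_cdf = None            # sampled time distribution
        self.charge_cdf = None          # sampled charge distribution

def needs_detector(geometry):
    # Only DAQ-like code should care whether it was given a Detector;
    # everything else treats it as a plain Geometry.
    return isinstance(geometry, Detector)
```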
millimeters/nanoseconds/MeV in order to match GEANT4, and also avoid
huge discrepancies in magnitude caused by values like 10e-9 sec.
Along the way, cleaned up a few things:
* Switch the PI and SPEED_OF_LIGHT constants from double to single
precision. This avoids some unnecessary double precision calculations
in the GPU code.
* Fixed a silly problem in the definition of the spherical spiral. Now
the demo detector looks totally awesome. Also wrapped it in a
black surface. Demo detector now has 10055 PMTs.
* Updated the test_ray_intersection data file to reflect the new units.
* Fixed a missing import in chroma.gpu.tools.
channels.
Rayleigh scattering unit test
A little rough around the edges, and still needs some development work.
progressively slower the longer the program runs. Need to find another way to deal with running out of GPU memory due to reference cycles in PyCUDA.
the package.
Rather than use the logging module directly, we wrap it with this to ensure
that logging.basicConfig() is called automatically. All chroma code
should use this logger for printing status information so that it can
be hidden when chroma is part of a bigger application.
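A minimal sketch of such a wrapper module (its contents are assumed from the description above; chroma's real module may differ):

```python
import logging

# Ensure a handler exists; basicConfig() is a no-op if the embedding
# application has already configured logging.
logging.basicConfig()

# One named logger for the whole package. Chroma code imports this
# instead of calling print() or logging directly.
logger = logging.getLogger('chroma')
logger.setLevel(logging.INFO)
```

An application embedding chroma can then silence its status output with `logging.getLogger('chroma').setLevel(logging.WARNING)`.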
printed warnings when the user already has a .rootlogon.C to load the Chroma ROOT classes.
the contents of sys.argv whether you want it to or not.
A simple hack is to blank out sys.argv around the point where you
import ROOT. As an additional requirement, you have to actually use
the ROOT module for something (even just looking up a class) in order
for the TApplication to be initialized, so you can't just replace
sys.argv with an empty array around the ROOT import.
To ensure this is always done correctly, all Chroma modules that need
ROOT should obtain it by:

    from chroma.rootimport import ROOT
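The same hack in generic form (`import_with_blank_argv` is a hypothetical helper, demonstrated here with a stand-in module since ROOT may not be installed; chroma's actual answer is the chroma.rootimport module):

```python
import sys
import importlib

def import_with_blank_argv(module_name, touch_attr=None):
    """Hide sys.argv while importing a module that inspects it at
    import time (as ROOT's TApplication does), then restore it.

    The ROOT-specific wrinkle: you must also *use* the module (touch
    an attribute) or TApplication never actually initializes, which
    is why blanking argv around the import alone is not enough.
    """
    saved_argv = sys.argv
    sys.argv = [saved_argv[0]] if saved_argv else ['']
    try:
        module = importlib.import_module(module_name)
        if touch_attr is not None:
            getattr(module, touch_attr)   # force initialization
        return module
    finally:
        sys.argv = saved_argv             # restore the real arguments
```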
models directory