Age | Commit message | Author | |
---|---|---|---|
2021-05-09 | Allow make_gpu_struct to pack structs containing the CUDA vector types. | Stan Seibert | |
2021-05-09 | Put time PDF back into likelihood calculation | Stan Seibert | |
2021-05-09 | Reduce number of scattering passes by default in likelihood calculation | Stan Seibert | |
2021-05-09 | Put back accumulation of time PDF information in PDF calculator | Stan Seibert | |
2021-05-09 | Fix bug in stepping through photons during multi-DAQ simulation. | Stan Seibert | |
2021-05-09 | Lower the weight threshold cutoff to propagate even more improbable photons | Stan Seibert | |
2021-05-09 | Split photon propagation in the likelihood calculation into a "forced scatter" and a "forced no-scatter" pass. Since we want to include contributions from two populations of weighted photons, we have to break up the DAQ simulation into three functions: begin_acquire(), acquire(), and end_acquire(). The first function resets the channel states, the second accumulates photoelectrons (and can be called multiple times), and the last returns the hit information. A global weight has also been added to the DAQ simulation in case a particular set of weighted photons needs an overall penalty. The forced-scatter pass can be repeated many times on the same photons (with the photons individually deweighted to compensate), which reduces the variance on the final likelihoods considerably. | Stan Seibert | |
2021-05-09 | Apply a floor to the probability of each channel being in the observed state (hit or not), rather than a floor on the hit probability. A channel that is impossible to hit should have zero probability in the likelihood and should not be hit in the actual data. Before this change, such a channel would be forced to have a hit probability of 0.5 / ntotal, which is wrong. We only need to ensure that the probability of the observed state of the channel is nonzero so that the log() function is defined. | Stan Seibert | |
2021-05-09 | Use the sincosf() function when computing both the sin() and cos() of the same angle, for greater efficiency. | Stan Seibert | |
2021-05-09 | Allow the option of forced scattering or forced non-scattering on the first step during photon propagation. | Stan Seibert | |
2021-05-09 | Eliminate spurious commas in definition of photon history bits | Stan Seibert | |
2021-05-09 | Temporarily include the current directory in the module path when loading detector geometries. | Stan Seibert | |
2021-05-09 | Skip photon propagation queue if we are propagating weighted photons | Stan Seibert | |
2021-05-09 | Use bigger blocks for multi-DAQ calculation | Stan Seibert | |
2021-05-09 | Add optional weight to each photon being propagated. For consistency, weights must be less than or equal to one at all times. When weight calculation is enabled by the likelihood calculator, photons are prevented from being absorbed, and instead their weight is decreased to reflect their probability of survival. Once the photon's weight drops below a given threshold (0.01 for now), the weighting procedure is stopped and the photon can be extinguished. With weighting enabled, the detection efficiency of surfaces is also applied to the weight, and the photon is terminated with the DETECT bit set in the history. This is not completely accurate, as a photon could pass through the surface, reflect, and reintersect that surface (or some other PMT) later and be detected. As a result, weighting will slightly underestimate PMT efficiency compared to the true Monte Carlo. This is not intrinsic to the weighting procedure, but only comes about because of the inclusion of detection efficiency in the weight. Without the detection efficiency included, weighting cuts in half the number of evaluations required to achieve a given likelihood uncertainty (at least for the hit probabilities). Adding in the detection efficiency makes that factor 1/5 or 1/6! | Stan Seibert | |
2021-05-09 | Disable time contribution to likelihood for weighting tests. | Stan Seibert | |
2021-05-09 | More documentation of event data structure. | Stan Seibert | |
2021-05-09 | Docstring for chroma.event.Photons | Stan Seibert | |
2021-05-09 | Fix incorrect argument order to arctan2 | Stan Seibert | |
2021-05-09 | Argh! Someone used spherical polar coordinates in the pi0 generator with the mathematics convention for the angle names. Flipping theta and phi back to their correct meanings. | Stan Seibert | |
2021-05-09 | Add dependency on libatlas so that numpy will compile | Stan Seibert | |
2021-05-09 | Nope, Chroma really does require Numpy 1.6 or later. Putting the dependency back and including in the installation guide the workaround to get numpy to install. | Stan Seibert | |
2021-05-09 | Fix the units in test_generator_photon.TestG4ParallelGenerator.test_off_center | Stan Seibert | |
2021-05-09 | Cast channel ID array to uint32, as that is technically what the C++ code is expecting. | Stan Seibert | |
2021-05-09 | Do not use the zmq.Context.instance() method to obtain a ZeroMQ context when you will be forking processes at multiple locations. This killed one of the unit tests. | Stan Seibert | |
2021-05-09 | Fix documentation bug regarding location of GEANT4 data | Stan Seibert | |
2021-05-09 | Cast numpy sum to Python int to make PyROOT happy. | Stan Seibert | |
2021-05-09 | I give up. We have to install pyublas by hand to be able to compile any extensions using it in Chroma. | Stan Seibert | |
2021-05-09 | Require nose and sphinx for testing and documentation | Stan Seibert | |
2021-05-09 | Replace all uses of numpy.count_nonzero() with chroma.tools.count_nonzero to remove the requirement to use numpy >= 1.6 | Stan Seibert | |
2021-05-09 | Require pyublas at setup-time for compiling. Adjust numpy requirement for count_nonzero(). | Stan Seibert | |
2021-05-09 | More fixes to installation guide. | Stan Seibert | |
2021-05-09 | Update ROOT version number and make the numbers consistent. (Hat tip to mastbaum for noticing the discrepancy.) | Stan Seibert | |
2021-05-09 | Add a summing mode to the event view. Press "s" and the average time, charge, and hit probability are computed for all of the events in the input file. You can then cycle through the different values using the "." key, just as for single events. You can flip between sum and regular modes; the sum calculation is only done the first time. | Stan Seibert | |
2021-05-09 | Make chroma.io.root.RootReader follow the Python iterator protocol. Now you can do `reader = RootReader('file.root')` and then iterate with `for ev in reader: ...` | Stan Seibert | |
2021-05-09 | Make the period key cycle between charge, time, and hit/no-hit color display in the event viewer. | Stan Seibert | |
2021-05-09 | Fix silly bug in RootReader.jump_to() | Stan Seibert | |
2021-05-09 | Copy implementation of check_output into our setup.py so it runs on Python 2.6. | Stan Seibert | |
2021-05-09 | Add optional argument to pi0 gun to pick direction of one gamma in rest frame. | Stan Seibert | |
2021-05-09 | Allow chroma to be imported on systems without pygame. Good for clusters. | Stan Seibert | |
2021-05-09 | Major overhaul to the way that DAQ and PDF information is accumulated, to speed up likelihood evaluation. When generating a likelihood, the DAQ can be run many times in parallel by the GPU, creating a large block of channel hit information in memory. The PDF accumulator processes that entire block in two passes: first, update the channel hit count and the count of channel hits falling into the bin around the channel hit time being evaluated, adding any channel hits that should also be included in the n-th nearest neighbor calculation to a channel-specific work queue; second, process all the work queues for each channel and update the list of nearest neighbors. This is hugely faster than what we were doing before. Kernel estimation (or some kind of orthogonal function expansion of the PDF) should ultimately be better, but for now the nearest-neighbor approach to PDF estimation seems to be working best. | Stan Seibert | |
2021-05-09 | Add a select function to GPUPhotons to extract a reduced list of photons that all have a particular interaction process code set. Handy for selecting just the detected photons from a large list of photons. | Stan Seibert | |
2021-05-09 | Use global per-thread ZeroMQ context rather than make a new one. | Stan Seibert | |
2021-05-09 | Minor fixes to doing time & charge PDFs via kernel estimation. Things still look off, but this is an improvement. | Stan Seibert | |
2021-05-09 | OK, we really do need to call the garbage collector this frequently. | Stan Seibert | |
2021-05-09 | Use constant particle gun in chroma-sim so it can generate pi0 events. | Stan Seibert | |
2021-05-09 | constant_particle_gun now can generate pi0 decays | Stan Seibert | |
2021-05-09 | Factor calculation of channel hit probabilities and PDF densities into a separate function for easier debugging of channel-level likelihood behavior. | Stan Seibert | |
2021-05-09 | Reduce the bin count requirement from 200 to 50. | Stan Seibert | |
2021-05-09 | Clean up hit probability calculation and stop skipping the first entry. | Stan Seibert | |
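
The three-phase DAQ protocol described in the "Split photon propagation" entry (begin_acquire / acquire / end_acquire, with a per-call weight) can be sketched with a toy class. All names and fields here are illustrative stand-ins, not Chroma's actual GPU DAQ API:

```python
class ToyDaq:
    """Toy model of the three-phase DAQ accumulation protocol."""

    def __init__(self, nchannels):
        self.nchannels = nchannels
        self.counts = None

    def begin_acquire(self):
        # Phase 1: reset per-channel state before accumulating photoelectrons.
        self.counts = [0.0] * self.nchannels

    def acquire(self, hit_channels, weight=1.0):
        # Phase 2: accumulate photoelectrons; may be called once per
        # photon population, with a global weight applied as a penalty.
        for ch in hit_channels:
            self.counts[ch] += weight

    def end_acquire(self):
        # Phase 3: return the accumulated hit information.
        return self.counts

daq = ToyDaq(4)
daq.begin_acquire()
daq.acquire([0, 2])               # forced no-scatter pass
daq.acquire([2, 3], weight=0.5)   # deweighted forced-scatter pass
hits = daq.end_acquire()
print(hits)  # [1.0, 0.0, 1.5, 0.5]
```

Because acquire() can be called repeatedly between the begin/end calls, contributions from both weighted photon populations land in one set of channel states.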
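
The photon-weighting scheme from the "Add optional weight" entry (fold the survival probability into the weight instead of randomly absorbing the photon, and stop once the weight falls below 0.01) might look roughly like this. The exponential-survival model and function names are assumptions for illustration only:

```python
import math

WEIGHT_FLOOR = 0.01  # below this, stop weighting; the photon may be extinguished

def propagate_step(weight, distance, absorption_length):
    """Instead of absorbing the photon with some probability, multiply its
    weight by the survival probability over `distance`."""
    survival = math.exp(-distance / absorption_length)
    weight *= survival
    alive = weight >= WEIGHT_FLOOR
    return weight, alive

w, alive, steps = 1.0, True, 0
while alive:
    w, alive = propagate_step(w, distance=50.0, absorption_length=100.0)
    steps += 1
print(steps, round(w, 4))  # 10 0.0067
```

Every photon survives every step deterministically, so the variance that random absorption would introduce is traded for a smoothly decaying weight.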
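
The likelihood fix in the "Apply a floor" entry can be made concrete: floor the probability of the *observed* state rather than the hit probability, so log() stays defined without biasing impossible-to-hit channels. The floor value and function name below are assumed for illustration:

```python
import math

def channel_log_likelihood(p_hit, hit_observed, floor=1e-10):
    # Probability of the state actually observed in the data.
    p_obs = p_hit if hit_observed else 1.0 - p_hit
    # Floor p_obs, not p_hit: a channel that is impossible to hit
    # (p_hit == 0) and was not hit contributes log(1.0) = 0, instead of
    # being forced to a spurious hit probability of 0.5 / ntotal.
    return math.log(max(p_obs, floor))

print(channel_log_likelihood(0.0, hit_observed=False))  # 0.0
```

Only an impossible state observed in the data (e.g. a hit on a channel with p_hit == 0) triggers the floor, keeping the likelihood finite.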
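
The iterator-protocol change to chroma.io.root.RootReader can be illustrated with a toy reader; the real class reads events from a ROOT file, which this sketch replaces with an in-memory list, and jump_to() mirrors the random-access method named in the log:

```python
class ToyReader:
    """Stand-in for RootReader: implements the Python iterator protocol
    (__iter__/__next__) plus an explicit jump_to() for random access."""

    def __init__(self, events):
        self.events = events
        self.index = -1  # position of the last event returned

    def __iter__(self):
        return self

    def __next__(self):
        self.index += 1
        if self.index >= len(self.events):
            raise StopIteration
        return self.events[self.index]

    def jump_to(self, i):
        # Seek directly to event i and return it.
        if not 0 <= i < len(self.events):
            raise IndexError(i)
        self.index = i
        return self.events[i]

reader = ToyReader(['ev0', 'ev1', 'ev2'])
print([ev for ev in reader])  # ['ev0', 'ev1', 'ev2']
```

With __iter__ and __next__ in place, the `for ev in reader:` loop from the commit message works unchanged.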