2021-05-09  Add a function to create a Mesh from the coordinates of a convex polygon.  [Stan Seibert]
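A minimal sketch of the idea (the helper name ``convex_polygon_mesh`` is hypothetical, and it assumes ``chroma.geometry.Mesh`` can be built from vertex and triangle-index arrays): fan-triangulate the polygon from its first vertex::

    import numpy as np
    from chroma.geometry import Mesh

    def convex_polygon_mesh(vertices):
        """Fan-triangulate a convex polygon given as an (N, 3) array of
        vertices listed in order around the boundary."""
        vertices = np.asarray(vertices, dtype=float)
        # Triangles (0, i, i+1) tile any convex polygon without overlaps.
        triangles = np.array([(0, i, i + 1) for i in range(1, len(vertices) - 1)])
        return Mesh(vertices, triangles)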
2021-05-09  Add the reemission probability and CDF arrays on the GPU to the material_data list to prevent them from being garbage collected.  [Stan Seibert]
2021-05-09  Photons tagged with NAN_ABORT should not continue to be propagated.  [Stan Seibert]
2021-05-09  Change surface re-emission simulation to not use the diffuse reflection function.  [Stan Seibert]
This allows the photon to reemit on either side of the surface and also removes a spurious diffuse reflection bit in the history.
2021-05-09  Update photon history bits in Python to match the C header.  [Stan Seibert]
2021-05-09  Add the ability to linear_extrude a mesh without the endcaps.  [Stan Seibert]
2021-05-09  Refactor the saving and loading of packed BVH nodes to fully abstract away the split storage.  [Stan Seibert]
Also fixes a bug discovered by Mike Jewell that crashed BVH creation after the last commit.
2021-05-09  GPU geometry modification to permit the BVH node storage to be split between GPU and CPU.  [Stan Seibert]
This allows much more complex geometries to be run on CUDA devices with less memory. The GPUGeometry object now takes a min_free_gpu_mem parameter giving the minimum number of bytes that must remain free on the GPU after the BVH is loaded; by default this is 300 MB. Cards with sufficient memory will hold the entire BVH on the card, while those without enough memory will have the BVH split so that the top of the hierarchy (the most frequently traversed part) stays on the GPU.
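A rough illustration of that split decision (not the committed GPUGeometry logic; the function name is hypothetical, the nodes are assumed to live in a flat numpy array ordered top-down, and ``pycuda.driver.mem_get_info()`` supplies the free-memory query)::

    import pycuda.autoinit
    import pycuda.driver as cuda

    def split_bvh_nodes(nodes, min_free_gpu_mem=300 * 1024**2):
        """Split `nodes` into a GPU part and a CPU part so that uploading
        the GPU part still leaves at least min_free_gpu_mem bytes free."""
        free_bytes, total_bytes = cuda.mem_get_info()
        budget = max(0, free_bytes - min_free_gpu_mem)
        node_size = nodes.nbytes // len(nodes)
        n_gpu = min(len(nodes), budget // node_size)
        # With nodes stored top-down, the first n_gpu entries are the most
        # frequently traversed part of the hierarchy and stay on the GPU.
        return nodes[:n_gpu], nodes[n_gpu:]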
2021-05-09  Add a method to weld together solids at shared triangles.  [Andy Mastbaum]
'Weld' a solid onto another one at identical shared triangles, and optionally apply a ``Surface`` or color to the shared surface. This is not a boolean solid operation: the triangles must be identical in the two meshes.
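A sketch of the triangle-matching step only (not the actual weld implementation; the helper names are hypothetical, and each mesh is assumed to expose ``vertices`` and ``triangles`` arrays)::

    import numpy as np

    def _tri_key(verts):
        # Order-independent key for one triangle's (3, 3) vertex coordinates.
        return tuple(np.ravel(sorted(map(tuple, np.round(verts, 9)))))

    def shared_triangles(mesh1, mesh2):
        """Indices of triangles in mesh1 whose vertices are identical
        (up to winding order) to some triangle in mesh2."""
        keys2 = {_tri_key(mesh2.vertices[t]) for t in mesh2.triangles}
        return [i for i, t in enumerate(mesh1.triangles)
                if _tri_key(mesh1.vertices[t]) in keys2]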
2021-05-09  Make bulk reemission isotropic.  [Andy Mastbaum]
2021-05-09  More unit test fixes.  [Andy Mastbaum]
Update remaining unit tests to build BVHs with ``loader.create_geometry_from_obj`` instead of the (removed) ``build`` method.
2021-05-09  Update unit test BVH building.  [Andy Mastbaum]
2021-05-09  Add simple bulk reemission.  [Andy Mastbaum]
The ``Material`` struct now includes two new arrays: ``reemission_prob`` and ``reemission_cdf``. The former is sampled only when a photon is absorbed, and should be normalized accordingly. The latter defines the distribution from which the reemitted photon wavelength is drawn. This process changes the photon wavelength in place, and is not capable of producing multiple secondaries. It also does not enforce energy conservation; the reemission spectrum is not itself wavelength-dependent.
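A numpy illustration of that sampling scheme (the real work happens in the CUDA propagation kernel; the function name and the ``wavelengths`` grid argument below are assumptions made for the sketch)::

    import numpy as np

    def maybe_reemit(wavelength, wavelengths, reemission_prob, reemission_cdf,
                     rng=None):
        """Called only when a photon is absorbed: return a new wavelength
        if the photon is reemitted, or None if it is absorbed for good."""
        rng = rng or np.random.default_rng()
        # Reemission probability at the absorbed photon's wavelength.
        if rng.random() >= np.interp(wavelength, wavelengths, reemission_prob):
            return None
        # Invert the reemission CDF to redraw the wavelength in place.
        return np.interp(rng.random(), reemission_cdf, wavelengths)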
2021-05-09  Simplify surface models.  [Andy Mastbaum]
Remove the ``SURFACE_SPECULAR`` and ``SURFACE_DIFFUSE`` models, since their functionality is available using the more general ``SURFACE_DEFAULT``. Also allow the user to specify the reflection type (specular/diffuse) for the complex and WLS models, and change WLS so the normalization of properties is more consistent with the default.
2021-05-09  Update docs per 2c200fc928a0.  [Andy Mastbaum]
2021-05-09  Fixes and tweaks for surface models.  [Andy Mastbaum]
All surface models, including ``SURFACE_COMPLEX`` and ``SURFACE_WLS``, are now working. Note that WLS won't work right in hybrid rendering mode, since that mode relies on matching up incoming and outgoing photon wavelengths in a lookup table.
2021-05-09  Update Python-side GPU structs to reflect CUDA changes.  [Andy Mastbaum]
This fixes hybrid rendering mode.
2021-05-09  Add surface model documentation.  [Andy Mastbaum]
2021-05-09  Generalize surface models and add a thin film model.  [Andy Mastbaum]
Reduce the models to the following::

    SURFACE_DEFAULT,   // specular + diffuse + absorption + detection
    SURFACE_SPECULAR,  // perfect specular reflector
    SURFACE_DIFFUSE,   // perfect diffuse reflector
    SURFACE_COMPLEX,   // use complex index of refraction
    SURFACE_WLS        // wavelength-shifting reemission

SURFACE_COMPLEX uses the complex index of refraction (``eta`` and ``k``) to compute reflection, absorption, and transmission. This model comes from the SNO+ RAT PMT optical model.
2021-05-09  Towards a more flexible surface model.  [Andy Mastbaum]
Surfaces now have an associated model which defines how photons are propagated. Currently these include specular, diffuse, mirror, photocathode (not implemented), and TPB. The default is the old behavior, where surfaces do some weighted combination of detection, absorption, and specular and diffuse reflection. ``struct Surface`` contains as members the superset of all model parameters; not all are used by all models. Documentation (forthcoming) will make clear what each model looks at.
2021-05-09  If (0,0,0) is passed in for the direction vector, constant_particle_gun will pick isotropically distributed directions.  [Stan Seibert]
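A small sketch of that fallback (not the constant_particle_gun code itself; the function name is hypothetical): draw directions uniformly on the unit sphere when the zero vector is given::

    import numpy as np

    def pick_direction(direction, rng=None):
        rng = rng or np.random.default_rng()
        direction = np.asarray(direction, dtype=float)
        if direction.any():
            return direction / np.linalg.norm(direction)
        # Isotropic: cos(theta) uniform in [-1, 1], phi uniform in [0, 2*pi).
        cos_theta = rng.uniform(-1.0, 1.0)
        sin_theta = np.sqrt(1.0 - cos_theta**2)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        return np.array([sin_theta * np.cos(phi),
                         sin_theta * np.sin(phi),
                         cos_theta])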
2021-05-09  Raise an exception if a zero 3-vector is passed to make_rotation_matrix().  [Stan Seibert]
2021-05-09  Fix starting point of SNO PMT profile to be on axis.  [Stan Seibert]
2021-05-09  Fixes to chroma-setup script.  [Stan Seibert]
2021-05-09  Add CUDA driver install directions and add matplotlib dependency.  [Stan Seibert]
2021-05-09  Fix imports for chroma-cam.  [Stan Seibert]
2021-05-09  Update installation instructions for CUDA 4.1 and GEANT4.9.5.  [Stan Seibert]
2021-05-09  Shell script from Andy Mastbaum that compiles all of Chroma's dependencies along with Chroma on an Ubuntu 11.04 system.  [Stan Seibert]
2021-05-09  Silence more GEANT4 output when starting GEANT4 generator processes.  [Stan Seibert]
2021-05-09  Minor patches to include directories for GEANT4.9.5.  [Stan Seibert]
2021-05-09  Add the include directory for the virtualenv.  [Stan Seibert]
2021-05-09  Remove unneeded Node.kind member from struct. Speeds up benchmark further, but no improvement to actual simulation.  [Stan Seibert]
2021-05-09  Add an argsort_direction() function to chroma.tools and use it to group photons so that they take similar paths on the GPU.  [Stan Seibert]
argsort_direction() morton-orders an array of normalized direction vectors according to their spherical coordinates. Photons sorted in this way tend to follow similar paths through a detector geometry, which enhances cache locality. As a result, get_node() uses the GPU L1 cache again, with good results.
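An illustrative version of that ordering (not necessarily the exact chroma.tools implementation): quantize the spherical coordinates of each unit vector, interleave their bits into a Morton code, and argsort on the code::

    import numpy as np

    def argsort_direction(dirs, bits=16):
        """Return indices that sort the (N, 3) unit vectors `dirs` along a
        Morton curve in (theta, phi), so similar directions end up adjacent."""
        theta = np.arccos(np.clip(dirs[:, 2], -1.0, 1.0))        # [0, pi]
        phi = np.arctan2(dirs[:, 1], dirs[:, 0]) % (2 * np.pi)   # [0, 2*pi)
        a = (theta / np.pi * (2**bits - 1)).astype(np.uint64)
        b = (phi / (2 * np.pi) * (2**bits - 1)).astype(np.uint64)
        code = np.zeros(len(dirs), dtype=np.uint64)
        for i in range(bits):
            # Interleave bit i of the two quantized coordinates.
            code |= ((a >> np.uint64(i)) & np.uint64(1)) << np.uint64(2 * i)
            code |= ((b >> np.uint64(i)) & np.uint64(1)) << np.uint64(2 * i + 1)
        return np.argsort(code)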
2021-05-09  For paranoia reasons, add some padding on the low corner of the BVH leaf nodes.  [Stan Seibert]
2021-05-09  Report the correct algorithm, and use a degree 3 tree.  [Stan Seibert]
2021-05-09  Improve startup time in simulation by letting GEANT4 processes initialize in the background.  [Stan Seibert]
2021-05-09  BVH optimization to sort child nodes by area. Only has a small effect.  [Stan Seibert]
2021-05-09  Collapse chains of BVH nodes with single children.  [Stan Seibert]
2021-05-09  Fix bug in grid BVH implementation. Now half as fast as old Chroma.  [Stan Seibert]
2021-05-09  Speed up recursive grid BVH generation. Now the LBNE BVH can be generated in about 60-90 seconds.  [Stan Seibert]
2021-05-09  New BVH algorithm: Recursive Grid.  [Stan Seibert]
This is an adaptation of the original Chroma BVH construction algorithm. The generation stage is very slow, but can be fixed.
2021-05-09  Import memoize decorator directly from pytools as it is not present in the old location in newer pycuda releases.  [Stan Seibert]
2021-05-09  Implementation of "node splitting", which places children into separate parent nodes if combining them would result in a parent node that is excessively large compared to the surface area of the children.  [Stan Seibert]
This doesn't help as much as you might imagine.
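A schematic version of the test (the function name, threshold name, and threshold value are assumptions, not the committed heuristic): keep children in separate parents when the merged bounding box would be much larger than the children themselves::

    import numpy as np

    def box_area(lower, upper):
        # Surface area of an axis-aligned bounding box.
        dx, dy, dz = np.asarray(upper) - np.asarray(lower)
        return 2.0 * (dx * dy + dy * dz + dz * dx)

    def should_split(child_boxes, max_ratio=2.0):
        """child_boxes is a list of (lower, upper) corner pairs."""
        lowers = np.array([lo for lo, up in child_boxes])
        uppers = np.array([up for lo, up in child_boxes])
        merged_area = box_area(lowers.min(axis=0), uppers.max(axis=0))
        children_area = sum(box_area(lo, up) for lo, up in child_boxes)
        return merged_area > max_ratio * children_area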
2021-05-09  Bugfixes to BVH traversal and generation code.  [Stan Seibert]
2021-05-09  Use degree 2 tree by default.  [Stan Seibert]
2021-05-09  Redo node format to include number of children, rather than just a leaf bit.  [Stan Seibert]
2021-05-09  Fix unit test.  [Stan Seibert]
2021-05-09  Add a chroma-bvh hist function that displays a ROOT histogram of the areas of the BVH nodes in a particular layer of the tree.  [Stan Seibert]
2021-05-09  Rename node_area() to node_areas() and make it return the array of node areas.  [Stan Seibert]
2021-05-09  Skip the L1 cache when loading nodes.  [Stan Seibert]
Node access is very irregular as each thread descends the BVH tree. Each node is only 16 bytes, so the 128-byte cache line size in the L1 cache means that a lot of useless data is often fetched. Using some embedded PTX, we can force the L1 cache to be skipped, going directly to L2. The L2 cache line is 32 bytes long, which means that both children in a binary tree will be cached at the same time. This improves the speed on the default generated binary trees, but does not help an optimized tree yet.
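A self-contained pycuda sketch of the technique (the kernel and host-side names here are hypothetical; only the embedded-PTX trick is from the commit). The ``.cg`` qualifier makes the load cache in L2 only, so a 16-byte node read does not pull a whole 128-byte L1 line::

    import numpy as np
    import pycuda.autoinit
    import pycuda.driver as cuda
    from pycuda.compiler import SourceModule

    mod = SourceModule(r"""
    __device__ uint4 load_node_cg(const uint4 *ptr)
    {
        // ld.global.cg is cached in L2 only, bypassing the L1 cache.
        uint4 v;
        asm("ld.global.cg.v4.u32 {%0, %1, %2, %3}, [%4];"
            : "=r"(v.x), "=r"(v.y), "=r"(v.z), "=r"(v.w)
            : "l"(ptr));
        return v;
    }

    __global__ void copy_nodes(const uint4 *nodes, uint4 *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = load_node_cg(nodes + i);
    }
    """)

    nodes = np.arange(1024 * 4, dtype=np.uint32).reshape(-1, 4)
    out = np.empty_like(nodes)
    mod.get_function("copy_nodes")(cuda.In(nodes), cuda.Out(out),
                                   np.int32(len(nodes)),
                                   block=(128, 1, 1),
                                   grid=(len(nodes) // 128, 1))
    assert (out == nodes).all()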