author | tlatorre <tlatorre@uchicago.edu> | 2018-08-14 09:53:09 -0500
---|---|---
committer | tlatorre <tlatorre@uchicago.edu> | 2018-08-14 09:53:09 -0500
commit | 0b7f199c0d93074484ea580504485a32dc29f5e2 (patch)
tree | e167b6d102b87b7a5eca4558e7f39265d5edc502 /db.h
parent | 636595905c9f63e6bfcb6d331312090ac2075377 (diff)
initial commit of likelihood fit for muons
This commit contains code to fit for the energy, position, and direction of
muons in the SNO detector. Currently, we read events from SNOMAN ZEBRA files,
fill an event struct containing the PMT hits, and fit it with the Nelder-Mead
simplex algorithm from GSL.
I've also added code to read ZEBRA title bank files, which is used to load the
DQXX files for a specific run. Any problems with channels in the DQCH and DQCR
banks are flagged in the event struct by masking in a bit in the flags
variable, and these PMT hits are not included in the likelihood calculation.
The likelihood for an event is calculated by integrating along the particle
track for each PMT and computing the expected number of PE. The charge
likelihood is then calculated by looping over all possible number of PE and
computing:
P(q|n)*P(n|mu)
where q is the calibrated QHS charge, n is the number of PE, and mu is the
expected number of photoelectrons. P(n|mu) is calculated assuming the number
of PE at a given PMT follows a Poisson distribution (which I think should be
correct given the track, but is probably not perfect for tracks which scatter
a lot).
The time part of the likelihood is calculated by integrating over the track
for each PMT and calculating the average time at which the PMT is hit. We
then assume the PDF for the photon arrival times is approximately a delta
function and use the first order statistic to compute the probability that
the first photon arrived at a given time. So far I've only tested this with
single tracks, but the method was designed to be easy to extend to fitting
multiple particles.
Diffstat (limited to 'db.h')
-rw-r--r-- | db.h | 40
1 file changed, 40 insertions, 0 deletions
@@ -0,0 +1,40 @@
+#ifndef DB_H
+#define DB_H
+
+/* This is a library for importing title banks from SNOMAN files. Each bank is
+ * stored in a dictionary with the bank name and id as the key. For example, to
+ * load the DQXX files:
+ *
+ *     dict *db = db_init();
+ *     dbval *dbval;
+ *
+ *     load_file(db, "DQXX_0000010000.dat");
+ *     dbval = get_bank(db, "DQCH", 1);
+ *     db_free(db);
+ *
+ * The return value of get_bank() is a pointer to the bank values. It's up to
+ * the caller to know the exact offsets for each value in the bank. Typically,
+ * SNO database title banks have a database header of 20 words and 10 unused
+ * words at the beginning of the bank.
+ *
+ * Note: Currently only 32 bit unsigned integers and 32 bit floating point
+ * numbers are supported. I don't think that any of the SNOMAN files have
+ * doubles. */
+
+#include <stdint.h> /* for uint32_t */
+#include "dict.h"
+
+typedef union dbval {
+    uint32_t u32;
+    float f;
+} dbval;
+
+extern char db_err[256];
+
+dict *db_init(void);
+void db_free(dict *db);
+int add_bank(dict *db, const char name[4], uint32_t id, dbval *data);
+dbval *get_bank(dict *db, const char name[4], uint32_t id);
+int load_file(dict *db, const char *filename);
+
+#endif