FAQ: Why has my N-tuple got too many entries?
FAQ: Why must I use PAW to look at SNOMAN Output?
FAQ: Can I change the order in which processors get called?
FAQ: With the Monte Carlo, can I generate a variable number of particles?
FAQ: Is SNOlib really a library?
FAQ: Which version of SNOMAN do I want?
FAQ: What does PCK: event too complex for packer mean?
FAQ: How can I turn off all absorption and scattering in SNOMAN?
FAQ: How are bug fixes propagated?
FAQ: How do I get the latest SNOMAN?
FAQ: How are ZEBRA bank numbers chosen, and what values are valid?
FAQ: Why do I get two time peaks when plotting MC data with centred source?
FAQ: What does "ZFATAM. !!!!! Going to ZFATAL for TZINIT fails." mean?
FAQ: Why doesn't the Monte Carlo data structure have electron and gamma tracks?
FAQ: How do I run the uncalibrator?
FAQ: Why is Database backward compatibility so important?
FAQ: What is the difference between the Px banks and the PMT bank?
FAQ: Why has my N-tuple got too many entries?
There are two possible explanations:-
- You have two NTPR banks with the same number (check ntuple.dat). It is
not sufficient to disable one version.
- Not all the banks you are using in the NTPR lie on a single path
  in the data structure.
Both these problems are discussed in the User Manual, under
SNOMAN - Operating Instructions / Defining Your Own N-tuples.
FAQ: Why must I use PAW to look at SNOMAN Output?
The short answer is that it is not compulsory to use PAW! Indeed, the primary
output from SNOMAN is the event data structure, not the n-tuple file. The
built-in n-tuple creator just represents one way to analyse this data
structure.
FAQ: Can I change the order in which processors get called?
Yes, see note 3 of the JOB bank.
FAQ: With the Monte Carlo, can I generate a variable number of particles?
Yes, see notes 4 and 5 of the MCPI bank.
FAQ: Is SNOlib really a library?
Is SNOlib really a library - when I link against it I get everything whether I
want it or not?
SNOlib is structured into about 40 SUs (Software Units) that form a simple
dependency tree with lower SUs providing services to upper ones. In order
that SNOlib remains flexible, so that it can grow and develop to meet the
demands placed on it throughout the SNO experiment, it is essential that these
SUs remain as loosely coupled as possible. In particular, the way SUs
initialise, and reinitialise after titles update, is hidden from other SUs.
This is done with a special control system that allows SUs to declare
dependencies on other SUs and titles banks at execution time, rather than
hardwiring them at compile time. Although this is far more flexible, the
control system has to be able to call every SU, as it does not know, until
execution time, what SUs will be active. It is this system that pulls in all
the code when linking. This normally means that the executable program has
dead code. For ways to remove this dead code, and a more detailed discussion
of SUs, see the chapter "Adding Code to SNOMAN" of the User Manual.
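As a toy illustration of why this happens (this is not the real SNOMAN control
system, just the general pattern, and every name below is invented): a
dispatcher that only decides at execution time which SU to initialise must
reference every SU's entry point, so the linker has to keep them all:-
*     Invented example: two dummy SU initialisation routines.
      SUBROUTINE SU_A_INI
      END
      SUBROUTINE SU_B_INI
      END
*     The dispatcher references both, so both are pulled into the link
*     even if only one SU is ever active in a given job.
      SUBROUTINE INI_SU(ISU)
      INTEGER ISU
      IF (ISU .EQ. 1) CALL SU_A_INI
      IF (ISU .EQ. 2) CALL SU_B_INI
      END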
FAQ: Which version of SNOMAN do I want?
At any time there are three versions of SNOMAN "released". The first is the
last Official Release. It will be a gzipped tar file on surf and will have a
label like 3_01.tar.gz. This version is very "standard".
Oxford "freezes" this release about a month after its debut and then
the tar file never changes, even if new bugs are found. If
standardization is key to you, get this version. Oxford will send out
an email when the .tar.gz file is changed and when it is frozen.
If you want the known bugs in this version fixed, also get the
"post-release" directory from surf. This directory contains fixes for all the
bugs that could be fixed in this version. "Could be fixed" is the rub.
Some bugs, but only a few, require significant change to some part of
the data structure or some other key part of Snoman. To get these bugs
fixed you will have to wait for the next version. This happens rarely.
The post-release directory contains .inc, .for, and .dat files. Put
those files into the code or prod directory (as appropriate) and
rebuild Snoman. Note that since the tar file never changes after
freezing, you can keep Snoman up to date without re-downloading the
.tar.gz file.
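As a hedged illustration (the paths and the split between the directories are
assumptions, not taken from the manual; check your own installation), applying
the fixes might look like:-
cp post-release/*.for post-release/*.inc /path/to/snoman/3_01/code/
cp post-release/*.dat /path/to/snoman/3_01/prod/
followed by a rebuild of SNOMAN in whatever way is usual for your site.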
Be warned though: this procedure builds a Post-Release version. It is not a
good idea to use it for long production runs as it does not represent an
official version of the code.
If you need the latest released version, you can get the Development Release.
Oxford puts this out so the entire group can write code for the next
release. The purpose of this release is mostly for development, as the
name suggests. If 3_01 is the current release and you want to write
some code for the next release, you should almost certainly use the
development release of 3_02. Oxford does not guarantee that a
development release will work on all platforms.
If you want something more recent than that, you could look to see if there
is an Update File in the standard tar directory.
FAQ: What does PCK: event too complex for packer mean?
By default, the ZDAB bank contains MC data, but it can only accept very modest
amounts of it. What you are doing generates too much MC source data for it. The
solution is to remove the MC data with:-
$zdab_option $zdab_no_mc
If you really need the MC data then you had better prune what you don't want and
output the rest, rather than try to squeeze it into the ZDAB.
FAQ: How can I turn off all absorption and scattering in SNOMAN?
First, you need to turn off Rayleigh and Fresnel scattering:-
$fresnel_scat $off
$rayleigh_scat $off
then you have to set the attenuation scale factor very small for any medium
that should not absorb photons, e.g.:-
$meda_light_water $attn_scl_fac 1.e-20
$meda_heavy_water $attn_scl_fac 1.e-20
FAQ: How are bug fixes propagated?
As explained in "FAQ: Which version of SNOMAN do I want?", bug fixes are
distributed to Official Releases of the code, at the cost of making the
patched version non-official! There is no system for propagating fixes to
Development Releases, although new development versions should come out
regularly. There are also Update Files that can be issued at any time - just
drop a mail to n.west1@physics.oxford.ac.uk.
FAQ: How do I get the latest SNOMAN?
If the latest Development Release is not recent enough, there are two ways to
get right up to date.
Applying an Update to the Last Development Release
- If you have not already done so, get the latest Development Release.
- Apply the latest Update File. If that does not look recent enough, mail
  n.west1@physics.oxford.ac.uk to ask for a new one.
Installing from Another Site
If you know the version you want is on another site, for example you want the
dev version on surf, then you can use the standard tools to install it on your
local site. All you have to do is:-
- Create an rhost*.scr file. There are a number of these already in the
  $SNO_TOOLS directory, e.g. rhost_surf_snoman.scr. Simply copy one of these
  and then edit the fields you need to change.
- Execute the rhost*.scr in the parent shell e.g.:-
source rhost_surf_snoman_dev_stable.scr
- Execute the upgrade_snoman tool i.e.:-
$SNO_TOOLS/upgrade_snoman.scr
When it asks, you must say that there is no tar file. This means, of
course, that the tool pulls a complete version, a file at a time, so it can
be rather slow.
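Putting the three steps together, a hypothetical session (using the example
file names above; your own rhost file and the fields you edit will differ)
might be:-
cp $SNO_TOOLS/rhost_surf_snoman.scr rhost_surf_snoman_dev_stable.scr
source rhost_surf_snoman_dev_stable.scr
$SNO_TOOLS/upgrade_snoman.scr
with the copied file edited before sourcing, and answering that there is no
tar file when the tool asks.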
FAQ: How are ZEBRA bank numbers chosen, and what values are valid?
ZEBRA bank numbers are stored as integers in location -5 relative to the bank's
status word i.e.:-
IDN = iq(lbank -5)
In general, ZEBRA attaches no special significance to the bank number, and it is
perfectly O.K. to change it at any time. ZEBRA has to assign a number when the
bank is created and does so as follows:-
- Creating a bank (MZBOOK,MZLIFT):-
ZEBRA applies rules in the following order:-
- If bank is inserted into a chain, use IDN+1 of the following bank,
  or if that is missing use IDN+1 of the previous bank.
- If bank is connected as vertical link -n, set IDN = +n
- IDN = 1
- Input via Titles bank (TZINIT):-
Use the bank number on the *DO if supplied, otherwise apply
bank creation rules.
Bank numbers must be positive (a negative bank number gets treated as an option
on a *DO line!). Using bank number 0 is O.K. except that it is treated as a
wildcard search in TZFIND. However, in SNO, we don't use this routine (MTT does
bank location) and so where appropriate, e.g. where bank number = crate number,
we do use 0.
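As a minimal sketch in Fortran (LBANK is an illustrative name, assumed to hold
the address of an existing bank in the main store, with IQ available via the
usual ZEBRA store include):-
*     Read the current bank number, then renumber the bank.
      INTEGER LBANK, IDN
      IDN = IQ(LBANK-5)
      IQ(LBANK-5) = 7
*     Remember that the new number must be positive.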
FAQ: Why do I get two time peaks when plotting MC data with centred source?
I generated events at the center of the detector and looked at the PMT time
distribution. When I plot times from the PMT bank, there are two peaks, one of
which corresponds to the direct hit, but there is another small peak shifted by
20ns. Why?
Although there is a fixed relationship between the event generation time
(word MC_JDY in the MC bank) and the time in the MCPM bank, there is NO fixed
relationship to the time held in the PMT bank. Those times are relative to
trigger times. The trigger depends on exactly when the summed signal rises
above threshold, which is dependent on the statistical processes of
photoelectron emission and collection and PMT noise. The trigger is latched
to a 20nsec clock and so can have a 20nsec jitter. Consequently raw PMT times
will display this jitter. The solution is to use the TIME_DIFFERENCE (Time
difference) DQF and calculate the time difference between the time in the PMT
bank and the MC bank.
FAQ: What does "ZFATAM. !!!!! Going to ZFATAL for TZINIT fails." mean?
It means that TZINIT, which reads your titles files, doesn't like something.
Take a look at the log file which has more useful information than what gets
sent to the terminal.
If you see something like this:-
45 46 48 49 51 52 72 0 0 0 0 0
!!fSUP-OVL ^-> !!! too much data
then your bank is longer than ZEBRA was expecting. ZEBRA lifts a bank to
receive the data before it starts to read it. By default it only allows
banks of up to 2000 words. Anything larger must include a -n option on the *DO
line, for example:-
*DO GEDP 462 -i(31I -I) -n48010
means that this bank could be up to 48010 words long. There is no need to be
precise so long as you don't underestimate; ZEBRA will trim away the excess
afterwards.
If you see something like:-
20. 77 5 #. Partial fill D2O level.
!!f ^-> !!! invalid
then the data is bad somehow. In this particular case it's because there is a
tab, which ZEBRA doesn't treat as white space! Also watch out: it only allows
lines up to 80 characters long.
FAQ: Why doesn't the Monte Carlo data structure have electron and gamma tracks?
Electrons and gammas are tracked by EGS4 code, not SNOMAN, and don't normally
leave a record in the data structure. However, you can make them do so by using
the command:-
$egs4_ds $on
Take a look in the User Manual at the
EGS4 data structure.
FAQ: How do I run the uncalibrator?
This is a brief tutorial in the use of the uncalibrator.
When running and analysing MC data there are three different approaches which
can be taken with respect to the uncalibration/calibration of MC times and
charges:-
- The MC times and charges can be used directly, without using the
  uncalibrator or calibrator.
- The MC times and charges can be uncalibrated, calibrated and then analysed,
  all in the same pass.
- The MC times and charges can be uncalibrated and written out to a zdab
  file. This zdab file can then be read back in, calibrated and analysed.
There are four example command files which demonstrate these approaches:
test_ucl_mconly.cmd, test_ucl_cal.cmd, test_ucl_zdab.cmd and test_zdab_cal.cmd.
The task which these command files address is the generation and time fitting of 100
mono-energetic electrons.
test_ucl_mconly.cmd just uses the MC times and charges directly. This is the simplest
and most robust approach. The processor list is:-
$processor_list ' MCO FTT ANL END'
The results of the other command files can be compared to this.
test_ucl_cal.cmd uncalibrates and then calibrates the MC times and charges in a single pass.
The processor list is:-
$processor_list ' MCO UCL PCK UPK CAL FTT ANL END'
The PCK UPK stage could be left out. The idea of this is to rough up the times and charges
produced by the MC, adding in the effects of digitisation and the fact that not all channels
have a calibration.
test_ucl_zdab.cmd uncalibrates the MC times and charges and then generates a zdab file.
test_zdab_cal.cmd reads in and calibrates this zdab file. The processor lists are:-
$processor_list ' MCO UCL PCK OUT END'
and
$processor_list ' INP UPK CAL FTT ANL END'
In principle the zdabs produced should be indistinguishable from real data!
This approach has the advantage that the time-consuming MC can be run once and
a zdab produced. This zdab can then be analysed many times in many different
ways.
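For reference, the essential settings of the two passes are sketched below
(this is only a fragment; the example command files test_ucl_zdab.cmd and
test_zdab_cal.cmd contain the full set of commands, including the input and
output file definitions, and the date shown is just the example used below,
chosen so that ECA and PCA constants exist). First pass, generate,
uncalibrate, pack and write the zdab:-
$processor_list ' MCO UCL PCK OUT END'
$initial_date 20000101 12000000
Second pass, read the zdab back, unpack, calibrate, fit and analyse:-
$processor_list ' INP UPK CAL FTT ANL END'
$initial_date 20000101 12000000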
These command files should be tested and understood before trying to use the uncalibrator
for any analysis. The limitations of the uncalibrator are discussed below.
First, here are some general comments on other command file settings:-
- $initial_date
This should be set to a date when there are both valid ECA and PCA constants,
otherwise the calibrator and uncalibrator will not work. For example:-
$initial_date 20000101 12000000
- $use_ref_cal_consts
In principle this command could be used to do a UCL and CAL for any date. I suspect
that in practice there are not reference versions of all the banks needed for the MC.
- $old_eca and titles cal_const1_mc.dat
No need! You are free to use ancient versions of the ECA if you like, but more
up-to-date versions should work just as well, if not better. If you do use old
ECA constants then make sure you use old PCA ones as well:-
set bank tcal 1 word 10 to 1
set bank tcal 1 word 11 to 1
- $calibration_mode ?? a.k.a. set bank TCAL 1 word 4 to ??
This only affects the calibration of the times and charges and not
their uncalibration. The reason is simple. The MC always produces times in ns
and charges in pe; it is not capable of anything else. The output of the
uncalibrator
is always ADC counts because the output is used to fill zdabs. The uncalibrator
therefore always uncalibrates in mode 44 regardless of the setting of the
calibration mode. The calibrator then calibrates the raw ADC counts produced by the
uncalibrator to the extent specified by the calibration mode (??).
- set bank TCAL 1 word 2 to ?
Word 2 of the TCAL bank has nothing to do with uncalibration. This word specifies
whether the times used by the trigger simulation (t_wobble_walk) are walked or not.
The time in the PMT bank is filled with t_wobble_only, correctly leaving the addition
of the walk to the uncalibrator.
Sadly the uncalibrator is not trouble-free. There are two main problems:-
The most serious problem with the uncalibrator is that the MC doesn't produce
a time spectrum that looks like the calibrated time spectrum of real data; it
therefore has trouble doing the calibration backwards. The real spanner in the
works is provided by the ECA.
The ECA time calibration maps between ADC counts, in the range 0-4095, and
time in ns,
in the range x-y. When uncalibrating, if the time (after the PCA uncalibration) is not in
the range x-y it cannot be mapped to an ADC value. In this way the MC times get truncated
at one end.
Another problem is the lack of a place in the zdab structure to indicate that the uncalibrator
could not uncalibrate a charge or time due to the lack of calibration constants or that it was
outside the range x-y. Currently, if the uncalibrator fails due to a lack of
calibration constants then it produces -9999 for the charge or time; this is
then converted to 0 because all the output must be in the range 0-4095 if it
is to be stored in the zdab. To first order this is not a problem: events
happen at particular dates and times, so if there are no constants to
uncalibrate with then there will be no constants to calibrate with, and the
attempt to calibrate 0 ADC counts will mostly result in -9999. There are some
catches though. Firstly, if the user looks at the raw ADC count spectrum it
contains a surprising number of zeros. Secondly, if the uncalibrator failed
due to the lack of PCA constants and the user only requests an ECA
calibration, then they will get the ECA calibration for zero ADC counts rather
than -9999.
A third, but rather unlikely, problem could occur. The ECA time calibration can produce
calibrated times for some ADC ranges but not others. If 0 ADC counts does have a calibrated
time but the uncalibration of the MC time failed then this will result in a spurious time for
that PMT.
Fixes to these problems are, no doubt, possible and volunteers are welcome.
FAQ: Why is Database backward compatibility so important?
BACKWARD COMPATIBILITY OF DATABASE BANKS IS CRUCIAL !!!!
Never think: "Oh well, we can get it right next time". The database
lives a life that is independent of the software development cycle.
At any time there are likely to be several versions of the software
connected to the same database. The natural consequence of this is:-
BACKWARD INCOMPATIBLE CHANGES BREAK OLD SOFTWARE !!!!
A partial solution, if faced with an unavoidable backward incompatible
change, is to change the bank name or number to something that the old
software doesn't use. So old software doesn't break, but it's ugly for
two reasons:-
- The same information is now scattered over multiple banks.
- If the new code is to be able to process old data, either it has to
be able to load multiple banks or the old database data has to be
reformatted into new banks.
However, with just a little care and forethought it's nearly always
possible to avoid these problems. Here are two simple rules to
follow if there is the slightest possibility that you may want to
change a bank (a sketch of such a layout is given after the rules):-
- Stick to a simple format of two parts:-
  - A fixed-length header.
    It should have some spare slots for future use.
    If a list follows, record:-
    - The start address of the list
    - The length of a row
    - The number of rows
  - A list consisting of a fixed-length row repeated zero or more times.
    The row should have some spare slots for future use.
- Make changes only by adding new data, never by redefining existing
entries.
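As a concrete illustration (the word numbers and contents are purely
hypothetical, not taken from any real SNOMAN bank), a bank following these
rules might be laid out as:-
Word  1      header: format/version number
Word  2      header: start address of the list (here 8)
Word  3      header: length of a row (here 3)
Word  4      header: number of rows
Words 5-7    header: spare slots for future use
Words 8-10   row 1 (3 words, one of them spare)
Words 11-13  row 2, and so on for further rows
New quantities are then added either in the spare header slots or by growing
the row length, never by redefining existing entries.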
FAQ: What is the difference between the Px banks and the PMT bank?
Ah! You have wandered into a very dark, cobweb-shrouded area of SNOMAN.
Back at the dawn of time (even before I [Nick] was on SNO), someone
thought it would make a lot of sense to have a bank that contained all
the PMT numbers and another for all the charges, and another for all
the times and so on. It's not a logical way to do things; all the
information about a single PMT should be collected together in one
place (a bank) not scattered about in many different banks all of
which have to be synchronised so that entry "i" in each applies to the
same PMT. The two models came to be known as the "long skinny"
version (PF, PT, PHL ..) and the "short fat" version (PMT). The long
skinny version was already being used in some early fitters so there
are a pair of routines:-
update_px_from_pmt: Create/Update Px banks from PMT bank chain
update_pmt_pf: Update PF word of PMT bank chain.
These convert between the two forms (when fitting using the "long skinny"
form, only the PF word gets updated, which is why only that needs to be
updated in the PMT).
One of my good intentions that never materialised was to rewrite all the code
that used the "long skinny" form to use the "short fat" one and to erase all
evidence of the former.
Basically, it's best to pretend the "long skinny" form doesn't
exist.