

Local Cache and Database Update

The long database update latency is a problem for DMM when it creates DAMN banks. When a job ends, the process must terminate and output any updated DAMN bank. However, the next job may well process the same run (as runs can cross tape boundaries) and so will need to resume processing with the DAMN bank produced by the previous job. That bank may, by then, be on its way to the database, but it may not yet be available from that source.

The scheme to deal with this is as follows:-

  1. Once DMM has finished with a bank, it is written to the DMM titles cache, a directory set aside to receive completed DAMN banks. File names follow the convention below for mask i, run r, sub-run s (see the sketch after this list):-

    DAMN (single sub-run)
    damn_i_r_s.dat e.g. damn_0_0000123456_001.dat

    DAMN (multiple sub-run)
    damn_i_r.dat e.g. damn_0_123456.dat

    DARN
    darn_i.dat e.g. darn_99.dat

    Note that the original format for DAMN bank file names, used before sub-run DAMN files were introduced, has a run number without leading zero padding.

  2. At intervals a database request is generated to import all banks from this cache that predate a fixed time. Once the banks are in the database and available to SNOMAN, all banks in the cache that still predate that fixed time are deleted. (Both this step and the look-up in step 3 are sketched at the end of this section.)

  3. On start-up, and as it crosses run/sub-run boundaries, DMM looks for fresh copies of DAMN banks. It looks first in its cache and only if it cannot find them there does it look in the database.
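
The following Python sketch illustrates the naming convention of step 1. It is illustrative only: the helper names are not part of DMM or SNOMAN, and the zero padding widths are read off the examples above.

    def damn_file_name(mask, run, sub_run=None):
        """Cache file name for a DAMN bank covering one sub-run, or a
        whole run when sub_run is omitted (the original, unpadded format)."""
        if sub_run is not None:
            return "damn_%d_%010d_%03d.dat" % (mask, run, sub_run)
        return "damn_%d_%d.dat" % (mask, run)

    def darn_file_name(index):
        """Cache file name for a DARN bank; treating the index as the
        mask number is an assumption of this sketch."""
        return "darn_%d.dat" % index

    print(damn_file_name(0, 123456, 1))   # damn_0_0000123456_001.dat
    print(damn_file_name(0, 123456))      # damn_0_123456.dat
    print(darn_file_name(99))             # darn_99.dat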

For consistency DMM also uses the same system for DARN banks.
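
A second sketch, again purely illustrative, shows the cache flush of step 2 and the look-up order of step 3. The import_bank_to_database and load_bank_from_database callables are hypothetical stand-ins for whatever database interface is actually used, and in the real scheme the cached files are only deleted once the imported banks are confirmed to be available to SNOMAN from the database.

    import os

    def flush_cache(cache_dir, cutoff, import_bank_to_database):
        """Import every cached bank that predates the fixed cutoff time,
        then delete the cached copies that still predate that cutoff."""
        for name in os.listdir(cache_dir):
            path = os.path.join(cache_dir, name)
            if os.path.getmtime(path) < cutoff:
                import_bank_to_database(path)   # hypothetical database call
        # In the real scheme this second pass runs only once the banks are
        # known to be in the database and available to SNOMAN.
        for name in os.listdir(cache_dir):
            path = os.path.join(cache_dir, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)

    def load_bank(cache_dir, file_name, load_bank_from_database):
        """Look in the DMM titles cache first; fall back to the database
        only if no cached copy of the bank exists."""
        cached = os.path.join(cache_dir, file_name)
        if os.path.exists(cached):
            with open(cached, "rb") as f:
                return f.read()                 # freshest copy is in the cache
        return load_bank_from_database(file_name)   # hypothetical database call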

