Written by Vuk Mandic, April 2005

This file describes the scripts and routines used to calculate the coherence and transfer functions. Three steps are identified: finding the data, setting up the code, and running the code.

Step 1. Finding the Data

a) The first step is to use segwizard to create a file containing all segments of interest (e.g. all H1L1 coincident segments of S4). The output of segwizard will be called "the jobfile". One can later specify a time interval to analyze only a subset of this file. Delete the first few lines of this file (those starting with #) and place the file in the jobfiles subdirectory (which already contains several jobfiles for different detector combinations, for all of S4, and for v04 data quality flags).

b) Use the tcl routine LSCdataFindFiles.tclsh to produce the LSCdataFindFilesH(L).jobnumber.txt files (see the cachefiles subdirectory for examples). Each of these files contains the urls of the framefiles for the given jobnumber. The tcl script essentially just calls LSCdataFind for each line (jobnumber) in the jobfile. You may have to change the frametype (hard-coded in this script), depending on which level of the reduced data you are interested in.

c) Finally, run orderFrameFilesLoop.m from the matlab command line to create the gpsTimesH(L).jobnumber.txt and frameFilesH(L).jobnumber.txt files, which are used when loading the time-series data. These routines are specific to the Caltech cluster.

The procedure described here is somewhat tedious and could be improved. However, it needs to be performed only rarely (when the data quality flags are updated). The idea is to keep these cachefiles around and use only some of them, as needed.

Step 2. Setting up the Code

a) The main script is the sequencer tcl script tfcohseq. This script goes through several steps:

1. Determine the temporary jobfile, based on the jobfile created in Step 1 and on the start and end times given in the param file (see below). This is done by the compiled matlab routine make_jobfile.m.
2. Submit the condor jobs - each job calls the compiled matlab routine tfcoh.m to analyze one of the jobs listed in the temporary jobfile.
3. Periodically check condor to see if the jobs are done - the compiled matlab routine get_condorq.m is used for this.
4. When the jobs are done, call another compiled matlab routine (combine_tfcoh.m) to loop over the results and combine them.

For item 3 above, the username vmandic is hardcoded in tfcohseq - change it to your own username on the cluster.

All matlab routines are in the tfcoh subdirectory. Four of them should be compiled:

mcc -m make_jobfile.m  (makes the temporary jobfile)
mcc -m tfcoh.m         (main matlab routine calculating PSDs and CSDs)
mcc -m combine_tfcoh.m (final routine to combine the results)
mcc -m get_condorq.m   (auxiliary routine to check whether condor has finished)

Also, two mexglx files are needed, dataread.mexglx and mlframequery.mexglx; both are provided in the tfcoh directory. In addition, several matlab routines from the stochastic analysis are used. All of these are in matapps, so a copy of the matapps repository should be in the path when compiling this code. You can modify the existing startup.m file for this.

Step 3. Running the Code

If you have completed Steps 1 and 2, running the code should be simple. In the paramfiles subdirectory, create a parameter file (an example is provided) in which you specify the channels you are interested in, the jobfile created in Step 1, the time interval of interest, etc.
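For concreteness, a parameter file might look something like the sketch below. The field names and values here are illustrative guesses, not the actual keywords - consult the example file in the paramfiles subdirectory for the exact format:

    channel1      H1:LSC-DARM_ERR         (first channel to correlate - illustrative)
    channel2      L1:LSC-DARM_ERR         (second channel - illustrative)
    jobfile       jobfiles/S4H1L1v04.txt  (jobfile created in Step 1)
    startGPS      793130413               (start of the time interval of interest)
    endGPS        795679213               (end of the time interval of interest)
    outputPrefix  results/H1L1_tfcoh      (prefix for the output .mat files)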
Once you have this file, run tfcohseq with a single argument - the name of the parameter file you just created. The results will be saved in .mat format, with the output prefix you specified in the parameter file.

Content:

Files:
condorq.out     - automatically updated by the main script while waiting for all condor jobs to complete
LSCdataFind.tclsh, orderFrameFilesLoop.m, and orderFrameFiles.m - used for creating the necessary cachefiles
startup.m       - startup file for matlab; should be modified
tfcoh_condor.sub - condor submission file, modified automatically for each calculation
tfcohseq        - main (tcl) script
tfcoh.sh        - script called in the condor submission file

Subdirectories:
cachefiles - should contain all cachefiles for each jobfile (only examples provided)
jobfiles   - contains jobfiles (produced by segwizard)
paramfiles - contains parameter files, specific to different calculations
results    - where the results can be stored
tfcoh      - contains the matlab routines to be compiled (see above), along with the necessary mexglx routines
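To summarize the workflow, the shell session below sketches Steps 1-3 end to end. Only the mcc commands and the single-argument tfcohseq call are documented above; the tcl/matlab invocations and all file names are illustrative, so adapt them to your own setup:

    # Step 1: strip the segwizard comment lines and stage the jobfile
    grep -v '^#' segwizard_output.txt > jobfiles/S4H1L1v04.txt
    # generate the framefile url lists (frametype is hard-coded in the script;
    # the invocation shown here is illustrative)
    ./LSCdataFindFiles.tclsh
    # build the gpsTimes*/frameFiles* cachefiles from matlab
    matlab -nodisplay -r "orderFrameFilesLoop; exit"

    # Step 2: compile the four matlab routines (matapps must be in the path; see startup.m)
    cd tfcoh
    mcc -m make_jobfile.m
    mcc -m tfcoh.m
    mcc -m combine_tfcoh.m
    mcc -m get_condorq.m
    cd ..

    # Step 3: run the sequencer on your parameter file
    ./tfcohseq paramfiles/myparams.txt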