PyCBC Live O3 Review

Review items for PyCBC Live analysis in O3

The major difference from the O2 analysis is that triggers are now generated from all detectors. This works by first generating double coincidences from all possible detector pairs, then picking the most significant double, and finally using the SNR time series in the remaining detectors to derive a p-value, which is then used to modify the FAR of the original double coincidence.
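As a rough illustration of this FAR adjustment, here is a schematic sketch; it is not the actual PyCBC Live implementation, and the function names and the simple multiplicative rescaling are assumptions. The idea is that the loudest peak of a follow-up detector's SNR time series is compared against an off-source distribution of noise peaks to obtain a p-value, and a small p-value makes the original double coincidence more significant:

```python
import numpy as np

def followup_pvalue(snr_series, noise_peaks):
    """p-value of the loudest on-source SNR peak, measured against an
    empirical distribution of off-source noise peaks from the same
    detector (hypothetical inputs)."""
    peak = np.max(np.abs(snr_series))
    noise_peaks = np.asarray(noise_peaks)
    # fraction of noise peaks at least as loud as the observed peak
    return (1 + np.count_nonzero(noise_peaks >= peak)) / (1 + len(noise_peaks))

def adjusted_far(double_far, pvalue):
    """Toy rescaling: a small p-value (the follow-up detector looks
    signal-like) downweights the FAR of the original double coincidence."""
    return double_far * pvalue

# toy usage with made-up numbers
snr_series = np.abs(np.random.normal(size=4096))
snr_series[2048] = 7.5                            # pretend peak at the expected time
noise_peaks = np.abs(np.random.normal(size=1000)) + 4.0
print(adjusted_far(1e-7, followup_pvalue(snr_series, noise_peaks)))  # FAR in Hz
```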

GitHub milestone for ER14

Show that the analysis is stable on online data and has the accepted latency

Procedure: let the pipeline run on O2 replay data and monitor the duty factor, lag and memory usage, making sure all combinations of detectors are explored.
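For the lag part of the monitoring, one minimal check (the file name below and the way the segment end time is obtained are assumptions, not the actual PyCBC Live output conventions) is to compare the GPS end time of each analyzed segment with the wall-clock time at which the corresponding output file appeared on disk:

```python
import os
from astropy.time import Time

def output_lag(path, segment_end_gps):
    """Seconds between the end of the analyzed segment and the moment
    the output file was written to disk (hypothetical inputs)."""
    written_gps = Time(os.path.getmtime(path), format='unix').gps
    return written_gps - segment_end_gps

# hypothetical usage:
# lag = output_lag('H1L1V1-Live-1234567890-8.hdf', 1234567898)
```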

Contact: Tito

Show that it produces false alarms at the expected rate

Procedure: plot the rate of triggers vs their FAR after running on real data (possibly from the first item).
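For a self-consistent FAR, the number of noise triggers with FAR below a threshold x, accumulated over a live time T, should be Poisson-distributed with mean x·T. A minimal sketch of the comparison plot, assuming the trigger FARs (in Hz) and the live time (in s) have already been collected from the run:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_far_consistency(far_values, live_time):
    """Compare the cumulative number of triggers below each FAR threshold
    with the expectation FAR * live_time."""
    far_values = np.sort(np.asarray(far_values))
    cumulative = np.arange(1, len(far_values) + 1)
    plt.loglog(far_values, cumulative, drawstyle='steps-post', label='observed')
    plt.loglog(far_values, far_values * live_time, '--', label='expected (FAR * T)')
    plt.xlabel('FAR threshold [Hz]')
    plt.ylabel('Cumulative number of triggers')
    plt.legend()
    plt.savefig('far_consistency.png')
```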

Contact:

Show that triggers can be uploaded to GraceDB and that BAYESTAR can create skymaps

Procedure: using either offline data or commissioning/ER data, wait for triggers to be uploaded (with all possible ifo combinations) and run BAYESTAR on them.
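A hedged sketch of one way to pull the uploaded triggers back out of GraceDB for the BAYESTAR step, using the ligo-gracedb REST client; the playground server URL and the query string are assumptions, and the skymap itself would then be produced with a tool such as bayestar-localize-coincs from ligo.skymap:

```python
from ligo.gracedb.rest import GraceDb

# assumed test instance; the production URL would differ
client = GraceDb('https://gracedb-playground.ligo.org/api/')

# query string is an assumption about how the Live uploads are labeled
for event in client.events('pipeline: pycbc'):
    gid = event['graceid']
    # download the coinc file that BAYESTAR takes as input
    with open(f'{gid}-coinc.xml', 'wb') as outfile:
        outfile.write(client.files(gid, 'coinc.xml').read())
```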

Contact:

Show that the sensitivity computed with injections is comparable to some reference

The reference could be the O2 PyCBC Live configuration or the O3 offline search.

Procedure: TBD; possibly reuse the data/scripts from the O2 review.
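Whatever procedure is chosen, a typical figure of merit is a Monte Carlo estimate of the sensitive volume from the found/missed injections, which can then be compared between the new configuration and the reference. A sketch, assuming (this is an assumption about the injection set) that the injections were distributed uniformly in distance and that arrays of injected distances and found/missed flags are available:

```python
import numpy as np

def sensitive_volume(distances, found, max_distance):
    """Monte Carlo estimate of the sensitive volume.  Each injection is
    weighted by distance^2 to undo a uniform-in-distance injection
    distribution, giving a volume-weighted detection efficiency."""
    distances = np.asarray(distances, dtype=float)
    found = np.asarray(found, dtype=bool)
    weights = distances ** 2
    efficiency = np.sum(weights[found]) / np.sum(weights)
    v_total = 4.0 / 3.0 * np.pi * max_distance ** 3
    return v_total * efficiency
```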

Contact: Bhooshan

Show that the sensitivity increases with 3 detectors compared to 2

Optional?

Procedure: repeat the analysis done in step 4 with a different combination of detectors.

Contact:

Agree on configuration choices

  • Pregated/ungated strain
  • State/DQ channels and flags
  • SNR threshold, NewSNR threshold, GraceDB upload threshold
  • Choice of ranking statistic

Show the effectualness of the template bank with the most up-to-date PSDs

Procedure: banksim
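The full study would presumably use pycbc_banksim, but the underlying quantity is the fitting factor: the best match between a target signal and any template in the bank, computed with the relevant PSD. A minimal illustration using PyCBC's match function; the approximant, waveform parameters, the three stand-in "bank" templates and the analytic PSD are purely illustrative, and the real study would use the measured O3 PSDs:

```python
from pycbc.waveform import get_fd_waveform
from pycbc.psd import aLIGOZeroDetHighPower
from pycbc.filter import match

f_low, df, flen = 20.0, 0.25, 8193
psd = aLIGOZeroDetHighPower(flen, df, f_low)

# a "signal" outside the bank (illustrative parameters)
signal, _ = get_fd_waveform(approximant='IMRPhenomD', mass1=30, mass2=25,
                            delta_f=df, f_lower=f_low)
signal.resize(flen)

# a handful of templates standing in for the bank (illustrative)
best = 0.0
for m1, m2 in [(29, 24), (30, 26), (32, 23)]:
    tmplt, _ = get_fd_waveform(approximant='IMRPhenomD', mass1=m1, mass2=m2,
                               delta_f=df, f_lower=f_low)
    tmplt.resize(flen)
    m, _ = match(signal, tmplt, psd=psd, low_frequency_cutoff=f_low)
    best = max(best, m)

print('fitting factor:', best)
```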

Contact: