This repository is a collection of data reduction tools for JWST data of PDRs. Most of these were developed in the context of the "PDRs4All" Early Release Science program (ERS-1288), which observed the Orion Bar with imaging and spectroscopy mosaics. A GTO program (GTO-1192) was also recently completed, applying a similar strategy to the PDRs in the Horsehead and NGC 7023. Additions and improvements to the tools made in the context of these other PDRs will also be included here.
I also aim to include tools that automate the creation of derived data products (resolution-matched data cubes, aperture extraction of spectra, merging of spectral orders).
This toolset is made public, as it can serve as a good starting point for the reduction of other, similar observing programs.
The PDR programs (ERS-1288 and GTO-1192) all consist of NIRCam imaging, MIRI imaging, NIRSpec IFU spectroscopy, and MIRI IFU spectroscopy. The steps for the reduction of each of these are briefly explained below, and a few shell scripts that implement these workflows are provided.
Before running these tools on your data, the `_uncal` files have to be sorted according to

- object
- instrument
- exposure type: science, background, and (for NIRSpec only) science imprint and background imprint

A sketch of how this sorting could be automated is given below. The provided shell scripts can then be copied, and the paths set in them modified to point to the directories containing the `_uncal` files. By default, the provided scripts assume that they are placed at the same level as the `science`, `background`, etc. directories:
```
- object 1
  - nirspec
      nirspec_script.bash
      science
      science_imprint
      background
      background_imprint
  - mirifu
      mirifu_script.bash
      science
      background
- object 2
  - nirspec
  ...
```
Side note on the historical reason for this: To work around some issues with the default association files, we coded a simplified association generator. The files need to be sorted for this generator to work, as it uses glob within a directory.
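As an illustration, the sorting could be automated with a short Python script along the following lines. This is a minimal sketch, not part of the repository: it assumes the standard JWST header keywords `TARGPROP`, `INSTRUME`, `BKGDTARG`, and `IS_IMPRT`, and it maps each instrument to a single directory name (the actual layout above distinguishes, e.g., `mirifu` from MIRI imaging).

```python
# Hypothetical helper to sort _uncal files; adapt the directory mapping to your layout.
from pathlib import Path
from astropy.io import fits

for f in Path(".").glob("*_uncal.fits"):
    hdr = fits.getheader(f)
    exp = "background" if hdr.get("BKGDTARG", False) else "science"
    if hdr.get("IS_IMPRT", False):  # NIRSpec imprint exposures
        exp += "_imprint"
    dest = Path(hdr["TARGPROP"].lower()) / hdr["INSTRUME"].lower() / exp
    dest.mkdir(parents=True, exist_ok=True)
    f.rename(dest / f.name)
```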
For the NIRSpec IFU data, the workflow is as follows:

- Stage 1 pipeline for science, science_imprint, background, and background_imprint
- 1/f noise reduction with NSClean (Rauscher 2023, arXiv:2306.03250), using custom masks
- Stage 2 pipeline for science and background, with the imprints applied
- Modify the DQ array of the `_cal` files, to make sure certain bad pixels are set to `DO_NOT_USE` (TODO: decide which flags); a sketch of this step is given after this list
- Stage 3 pipeline with optional master background subtraction. The cube mosaic is built here. The outlier detection parameters should be tweaked to the values recommended for NIRSpec.
- TODO: alternate Stage 3, with a WCS that better matches the mosaic footprint
- TODO: provide region files for aperture extraction, and provide a stitched extracted spectrum in an extra script
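To illustrate the DQ modification step, here is a minimal sketch using the `jwst` datamodels API. The file names are hypothetical, and the flag to reject is a placeholder, since which flags to use is still a TODO:

```python
# Hypothetical DQ tweak for a single _cal file; the choice of flags is illustrative only.
from jwst import datamodels
from jwst.datamodels import dqflags

with datamodels.open("jw01288_example_cal.fits") as model:  # hypothetical file name
    # Example: mark all HOT pixels as DO_NOT_USE so later steps ignore them.
    hot = (model.dq & dqflags.pixel["HOT"]) > 0
    model.dq[hot] |= dqflags.pixel["DO_NOT_USE"]
    model.save("jw01288_example_cal_dqmod.fits")  # save a modified copy
```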
For the MIRI IFU data, the workflow is:

- Stage 1 pipeline for science and background
- Stage 2 pipeline with optional image-to-image background subtraction
- Stage 3 pipeline with master background subtraction, if the Stage 2 background subtraction was not performed; a sketch of this call is given after this list
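As an illustration of the Stage 3 call with master background subtraction, here is a minimal sketch using the `jwst` pipeline API; the association file name and output directory are placeholders:

```python
# Hypothetical Stage 3 invocation; adjust the association file and output directory.
from jwst.pipeline import Spec3Pipeline

Spec3Pipeline.call(
    "spec3_asn.json",                               # placeholder association file
    steps={"master_background": {"skip": False}},   # enable master background subtraction
    save_results=True,
    output_dir="stage3",
)
```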
Support for the NIRCam and MIRI imaging can be added later. Some alignment based on stellar catalogs will be required, and for NIRCam, a 1/f noise reduction would also be useful.
See `shell_scripts/`. It is recommended to copy one of these to your working directory, and then modify the calls to the pipeline script and extra cleaning tools as needed. Then run `bash script.bash` in the working directory where `science/` etc. are located.
We provide a script that performs an aperture extraction on the final cubes, merges the spectral segments, and collects the results in a plain text table. To use it, the following is needed:
- A list of data cubes produced by the pipeline (can be the three NIRSpec cubes, the 12 MIRI cubes, or both sets)
- A single region file in the format produced by DS9. All regions of interest should be in one file, in sky coordinates. Currently, only rectangle regions are supported.
The command is then, for example,

```
python pdr_reduction/extract_templates.py my_regions.reg nirspec/stage3/*s3d.fits miri/stage3/*s3d.fits --template_names Atomic DF
```

where the number of arguments for the optional `--template_names` should equal the number of regions in the `.reg` file. The output is a file called `templates.ecsv`, which can be loaded as an astropy table.
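For example, the output table could be inspected as follows (a minimal sketch; the exact column names depend on the script, so check `colnames` first):

```python
# Load the extracted template spectra produced by extract_templates.py.
from astropy.table import Table

templates = Table.read("templates.ecsv")
print(templates.colnames)  # inspect which columns are available
```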
- Clone this repository
- Install the python package in your environment by running `pip install -e .` in the root directory of this repository. Alternatively, use `poetry install`, and then `poetry shell` to create and activate a new environment.
- Install a manual dependency: NSClean. See the paper on arXiv, and the download page. Download and unpack `nsclean_1.9.tar.gz`, then `cd` into `nsclean_1.9/` and run `pip install .` in your environment.
- Run `pip install pandas` to work around a numpy version conflict somewhere down the dependency trees of `jwst` and `pandas`.
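After these steps, a quick import check can confirm that the environment is set up correctly (a minimal sketch; `nsclean` is assumed to be the module name provided by the NSClean tarball):

```python
# Verify that the key dependencies are importable in the active environment.
import jwst
import nsclean  # assumed module name from the NSClean package
import pandas

print("jwst", jwst.__version__)
```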
- Sort your data (see above)
- Copy the appropriate bash script from `shell_scripts/` to your working directory
- Edit the copy of the script. Make sure to check the number of processes and the CRDS context (pmap number `N`, `CRDS_PATH`, and `CRDS_SERVER_URL`). A sketch of these settings is given after this list.
- Activate the environment in which you installed this package (see the installation instructions above)
- Run `bash modified_script.bash`
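For reference, the CRDS settings mentioned above can also be pinned from Python before running the pipeline (a minimal sketch; the scripts themselves set these in bash, and the pmap number and cache path shown here are placeholders):

```python
# Pin the CRDS configuration; set these before importing or running the jwst pipeline.
import os

os.environ["CRDS_PATH"] = os.path.expanduser("~/crds_cache")   # local cache directory
os.environ["CRDS_SERVER_URL"] = "https://jwst-crds.stsci.edu"  # JWST CRDS server
os.environ["CRDS_CONTEXT"] = "jwst_XXXX.pmap"                  # placeholder pmap number
```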
Some of these tools were originally developed, tested, and used by the PDRs4All data reduction team, consisting of Felipe Alarcon Pena, Amelie Canin, Ameek Sidhu, Ilane Schroetter, Boris Trahin, and Dries Van De Putte.
Others were developed in the context of program GTO-1192, by Dries Van De Putte.