Documentation of GLINT Pipeline

GLINT Pipeline is a set of scripts and classes used to estimate the null depth from raw data and deliver it in a convenient format.

Raw data consists of Matlab files containing cubes of frames of 344×96 pixels (height × width). Each frame contains 16 outputs (4 photometric, 6 null and 6 antinull) stacked one above the other and spectrally dispersed along the width axis of the frame.
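As a hedged illustration of this format, the sketch below loads one cube and extracts the spectrum of one output. The file name and the variable key inside the .mat file are assumptions (inspect your files with scipy.io.whosmat to find the real key), and the even split into 16 bands is a crude stand-in for the real output locations given by the geometric calibration.

    import numpy as np
    from scipy.io import loadmat

    # Hypothetical file name and variable key; v7.3 .mat files need h5py instead.
    content = loadmat("datacube_0001.mat")
    cube = np.asarray(content["imagedata"])       # assumed shape: (n_frames, 344, 96)

    n_frames, height, width = cube.shape
    rows_per_output = height // 16                # crude even split; the geometric
                                                  # calibration gives the real locations
    k = 0                                         # first output
    spectrum_k = cube[0, k * rows_per_output:(k + 1) * rows_per_output, :].sum(axis=0)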

GLINT Pipeline’s classes form the library of the pipeline; they handle the raw data and feed the model fitting algorithm.

Documentation of the dark processing script, which processes the dark frames.

Documentation of data frame processing, which extracts null depths and intensities from data and dark frames. The outputs are used in the model fitting script.

Documentation of the geometric calibration, which returns the shape and location of the 16 outputs with respect to wavelength.

Documentation of the spectral calibration, which performs a spectral calibration for each of the 16 outputs.

Documentation of the determination of the zeta coefficients, which determines the intensity ratios between the interferometric outputs and their related photometric ones.
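As a rough illustration of what a zeta coefficient is, the snippet below takes the spectrum of one interferometric output and of its related photometric tap, both measured with a single beam lit, and forms their per-wavelength ratio. The arrays are placeholders, not the pipeline's actual data structures.

    import numpy as np

    # Placeholder spectra measured with only one beam lit (see "data with
    # individual beam active" below); real values would come from the frames.
    rng = np.random.default_rng(0)
    interf = rng.uniform(50.0, 100.0, size=96)   # interferometric output, 96 channels
    photo = rng.uniform(100.0, 200.0, size=96)   # related photometric tap

    zeta = interf / photo                        # one zeta coefficient per channel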

The source can be found on GitHub.

Glossary

  • data: acquired data, loaded by a script. Can be raw or preprocessed.
  • product: data generated by a script, which can be used by other scripts.
  • observing run: set of one or several consecutive nights of observing.

How to use GLINT

Step by step

  1. Send the light from SCEXAO to GLINT.
  2. Check the pupil camera: the pupil should be centered on the MEMS (segmented mirror). To do this check, note the value on Zaber 4 and replace it with 50000 to remove the mask.
  3. Check the image camera: the PSF should be centered on a reference pixel (check the Evernote note to get the latest one).
  4. If steps 2 and 3 are not satisfactory, you are doomed: a realignment is necessary, so you had better call an expert to realign.
  5. Put the mask back by typing the noted value on Zaber 4.
  6. Align the mask with the segments. Ideally, all the apertures should be centered on the segments without seeing their edges.
  7. Optimize the flux in the photometric taps on the real-time control software of GLINT by moving the chip with Zabers 1 & 2 (translation) and 3 (focus). Iterate over these three axes to get the best results. Beware of the wiggles: they can be hard to see on the real-time preview, so the best solution is to acquire a dark and some data, then run the Python script glint_data_explorer.py. It will display (among other things) a plot entitled “check the wiggles”. A sketch of such a flux check is given after this list.
  8. Find the best null you can for the desired baselines (theoretically up to 4 baselines can be nulled; empirically, several configurations are needed). The important thing is to know which null you are in (central, n-th). See the next section for the methodology.
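For step 7, here is a hedged sketch of a flux check on the photometric taps. The row windows of the four taps are made-up placeholders (the geometric calibration gives the real output locations), and this is not the pipeline's actual code.

    import numpy as np

    def photometric_fluxes(cube, dark, tap_rows):
        """Mean dark-subtracted flux per photometric tap.

        cube, dark : arrays of shape (n_frames, height, width)
        tap_rows   : dict mapping tap name -> (row_start, row_end)
        """
        clean = cube - dark.mean(axis=0)          # subtract the mean dark frame
        return {name: clean[:, r0:r1, :].sum(axis=(1, 2)).mean()
                for name, (r0, r1) in tap_rows.items()}

    # Made-up example: random frames and arbitrary row windows for taps p1..p4.
    rng = np.random.default_rng(1)
    cube = rng.poisson(120, size=(100, 344, 96)).astype(float)
    dark = rng.poisson(100, size=(100, 344, 96)).astype(float)
    taps = {"p1": (0, 20), "p2": (90, 110), "p3": (180, 200), "p4": (270, 290)}
    print(photometric_fluxes(cube, dark, taps))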

Locating the null

The script is ready but still needs to be documented. Before the observation, scans of the fringes are done; the frames and the real-time scan are saved. The frames allow the fringe scan to be analysed spectrally. The null is located by fitting, on one hand, the non-dispersed scan and, on the other hand, the spectrally dispersed fringes. The non-dispersed scan gives a quick look at the portion of the envelope over which the scan was made, while the dispersed scan gives an accurate location of the null. However, the parameter space is periodic and the fitting algorithm is easily fooled, so setting boundaries on the OPD is mandatory. Agreement between the two fits (dispersed and non-dispersed) on an OPD confirms the real position of the null.
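The sketch below illustrates the kind of bounded fit described above: a simple dispersed-fringe model is fitted to a simulated spectrum with hard bounds on the OPD. The fringe model, the grids and the noise level are illustrative assumptions, not the pipeline's actual model.

    import numpy as np
    from scipy.optimize import least_squares

    wl = np.linspace(1.35e-6, 1.75e-6, 96)        # wavelength grid (m), 96 channels

    def fringe(params, wl):
        amp, visibility, opd = params
        return amp * (1.0 + visibility * np.cos(2.0 * np.pi * opd / wl))

    # Simulated measurement at a true OPD of 2.5 microns.
    rng = np.random.default_rng(2)
    data = fringe((1.0, 0.9, 2.5e-6), wl) + rng.normal(0.0, 0.02, wl.size)

    # The cosine makes the problem periodic in OPD, so bounds are mandatory:
    # restrict the search to the fringe packet spotted on the non-dispersed scan.
    fit = least_squares(lambda p: fringe(p, wl) - data,
                        x0=(1.0, 0.8, 2.0e-6),
                        bounds=([0.5, 0.0, 0.0], [2.0, 1.0, 5.0e-6]))
    print("fitted OPD (m):", fit.x[2])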

How to use the pipeline

Calibration data

Calibration data sets are:

  • spectral data: data acquired with a tunable spectral source. One set (i.e. one folder) of frames per wavelength. The name of the folder must contain the wavelength.
  • geometric data: data with no fringes. Obtained either with an OPD beyond the coherence length or by blurring the fringes. NB: the results are better with the lab source and fake turbulence generated by the DM.
  • data with individual beam active: obtained by moving the mask to make sure that one and only one beam is active. Be careful with cross-talk and wiggles. The name of the files must contain the keyword pX, with X = 1..4 the id of the beam (see the parsing sketch after this list).
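The naming conventions above can be parsed as in the hedged sketch below; the exact regexes are illustrative assumptions and should be adapted to the real folder and file names.

    import re

    def wavelength_from_folder(folder):
        """Extract a wavelength in nm from a spectral-data folder name
        (assumes the wavelength is written as e.g. '1550nm')."""
        match = re.search(r"(\d{3,4})\s*nm", folder)
        return int(match.group(1)) if match else None

    def beam_from_filename(filename):
        """Extract the beam id X from the 'pX' keyword, X = 1..4."""
        match = re.search(r"p([1-4])", filename)
        return int(match.group(1)) if match else None

    print(wavelength_from_folder("scan_1550nm"))    # -> 1550
    print(beam_from_filename("glint_p3_0001.mat"))  # -> 3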

Dark data has to be acquired as well.

Spectral data needs to be acquired only after a major realignment of the spectrograph or the CRed2.

Geometric data and data with individual beam active should be acquired at the beginning of the observing run or after a big realignment (position of the chip or the mask significantly different, or mirrors moved with the picomotors).

Run the scripts in the following order:

  1. Documentation of the dark processing script
  2. Documentation of the geometric calibration with geometric data
  3. Documentation of the spectral calibration with spectral data
  4. Documentation of the determination of the zeta coefficients with data with individual beam active

For sanity, step 2 should be run the day before the observation night.

Routine use

The calibration products exist. Data sets to acquire are:

  • dark frames
  • geometric data
  • science frames (lab or on-sky)

Run the scripts in the following order:

  1. Documentation of the dark processing script with dark frames.
  2. Documentation of the geometric calibration with no-fringe frames in lab.
  3. Documentation of data frame processing with dark frames.
  4. Documentation of data frame processing with data frames.
  5. Feed the model fitting script with the products.

How to self-calibrate the null depth

The self-calibration of the null depth relies on modelling the statistical behaviour of the measured null depth under the fluctuations of intensities and phase. The code BARNACLE (BAttling for the measuRements of the Null depth at Any Cost Like a bEast) handles this; its documentation, which will explain how to use it, is on its way.

Empirically, it is found that data sets of 15 minutes with a frame rate of 1400 Hz give reliable results. It may be possible to use data sets of 5 or 10 minutes as well.
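To make the idea concrete, here is a small Monte Carlo, with made-up fluctuation levels, of how intensity and phase fluctuations shape the distribution of the measured null depth. It uses the standard two-beam combiner formulas, not BARNACLE's actual model.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 15 * 60 * 1400                       # 15 min at 1400 Hz: ~1.26e6 frames

    i1 = rng.normal(1.00, 0.05, n)           # fluctuating beam intensities
    i2 = rng.normal(0.95, 0.05, n)
    dphi = rng.normal(0.0, 0.3, n)           # residual phase jitter (rad)

    cross = 2.0 * np.sqrt(i1 * i2) * np.cos(dphi)
    null_depth = (i1 + i2 - cross) / (i1 + i2 + cross)   # null / antinull ratio

    print("mean measured null depth:", null_depth.mean())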
