Wednesday, April 4, 2018

HiCAT: first results with the COFFEE wavefront sensor

Last week, Jean-François Sauvage (from the Office National d’Etudes et de Recherches Aérospatiales and the Laboratoire d’Astrophysique de Marseille) was invited by STScI to come to the Makidon Lab to implement and test the COFFEE wavefront sensor on HiCAT data, together with the HiCAT team: his PhD student Lucie Leboulleux, Christopher Moriarty, Keira Brooks, Peter Petrone, and Rémi Soummer.
COFFEE stands for COronagraphic Focal-plane wave-Front Estimation for Exoplanet detection. It is a focal-plane wavefront sensing method that can reconstruct the pupil aberrations, with or without a coronagraph. It requires two sets of images from the science camera: one with the pupil-plane DM flat, and one with a known defocus applied.
In our case, the Iris-AO (segmented mirror) was installed on HiCAT in addition to the two deformable mirrors, which provided very unusual test conditions: segment gaps, large local phase differences, and cophasing errors.
In particular, we could reconstruct piston, tip, and tilt errors that we applied on purpose to the Iris-AO to validate the reconstruction. The direct mode of COFFEE (without coronagraph) worked extremely smoothly, and after a few days the coronagraphic mode could also be validated (see images below). This result is particularly impressive since COFFEE required no prior knowledge of this particular pupil: COFFEE was not even aware that the pupil was segmented! Furthermore, COFFEE reconstructs not only the cophasing errors, but also the print-through of the DM actuators, which generates high-frequency effects.

Left: phase reconstructed by COFFEE, in direct mode (no coronagraph). Right: theoretical phase obtained from the commands sent to the Iris AO.

Left: phase reconstructed by COFFEE, in coronagraphic mode. Right: theoretical phase obtained from the commands sent to the Iris AO.

Wednesday, March 28, 2018

JOST: Installation of the HiCAT python package

For a couple of years now, our group has been doing great work developing first-class coronagraph technology and wavefront sensing techniques for segmented apertures. One focus of this work is to create a concise Python package that controls the HiCAT testbed and its environment, and that will eventually be freely shared for implementation on other high-contrast testbeds.

Starting in October of last year, we spent the past six months working on major infrastructure updates for JOST. We exchanged the CCD camera for a much faster CMOS camera, implemented a pupil imaging lens, and cleaned up the software. This included migrating all code to GitHub and putting the repository under version control, translating Mathematica code to Python, eliminating IDL pieces of the experiment software by translating them to Python as well, and finally implementing hardware interfaces by reusing those developed for HiCAT. After individual parts of the HiCAT module had been successfully tested, e.g. the camera and laser control, we decided it was time to fully install the package and integrate it into the JOST software.

The code developed for HiCAT was designed as a proper Python package installable with pip. Furthermore, the interfaces developed for each instrument are easily accessible by simply installing the HiCAT package and importing them. While this comes with a lot of code specific to HiCAT, it is a quick and easy way to access robust instrument control if you happen to use any of the same hardware. We are planning to abstract the hardware control interfaces out into their own package in the future and make that package open source.
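
As a rough usage sketch (the module path and class name below are hypothetical and only illustrate the idea, not the exact layout of the HiCAT package):

    # Install the package from a local clone of the repository:
    #   pip install -e .

    # Hypothetical import; the point is that each instrument interface
    # becomes available simply by importing it from the installed package.
    from hicat.hardware.zwo_camera import ZwoCamera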

Each hardware interface follows a simple object-oriented paradigm in which the parent is an abstract class (e.g. "Camera") that defines specific methods and implements a context manager. Context managers are important for hardware control because they gracefully close the hardware even if the program crashes unexpectedly. The child classes implement the abstract methods such as open(), close(), takeExposure() with code for the specific camera. This keeps your scripts generic and means that changing cameras will have little to no impact on your code.
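
A minimal sketch of this pattern (simplified, with hypothetical class and method bodies that mirror the description above rather than the actual HiCAT code):

    from abc import ABC, abstractmethod

    class Camera(ABC):
        """Abstract parent class: defines the interface and the context manager."""

        def __enter__(self):
            self.open()
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # Called on normal exit *and* on exceptions, so the hardware
            # is always closed gracefully.
            self.close()

        @abstractmethod
        def open(self): ...

        @abstractmethod
        def close(self): ...

        @abstractmethod
        def takeExposure(self, exposure_time): ...

    class ZwoCamera(Camera):
        """Child class: implements the abstract methods for one specific camera."""

        def open(self):
            print("Connecting to ZWO camera...")

        def close(self):
            print("Closing ZWO camera.")

        def takeExposure(self, exposure_time):
            print(f"Exposing for {exposure_time} s.")
            return None  # would return the image array

    # Scripts only rely on the abstract interface, so swapping cameras is trivial:
    with ZwoCamera() as cam:
        image = cam.takeExposure(exposure_time=0.05)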

Implementing this on JOST showed us in which instances the HiCAT package needs to be generalized further. The installation and implementation worked seamlessly and fairly quickly, and HiCAT and JOST are now running off the same Python package for hardware control! The modular structure of the package makes the construction of new experiments fast and straightforward, and we hope our work on JOST will continue much more smoothly now.

Monday, February 26, 2018

PASTIS: Applications to sensitivity analysis of segmented telescopes

A traditional error budget aims at quantifying the deterioration of the contrast as a function of the rms phase error applied on the segments. For example, in the case of segment-level pistons, we can easily deduce from Fig. 1 the constraints on piston cophasing in terms of rms error.

Fig.1. Contrast as a function of the rms piston error phase on the pupil, computed from both the end-to-end simulation (E2E) and PASTIS.

Since PASTIS provides an accurate (~3% error) estimation of the contrast, but about 10^7 times faster than the end-to-end simulation, it can replace this very time-consuming method in such error budgeting, which is particularly useful when numerous cases need to be tested. Similarly, it makes performance simulations over long time series of high-frequency vibrations possible.

However, it is known that some segments have a bigger impact on the contrast than others, which also appears in the PASTIS model. This is why we propose another approach to error budgeting, which also provides a better understanding of how the requirements are distributed over the segments.

First, from PASTIS we can derive the eigenmodes of the pupil. Some of them are shown in Fig. 2 for the piston case. Since these eigenmodes are orthonormal, they provide a modal basis for the segment-level phases (pistons here). Any phase can be projected in a unique way onto this basis.
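
As an illustration of this step, here is a minimal numerical sketch, assuming the PASTIS model is available as a symmetric matrix M that maps segment-level pistons to contrast (the file name, variable names, and the 36-segment pupil are assumptions made for this example):

    import numpy as np

    n_seg = 36                                # assumed number of segments in the pupil
    M = np.load("pastis_matrix.npy")          # hypothetical file holding the PASTIS matrix

    # Eigendecomposition of the symmetric PASTIS matrix: the eigenvectors are the
    # orthonormal pupil modes, the eigenvalues quantify how strongly each mode
    # degrades the contrast.
    eigenvalues, modes = np.linalg.eigh(M)                    # ascending eigenvalues
    eigenvalues, modes = eigenvalues[::-1], modes[:, ::-1]    # reorder: strongest mode first

    # Any segment-level piston map can be projected in a unique way onto this basis.
    phase = np.random.normal(0, 1e-9, n_seg)                  # example piston map
    coefficients = modes.T @ phase                            # modal coefficients b_k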

Fig. 2. Eigenmodes in the local piston-only case. The top line corresponds to the four modes with the highest eigenvalues, the bottom line to four of the modes with the lowest eigenvalues. In this second line, we can recognize discrete versions of some common low-order Zernike polynomials: the two astigmatisms and tip and tilt. Furthermore, the last modes are concentrated on the corner segments, which are typically the segments that impact the contrast the least, since they are the most obscured by both the apodizer and the Lyot stop. Conversely, on the top line, we can also see that the segments with the most extreme piston coefficients correspond to the segments hidden by neither the apodizer nor the Lyot stop, and therefore to the segments that influence the contrast the most. This explains why these modes have the highest eigenvalues.

Since these eigenmodes form an orthogonal basis, they contribute independently to the contrast. Therefore, computing the contrast due to a certain phase is equivalent to summing the contrasts of the projections of this phase onto the different eigenmodes.
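
Written out explicitly (a sketch of the underlying algebra, using the notation of the numerical example above rather than that of the PASTIS paper): if M = V \Lambda V^T is the eigendecomposition of the PASTIS matrix and b_k = v_k^T a is the projection of the segment-level phase a onto the k-th eigenmode v_k, then the quadratic contrast model separates into independent per-mode contributions:

    \Delta C \approx a^T M a = \sum_k \lambda_k (v_k^T a)^2 = \sum_k \lambda_k b_k^2

so each mode contributes \lambda_k b_k^2 to the contrast, independently of the others.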

As a consequence, this problem can be inverted: from a fixed target contrast, it is possible to reconstruct the constraints per eigenmode. To do so, we fix the contrast contributed by each mode (the sum of these contributed contrasts has to be equal to the global target contrast). From this contrast per mode and the eigenvalue of each mode, it is possible to compute the constraint on each mode. Fig. 3 illustrates these constraints in the case of a global target contrast of 10^-6, where the constraints on the first 35 modes provide equal contributed contrasts of 10^-6/35, and the constraint on the last mode provides a contrast of 0. The way to read this plot is that, for example, our error phase cannot be higher than 1.6 times the first mode + 1.7 times the second mode + … + 9.5 times the 35th mode.
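
In code, this inversion takes only a few lines. The sketch below continues the hypothetical example started above, allocating the target contrast equally over the first 35 modes and nothing to the last mode, as in Fig. 3:

    target_contrast = 1e-6

    # Equal contrast allocation over the first 35 modes; the last
    # (lowest-eigenvalue) mode gets no allocation, as in Fig. 3.
    contrast_per_mode = np.full(n_seg, target_contrast / 35)
    contrast_per_mode[-1] = 0.0

    # Invert the per-mode quadratic relation c_k = lambda_k * b_k^2:
    # the allowed amplitude of mode k is sqrt(c_k / lambda_k).
    mode_constraints = np.zeros(n_seg)
    nonzero = contrast_per_mode > 0
    mode_constraints[nonzero] = np.sqrt(contrast_per_mode[nonzero] / eigenvalues[nonzero])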

Fig. 3. Contributions of the different piston modes to reach a final target contrast of 10^-6, in the case where only local segment pistons deteriorate the contrast.

To conclude, it is extremely fast to compute the constraints per mode with this method. But even more importantly, it provides a better understanding of the pupil structure and its impact on the contrast, by identifying the critical segments. It is then easier to optimize the backplane structure or the edge sensors on these segments to limit their impact on the contrast.

This inversion method is also applicable to quasi-static stability studies and to any other Zernike polynomial.

Wednesday, February 21, 2018

JOST: Integration of a new camera

Previously, JOST had been using an SBIG monochrome CCD camera, but its exposure and readout times were so long that running any experiment on JOST took a long time: a defocused image needed an exposure time of 10 seconds, even with the laser power cranked up to the maximum, and the readout and data saving took around 10 seconds as well.
Apart from making the experiments very long, the CCD is also very sensitive to background light, so we always needed full darkness in the lab when running JOST, which rendered our main room unusable for other work.

Both of these issues were solved by replacing our detector with a monochrome CMOS camera from ZWO (ASI1600MM), which was mounted on the JOST translation rail (see Fig. 1). The detector size is only marginally smaller than that of the CCD, so we will still be able to test wavefront sensing and control on wide fields.


Figure 1: The new JOST imaging camera is a ZWO ASI1600MM CMOS camera (red, to the right). The shorter exposure and readout times will allow for faster experiments, and we also don't need to darken the room anymore; the enclosure is enough to keep unwanted light out. Note how the pupil imaging lens on the flip mount is not in the beam in this image.
Since the CCD camera was considerably bigger, it was relatively easy to mount the new camera on the translation rail, as there was more than enough space. By running our autofocus Python code we were able to focus the setup quickly. Since HiCAT also uses this camera (among others), this was an opportunity to further unify the software between the two testbeds. Ultimately, JOST should be able to run directly off the hicat Python package, and by integrating the camera code into the JOST software interface, we are now one step closer to this goal.


Figure 2: New ZWO camera on the translation rail, seen from the back. Note how the pupil imaging lens is flipped down into the light beam in this image.

A significant difference between the SBIG and the ZWO camera is their pixel size. While the SBIG had 9 micron pixels, the pixels of the ZWO are only 3.8 microns. This means that while we were just about Nyquist sampled with the CCD, we are now imaging highly oversampled PSFs. As a result, we take images that are twice as big (1024 x 1024 pixels instead of 512 x 512 pixels) in order to still capture the full PSF, but we bin them back down to 512 x 512 pixels to save computation time in the image processing and phase retrieval algorithms.
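
For reference, here is a minimal sketch of the kind of 2x2 binning we apply (plain numpy, not the exact code running on the testbed):

    import numpy as np

    def bin_image(image, factor=2):
        """Bin a 2D image by summing blocks of factor x factor pixels."""
        ny, nx = image.shape
        return image.reshape(ny // factor, factor, nx // factor, factor).sum(axis=(1, 3))

    # A 1024 x 1024 oversampled frame is reduced back to 512 x 512:
    frame = np.random.poisson(100, size=(1024, 1024)).astype(float)
    binned = bin_image(frame, factor=2)   # shape (512, 512)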