Double Star Data Reduction

An example of the output data plots. The first two show a slice of the original data before processing. The last four plots show the data after being processed.

I have posted the first version of my Python code on GitHub, which can process large groups of FITS data cubes for reduction. The main features are breaking up FITS files into smaller pieces, writing the header data into a CSV file as an easy “log” of your observation run, and aligning, stacking, and processing double star images down to a single, pristine image. This was only my first attempt at the processing, and I was able to get it working fairly easily using scientific Python modules such as NumPy, SciPy, Matplotlib, and PyFITS.

I chose to work with FITS data cubes since FITS seems to be the most widely accepted format for astronomical data, and it is also an open standard that can be manipulated by anyone. The basic workflow for alignment goes something like this (a rough sketch in code follows the list):

  1. Select the first slice of data from the FITS cube as the “reference” slice to which all other slices will be aligned.
  2. Normalize it based on a range set by the user.
  3. Select the data from that slice which is over a certain threshold set by the user. This allows a variable amount of background to be filtered out, depending on the sensitivity and noise in the data set.
  4. Using this “good” data, use “scipy.ndimage.measurements.center_of_mass” to find the center of mass of this first reference slice. These coordinates are used as the center to shift all of the other slices to later.
  5. Now that the center coordinates have been determined, start processing the rest of the slices. The first step is to run “scipy.ndimage.filters.gaussian_laplace” over the slice, which helps bring out the bright parts and suppress the dimmer ones.
  6. Find the center of mass of this individual slice.
  7. Normalize the data based on the range set by the user.
  8. Shift the data for this slice by the difference between the reference slice center of mass and this individual slice’s center of mass.
  9. Add the processed data for this slice into a NumPy ndarray of processed data.
  10. Repeat steps 5-9 for the rest of the slices in the data cube.
  11. Once all slices are processed and collected in an array, use NumPy to take the mean of the processed data stack. This is the final stacked image.
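Expressed as code, the workflow might look roughly like this. This is a minimal sketch, not the posted version: the cube is assumed to be a 3-D NumPy array (as PyFITS returns), the clip-style normalization is a stand-in for the user-set range, and taking the absolute value of the filtered data before the center-of-mass step is my simplification. The ndimage function names are the modern equivalents of the ones quoted above.

```python
import numpy as np
from scipy import ndimage

def align_and_stack(cube, lo, hi, threshold, sigma=2.0):
    # cube: 3-D ndarray (n_slices, ny, nx), e.g. from pyfits.getdata()
    # lo/hi: user-set normalization range; threshold: background cutoff

    # Steps 1-4: normalize the reference slice, keep only the "good"
    # data above the threshold, and record its center of mass.
    ref = np.clip((cube[0].astype(float) - lo) / (hi - lo), 0.0, 1.0)
    good = np.where(ref > threshold, ref, 0.0)
    ref_com = ndimage.center_of_mass(good)

    # Steps 5-10: filter, locate, normalize, and shift each remaining slice.
    processed = []
    for raw in cube[1:]:
        filtered = ndimage.gaussian_laplace(raw.astype(float), sigma=sigma)
        # abs() so the center of mass tracks the bright core; the LoG
        # filter makes bright peaks strongly negative (an assumption).
        com = ndimage.center_of_mass(np.abs(filtered))
        norm = np.clip((filtered - lo) / (hi - lo), 0.0, 1.0)
        shifted = ndimage.shift(norm, (ref_com[0] - com[0],
                                       ref_com[1] - com[1]))
        processed.append(shifted)

    # Step 11: the mean of the aligned stack is the final image.
    return np.mean(processed, axis=0)
```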

An example of a double star slice before processing.

Once the final stacked image comes out the end, there are many things that can be done. For now, I’ve implemented a simple hack (sketched below) to guess the positions of the primary and secondary stars: find the coordinates of the “brightest spot”, i.e. the maximum value on the stacked image (in theory, the primary star); temporarily make a second copy of the stacked image and zero out the values in a 10 x 10 pixel box around that spot; then repeat the process to find the next “brightest spot” (maximum value), which should be the secondary star. While this method is admittedly imprecise, it has had no problem finding both stars in most of the hundreds of data sets I have tested with. Of course, this assumes that the proper threshold and sigma values for the Laplacian filter have been used.
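The hack itself is only a few lines. A sketch of the idea; the 10 x 10 box size comes from the description above, while the edge clamping is my addition:

```python
import numpy as np

def find_star_pair(stacked, box=10):
    # Primary star: the brightest pixel in the stacked image.
    primary = np.unravel_index(np.argmax(stacked), stacked.shape)

    # Zero out a box x box region around it on a temporary copy,
    # clamping the slice at the image edges.
    tmp = stacked.copy()
    half = box // 2
    y, x = primary
    tmp[max(y - half, 0):y + half, max(x - half, 0):x + half] = 0.0

    # Secondary star: the brightest remaining pixel.
    secondary = np.unravel_index(np.argmax(tmp), tmp.shape)
    return primary, secondary
```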

This is the processed image, with a clear double star and ready for easy measurement.

There are plenty of improvements to be made; this was just a proof-of-concept, and it had great results. I want to work more on integrating the calibration values so that final results can be obtained automatically. There might be a better way to figure out the positions of the primary and secondary stars using K-means clustering; I’m not sure yet, as it requires more research. There are also other ways to reduce the data which may prove better than the Laplace filter, but that was the path of least resistance and provided excellent results for a first trial.

 

Using Python for Telescope/CCD Camera Automation and Data Processing

After having gone on my first real astronomy data run at the Pinto Valley Observatory, I saw quite a few things that could be done to improve the workflow for next time around. The two things that I am focusing on at the moment are:

  • Telescope and CCD camera integration for data acquisition
  • Data processing

The goal is to eventually be able to feed a list to a script, and go to sleep while it completes the data acquisition for the night.

Telescope and Camera Control

For the telescope and camera integration, I would like to write some code that can take a list of targets, integration times, and perhaps some other settings, and figure out the order in which to aim at the targets and take data, based on when each target is closest to the zenith for the least atmospheric noise (a naive scheduling sketch is below). It was very cumbersome to take notes on the data set and target on one laptop, control the telescope from a second laptop, and take the data and control the camera from a third. All of this could be simplified with some scripting that uses the provided list of targets, moves the telescope (with ASCOM, INDI, SiTech, etc.), corrects for tracking errors and centers the target, and then starts the data acquisition and stores it away somewhere.
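As a starting point, ordering targets by hour angle gets close to the “observe near the zenith” goal. This is a sketch under my own simplifications: the greedy largest-hour-angle-first heuristic, the target tuple layout, and ignoring exposure times are all assumptions.

```python
def order_targets(targets, lst_hours):
    # targets: list of (name, ra_hours) tuples
    # lst_hours: local sidereal time at the start of the run
    def hour_angle(ra_hours):
        # Hour angle wrapped into [-12, +12) hours; 0 means the
        # target is crossing the meridian (nearest the zenith).
        return (lst_hours - ra_hours + 12.0) % 24.0 - 12.0

    # Greedy heuristic: observe targets furthest past the meridian
    # first (they are setting), rising targets last, so each is
    # caught as close to transit as this simple ordering allows.
    return sorted(targets, key=lambda t: hour_angle(t[1]), reverse=True)

# Example: at LST 6h, a target at RA 5h (HA +1h, past transit) comes
# before one at RA 8h (HA -2h, still rising).
print(order_targets([('A', 8.0), ('B', 5.0)], 6.0))
```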

The main problem with all of this seems to be the complete lack of standardization in the telescope and camera world. Almost every one operates with its own set of standards, which defeats the purpose. The camera we were using on the PVO run was an Andor Luca-S, which requires the proprietary SDK in order to control it. SBIG and a few others can be controlled using free and available libraries and software. On the telescope side, ASCOM was a well-intentioned idea, but it’s limited to Windows-based machines, which I do not have. I use Ubuntu Linux, so I’m looking for alternatives that can work on any platform. INDI can do this, but it has limited device support. Fortunately, it works with the Celestron NexStar 6 that I have to play with.

So, in short, there seem to be three pieces to getting somewhere with this:

  • Figuring out how to handle the target list. Do I need to have a catalog, or am I going to rely on the user to provide RA/Dec for the targets?
  • Telescope control and tracking adjustment. Dan Gray wrote a simple spiral search script for the SiTech controller at PVO which worked well. It required manual intervention to hit “stop” once the target was in the FOV, but with an integrated camera that could be automated easily: watch the overall delta brightness of the images, and once it changes by a certain threshold, you know that the target is in the FOV. You could then center using the same mechanism (sketched after this list): move slowly in the x direction until the delta changes, back up to get the target in the FOV again, then move in the negative x direction until the delta drops again. Divide that by two and the target is centered in x; rinse and repeat to center in y.
  • The camera automation seems easy enough with the Andor SDK, but it’s proprietary, which means that working with a different kind of camera would take more code. There probably isn’t a way around this other than writing functions to detect supported camera types and code to deal with each one.
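The per-axis centering described above could look something like this. A sketch only: move() and frame_brightness() are hypothetical hardware-facing callbacks, and in practice the step size, the threshold, and some timeout guards would all need real-world tuning.

```python
def center_axis(move, frame_brightness, step, threshold):
    # Baseline brightness with the target somewhere in the FOV.
    baseline = frame_brightness()

    # Walk in +x until the target falls out of the FOV (the overall
    # brightness drops by more than the threshold).
    while baseline - frame_brightness() < threshold:
        move(step)

    # Back up until the target re-enters the FOV...
    while baseline - frame_brightness() >= threshold:
        move(-step)

    # ...then keep walking in -x, counting steps, until it drops out
    # the other side. That count is the FOV width along this axis.
    width = 0
    while baseline - frame_brightness() < threshold:
        move(-step)
        width += 1

    # Divide by two: step back to the middle of the FOV.
    move(step * (width // 2))
```

You would run this once for x and once for y with the appropriate mount-movement callbacks.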

Data Processing

I’ve already started writing a bit of Python code for the data processing side of things. For the PVO run, we took data and recorded it in the proprietary Andor format called SIF. It is an undocumented format, and there is nothing in the SDK for working with these files, so you are stuck using Andor’s SOLIS software to convert them to other formats. I’m never taking data like that again. :) The Andor camera supports the FITS file format, which is used by NASA and lots of astronomy folks out there and has quite a bit of support, including the Python module PyFITS. I’ve been messing with that a bit today. Since all of our data from PVO is in SIF, we have to manually convert it to FITS using SOLIS before we can manipulate it by any other means. This is a tedious process which can be avoided in the future by recording data directly in FITS.

FITS files can be multidimensional “cubes” of data, so our entire 2000-image set for a single target fits in one convenient file. This is very easy to manipulate and run analysis on, and requires no extra conversion. This is what we will use for future runs; reading a cube takes only a couple of lines, for example:
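```python
import pyfits  # now maintained as astropy.io.fits

# The primary HDU of a data cube comes back as a 3-D NumPy array.
# The filename here is hypothetical.
cube = pyfits.getdata('target_run.fits')
print(cube.shape)  # e.g. (2000, ny, nx): 2000 slices for one target
```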

Dr. Genet and I also use a wonderful piece of software called REDUC, written by Florent Losse, to compile our data and do various things like lucky imaging or speckle interferometry. The only issue is that REDUC only supports single-image FITS files, so we have to break up the FITS file from SOLIS into individual FITS files for a data set. I’ve already been talking via email with Florent about adding data cube support to his software, since cubes are widely used and accepted as a standard format for astronomical data. In the meantime, I’ve written a script (sketched below) that can slice up a multidimensional FITS file into individual FITS files, and it also writes out a log file (in CSV format for Excel or LibreOffice, at the moment) which contains the header information for each data set, such as the exposure time, EM gain, number of photos, temperature of the camera, etc. If Florent can get the data cube support built in, I won’t need to slice up the FITS files anymore and no data conversions will need to take place, which is ideal. I will still want to be able to write out a log file, since this comes in really handy once you’ve got a few nights’ worth of data.
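The core of that script is short. This is a sketch of the idea, not the posted version; the output naming scheme and the header card names are assumptions (SOLIS writes its own keys):

```python
import csv
import pyfits  # now maintained as astropy.io.fits

def slice_cube(path, log_path='observing_log.csv'):
    hdulist = pyfits.open(path)
    cube, header = hdulist[0].data, hdulist[0].header
    base = path.rsplit('.', 1)[0]

    # Write one single-image FITS file per slice, keeping the header.
    for i, frame in enumerate(cube):
        pyfits.writeto('%s_%04d.fits' % (base, i), frame, header)

    # Append one log row per cube; the card names are assumptions.
    with open(log_path, 'a') as log:
        csv.writer(log).writerow([
            path,
            header.get('EXPOSURE', ''),  # exposure time
            header.get('GAIN', ''),      # EM gain
            len(cube),                   # number of photos
            header.get('TEMP', ''),      # camera temperature
        ])
    hdulist.close()
```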

The other piece, which I just started playing with, is actually analyzing the data. Python has some neat tools, such as the very powerful matplotlib, which might help here. I’ve only begun to scratch the surface, doing some simple histograms, heat maps, and contours for some of the data that I have (a quick example is below). This is a tantalizing possibility, and I’m going to be very curious to see what I can come up with.
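For instance, a histogram and heat map of a single slice take only a few lines; here a random frame stands in for real data:

```python
import matplotlib.pyplot as plt
import numpy as np

# Stand-in frame; in practice this would be one slice of a FITS cube.
frame = np.random.poisson(50, (128, 128)).astype(float)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.hist(frame.ravel(), bins=64)
ax1.set_title('Pixel value histogram')
im = ax2.imshow(frame, cmap='hot')
ax2.set_title('Heat map')
fig.colorbar(im, ax=ax2)
plt.show()
```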

So there are plenty of pieces to these puzzles, and I’m going to attempt to figure something out. The data processing seems like the easiest part to tackle for the moment, and I will brainstorm on the other pieces as I go along. I’ll be sure to post something once I figure out anything substantial, since there seem to be bits and pieces all over the web, but no solid solutions. Some of the big observatories appear to have this figured out, but they have a lot of resources and completely custom software for their specific equipment. I would like to come up with something that helps out us amateurs, so that we can make our own contributions to science without all of the headache.