A few days ago, I wrote about going out into the mountains and taking some astrophotographs with a DSLR camera. Now the time has come to process the resulting files using free software. We are going to use Siril, as it offers tools for every step of the procedure I had in mind. Said procedure involves four main steps: converting the RAW files to bitmap files that we can manipulate, aligning and averaging the frames to increase the signal-to-noise ratio, applying linear operations such as a slight deconvolution, and finally stretching the histogram and applying arbitrary corrections in darktable to edit the image to our taste.

Sorting the files

The first thing I did was to import the series of frames into darktable for inspection. What darktable displays here are the JPEGs embedded in the RAW files. Those JPEGs have already undergone transformations inside the camera, including a stretching of the histogram (in particular a gamma correction, see below). The linear version of the data is far less impressive, as we shall see a bit further.

Using darktable, I apply the same 100% crop to all the images to make sure there are no outliers when it comes to the sharpness of the stars. I do not wish to include a frame if it is out of focus or if the stars in it are trailed. In this set of frames none was particularly worse than the others: I kept them all.

Debayering and saving bitmaps

Linearity of the image

The conversion from RAW files to bitmap files can be done in a number of programs. However, most software dedicated to photography uses color profiles that contain a gamma correction. This is the case for darktable by default: let us take a look at the output of a conversion from RAW to bitmap with no module applied and compare it to what Siril yields:

The fact that the darktable image appears brighter shows that a gamma correction was applied. The image yielded by darktable is therefore not linear. However, the linear image that Siril displays is hard to work with. Siril provides different ways to stretch the histogram, applied only to the image displayed on the screen and not to the actual data. I personally like using the AutoStretch algorithm:

A gamma correction allows for a better use of the available brightness levels given the sensitivity of the human eye. Indeed, we are better at distinguishing between dark tones than between light tones. As a consequence, it is worth allocating more levels to the darker tones of an image than to the lighter ones. After such a transformation (see figure below), the image looks more like what our perception would be in real life. The transformation also pushes the dark tones towards the midtones, hence increasing the perceived contrast in the darker parts of the image.
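As an illustration (the exponent here is a typical value, not necessarily the one darktable applies), a pure gamma correction maps every normalized pixel value to

\[ V_{\mathrm{out}} = V_{\mathrm{in}}^{1/\gamma}, \qquad \gamma \approx 2.2 \]

so a dark pixel at \(V_{\mathrm{in}} = 0.1\) is lifted to about \(0.35\): the dark tones receive the lion's share of the output levels.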

When saving RAW files, the DSLR camera does not apply such a correction: the RAW file is linear. Linearity is important, as it allows for transformations that are no longer well defined once the data has been stretched (naive example: an addition or a subtraction). In short, linear corrections must be applied to linear data for them to be defined at all. They include in particular the subtraction of the background, the white balance and the deconvolution. When processing the image, we will keep it linear as long as possible and only apply a stretch to the histogram at the very end.
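A minimal numpy sketch of why the order matters (all values are made up for the example): subtracting a background level after the stretch does not give the same result as subtracting it before.

```python
import numpy as np

gamma = 2.2
sky = 0.10                       # made-up, constant light-pollution level
signal = np.array([0.02, 0.30])  # made-up true pixel values

# Correct order: subtract on linear data, then stretch for display.
linear_first = ((signal + sky) - sky) ** (1 / gamma)

# Wrong order: stretch first, then try to subtract the stretched sky level.
stretch_first = (signal + sky) ** (1 / gamma) - sky ** (1 / gamma)

print(linear_first)   # the stretched true signal
print(stretch_first)  # something else entirely: (a + b)**g != a**g + b**g
```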

Debayering

Debayering (or demosaicing) is the process of interpolating the information missing from the RAW image. In short, information is missing because (for example) a red pixel does not contain any information about the green component of the light hitting its location. The green value there must therefore be estimated from the nearby green pixels. Several methods exist for this task, and which one is best is an eternal astrophotographic debate. They include the nearest-neighbour method (fast, but very low in quality), bilinear interpolation (slightly better) and more advanced ones (PPG, AMaZE, VNG4, AHD, ...). Siril uses the latter, but similar results could be obtained with any of the advanced methods. Everything is done for us in Siril, and it is an easy procedure: set the working directory in the corresponding box, add the RAW files, set a name for the sequence of files we are working with and tick the "demosaicing" option before hitting "Convert".
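To make the interpolation concrete, here is a minimal numpy sketch of the bilinear approach for one of the four cases (the function name and the RGGB layout are assumptions for the example; a real debayer handles every color and pattern offset):

```python
import numpy as np

def green_at_red_rggb(raw):
    """Bilinear estimate of green at the red sites of an RGGB mosaic.

    `raw` is the single-channel Bayer frame; in RGGB, red sits at even
    rows and even columns, and its four direct neighbours are green.
    This is only one of the four cases a full bilinear debayer handles.
    """
    padded = np.pad(raw, 1, mode="reflect")
    rows, cols = np.meshgrid(np.arange(0, raw.shape[0], 2),
                             np.arange(0, raw.shape[1], 2), indexing="ij")
    pr, pc = rows + 1, cols + 1  # same pixels, in padded coordinates
    return (padded[pr - 1, pc] + padded[pr + 1, pc] +
            padded[pr, pc - 1] + padded[pr, pc + 1]) / 4.0
```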

Siril then moves on with the heavy computation and the saving of the resulting bitmap files. Once the task is completed, Siril displays the linear version of the first shot and indicates that it has loaded the sequence of images.

Registration

Now, before proceeding to the registration (alignment) tab, I select a region of the sky where I am sure not to find any mountain. If Siril decides to track a feature of the mountain, which moves relative to the tracked stars, it will not make for a good alignment of the images.

Reflecting this thought, I made sure to check "Match star in selection" in the registration tab. Because I used a decently polar-aligned equatorial mount, I did not expect much field rotation. However, I still went for the correction of the rotation, as the quality loss is minimal and the stars in the corners of the frame gained in definition after stacking thanks to this correction. Thus I left "Translation only" unchecked, and finally pressed "Go register", acting on the green layer. Make sure you have enough disk space before proceeding: when computing rotations, Siril writes a new sequence of images rather than simply storing the corrections, as it would with translations only.
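For reference, correcting for rotation as well means estimating, from the matched stars, a rigid transform between each frame and the reference:

\[ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix} \]

With "Translation only" checked, \(\theta\) is fixed to zero and only the shift \((t_x, t_y)\) remains, which can be stored as-is instead of resampling the frames.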

Stacking

Now that we have registered the sequence (Siril should load the aligned sequence automatically if you chose to correct for the rotation of the field as well), the time has come to put the images together. Stacking images can be as simple as averaging them. This is precisely the way I chose, as I don't mind shooting stars in my image: I am doing this to obtain a pretty picture, and a shooting star will definitely not make it uglier. If one turns out ugly, I can always remove it in post-processing. Siril does feature rejection algorithms that would get rid of the shooting stars automatically (by discarding outliers), but those are computationally heavier and I prefer keeping it simple. The same goes for the normalization: one could compute weights based on some criterion (like the radius of the estimated point spread function or the estimated signal-to-noise ratio of each frame), but as I took 45 virtually identical frames, I went for equal weights for all the pictures. Siril now has the simple task of computing the average:

\[ I = \sum_{i=1}^{45} \frac{I_i}{45} \]
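In numpy terms, the whole operation fits in a few lines; the sigma-clipped variant below shows the spirit of the rejection algorithms mentioned above (this is a sketch, not Siril's implementation, and the 3-sigma threshold in the usage comment is an arbitrary choice):

```python
import numpy as np

def mean_stack(frames, sigma_clip=None):
    """Average registered frames of shape (n_frames, height, width, channels).

    With `sigma_clip` set, pixels further than that many standard deviations
    from the per-pixel mean are masked out before averaging, which discards
    outliers such as satellite or meteor trails.
    """
    stack = np.asarray(frames, dtype=np.float64)
    if sigma_clip is None:
        return stack.mean(axis=0)
    mean, std = stack.mean(axis=0), stack.std(axis=0)
    keep = np.abs(stack - mean) <= sigma_clip * std
    return np.where(keep, stack, 0.0).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# result = mean_stack(registered_frames)                # plain average, as above
# result = mean_stack(registered_frames, sigma_clip=3)  # with outlier rejection
```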

This is a simple average and computes extremely fast. Once the stacking is complete, Siril saves the result to a bitmap file and opens it back right away. Let us zoom in excessively at 400% to make sure the registration was well done:


The stars look round enough to me. We can now proceed with the actual processing of the image.

Processing: Siril steps (linear data)

Background subtraction

The first steps include the extraction and subtraction of the background contributions. In our case, that would be the light pollution. Siril has a tool for that:

I expect the light pollution to vary only with the altitude of the shot and not so much with the azimuth: the thicker the layer of atmosphere the observed light has to travel through, the more scattered pollution light it picks up along the way. Thus, if the calculated background ("show background" in the Siril interface) had a luminous bulge in the center (implying a non-zero gradient in the azimuthal direction), I would discard it, as the samples were probably biased by the intrinsic brightness of the Milky Way. Under this assumption, an interpolation of order 1 is enough. This guarantees that the extracted background will have a constant gradient.

Siril uses a polynomial interpolation to estimate the contribution of the background at any point in the image from samples that can be automatically selected or given by the user. The coefficients are obtained through a minimization of the error between the estimated background and the actual samples. The linear estimated background can be written as follows:

\[ B(x_i,y_i) = C + a x_i + b y_i\]

Where the \(a\) and \(b\) coefficients are to be estimated. Adding more terms to the \(B(x_i, y_i)\) map would allow for a smaller error at the sample points, at the cost of a behaviour away from those points that is not representative of reality in our situation (like a bulge at the center of the Milky Way, when there is no source of light pollution originating specifically from there).
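A least-squares fit of this order-1 model to the samples takes a single numpy call; the sample coordinates and values below are placeholders for whatever Siril (or the user) selects:

```python
import numpy as np

def fit_plane_background(xs, ys, values):
    """Least-squares fit of B(x, y) = C + a*x + b*y to background samples."""
    A = np.column_stack([np.ones_like(xs), xs, ys])
    (C, a, b), *_ = np.linalg.lstsq(A, values, rcond=None)
    return C, a, b

# Evaluate the fitted plane on the full image grid and subtract it:
# yy, xx = np.mgrid[0:height, 0:width]
# corrected = image - (C + a * xx + b * yy)
```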

Let us try this in Siril by selecting points where the Milky Way does not seem to contribute. Then, hitting the "Compute" and "Apply" buttons successively allows us to remove the background. As desired, the obtained background indeed has a constant gradient going from the top to the bottom of the picture. After correction, the field looks flatter and the central features of the Milky Way are better separated.

Color calibration

This part involves two steps:

- remove any remaining offset in the background (Background Neutralization);
- rescale the R, G and B channels such that a given reference is white (equal R, G and B values).

The second step requires the data to be perfectly linear, as it involves a multiplication. This is why Siril requires a background neutralization before it accepts to rescale the RGB channels. The subtraction step from before should have left us with a homogeneous background. However, the offset left in the background might not have the same value for all the channels. The neutralization removes from each channel the difference between the average of the medians and the median of the current channel:

\[ \mathrm{Ref} = \frac{\mathrm{median(R)}+ \mathrm{median(G)}+ \mathrm{median(B)}} {3}\]

\[ C \leftarrow C - (\mathrm{median}(C) - \mathrm{Ref}), \quad C=\mathrm{R},\mathrm{G},\mathrm{B}\]

Visually, if the distributions of the intensities in a sample of the background for the different channels are as in the left picture, then applying the operation above will shift those distributions such that their medians coincide:

After this step, correcting the white balance by multiplying the channels by some coefficients will not favour any of the channels and the background will keep a neutral color.
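Putting the two steps together, a minimal numpy sketch could look as follows (the region masks and function names are assumptions for the example; Siril's actual implementation may differ):

```python
import numpy as np

def neutralize_background(img, bg_region):
    """Shift each channel so the background medians coincide; img is (h, w, 3)."""
    medians = np.median(img[bg_region], axis=0)  # per-channel background median
    return img - (medians - medians.mean())

def white_balance(img, white_region):
    """Rescale channels so the reference region averages to equal R, G and B."""
    ref = img[white_region].mean(axis=0)         # per-channel reference level
    return img * (ref.mean() / ref)
```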

Back in Siril, we select a sample of the background and apply the background neutralization:

Then we select a region to be considered as "white" for the color balance. It is usually a region filled with the object of interest in the image. If some of the background is selected as well, the neutralization performed before ensures that it will not skew the color balance.

The result looks more neutral indeed:

Deconvolution (not the most useful step for this purpose)

Unfortunately, a deconvolution is not a naive mathematical procedure. Before going too far with it, let us recall what a convolution is.

In terms of photography, a convolution can be seen as a "multiplication" between the real-world image and some convolution kernel, usually called a Point Spread Function (PSF). Basically, every pixel of the real-world image is spread among its neighbours, which makes the resulting image blurrier than the initial real-world image. The spreading occurs mainly because of atmospheric turbulence and can be approximated, to first order, by a Gaussian function. On top of the turbulence of the atmosphere come the defects of the optics, like the coma aberration, whose PSFs vary with the position in the field.
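Formally, each pixel of the observed image is a weighted sum of its neighbourhood in the real-world image, the weights being given by the PSF:

\[ I_{\mathrm{obs}}(x, y) = \sum_{u,v} \mathrm{PSF}(u, v)\, I_{\mathrm{real}}(x-u, y-v) \]

At first approximation, the atmospheric kernel is the Gaussian \( \mathrm{PSF}(u,v) \propto e^{-(u^2+v^2)/(2\sigma^2)} \) mentioned above, where \(\sigma\) plays the role of the radius.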

The convolution is a linear operation, and thus so is its inverse, the deconvolution. Once again, having kept the data linear this far allows us to apply a deconvolution in a well-defined manner. However, unlike the convolution, which is simply a sum of spread pixels, the deconvolution cannot be naively computed. Moreover, it is an ill-posed problem: there is no unique solution that gives the original image given the blurred image and the PSF. We have to proceed carefully when applying a deconvolution algorithm, as artifacts (ringing) tend to appear very easily.

In Siril, the PSF is assumed to be a Gaussian function whose radius we can choose. To estimate the proper value for the radius, we will consider that the stars were originally points. In the actual image, those points were convolved with the PSF, giving spread spots: each of them is an image of the PSF itself. We are going to use the Dynamic PSF analysis tool in Siril and estimate the radius of the stars in the image:

We get the value \(r=0.859\) pixel. Let us head directly into the deconvolution tool and input this value:

Siril uses the Lucy-Richardson algorithm to iteratively estimate the original image. Here is the result after some iterations:

The stars seem more point-like than at the beginning. This comes at the price of more noise and some ringing artifacts. Most commercial astrophotography packages come with a masking feature, which allows the cores of the brighter stars to be excluded when applying the deconvolution algorithm; this usually prevents some of the ringing. Masking is not available in Siril yet, but we do not really need it in this case.
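To see what the algorithm does under the hood, here is the textbook Lucy-Richardson update implemented with numpy and scipy (a sketch under the assumption of a Gaussian PSF; this is not Siril's actual code, and the helper names are mine):

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_psf(sigma, size=9):
    """Normalized 2D Gaussian kernel; sigma is the radius measured above."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def lucy_richardson(observed, psf, iterations=10):
    """Textbook Lucy-Richardson deconvolution (no masking, no regularization)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid division by zero
        estimate *= convolve(ratio, psf_mirror, mode="reflect")
    return estimate

# sharpened = lucy_richardson(stacked_channel, gaussian_psf(sigma=0.859), 10)
```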

darktable: non-linear arbitrary editing

I decided to record a video of the process rather than write down every single step.