Problem with Bias subframes on QSI 683

Calibration tools, stacking, aligning,..
Posts: 3
Joined: Sat Mar 17, 2018 5:32 am

Mon Oct 15, 2018 6:09 pm

Screen Shot 2018-10-15 at 9.05.05 PM.png
Screen Shot 2018-10-15 at 9.04.55 PM.png

I'm using Prism v10 with the latest updates and a QSI 683 WSG, and I can't get a good Bias subframe. All my Bias subframes come out with a histogram shifted to the right of normal, compared to Bias subframes taken at the same temperature with TheSkyX (with the Camera Add On) and with MaxIm DL 6. I first noticed the problem when I could never get good calibration using a Master Bias generated by Prism and processed in PixInsight; Master Bias frames created with the other software I've mentioned seem fine. It's almost as if the Prism Bias subframes are taken with exposures that are too long, rather than with the shutter closed. Any suggestions? Please see the two histograms, one from a Prism-captured Bias subframe and the other from a MaxIm DL-captured Bias subframe.

Thanks for any help you can provide!

Posts: 3
Joined: Sat Mar 17, 2018 5:32 am

Mon Oct 15, 2018 6:18 pm

As additional information, here are the FITS headers from the good Bias (taken with MaxIm DL) and the bad Bias (taken with Prism), as seen in PixInsight.
Screen Shot 2018-10-15 at 9.16.48 PM.png
Screen Shot 2018-10-15 at 9.16.19 PM.png
Posts: 53
Joined: Thu Dec 14, 2017 4:49 am

Sat Nov 03, 2018 10:44 am

I have been having similar problems with bias frames captured in Prism and used in PI.
I have tracked this down to an inconsistency of data formats and the way PI reads floating-point files.

First, your Prism bias appears to have been taken with the 32-bit float option in the software settings (under the Camera tab).
In contrast, the MaxIm DL file appears to be a 16-bit integer file.

When PI reads in an integer file, I think it can scale it properly to its normalised floating-point representation between 0 and 1: it knows that 0 maps to 0 and 65535 maps to 1. (PI wants everything normalised internally.)

With floating point there are no such anchor points (there are many PI threads addressing this; it seems a perennial point of confusion in PI).
What I think it does is normalise between 0 and 1 using the max and min values, so any pixel with nominal floating-point value v is assigned a value in PI of

(v - min)/(max - min)

For most lights and darks this works to an extent, because there will usually be some hot pixels or bright stars to anchor the upper end, so everything else sits roughly where it should (but note this is fortuitous, not a principled conversion!).

For bias frames this is not true, so the brightest bias pixel (quite a small value) gets anchored at 1.
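To see the effect described above, here is a minimal numpy sketch (the frame values are made up for illustration): min-max scaling stretches a low, narrow bias histogram across the whole [0, 1] range, whereas dividing by the full 16-bit range keeps it near the left where it belongs.

```python
import numpy as np

# Hypothetical bias frame: values cluster around a small offset
# (~1000 ADU on a 16-bit camera), stored as float with no anchor points.
rng = np.random.default_rng(0)
bias = 1000 + rng.normal(0, 10, size=(100, 100))

# Min-max normalisation, as suspected for float input:
# the brightest bias pixel gets anchored at 1.
lo, hi = bias.min(), bias.max()
stretched = (bias - lo) / (hi - lo)

# Anchored normalisation for 16-bit-range data divides by the full range:
anchored = bias / 65535.0

print(stretched.mean())  # ~0.5: histogram stretched across [0, 1]
print(anchored.mean())   # ~0.015: histogram stays near the left edge
```

This is only a model of the suspected behaviour, not PixInsight's actual code, but it reproduces the "histogram to the right of normal" symptom in the first post.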

I think the best approach is to use 16-bit integer capture in Prism. (I can't recall why I went over to 32-bit for a while in the first place.)

However, I have a related question: why does Prism use signed 16-bit, so that the maximum value is 32767? I would assume one bit of accuracy is lost?
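For what it's worth, FITS defines BITPIX=16 as signed, and the standard workaround is to write the header keyword BZERO = 32768 so that the full unsigned 0–65535 range survives a signed file. Whether Prism applies this convention I can't say; the sketch below just shows the arithmetic.

```python
import numpy as np

# Signed 16-bit tops out at 32767, i.e. 15 bits of headroom for
# non-negative pixel data; unsigned 16-bit gives the full 65535.
print(np.iinfo(np.int16).max)   # 32767
print(np.iinfo(np.uint16).max)  # 65535

# The usual FITS convention: store value - 32768 as signed int16 and
# record BZERO = 32768; readers add it back, recovering 0..65535.
adu = np.array([0, 1000, 65535], dtype=np.uint16)         # camera values
stored = (adu.astype(np.int32) - 32768).astype(np.int16)  # what goes in the file
recovered = stored.astype(np.int32) + 32768               # what a reader returns
print(recovered)  # [    0  1000 65535]
```

If Prism writes plain signed 16-bit without BZERO, that would indeed cost a bit of dynamic range, as suspected.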

I will raise this in another thread.
Posts: 53
Joined: Thu Dec 14, 2017 4:49 am

Sun Nov 04, 2018 10:07 am

I am reasonably sure that using the signed 16-bit format means you have only 15 bits of resolution, so this is a bad idea. My method now for integrating with PI is:

(i) Set the floating-point upper range to 65535 in the FITS format dialogue (in the format viewer)
(ii) Import the files into the BatchFormatConversion script
(iii) Save all as .xisf in floating point