Having run into some issues with data formats, I revisited this recently.
My camera (an Atik 460EX) delivers 16-bit output, presumably rendered as values from 0 to 65535.
However, I see that the integer format in Prism is 16-bit signed.
Does this mean I can only represent pixel values from 0 to 32767? Do I lose one bit of accuracy?
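From what I've read, FITS has no unsigned 16-bit BITPIX at all: writers store unsigned camera data as signed 16-bit together with the header keyword BZERO = 32768, so the full 0 to 65535 range survives intact. Here is a quick numpy sketch of the round trip I'm imagining (my assumption about the convention, not verified against Prism's internals):

```python
import numpy as np

# 16-bit unsigned pixel values as they come off the camera (0..65535).
physical = np.array([0, 32767, 32768, 65535], dtype=np.uint16)

# FITS has no unsigned 16-bit BITPIX, so writers store signed 16-bit
# with the header keyword BZERO = 32768: stored = physical - 32768.
stored = (physical.astype(np.int32) - 32768).astype(np.int16)
print(stored)       # [-32768     -1      0  32767]

# A reader undoes the offset: physical = stored + BZERO.
recovered = (stored.astype(np.int32) + 32768).astype(np.uint16)
print(recovered)    # [    0 32767 32768 65535]

# The round trip is exact, so no bit of accuracy is lost.
assert np.array_equal(physical, recovered)
```

If Prism follows this convention, the signed storage would be purely an encoding detail and nothing would be lost.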
Kevin
Integer vs floating point capture
I have looked into what you describe. I'm using an ATIK 383L+ 16-bit CCD camera. First, I ran the usual FITS checks on one of my images: the FITS header written by Prism reports 16 bits per pixel, and the maximum ADU pixel value is 65535.
Under Linux, Siril reports the test image as 16-bit data.
Under Linux, SAOimage DS9 likewise shows 16 bits, with values up to 65535 in the pixel value table at the galaxy core.
Did you mean the 16-bit setting under Prism / Settings / Software Setup / Images (JPEG / TIFF)?
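In case anyone wants to reproduce the header check, this is roughly how I do it (a sketch using astropy; "test_image.fits" is just a placeholder name):

```python
from astropy.io import fits

# "test_image.fits" is a placeholder for the Prism-written file.
with fits.open("test_image.fits") as hdul:
    hdr = hdul[0].header
    # BITPIX = 16 means signed 16-bit storage on disk; BZERO = 32768
    # signals that the physical values are really unsigned 0..65535.
    print(hdr.get("BITPIX"), hdr.get("BZERO"), hdr.get("BSCALE"))

    # astropy applies BZERO/BSCALE on read, so .data already holds
    # the physical values (dtype uint16 for this combination).
    data = hdul[0].data
    print(data.dtype, data.min(), data.max())
```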
The Prism pixel grid really does show different values!
What could be the explanation for this?