I’ve been discussing some issues around the recent release of a new Sigma camera with a few photographer friends of mine.
This new Sigma claims a resolution roughly 3 times its apparent sensor size, and some people are crying foul over the practice. It’s largely the same argument that’s been made about some of the Fuji DSLR sensors. The Sigma captures each color channel separately at every photosite, while the Fuji captures differently sized pixels of data and extrapolates from that information. The same sort of extrapolation also shows up in the Nikon (and, I’m assuming, Canon) universe, except that they sample the colors (R, G, B) at different locations and interpolate the differences.
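To make that last kind of interpolation concrete, here’s a rough numpy sketch of Bayer demosaicing (the RGGB layout and the simple bilinear weighting are my assumptions; real raw converters use much fancier edge-aware methods): each photosite records only one color, and the two missing colors are averaged in from the nearest neighbors that did sample them.

```python
import numpy as np

def conv3(img, kernel):
    """3x3 convolution with zero padding (plain numpy, no scipy needed)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return out

def demosaic_bilinear(mosaic):
    """Rebuild full RGB from a Bayer mosaic by averaging the nearest
    same-colored samples (a normalized 3x3 convolution). Assumes an
    RGGB layout."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sites
    masks[0::2, 1::2, 1] = True   # green sites on red rows
    masks[1::2, 0::2, 1] = True   # green sites on blue rows
    masks[1::2, 1::2, 2] = True   # blue sites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        known = np.where(masks[:, :, c], mosaic, 0.0)
        weights = conv3(masks[:, :, c].astype(float), kernel)
        # Weighted average of the known samples of this color only.
        rgb[:, :, c] = conv3(known, kernel) / weights
    return rgb
```

The point of the sketch is just that two-thirds of the color data in a Bayer image is made up by interpolation, which is why the “true resolution” argument cuts in every direction.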
Regarding resolution and sampling sizes: using a little science and some off-the-shelf (well, high-shelf) software, most any camera can get a boost in resolution and megapixel output. The image below is a pair of photos, both taken with a D300 in a poorly lit room. On the left you can see noise creeping in from the sensor and some fuzziness around hard edges; that’s a full-size crop of a D300 photo straight out of the camera. On the right is another photo taken with the same camera, upsized by combining multiple samples of the same image. The net effect is a picture twice as tall and twice as wide, with 4 times the area (information), yet with less noise and greater clarity than the original. Since the D300 produces 12-megapixel files, that works out to a 12 × 4 = 48 megapixel image.
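The stacking trick is easy to sketch in numpy (this is a toy stand-in for the commercial software, not its actual method; real tools also sub-pixel-align each frame, which is where the extra detail comes from): averaging N exposures of the same scene cuts the random sensor noise by roughly 1/sqrt(N), and the result is then enlarged onto the twice-as-tall, twice-as-wide grid.

```python
import numpy as np

def stack_and_upsize(frames, factor=2):
    """Average several exposures of the same scene to knock down random
    sensor noise, then enlarge by `factor` with simple nearest-neighbor
    replication (each pixel repeated factor x factor times)."""
    avg = np.mean(np.stack(frames), axis=0)
    return np.repeat(np.repeat(avg, factor, axis=0), factor, axis=1)
```

With 16 frames the noise standard deviation drops by about a factor of 4, and doubling each linear dimension quadruples the pixel count, which is exactly the 12 × 4 = 48 megapixel arithmetic above.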
Click on the image to read more about this, along with some counterpoints from a few photogs in the comments on my flickr stream for this image.