Pixels and Intensity
Digital image resolution is a function of pixel density
The camera that you use to capture images has a given pixel density and dynamic range, and these parameters govern the camera’s ability to accurately record the fluorescent light coming from your sample.
Resolution of digital images mostly depends on the number of pixels
The digital image of your sample captured on a fluorescence microscope is essentially a map of the photons that are emitted from the fluorophores present, following illumination. The image is divided into multiple equally sized units called pixels. Each pixel in the image represents a discrete area in your sample and has an associated intensity value, so that in grayscale lower intensities appear very dark (black) and higher intensities appear very light (white). The pixels are usually pseudocolored to match the color of each fluorophore’s emission, and how bright the color appears depends on the intensity value associated with the pixel. Pseudocoloring makes it easier to view overlays of more than one fluorescent color.
Figure 1. HeLa cells labeled with 3 different fluorescent labels: NucBlue Fixed reagent (stains nuclei), ActinGreen™ ReadyProbes reagent (stains actin filaments), and a primary antibody against mitochondria followed by a fluorescently labeled secondary antibody (stains mitochondria). Pseudocoloring allows you to show each channel (or fluorescent dye) in a different color. This makes it easier to differentiate multiple fluorescent dyes in the same sample (since each is a different color).
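The idea of an image as an intensity map with pseudocoloring can be sketched in a few lines of NumPy. This is an illustrative example, not part of any acquisition software: the tiny 4 × 4 array and the channel-assignment function are invented here purely to show how a grayscale intensity value becomes a colored pixel on screen.

```python
import numpy as np

# A tiny 4x4 grayscale "image": each value is a pixel intensity
# (0 = black, 255 = white for an 8-bit image).
gray = np.array([
    [  0,  40,  80, 120],
    [ 40,  80, 120, 160],
    [ 80, 120, 160, 200],
    [120, 160, 200, 255],
], dtype=np.uint8)

def pseudocolor(gray, channel):
    """Display a grayscale image in a single RGB channel (0=R, 1=G, 2=B).

    The recorded intensities are unchanged; only the hue used to
    display them differs, which is what lets multichannel overlays
    show each fluorophore in its own color.
    """
    rgb = np.zeros(gray.shape + (3,), dtype=np.uint8)
    rgb[..., channel] = gray
    return rgb

green = pseudocolor(gray, 1)  # e.g. an actin channel displayed in green
# The brightest pixel keeps its intensity (255) but only in the G channel.
```

Overlaying several such single-channel images is all a "merged" multicolor fluorescence image is: each channel's intensities are mapped into a different display color and summed.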
The intensity value represents the number of photons detected by the camera at a specific location on your sample, so the digital image shows what you would see if you looked through the oculars at your illuminated sample. The camera can often capture a better picture than your eyes can perceive, because it has a larger dynamic range.
Pixels in a given image are all the same size, but an image can be divided into a few large pixels or millions of small pixels. The maximum number of pixels in your image depends on the camera you use to take it. Increasing the number of pixels while keeping the imaged area the same increases the image's resolution.
Figure 2. Each image in the series (from left to right) contains more pixels than the previous one. As the number of pixels increases, the image becomes clearer. Eventually, you can see the fine details of the image.
In this example, you can see how increasing the number of pixels in an image that is maintained at the same size gives you a better-resolved image.
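The effect shown in Figure 2 can be simulated by pixel binning: averaging blocks of small pixels into one large pixel covers the same sample area with fewer pixels, and fine features blur together. The 8 × 8 test pattern and binning helper below are assumptions made for illustration only.

```python
import numpy as np

# A fine 8x8 "sample" containing a thin bright diagonal feature.
fine = np.zeros((8, 8))
np.fill_diagonal(fine, 1.0)

def bin_image(img, factor):
    """Average factor x factor blocks into single pixels.

    This simulates imaging the same field of view with fewer,
    larger pixels: total area is unchanged, resolution drops.
    """
    h, w = img.shape
    return img.reshape(h // factor, factor,
                       w // factor, factor).mean(axis=(1, 3))

coarse = bin_image(fine, 4)
# coarse is now 2x2: the thin diagonal is smeared into dim blocks
# (each block averages 4 bright pixels with 12 dark ones -> 0.25),
# so the fine structure can no longer be resolved.
```

Going the other direction, from few pixels to many, is exactly what the left-to-right series in Figure 2 shows: the same field of view, sampled more finely.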
Using the full dynamic range of your camera
The camera you use to detect your fluorescent signal has a dynamic range. When you acquire an image, photons are collected by the detector (for fluorescence imaging, the detector is the camera). With increasing exposure time, more and more photons are collected by the detector, resulting in brighter (higher-intensity) pixels. However, the detector has a limit on the number of photons it can collect, and once this limit has been reached the pixel becomes saturated. Any photons arriving at the detector after saturation has been reached will not be counted; saturated pixels therefore do not give quantitatively accurate data. The dynamic range is an indicator of how many photons the detector can collect before becoming saturated. A camera with a larger dynamic range gives you better contrast and lets you detect dimmer signals.
Noise vs. background
Both noise and background contribute to unwanted intensity in your image. Technically, noise is the unwanted signal contributed by the imaging system itself; this includes noise from the excitation source, the camera, and external light. The term background refers to the unwanted nonspecific fluorescence that comes from the autofluorescence of samples, vessels, and imaging medium, or from the fluorescent signal of fluorophores not bound to their specific targets.
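One practical consequence of this distinction can be sketched numerically: background is a roughly constant offset that can be measured and subtracted, whereas noise fluctuates from frame to frame, so averaging repeated exposures reduces noise but not background. The signal, background, and noise values below are arbitrary numbers chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0      # true fluorophore intensity at one pixel (arbitrary units)
background = 20.0   # steady nonspecific fluorescence (autofluorescence, etc.)

# Simulate 1000 repeated measurements of the same pixel:
# background adds a constant offset, noise adds random frame-to-frame spread.
frames = signal + background + rng.normal(0.0, 5.0, size=1000)

mean_measured = frames.mean()            # ~ signal + background (noise averages out)
corrected = mean_measured - background   # background removed by subtraction
noise_level = frames.std()               # noise remains visible as the spread (~5)
```

The distinction matters in practice: a background offset biases every measurement equally and is corrected by subtraction, while noise limits how reliably you can distinguish a dim signal from nothing at all.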