A primer for working with images


This page is intended as a primer on working with images.

Many people jump in at the deep end when working with image-editing software and find they are not able to get the best out of it due to a limited knowledge of the science behind images. A little time spent understanding the theory will help enormously with making the right choices about how to improve the images you work with.

Contents

  1. The two types of digital image
  2. Understanding bit-depth
  3. Understanding resolution

1. The two types of digital image

Understanding a little about the behaviour of the two types of images you might use is important when it comes to working with them. The two types are:

  • Vector graphics
  • Raster images

Vector graphics

Vector graphics are mathematically defined. Fonts, such as those you would add via the Type Tool in Photoshop or the Text Box tool in PowerPoint, are in this format. Vectors are often used as the original artwork for logos and many types of computer-generated graphics. Because they are mathematically defined, when you scale them up or down the software redraws them for you from scratch, so the quality is always perfect regardless of the size.

[Image: a small vector graphic]

[Image: a scaled version of the same image, identical in quality]


Raster images

Raster images may also be referred to as bitmap or pixel-based images. Photographs are the most common example of this type of image. The typical raster formats are png, jpg, tiff and gif. Pixel-based images comprise a grid of pixels (short for picture elements), which are squares containing colour information. A particular image will have a set number of pixels. Increasing the scale of such an image only increases the size of the pixels, making them more obvious and decreasing the visual quality of the image.

[Image: a 40 x 40px image displayed at 100% original size]

[Image: the same image displayed at 600% original size]
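
If you want to reproduce this effect yourself, here is a minimal Pillow (Python) sketch that enlarges a small raster image by 600% using nearest-neighbour resampling, so each original pixel becomes a visible block. The file names are placeholders.

    from PIL import Image

    img = Image.open("tiny_40px.png")   # a 40 x 40 px raster image (placeholder name)

    # Nearest-neighbour resampling copies pixels rather than inventing new detail,
    # so every original pixel becomes a 6 x 6 block in the output.
    big = img.resize((img.width * 6, img.height * 6), resample=Image.NEAREST)
    big.save("tiny_240px.png")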

If you want to get a better overview of image types, take a look at our tutorial on the subject - Video 5 - Images are from Mars...


2. Understanding bit-depth

Bit-depth and colour-depth are two terms used to describe the same thing: the number of colours or tones an image can support.

Digital images have a fixed capacity for the number of colours they can encode. For a raster image, the number of colours is determined by the number of "bits" each pixel can support.

In case you've never come across the term "bit" before, it's worth thinking about what it means. It's a contraction of two words: "binary digit". All digital information is ultimately made up of bits of data, and each bit, or unit of information, can be either a 0 or a 1. So, if an image has a "bit-depth" of 1, it has only 1 bit of data to encode colour and therefore only two options. Ordinarily, 1-bit images are true black and white images: 0 might encode the colour "black" and 1 might encode the colour "white".

The higher the bit-depth of an image, the larger the number of colours it can support and the more visually realistic the image will be. With raster images, the chosen colour depth will also affect the final file size: more bits means more data, which means bigger files!
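
Both relationships are simple powers-of-two arithmetic. Here is a minimal Python sketch (the 800 x 600 image size is just an example):

    # Colours a pixel can hold, and uncompressed size of an 800 x 600 image,
    # at a range of common bit-depths.
    for bits in (1, 8, 24, 48):
        colours = 2 ** bits
        size_bytes = 800 * 600 * bits // 8
        print(f"{bits}-bit: {colours:,} values per pixel, {size_bytes:,} bytes uncompressed")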

Let's look at how the various bit-depth models work.

1-bit bi-tonal (e.g. true black and white)

As already mentioned, a bi-tonal image requires just one bit of information, so there are two possible values for each pixel: "0" or "1". Often the values stored for these two binary states would be black and white, but in editing software you might choose to use two other colours. The critical thing to understand is that you have a maximum of two options.

[Image: original full RGB colour image crop of guinea fowl feather detail]

[Image: bi-tonal (1-bit) conversion of the same image]
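
Here is a minimal Pillow (Python) sketch of a bi-tonal conversion like the one above; the file names are placeholders.

    from PIL import Image

    img = Image.open("feather.jpg")            # full-colour original (placeholder name)

    # Mode "1" is Pillow's 1-bit mode. By default the conversion dithers;
    # dither=Image.NONE gives a hard black/white threshold instead.
    bw = img.convert("1", dither=Image.NONE)
    bw.save("feather_1bit.png")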


8-bit greyscale (256 shades of grey)

In an 8-bit model we have the ability to store 256 different values (that's because there are 256 unique ways to arrange a series of eight zeros and ones). 256-shade greyscale was historically a well-used colour model for "black and white" images, and it is still often used for the display of images on the web.

The 8-bit model is generally sufficient to seamlessly describe the gradation of "grey" values between black and white. Below 8 bits you will generally start to see a phenomenon known as colour banding or posterisation, where the transition from one tone to the next is so pronounced that it is detectable to the human eye (i.e. the neighbouring tones are visually too far apart to blend together).

[Image: a gradient using 256 shades of grey (8-bit)]

[Image: a gradient using 32 shades (5-bit), showing the resulting "colour banding"]
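
Both gradients can be reproduced in Pillow (Python): mode "L" is the full 8-bit greyscale, and posterising down to 5 bits makes the banding appear. The file names are placeholders.

    from PIL import Image, ImageOps

    img = Image.open("gradient.png").convert("L")   # 8-bit greyscale: 256 shades
    banded = ImageOps.posterize(img, 5)             # keep only 5 bits -> 32 shades
    banded.save("gradient_32_shades.png")           # banding is now visible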


8-bit colour (256 colours)

This colour mode is often referred to as 8-bit single channel, "indexed" or paletted, and it allows each pixel in an image to be one of 256 colours. In this colour mode, a palette of 256 colours is stored within the image; if a colour is unavailable, the nearest colour from within the palette is used. 256 colours is not sufficient for photo-realistic images and will often result in colour banding. 8-bit indexed images are, however, small, and have often been used for simple graphics such as logos and diagrams. The GIF format is probably the best-known 8-bit colour file format. In situations where bandwidth is a big concern, the GIF format is advantageous as it allows you to select the size of the palette stored with the image (effectively reducing the bit-depth).

[Image: a logo - an example of an image suitable for saving as 8-bit colour]

[Image: a rainbow photograph - an example of an image inappropriate for 8-bit colour, showing the resulting "colour banding"]
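
Here is a sketch of creating an indexed image with a reduced palette in Pillow (Python); the file names and the 64-colour palette size are just for illustration.

    from PIL import Image

    img = Image.open("logo.png")   # simple flat-colour graphic (placeholder name)

    # Mode "P" is Pillow's indexed/paletted mode; ADAPTIVE builds a palette
    # from the colours actually present, here capped at 64 entries.
    indexed = img.convert("P", palette=Image.ADAPTIVE, colors=64)
    indexed.save("logo.gif")       # GIF stores the palette inside the file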


24-bit colour (16.8 million colours)

24-bit colour images have three colour channels: Red, Green and Blue. Each of these channels is 8 bits, meaning that the total number of colours that can be encoded in each pixel is a whopping 16.8 million, or more precisely 16,777,216 colours (256 x 256 x 256). This format is often used for photos due to the extensive range of colours available.

[Image: 24-bit version of the image above, showing accurate colour rendition and smooth gradients]
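
A minimal Pillow (Python) sketch of what 24-bit colour means in practice; the file name is a placeholder.

    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")   # 3 channels x 8 bits = 24 bits per pixel
    r, g, b = img.getpixel((0, 0))                 # each channel is a value from 0 to 255
    print(r, g, b)
    print(f"{256 ** 3:,} possible colours")        # 16,777,216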


32-bit colour (16.8 million colours plus transparency)

As above, this format uses 8 bits for each of the Red, Green and Blue colour channels, but also stores transparency information for each pixel (i.e. a fourth, "alpha" channel). This allows each pixel to take one of 256 transparency values, from fully opaque to fully transparent. Because of the extra transparency information, each pixel now requires four bytes of storage.
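
A brief Pillow (Python) sketch of adding and using the alpha channel; the file names and the 50% opacity value are just for illustration.

    from PIL import Image

    img = Image.open("photo.jpg").convert("RGBA")   # 4 channels x 8 bits = 4 bytes per pixel
    img.putalpha(128)                               # set every pixel to ~50% opacity (0-255 scale)
    img.save("photo_translucent.png")               # PNG supports per-pixel transparency; JPEG does not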


48-bit colour

48-bit images are currently the gold standard in raster images. As you can probably deduce, these carry even more colour information than 32-bit images: each of the R, G and B channels for every pixel now encodes 16 bits (65,536 levels). In total, a 48-bit image can theoretically encode around 281 trillion colours (65,536 x 65,536 x 65,536).

The most common use for 48-bit images is for high-quality image requirements, and the file format used is .TIFF. Often, people use TIFFs to create a "master" copy of the original artwork or image, from which lower-quality derivative images can be produced according to requirements. For example, you might scan a document only once at 48 bits, which captures all possible tonal information in the image. The resulting file will be very large because each pixel carries 48 bits of colour data; 48-bit images can be so large, in fact, that using them in many situations is not practical. Therefore, from this master image you might create an optimised JPEG (a 24-bit image) at the same resolution and then create smaller surrogate images for use on the web or for emailing to people.
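
Here is a sketch of that master-and-derivatives workflow in Pillow (Python). One caveat: Pillow's support for 16-bit-per-channel TIFFs is limited, so this sketch simply down-converts to 8 bits per channel before saving the derivatives. File names and sizes are placeholders.

    from PIL import Image

    master = Image.open("master.tif")        # high bit-depth master scan (placeholder name)
    rgb = master.convert("RGB")              # down-convert to 8 bits per channel (24-bit)

    rgb.save("derivative_print.jpg", quality=90)   # full-resolution optimised JPEG

    web = rgb.copy()
    web.thumbnail((1200, 1200))              # shrink in place, preserving aspect ratio
    web.save("derivative_web.jpg", quality=80)     # smaller surrogate for web or email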



3. Understanding resolution

One of the fundamental properties of a raster image is its resolution. To complicate matters, resolution needs to be thought about in two quite separate ways. We'll call these "intrinsic" and "extrinsic" (or "output") resolution.

Intrinsic resolution

Intrinsic resolution can be thought of simply as the number of pixels that comprise a digital (raster) image. The intrinsic resolution of an image is described by a horizontal and vertical figure such as 800 x 600px. An important concept to grasp is that digital images have no actual real-world size until they are output (interpreted by software and displayed on a monitor or printed out). Until output, they comprise only binary data and the pixels themselves have no size; they need software to describe how large they should be.
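
You can see this distinction in code. In Pillow (Python), an image's intrinsic resolution is reported as a bare pixel count, while any stored ppi figure is just metadata that output software may choose to honour. The file name is a placeholder.

    from PIL import Image

    img = Image.open("photo.jpg")
    print(img.size)                 # intrinsic resolution, e.g. (800, 600) - just a pixel count
    print(img.info.get("dpi"))      # stored output hint, e.g. (300, 300), or None if absent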

Extrinsic or output resolution

Extrinsic resolution will have a real world size and as such will have units of measurement associated with it. Some of the commonly used units of output resolution are:

  • ppi - pixels per inch
  • dpi - dots per inch
  • lpi - lines per inch
  • spi - samples per inch

These different units of output resolution are often (incorrectly) used interchangeably. When to use each one depends on what you are doing or describing at the time.

Ppi, or pixels per inch, is the output unit of resolution used in software applications such as Photoshop and with monitors and other screen-based devices. Depending on the device and its configuration, the resolution displayed can vary. For example, your monitor can be set up to display more or less information (pixels) according to your preference. The more information you display, the higher the resolution used and the smaller the displayed items appear. Every device has a maximum resolution, which equals the total number of pixels it can display horizontally and vertically, e.g. 1920 x 1080px. More expensive devices often boast higher resolutions, since this contributes to sharpness, one of the key aspects of the visual quality of an image.

Dpi, or dots per inch, is used to describe the resolution of inkjet printers. Each printer will have a maximum print resolution (in dpi) which equals the total number of dots it can lay down per inch of paper, e.g. 2880 dpi. It should be noted that dpi does not equal ppi. An inkjet printer outputting an image set to print at 300 ppi will be printing in the region of 92 microscopic dots of ink per pixel, because it prints in lines and lays down 9.6 dots of ink per pixel vertically (2880 dpi / 300 ppi) and 9.6 dots per pixel horizontally. Professional printing resolutions, by contrast, are traditionally specified in lines per inch.
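
That dots-per-pixel figure is simple arithmetic, as this quick Python check shows (using the dpi and ppi values from the example above):

    ppi = 300                            # pixels per inch the image is set to print at
    dpi = 2880                           # the printer's maximum dots per inch
    dots_per_pixel = (dpi / ppi) ** 2    # 9.6 dots across x 9.6 dots down
    print(round(dots_per_pixel))         # ~92 ink dots laid down per image pixel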

Lpi, or lines per inch, is the unit used with offset lithography - the printing process used by professional print shops to produce magazines, newspapers etc. This process creates images by printing lines of dots in a grid system, with the centre of each dot equidistant from its neighbours. The dots vary in size, with larger dots covering more paper and creating greater colour saturation than small dots.

Spi, or samples per inch, is the odd one out in this list. Spi is used for sampling devices, usually scanners, and it describes how many samples can be made per inch of the original medium. Film scanners traditionally have the highest optical resolutions, since they are used to sample small originals (e.g. 35mm film transparencies) that need to be reproduced at a much larger size on paper.
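
As a worked example of why film scanners need such high sampling resolutions, here is a small Python sketch; the 4000 spi figure is an assumed value for illustration.

    frame_width_in = 36 / 25.4                    # a 35mm frame is 36mm wide, i.e. ~1.42 inches
    spi = 4000                                    # assumed scanner sampling resolution
    pixels_across = frame_width_in * spi          # ~5,669 samples across the frame
    print_width_in = pixels_across / 300          # printable width at 300 ppi: ~18.9 inches
    print(round(pixels_across), round(print_width_in, 1))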

When you are using pixel-based images you always need to make sure that you have enough "information" in the image to enable it to be output with the appropriate quality. Generally the "information" you are concerned with is the number of pixels.

There are some useful rules to understand when it comes to working with images:

  1. Rule 1 - Printing will show up quality issues much more than on-screen
  2. Rule 2 - You’ll need to think imperial to work with images
  3. Rule 3 - To get your image sizes right for your particular situation you’ll need to do some maths... or be very lucky

Rule 1

You're looking at an image on Google. When viewed in your browser it fills the whole screen and is pin sharp. It's such a nice image that you decide to print it. You send it to the printer at A4 size. When you get it back it looks terrible; you can see all the pixels that make up the image. Why?

The reason is that the two devices (your screen and your printer) work in very different ways. Your monitor displays pixels on a 1:1 mapping basis, so 1 pixel in your image relates to 1 pixel on screen, and a monitor uses the same RGB additive colour model as the original image (red, green and blue overlapping at the same intensity create white). A printer, however, lays information down on another medium, usually paper. Its colour model is subtractive, based on the CMYK model: the only way to get white is to lay down no ink at all on the white paper, while black is theoretically achieved by laying down cyan, magenta and yellow together, which absorb all light and therefore reflect nothing (= black).
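
To make the additive/subtractive distinction concrete, here is a Python sketch of the textbook naive RGB-to-CMYK conversion. Real printing workflows use colour profiles rather than this formula, so treat it purely as an illustration.

    def rgb_to_cmyk(r, g, b):
        """Naive RGB (0-255) to CMYK (0-1) conversion; no colour management."""
        r, g, b = r / 255, g / 255, b / 255
        k = 1 - max(r, g, b)         # black: distance of the brightest channel from white
        if k == 1:                   # pure black: avoid dividing by zero
            return 0, 0, 0, 1
        c = (1 - r - k) / (1 - k)
        m = (1 - g - k) / (1 - k)
        y = (1 - b - k) / (1 - k)
        return c, m, y, k

    print(rgb_to_cmyk(255, 255, 255))   # white: no ink at all -> (0, 0, 0, 0)
    print(rgb_to_cmyk(0, 0, 0))         # black: full K -> (0, 0, 0, 1)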

If this all seems too technical, don't worry. The key thing to grasp is that just because an image looks great on screen doesn't mean it will look good when printed. Move on to Rule 2.

Rule 2

Printers have a resolution: a defined number of visual entities per unit length of paper. As explored above, the units used are either dots per inch or lines per inch. Imperial units still rule absolutely in digital imaging. For our purposes we're normally working with dpi.

Rule 3

You can be very precise about working out the required resolution for printing images or you can use a rule of thumb. The one most people apply is that for high quality (magazine quality) images you will need to supply 300 pixels in the original image for every inch of paper you wish to cover in the print. Therefore, if your print is to be produced at 10 x 8 inches you will need an image of 3000 x 2400 px.
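
The rule of thumb is a one-line calculation; here it is in Python for the 10 x 8 inch example above:

    width_in, height_in = 10, 8             # target print size in inches
    ppi = 300                               # rule-of-thumb resolution for magazine quality
    print(width_in * ppi, "x", height_in * ppi, "px needed")   # 3000 x 2400 px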

If you want to find out more about images for on-screen vs print use, this video created for our PowerPoint for Academic Posters course will take you through the issues.
