Hello and welcome to Burnley College News, where we talk about all things to do with MEDIA! In this article we have all the information you need about... you guessed it: Digital Graphics!
That includes pixels, raster, vector, bit depth, colour space, image capture and everything in between, including file formats and file compression.
There is a lot to take on board, so try to keep up!
Starting from the beginning:
PIXEL
What exactly are pixels?
Essentially, pixels are dots which are placed systematically one after another to display an image, whether on a screen (a computerised image) or in print. The word PIXEL comes from the words Picture Element, blended to form the abbreviation. The term picture element is usually used in a digital context, but is often generalised to mean the smallest unit of an image. Pixels are small squares or rectangles of digital information, each with a colour value.
Pixels are usually square in shape, as is the common case on computer monitors; however, they are not always square. Digital video standards such as NTSC and PAL use rectangular pixels with a non-square aspect ratio.
Pixels are not a measurement of size, although the term is often misconstrued as though they are.
For example, digital cameras give a measure of PPI, or more simply pixels-per-inch, which is embedded into the image files they create. Users see this measurement and often assume that pixels have a fixed real-world size, which isn't true.
MEGAPIXELS
Since the very first digital camera, the megapixel count has been the defining feature of a camera. The megapixel race can be simplified to one short phrase: 'more is better'. In that, the more megapixels a camera has, the more it will cost! Megapixels is a term popularised by manufacturers so they can describe, for advertising purposes, the resolution at which their cameras are capable of taking pictures.
So a pixel is the basic unit of programmable colour on a computer display or in a computer image. Think of it as more of a logical unit than a physical unit. A pixel does not have an inherent size; its size depends on the resolution of the image or screen you have set. By setting the screen to its maximum resolution, the size of a pixel will equal the 'dot pitch' (in other words, the dot size) of the display. Confusing, I know. However, if the resolution is set to something smaller than the maximum, each pixel becomes larger than the physical size of the screen's dots. Think of it this way: the screen is made up of physical dots; a pixel in the image could be equal to the size of one dot (maximum resolution) or larger than one dot (smaller resolution), which means a pixel can cover more than one dot.
These dots are important as they allow pixels to form a picture: the closer the pixels are to the size of the dots, the better the quality of the image. So the resolution of a screen is very important: the higher the resolution setting, the better the quality of an image.
RESOLUTION
Let’s go into more detail about resolution. In computer terminology, resolution describes the detail an image holds. The term applies to raster digital images, film images, and many other image types. Higher resolution means more image detail.
Pixel resolution is a term which represents a simple measurement. It tells you the quantity of pixels within a particular image or screen. Pixel resolution is displayed as a pair of positive integers: the first is the number of pixel columns across the width, and the second is the number of pixel rows down the height; simply width by height. For example: 640 by 480.
From these figures you can work out exactly how many pixels are present in the image: multiply the two together (width by height) and, to express the total in megapixels, divide by one million. Other related units include pixels per length and pixels per area (such as pixels per inch or per square inch). Strictly speaking these pixel counts are not true 'resolutions', but they are commonly referred to as such.
There are two main forms of pixel count: original pixels and effective pixels. Original pixels are the pixels used whilst viewing an image before it is captured, whereas effective pixels are the pixels actually utilised when the picture is taken. So, within your digital camera there is a 'sensor', formally known as an elementary pixel sensor, which holds all the pixels required to make the desired output image. The sensor holds more pixels than are used in the final image; the group of pixels actually used are called the effective pixels. Why do you need to know this? Because these effective pixels form the image which you see, and so, together with the resolution, determine the final quality of that image. The illustration below demonstrates how resolution impacts quality by showing different pixel resolutions from low to high. Normally a smooth image reconstruction from pixels would be preferred, but for illustrating pixels, the sharp squares make the point better:
To make this a little easier, here is an example: an image which is 2048 pixels by 1536 pixels (W x H) has a total of 3,145,728 pixels (2048 x 1536), or in other words 3.1 megapixels. This image would be referred to as either 2048 by 1536 or, more commonly and simply, a 3.1-megapixel image.
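That arithmetic can be sketched as a tiny Python helper (the `megapixels` function name is just for illustration, not part of any camera software):

```python
# A minimal sketch of the megapixel arithmetic described above.
def megapixels(width: int, height: int) -> float:
    """Total pixels divided by one million, rounded to one decimal place."""
    return round(width * height / 1_000_000, 1)

print(megapixels(2048, 1536))  # the 3.1-megapixel example above
print(megapixels(640, 480))    # a humble 0.3-megapixel VGA image
```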
The term display resolution usually refers to pixel dimensions (e.g. 1920 x 1080). However, this does not tell us anything about the density of the display on which the image is actually formed. Resolution more correctly refers to PIXEL DENSITY: the number of pixels per unit distance or area, not necessarily the TOTAL number of pixels on the image or screen. Measured properly, display resolution would be given in units such as pixels per inch (as mentioned previously).
The sharpness and clarity of an image depends on the resolution, as seen above. This usage most commonly describes the resolution of monitors and bitmapped graphic images, which is measured in PPI. For laser printers, however, the resolution refers to the number of dots per inch: for example, a 300 DPI printer is capable of printing 300 distinct dots in a line within one inch, which means it is able to print 90,000 dots per square inch. In less detailed terminology, printers, scanners, monitors and other I/O (input/output) devices are classified as high resolution, medium resolution or low resolution; pretty straightforward. Although, with the advances of today's technology, the ranges for each of these grades are constantly shifting, as you could imagine.
BIT RATE
Bit rate, as most could imagine, describes the rate at which bits of information are transferred from one location to another in a unit of time (per second, etc). Simply put, it's the number of bits that are processed every second, so bit rate measures how much data is transferred in a given amount of time. Bit rate is measured in units such as bps (bits per second), kbps (kilobits per second) and Mbps (megabits per second). Let's look at it this way: a DSL (Digital Subscriber Line) connection might be able to download data at 768 kbps, whilst a FireWire 800 connection can transfer data at up to 800 Mbps. Say an MP3 file has been encoded with a constant bit rate (CBR) of 128 kbps; that file will be processed at 128,000 bits each second. In terms of broadband, bit rate means the number of bits processed per second, or in other words download speed. The higher the bit rate, the faster the transmission speed. For media files, bit rate is an average of how many bits the data consumes per unit of playback time. A higher bit rate means the file will be larger and have better video or audio quality, whilst a lower bit rate means a smaller file size and generally worse video or audio quality.
In more simple terms bit rate means the speed of a digital transmission which is measured in bits per second (most commonly).
Bit rate also describes the quality of an audio or video file. For example, an MP3 audio file which has been compressed at 192 kbps will have a greater dynamic range and may sound slightly clearer than if the file was compressed at 128 kbps. Why? Because for a file to be compressed with a lossy method some data must be discarded, so the more bits kept per second of playback, the better the audio or video will sound or look. Just as the quality of an image is measured in terms of resolution, the quality of an audio or video file is measured by bit rate. For example, a standard CD with uncompressed audio data has a bit rate of 1,411 kbps, much higher than MP3's compressed maximum of 320 kbps, so the CD will have better audio playback than an MP3 file. Bit rate can also be referred to as data rate. Bit rate essentially reflects the continuous process of source coding, or rather data compression. Without compression the file would remain very large and therefore need more space on the disk or in memory: the larger the file, the more storage is required. For example, an audio file with a bit rate of 128 kilobits per second (kbps) requires 128,000 bits of storage for each second of audio.
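The storage arithmetic in that last example can be checked with a short Python sketch (a hypothetical helper, not a real audio tool; the track lengths are made up):

```python
def audio_size_bytes(bitrate_kbps: int, seconds: int) -> int:
    """Storage needed: bitrate (bits per second) times duration, over 8 bits per byte."""
    return bitrate_kbps * 1000 * seconds // 8

# One second of 128 kbps audio needs 128,000 bits = 16,000 bytes.
print(audio_size_bytes(128, 1))

# A four-minute (240 s) track: 128 kbps MP3 vs. 1,411 kbps uncompressed CD audio.
print(audio_size_bytes(128, 240))   # roughly 3.8 MB
print(audio_size_bytes(1411, 240))  # roughly 42 MB
```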
FILE COMPRESSION
The terms lossy and lossless are used to describe whether or not all the original data can be recovered when a compressed file is uncompressed. For those of you who are unaware, compression is used to 'squash' a file of data into a smaller file, so it takes up less memory and can therefore be stored and transferred more easily. The terms are pretty self-explanatory: lossy states that this form of compression permanently loses some data, whereas lossless implies that you don't lose any. To be precise: lossless compression does shrink the file by stripping out redundancy, but every bit of the original data can be completely restored when the file is uncompressed. This form of compression is used for things such as text or spreadsheet files, where losing words or financial data would cause problems. On the other hand, lossy compression reduces a file to a new permanent state, where the bits of information removed by compression are completely eliminated and cannot be recovered. When a lossy file is uncompressed, only part of the original information is present, even though you may not notice it. Lossy compression is generally used for files such as video and sound, where a certain amount of information loss will not be detected by most users.
In lossy compression the aim is to minimise the amount of data within the file that needs to be held, handled or transmitted by a computer. The images below demonstrate how much information can be dispensed with, and how images progressively become increasingly coarse as the data which made up the original is discarded through compression. In a typical instance, a substantial amount of data can be lost before the degraded appearance is noticed by a user.

Original image (lossless PNG, 60.1 KB size) — uncompressed is 108.5 KB

Low compression (84% less information than uncompressed PNG, 9.37 KB)

Medium compression (92% less information than uncompressed PNG, 4.82 KB)

High compression (98% less information than uncompressed PNG, 1.14 KB)
Here you can see that the higher the compression, or in other words the more data lost, the lower the quality of the image. Multimedia files such as audio, video and images are the usual subjects of lossy compression, as large amounts of compression are less noticeable on these files. Lossy compression is important in applications such as streaming media and internet telephony, because smaller files are easier to transfer and store. In contrast, lossless compression, as mentioned, is required for text and data files such as bank records and articles.
The compression method is tied to the file format: formats such as .jpeg use lossy compression, while others such as .gif use lossless, and programs compress accordingly. Programs such as Microsoft Office save to formats with lossless compression, while programs such as Windows Media Player commonly work with lossy formats, so documents and files saved by those programs are compressed according to their file type. If you don't quite understand, don't worry; there will be more information later on file types and compression methods.
Most people know that when you compress a file it will no doubt become smaller than the original, but repeatedly compressing the file will not shrink it to nothing; in fact, the size would usually begin to increase. Files and data streams more often than not contain more information than is required for a particular purpose. For example, a picture could have more detail than your eye can distinguish even when reproduced at its largest size. Likewise, an audio file doesn't need a lot of fine detail during a very loud passage. The idea of lossy compression is to let the file lose the parts of its information which cannot be noticed by human perception, so that the file can be made smaller and yet still be of decent sound or visual quality, without you noticing a difference from the original. Lossy compression techniques are hard to develop, as closely matching a file to human perception is a complex task. The ideal lossy file has a compressed quality perceived as close to the original as possible, with as much digital information as possible removed. At other times, however, perceptible loss of quality is considered a fair trade for the amount of reduced data.
In comparison, the advantage of lossy methods over lossless methods is that in many cases a lossy method can produce a much smaller compressed file than any lossless method, whilst still maintaining a decent perceived quality of the original. Lossy files, being smaller, can be downloaded or streamed faster, although the compromise is in the quality of the file. Audio can often be compressed at a 10:1 ratio with no noticeable loss of quality; video can be compressed immensely, with very little visible loss of quality at around a 300:1 ratio. Still images are often compressed to 1/10th of their original size, like audio, although on closer inspection the quality loss in an image is more noticeable than in audio.
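To make those ratios concrete, here is a quick sketch with hypothetical file sizes (the 50 MB and 3 GB figures are made up, purely to illustrate the 10:1 and 300:1 ratios quoted above):

```python
# Hypothetical file sizes, purely to illustrate the ratios quoted above.
original_audio = 50_000_000           # a 50 MB uncompressed recording
lossy_audio = original_audio // 10    # 10:1 audio compression -> 5 MB

original_video = 3_000_000_000        # a 3 GB raw video clip
lossy_video = original_video // 300   # 300:1 video compression -> 10 MB

print(lossy_audio, lossy_video)
```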
Lossy compression is designed around the idiosyncrasies of human perception. The method must take into account features such as the fact that the human eye can only see certain frequencies of light and distinguish up to around ten million colours, so compression can remove the colours and light frequencies we humans cannot perceive.
Lossless compression represents your data more concisely and perfectly by using logical and statistical patterns to encode the file in fewer bits. For instance, within a text file the letter 'e' is much more common than the letter 'z', and the probability that the letter 'q' will be followed by the letter 'z' is very small; a lossless encoder exploits patterns like these. Lossless files retain full quality but compress less, which means the file will require more storage space. Lossy files lose quality in visual or audio data but can be compressed to smaller sizes, and therefore require less storage, leaving the storage device more space for other files. Also remember: lossy compression is a permanent process, which means the chances of regaining lost data are slim, whereas lossless compression is reversible, allowing all the data to be recovered later (on uncompressing).
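Here is a quick sketch of lossless compression in action, using Python's built-in zlib module (the repetitive sample text is purely illustrative): the file shrinks, and decompressing gives back every byte exactly.

```python
import zlib

# Repetitive sample data: statistical patterns like this are what
# lossless encoders exploit.
text = b"the quick brown fox " * 100

packed = zlib.compress(text)
print(len(text), len(packed))           # the compressed form is far smaller

# Nothing was lost: decompressing restores the data byte for byte.
assert zlib.decompress(packed) == text
```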
Lossy files are easier to compress than lossless files. Lossy methods accept that some data loss is necessary for higher compression, whereas lossless compression will often fail to shrink a file much, as a lack of patterns restricts what a lossless method can compress. Neither method is simply better than the other; each has its own advantages.
Still keeping up? There’s plenty more to come…
RASTER and VECTOR
RASTER
In computer graphics, raster images are data structures representing a rectangular grid of pixels (the individual pixels usually being square, as mentioned earlier). These pixels are points of colour which are viewable through a computer monitor, on paper, or on any other display medium. Raster is a format of digital images. Raster graphics, however, are resolution dependent, which means they cannot be scaled up beyond their resolution without loss of quality.
Image editors such as; Painter, Photoshop, MS Paint, and GIMP, are raster based programs which revolve around editing pixels. When an image is rendered in a raster-based image editor, the image is composed of millions of pixels. The raster image editor works by manipulating each pixel individually.
Below is an example of a raster image also known as a bitmap image, which is the most common format used with raster images.
It is clear to see that when the original smiley face (top left corner) is enlarged, the individual pixels which form the image appear as clear squares. By zooming in closer to a pixel, the colours can be analysed too. Raster images, or bitmap images, represent digital images and can be saved in various formats including .gif, .tiff, .jpg, .png and .bmp.
Raster images translate into pixels on the screen, as in the image shown previously. When a raster image is created, the image is converted into pixels which are assigned specific colours using a model such as RGB (red, green, blue): each channel runs from 0 up to 255, so 0,0,0 is the value for black and 255,255,255 is white, which is valuable for photographs with colour shading. Most pixel-based image editors use the RGB or CMYK colour models. (At this point these terms may be unfamiliar to you, but I assure you, there is more information to come on those shortly.)
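As a sketch, a raster image can be pictured as a grid of RGB triples, each channel running 0 to 255 (the tiny 3x3 'image' here is hypothetical):

```python
# A tiny raster image sketched as a grid of RGB triples, each channel 0-255.
BLACK, WHITE, RED = (0, 0, 0), (255, 255, 255), (255, 0, 0)

image = [
    [BLACK, WHITE, BLACK],
    [WHITE, RED,   WHITE],
    [BLACK, WHITE, BLACK],
]

# Editing a raster image means changing pixels one at a time:
image[1][1] = (0, 255, 0)   # repaint the centre pixel green
print(image[1][1])
```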
When a normal raster image is viewed, the pixels are generally smoothed out so that the photograph or drawing becomes more visually appealing. However, when the image is blown up, the pixels become apparent. Often this can be beneficial, as the artist can have a clear view of the colour of each individual pixel, although it is not always desired. How soon this happens depends on the resolution of the image: some raster images can be enlarged quite vastly before they begin to distort, whilst others break down much more quickly. The lower the resolution, the smaller the digital file, and vice versa. Professionals who work with computer graphics therefore have to find a balance between resolution and file size. So, in simpler terms: when a raster image is enlarged, the image becomes 'pixelated', whether it has been manually increased in size or the resolution has been altered.
Raster file formats
.bmp
BMP is a file format which is also known as a bitmap image or file. Another abbreviation for this is DIB, which means device-independent bitmap. Simply, BMP is a file format which is used to store bitmap digital images. (Just to avoid confusion here: bitmap is the format of a raster image. There are several file extensions which a bitmap image can have, such as .gif, .tiff, .jpg and of course .bmp. BMP is the standard format in which Windows stores bitmapped digital images.)
BMP is a file format created by Microsoft; other platforms which support the .bmp file are OS/2 and MS-DOS. BMP was designed to contain bitmaps of different colour resolutions, so that files could be exchanged between different devices more easily (hence the name DIB). DIB/BMP is the external version of the bitmap format, which allows the file to be transported in metafiles.
A BMP file consists mainly of four parts/sections of data. The first is the file header, which contains all the general information you need to know about the BMP file. The second is the bitmap header which contains detailed information about the bitmap image. The third section is a colour palette which defines all the colours in the indexed colour bitmaps. The fourth and final section contains the actual image, pixel by pixel.
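As a rough sketch of that layout, here is a hypothetical 1x1 pixel, 24-bit BMP (one blue pixel, no colour palette needed at 24-bit) built and read back with Python's standard struct module:

```python
import struct

# Build a minimal hypothetical BMP: 14-byte file header, 40-byte bitmap
# (DIB) header, then the pixel data (rows padded to 4 bytes).
pixel_data = b"\xff\x00\x00\x00"   # one BGR pixel (blue) + 1 padding byte
dib_header = struct.pack("<IiiHHIIiiII",
                         40, 1, 1, 1, 24, 0, len(pixel_data), 2835, 2835, 0, 0)
file_size = 14 + len(dib_header) + len(pixel_data)
file_header = struct.pack("<2sIHHI", b"BM", file_size, 0, 0, 14 + len(dib_header))
bmp = file_header + dib_header + pixel_data

# Read the headers back -- everything is little-endian ("Intel format"):
signature, size, _, _, offset = struct.unpack("<2sIHHI", bmp[:14])
_, width, height = struct.unpack("<Iii", bmp[14:26])
print(signature, size, offset, width, height)
```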
Normally, bitmap images saved with a higher number of bits per pixel are greater quality than those saved with a lower number, except of course the files will be larger.
The integers within a BMP file are stored in little-endian format, sometimes called the Intel format, because the BMP format was originally created for computers with Intel processors. This file type (both .bmp and .dib) is very popular because of its usability and wide software support. Imagine that you have an image on your computer which you wish to send to a friend but are unsure what program they use: by sending a .bmp file, it is almost certain to work regardless.
Popular bitmap editing programs are:
• Microsoft Paint
• Adobe Photoshop
• Corel Photo-Paint
• Corel Paint Shop Pro
• The GIMP
.gif
GIF stands for graphics interchange format and is used for transmission of images across data networks. GIF was originally introduced by CompuServe Information Service in 1987; it is supported by all web browsers making it a very popular format.
GIF displays a maximum of 256 colours, which makes it an unsuitable file format for pictures with continuous tones. GIF is generally a very standard format which is best for images such as clip art, black and white line drawings, or images with blocks of solid colour.
Uses of GIF include images with transparent areas, colours in detached areas, buttons on websites, small computer icons, images containing text, and simple animations.
The compression used for GIF is known as LZW, or Lempel-Ziv-Welch, which is a lossless compression method. Using this technique on GIF files reduces the file size without diminishing the visual quality. The GIF file is stream-based, which means it is composed of blocks (data packets) which each contain separate information. Like BMP, GIF is divided into blocks and then sub-blocks containing additional data for the file. These blocks are necessary so that the method for reproducing the image is defined.
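The dictionary-building idea behind LZW can be sketched in a few lines of Python. This toy encoder is simplified (real GIF encoders pack variable-width codes into bytes, handle clear codes and so on, which are skipped here), but it shows the lossless principle: repeated strings collapse into single dictionary codes.

```python
def lzw_compress(data: str) -> list:
    """A minimal sketch of LZW: build a dictionary of strings seen so far
    and emit one code per longest known match (no information is lost)."""
    table = {chr(i): i for i in range(256)}  # start with all single bytes
    current, codes = "", []
    for ch in data:
        if current + ch in table:
            current += ch                     # extend the current match
        else:
            codes.append(table[current])      # emit code for longest match
            table[current + ch] = len(table)  # learn the new string
            current = ch
    if current:
        codes.append(table[current])
    return codes

print(lzw_compress("ABABABAB"))  # 8 characters shrink to 5 codes
```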
There are two versions of GIF: 87a, which was introduced in 1987, and 89a, which (as the name suggests) was introduced in 1989. The newer version, 89a, adds a format used specifically for animations, which is actually a sequence of images in a single GIF file. If you're thinking about using GIF, it is better to have a system which supports both versions, as the older one is still in common use; a system which only supports one version will not support the other.
GIF is supported by virtually all software applications that read and write graphical image data, and can be used on various platforms such as MS-DOS, Macintosh, UNIX and Amiga. The maximum image size of a GIF file is 64K x 64K pixels.
.jpg
JPG is also an image file format, commonly used for photographs and many other complex still images on the internet. JPEG stands for Joint Photographic Experts Group, the committee which created the standard; JPEG files use the .jpg, .jpeg or .jpe file extensions.
JPEG uses lossy compression, and so, when saving as JPG, you decide how much loss you are willing to introduce, trading off file size against image quality.
This format is perfect for photo images which must be very small files, such as for websites or email, and JPG is the standard on digital camera memory cards. The file is wonderfully small and can be compressed to only 1/10th of the original data, which is highly beneficial when slow connections and modems are involved. Although, as you would expect, such fantastic compression comes with a compromise: JPG, as mentioned, is lossy, so image quality is lost when JPG data is compressed and saved, and unfortunately the lost data cannot be recovered. The compression technique involves splitting the image into minute pixel blocks, which are processed to achieve the desired amount of compression. These images can be compressed by up to 90%; to do this, the format discards information which is not important to the appearance of the image. JPEG images can contain up to 16.7 million colours, because JPEG was designed specifically for highly detailed, photorealistic images. JPG is applied to rendered images and digitised photographs. Since compression makes the files smaller, they are fast to transmit and download, or in other words to send or receive over the internet. JPEG images have smoother variations of colour and tone than many other file formats, and JPEG data can even be embedded in other file formats such as TIFF. These features have made JPEG popular. So, simply: JPEG is known for digitally precise, full-colour images.
However, while JPEG is great for photographic images, it does not support transparency or animation, and it is not suitable for rough drafts, line drawings, screen captures and other image types with sharply defined lines. The reason is that JPEG compression tends to distort such images.
JPEG files can be viewed by a variety of downloadable software on both the PC and Mac. Such as:
· Photoshop
· Elements
· Photo impact
· Photo Deluxe
· Paint shop Pro
· Corel
.png
PNG, which stands for Portable Network Graphics, is a lossless-compression image file format. It was designed as a superior replacement for the GIF format, as the LZW data compression scheme used in GIF was patented by Unisys; PNG does not require a patent licence.
PNG has many advantages over GIF, such as alpha transparency, gamma correction and two-dimensional interlacing. Alpha transparency supports anti-aliasing, which allows you to make rounded and curved images which will look very appealing on any background, not just white. Gamma correction helps images appear consistent; in other formats, images are often displayed darker or lighter when there are gamma differences between systems. For those of you who do not know, gamma correction is the process whereby contrast values (light and dark) within an image are optimised. The two-dimensional interlacing you get with PNG allows an image to load progressively in both the horizontal and vertical directions.
Another important feature of PNG is its compression method, which is lossless. PNG uses a method called DEFLATE, helped along by filtering: the colour of each pixel is predicted from neighbouring pixels in the image, and the prediction is then subtracted from the actual colour. By filtering the image this way before compressing, the compression can reduce a file's size substantially, sometimes by up to 60%.
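As a sketch of that prediction idea (a simplified, hypothetical version of PNG's "Sub" filter applied to one row of greyscale values):

```python
# PNG-style prediction filtering ("Sub" filter): store each value's
# difference from its left neighbour instead of the raw value. Smooth
# gradients become runs of small repeated numbers, which DEFLATE
# then compresses very well.
row = [10, 12, 14, 16, 18, 20]   # a smooth greyscale gradient

filtered = [row[0]] + [(row[i] - row[i - 1]) % 256 for i in range(1, len(row))]
print(filtered)   # far more repetitive than the original row

# The filter is reversible, so no information is lost:
restored = [filtered[0]]
for delta in filtered[1:]:
    restored.append((restored[-1] + delta) % 256)
assert restored == row
```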
Colour modes such as RGB and greyscale are supported by PNG, giving three image types in all. As PNG is a lossless file format, saving, opening and then resaving does not degrade the colour quality of the image. PNG seems pretty good so far! Although there is one limitation: PNG, unlike GIF, does not support animation.
A PNG (pronounced “ping”) file can be opened with a picture viewer or the Web browser on Windows and Mac OS, and with GIMP (The GNU Image Manipulation Program) on Linux.
There are various software programs which support PNG files, such as;
· Photoshop
· Flash
· Corel paint shop pro
· Fireworks
.tiff
TIFF, or rather Tagged Image File Format, is a file format for storing pixel image data. It was originally developed by Aldus and Microsoft Corp. in the 1980s but is now owned by Adobe Systems.
The file can use the '.tiff' or '.tif' suffix. TIFF is one of the most widely supported formats across all platforms, including Windows, Mac and Unix. TIFF has a flexible and adaptable format with many image-processing applications. It was specifically designed for monitors, scanners and printers, as it can contain information such as colourimetry calibration and gamut tables (all very technical), which basically means it is very useful for remote sensing and multispectral applications. TIFF contains tags, or information fields, which state certain data such as size and copyright information. It can also decompose images into tiles rather than scan lines, which means very large images can be handled and compressed efficiently.
TIFF uses lossless compression, which allows editing and resaving without any loss of quality. This makes TIFF ideal for archiving images. TIFF can also support a wide range of data types and is great for scientific data (it supports signed and unsigned integers, complex data and floating-point values): great for you science fanatics!
TIFF has a multi-page feature which allows you to place multiple images into a single file also.
Software programs which support this format include:
· Brava Reader
· Pic Viewer
· IrfanView32
· Nico's Viewer
· Xnview
· Plugins
· AlternaTIFF
· Graphic Converter
VECTOR
The alternative to a raster image is a vector image, which uses mathematical formulas to draw a picture. A vector image defines points and the paths that connect them to form a digital representation of an image. Because mathematics can be scaled easily, a vector image can be enlarged and still have smooth edges. However, vector images are limited: they are most suitable for typography, line art and illustrations. A raster image remains the best choice for a photograph or shaded drawing. This is all good for your arty types!
So, in more detail, vector graphics is the use of 'geometric primitives', which is a complex term for things such as points, lines, curves and shapes such as polygons. Those shapes are all based on mathematical expressions which make up an image through computer graphics. Vector graphics is based on more than just straight lines; vectors use paths and strokes which lead the shape through certain points, also known as control points. If you studied mathematical vectors at school, it will not be difficult for you to interpret this method. Essentially, these control points, made specifically for the image you wish to create, form definite positions on the X and Y axes of the work plane, or document. Each point individually includes information which locates the next point in the work space, giving the direction of the vector. This then makes up the path that the lines will follow to make the shape of the desired image. Each path made (or shape) can be assigned a colour, outline thickness and fill. File size is barely affected by the size of the drawing, because all the information for the shape remains as instructions: the file simply explains how to draw the vector.
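That is why vector images scale so cleanly: enlarging one just multiplies the stored coordinates. Here's a tiny Python sketch (the triangle and the `scale` helper are purely illustrative):

```python
# A vector shape sketched as a list of control points on the x/y plane.
triangle = [(0, 0), (4, 0), (2, 3)]

def scale(points, factor):
    """Enlarge a shape by multiplying every coordinate -- no pixels, no blur."""
    return [(x * factor, y * factor) for x, y in points]

print(scale(triangle, 10))   # still exact points, however big the factor
```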
Both vector and raster tools can each be best practice for different reasons, which will be explained further later on. The two formats also differ in their limitations and advantages, and by acknowledging the relationship between them you can use the tools efficiently and effectively: certain elements from one and certain elements from the other can be combined to form one image. This is possible because a vector file can be converted into a raster image (bitmap), which has to be done before display so that it can be ported between systems.
Vector graphics describes digital images through a sequence of commands or mathematical equations, covering both 2-dimensional and 3-dimensional images, so a vector file can be referred to as a geometric file. Adobe Illustrator and CorelDRAW files are vector files. Vector files are more easily modified than raster images, so if a vector file has been converted to raster, it can be reconverted to vector for further development. Which is great, so then you get the best of both worlds!
Animation images are often created as vector files, e.g. with Shockwave Flash, which allows you to create both 2-D and 3-D animations which are sent to a requestor as vector files and then rasterised 'on the fly' as they arrive.
Vector Vs Raster graphics
So, let’s have a brief recap on both computer graphics formats: raster images are composed strictly of pixels and vector images are composed strictly of paths. Already there is a definite difference in format. Raster images are known as bitmap images and require a grid of pixels, where each individual pixel can be a different colour or shade. Vector graphics use mathematical relations between points and the paths connecting them to describe the image. Pretty straightforward so far.
Below is an image which represents a bitmap image, and to the right of that is another image which represents a vector graphic. These images are shown four times larger than their actual size, in order to exaggerate the edges of a bitmap image becoming jagged as its scale increases.
Bitmap Image:

Vector Graphic:

The larger you display a bitmap image, the rougher and more jagged it appears, whilst a vector image remains considerably smooth at any size. Adobe Photoshop is a common example of a raster/bitmap based program which can display this pixelation when an image is enlarged. PostScript and TrueType fonts, by contrast, always appear smooth because they are vector based. The jagged appearance of a bitmap image can be partially overcome with the use of “anti-aliasing”: the application of smooth, subtle transitions in the pixels along the edges, which minimises the jagged effect. The left image below demonstrates this, while a scalable vector image will always appear smooth (right image).
Anti-Aliased Bitmap Image:

Smooth Vector Image:

Bitmap images require higher resolutions and anti-aliasing for a smoother appearance. Vector based images are mathematically described, so they appear smooth at all times: enlarging the image simply recalculates the equations that form its paths, so it remains smooth at any size or resolution.
Bitmaps are best for photographs and images with subtle shading, whilst vector graphics are suitable for line art or detailed illustrations. Wherever possible you should use the vector format for those kinds of illustrations, and bitmaps for more complex photos or images with non-uniform shading.
For you arty types, programs such as Deneba Canvas, Adobe Illustrator, CorelDRAW or Macromedia FreeHand should be your first choice.
Today's graphic artists have to be masters of both skills: editing and illustration. Adobe Photoshop incorporates vector based paths which can be exported as native vector files. Drawing programs, such as Illustrator and FreeHand, are best suited for type and strong graphics where sharp edges are required, because when an image is resized a new mathematical calculation is made and quality is maintained.
Each method is different in its own right, and therefore has different uses and benefits. Before choosing, consider the applications of both and relate them to what you wish to create; often it is beneficial to incorporate both into your work.
BIT DEPTH
A monitor is made up of many millions of pixels arranged in a grid. Monitors also have a bit depth, which controls how many greys or colours each pixel is capable of displaying. Bit depth determines how many unique colours are available in an image's colour palette, in terms of 0's and 1's, or in other words 'bits'. These bits specify each colour. An image does not have to use all of these colours, but any colour it needs must come from this palette. The higher the bit depth, the more shades of colour are available. This is because each colour has a specific combination or sequence of 0's and 1's, such as '0110', and more bits means more possible combinations of colour.
Each individual digital image pixel has a colour, and this colour is created through some combination of red, green and blue (the primary colours). These primary colours are also referred to as 'colour channels', which vary in intensity. The colour intensity is specified by the bit depth of each channel, known as the 'bits per channel'. Adding up the bits across all the colour channels gives the bits per pixel (bpp), and this sum determines the total colours available for each pixel.
So think about your digital camera for a moment; images from digital cameras have 8 bits per channel, which in other words means there are eight 0's and 1's available per channel. That gives a total of 256 combinations, and therefore 256 different intensity values for each colour. When all three primary colour channels are combined in each pixel, that totals 16.7 million different colours, or 'true colour' as it is formally known. This is also referred to as 24 bit depth, or 24 bits per pixel. If you're wondering how this is calculated, it's quite simple: ( 3 x 8 = 24 ), simple maths. The three comes from the three colour channels, the 8 comes from the 8-bit colour, and of course the 24 is the outcome.
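If you want to check that arithmetic for yourself, here is the whole calculation as a few lines of Python:

```python
# Checking the bit-depth arithmetic from the text.
values_per_channel = 2 ** 8              # 8 bits -> 256 intensity levels
total_colours = values_per_channel ** 3  # three channels combined per pixel
bits_per_pixel = 3 * 8                   # 3 channels x 8 bits = 24 bit depth

print(values_per_channel)  # 256
print(total_colours)       # 16777216 -> the "16.7 million" true colours
print(bits_per_pixel)      # 24
```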
Here you can see the differences between an 8 bit image, a 16 bit image, and a 24 bit image.
This is an example of the 24 bit colour, which you can see has a larger spectrum of colour and clearly would impact the colour quality of your image immensely. With such a large range of colour and intensity the 24 bit colour is clearly the best.
This is an example of 16 bit colour, which in comparison to 24 bit shows little difference. The quality of an image with 16 bit colour would also be highly detailed, but of course would not rate as highly as 24 bit.
This is an example of 8 bit colour. When you compare the 8 and 24 bit colour spectrums, it is very clear to see the difference: the choices are more limited and the transition from colour to colour is less smooth. An 8 bit colour image would be of a lower quality than a 16 or 24 bit one, and the difference would more than likely be noticeable. Why, you may ask, does it stop at 24 bit? Images of this quality simply do not require any more colour: the human eye is estimated to distinguish up to 10 million colours, while at 24 bit 16.7 million colours are present. That means many of those colours would not be noticed or recognised by humans anyway, so a bit depth higher than 24 would be unnecessary. Simple!
Below is a table which refers to the different bit depths available and the number of colours available and also the more TECHNICAL names for those patterns.

COLOUR SPACE
RGB
RGB is a term which has been mentioned several times throughout this article, and now you will become a master of colour space. RGB, as you may already know, stands for Red, Green, Blue; the primary colours. This is a colour model whereby red, green, and blue lights are added together in different combinations to produce various other colours. In a standard RGB monitor, for example, each pixel has three dots within it: a red, a blue, and a green dot. These dots are lit in different combinations to formulate the image. Think back to when you were a child and used to paint or colour, and would, no doubt, mix colours together; different combinations of colour can be formed, though the chances of mixing an exact colour by hand are more than unlikely. The term quite noticeably comes from the initials of the three colours, because they are the three important primary colours. Below is an example in which RGB is combined to demonstrate the range of colour which can be produced:
The main use of the RGB colour model is for display in electronic systems, such as computers, televisions and now photography. RGB is referred to as a 'device dependent' colour model, which means different devices create RGB values differently, so the colours a device reproduces from the same RGB values differ from device to device. This is because different manufacturers build their colour elements differently, so each responds to the R, G and B levels differently. Without colour management, devices will not reproduce the same colour from the same colour code.
Various input devices use the standard RGB setup, for example colour TV and video cameras, image scanners and digital cameras. Output devices which use the standard RGB setup include LCD/plasma televisions, computer displays, mobile phone displays, video projectors and even multicolour LCD displays.
RGB is the standard colour model for many of those mentioned, however, there is a more complex colour model which is known as CMYK, which many of you may or may not have heard of. This in more ways than one, is different to the RGB model and soon you will know why and how…
CMYK
CMYK stands for Cyan, Magenta, Yellow and BlacK. This process involves the mixing of paints, dyes, inks and other natural colourants in order to create a larger range and spectrum of colours; so basically, mixing colours to make other colours. The process, however, is a little more complex: it involves the absorption of some colours and the reflection of others. Below is an example of a CMYK colour chart which demonstrates the mixture of these colours.
The process works by absorbing part of the light source. So let's say we are talking about a printer here. The ink on the page absorbs certain colours of light and reflects the rest, and the reflected light is the colour you see.
Below is an example of RGB and CMYK colour models in comparison to each other.
In the image you can see that RGB starts from three primary colours and combines to form secondary colours such as magenta and cyan, whereas CMYK combines to form red, green and blue. RGB combined forms white, whereas CMYK combined forms black. RGB, as you may have noticed, is strictly a term which applies to the colour scale of a display screen, whereas the CMYK colour model applies to printing.
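A common textbook way to translate between the two models is sketched below; note this naive conversion (the function name `rgb_to_cmyk` is just for illustration) ignores the colour profiles real printers use, so treat it as a rough guide only:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion. Real printing uses
    device colour profiles, so this is illustrative, not exact."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)   # pure black: all ink comes from K
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    k = min(c, m, y)                  # pull the shared grey into the K channel
    return tuple(round((v - k) / (1 - k), 3) for v in (c, m, y)) + (round(k, 3),)

print(rgb_to_cmyk(255, 0, 0))   # pure red -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0, 0, 0))     # black   -> (0.0, 0.0, 0.0, 1.0)
```

Notice how screen red becomes a mix of magenta and yellow ink, exactly the relationship the colour charts above show.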
The image below, here, demonstrates the process of the four colour combinations on the same image.
Here you can see the colours CMY combined in the image; the image is quite detailed, however without the black it appears more washed out. The image on the right has black (K) added to it.
Before these colour methods/models were created, a more basic style was formulated known as greyscale.
GREYSCALE
Greyscale is a term which describes each pixel in a digital image with a single value; or, in simpler terms, it makes the image look black and white. Quite obviously, hence the title, the term composes an image in shades and intensities of grey. These images vary from black to white, black being the strongest intensity and white the weakest.
Greyscale images developed from one bit black and white images, which in terms of computer images means the image is composed of only two colours, black and white; the more technical name for these is 'bilevel' or 'binary' images. Greyscale images, as you would imagine, have many shades of grey in between which compose the image. This is also known as monochromatic, which means the removal of any chromatic variation, or rather colour.
Below is an example of the greyscale:
Greyscale in computing stores the pixels in binary. Some old greyscale monitors offered a 4 bit display, or 16 different shades of grey; today, however, greyscale is used for visual display and print, such as photographs. Modern displays use 8 bits, as mentioned earlier, which means there are up to 256 intensities or variations of grey. The more modern scale provides much more precision, which is convenient for programming.
Regardless of which pixel depth is used, binary representations assume that 0 is black and that the maximum possible value is white. So, for example, on a computer monitor with an 8 bit display the highest value, 255, would be white.
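Here is a short sketch of how a colour pixel collapses to a single grey value (the helper name `to_grey` is made up for this example; the weights are the common luminance ones, chosen because the eye is most sensitive to green):

```python
def to_grey(r, g, b):
    """Convert an RGB pixel (0-255 per channel) to one grey value
    using the common luminance weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_grey(0, 0, 0))        # 0   -> black, the weakest value
print(to_grey(255, 255, 255))  # 255 -> white, the 8-bit maximum
print(to_grey(255, 0, 0))      # 76  -> pure red becomes a darkish grey
```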
Below is a representation of greyscale and the effect of the colour on the image of the parrot in comparison to the RGB colour model.
YUV
YUV colour space is a slightly more complex model, in which Y determines the brightness of a colour, known as luminance or luma, while U and V determine the colour itself, known as the chroma. So YUV refers to the luminance and chrominance of an image. Below is a table which explains the range of Y, U and V.

The best part of YUV is that you can remove the U and V components to give you a greyscale image! In fact, the human eye responds to brightness more than it does to colour, so lossy compression formats relieve the image of half or even more of the colour samples in U and V to make the file smaller. So in image compression the U and V components are sacrificed, simply because our eyes are incapable of noticing the difference. Clever stuff!
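A rough sketch of the conversion is below; the coefficients are the common BT.601-style textbook ones (broadcast standards vary slightly), and `rgb_to_yuv` is just an illustrative name. Notice that for a grey pixel U and V come out as zero, which is exactly why dropping them leaves a perfectly good greyscale image:

```python
def rgb_to_yuv(r, g, b):
    """BT.601-style RGB to YUV: Y carries the brightness (luma),
    U and V carry the colour (chroma). Illustrative coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # roughly "how blue is this, beyond its brightness?"
    v = 0.877 * (r - y)   # roughly "how red is this, beyond its brightness?"
    return (round(y, 1), round(u, 1), round(v, 1))

# For a grey pixel, U and V vanish: all the information is in Y.
print(rgb_to_yuv(255, 255, 255))  # (255.0, 0.0, 0.0)
```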
Here is an image where the YUV has been separated into separate colour formats.

This image is the final combination: YUV together to form the image.
This image is just Y, as you can see without UV, forms an image based on greyscale.
This image is just U, where the colour information for the image is shared with V; in U the image contains a more blue cast.
In this V image the colours are more yellow. When these colours are combined together- blue, yellow, and grey they formulate the top image.
And so finally this brings us to image capture, the last segment of this article...
IMAGE CAPTURE
Image capture comes in the forms of scanners and digital cameras. There are, however, other forms, including data storage, which can often be interpreted as a form of image capture.
Scanners work by analysing an image and processing it to form a digital copy of a hard copy image. Text capture, known as OCR (optical character recognition), also allows scanned text to be saved as files on your computer. You can then alter the image if required and print, store, or use it (i.e. on a webpage).

The main part of a scanner is known as a CCD (charge-coupled device), which is a collection of tiny light sensitive diodes, in other words circuits which channel electric currents. The CCD converts light (photons) into electrical charge (electrons); the diodes that do this are also called photosites.
Photosites are sensitive to light, the brighter the light on the photosite the greater the electrical charge.
When a document is placed onto the scanner and the lid is closed, scanning begins. A lamp within the scanner is lit, which illuminates the document; closing the lid stops outside light from interfering, so that the detail on the paper can be picked up accurately by the scanner. The lamp used is often a cold cathode fluorescent lamp, or CCFL. Within the scanner there are usually two mirrors (in some scanners three), which reflect the image of the illuminated document onto a lens. The lens focuses the image through a filter onto the CCD and splits it into three small sections. Complex, I know. The scanner then combines the three parts of the CCD array into one single full colour image, identical to the document scanned.
Like anything else technological, scanners vary in resolution and sharpness; a typical scanner has a true hardware resolution of around 300 x 300 dots per inch (dpi). The sharpness depends on the quality of the optics used to make the lens and the brightness of the light source. If a xenon lamp is used rather than a CCFL, along with a higher quality lens, the image will be much sharper and clearer than with a standard lamp and basic lens.
Scanners are capable of producing a bit depth of 24 bits, which creates a true colour image; as mentioned previously, this is the highest quality of image. Some support a bit depth of 30 or 36 bits, however these usually still output in 24 bit anyway.
DIGITAL CAMERA
Digital images are essentially composed of a long string of 1's and 0's which represent all the coloured dots, or pixels, which make up the image you see.
To get your image into digital format you can do one of the following: use a standard conventional camera to take a photograph, process the film onto photographic paper, then scan the print with a digital scanner; or simply use a digital camera and, if you like, a USB cable to transfer the image from camera to computer.
Digital cameras, unlike conventional cameras, do not use film. A digital camera has a sensor which converts light into electrical charges. This is done through the use of a CCD chip, as seen below:

The CCD chip is essentially a sensor which converts the light passing through the camera into electrons. These electrons carry an individual charge, or value, which is read by the CCD; each photosite's charge represents one cell in the image. There is also what is known as a CMOS (complementary metal oxide semiconductor) sensor, which is an alternative to the CCD. Both convert light into electrons.
CCDs work by transporting the charge built up at each photosite across the chip to be read at one corner of the array. An analogue-to-digital converter (ADC) then turns each photosite's charge into a digital pixel value: it measures the amount of charge and converts it into binary form (0s and 1s).
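As a rough sketch of what that ADC step does (the function `adc_8bit` and the 50,000-electron capacity are made-up illustrative numbers, not any real sensor's specification):

```python
def adc_8bit(charge, full_well):
    """Sketch of an analogue-to-digital converter: map a photosite's
    charge (0 .. full_well electrons) onto an 8-bit value, 0-255."""
    level = round(charge / full_well * 255)
    return max(0, min(255, level))   # a saturated photosite clips to white

print(adc_8bit(0, 50000))       # 0   -> no light, black
print(adc_8bit(25000, 50000))   # 128 -> half the charge, mid grey
print(adc_8bit(50000, 50000))   # 255 -> full charge, white
```

The brighter the light on the photosite, the greater the charge, and so the higher the digital value that ends up in the image file.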
CMOS sensors use transistors (semiconductor devices) at each pixel to amplify the charge from the electrons and move it along wires.
Here is a chart I have collated to compare both sensor chips, so that you can decide which you think is best for you:

Both sensor types have different advantages over one another; however, CCD chips are more common in digital cameras than CMOS chips. But it is clear to see the differences between the two, some more pronounced than others.
CAMERA RESOLUTION
Cameras capture different amounts of detail, and this is called camera resolution. As mentioned earlier, resolution is measured in pixels. The more pixels a camera has, the more detail it can capture in an image, and therefore the larger the image can be displayed or printed without appearing distorted (blurry).
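This connects back to the megapixel talk at the start of the article: a megapixel count is just the sensor's width times its height, in millions. A quick sketch (the 3000 x 2000 sensor is a hypothetical example, not a specific camera):

```python
def megapixels(width, height):
    """Total pixels on the sensor, expressed in millions (megapixels)."""
    return width * height / 1_000_000

# A hypothetical sensor that is 3000 pixels wide and 2000 pixels tall:
print(megapixels(3000, 2000))   # 6.0 -> marketed as a "6 megapixel" camera
```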
Some camera resolutions include:

Digital camera sensors, however, are incapable of directly capturing all the colours of the spectrum. To compensate, the sensor captures the incoming light and separates it into the three primary colours (red, green and blue), which can then be mixed to recreate the full spectrum. Cameras use different methods for this colour separation; higher quality cameras use three separate sensors, each with a different filter. Look at it this way: the light which enters a camera is like water flowing through a pipe. A beam splitter sits within this pipe and splits equal amounts of water into three other pipes. Each sensor then responds to just one of the three primary colours. Very complex!
Which brings us nicely to the end, at which point we need to store the images we capture. As you are well aware, we covered several file formats and compression methods earlier, which connect nicely to the end of this section.
So I hope you enjoyed reading this article on digital graphics. If you are not an expert by now then cameras don’t take photographs! Please do join us next week for our next article on Interactive Media.
Goodbye.