Why hello there! Welcome back to YourHookUpToMedia's guide to everything about media. In this article we will tell you everything you need to know about web animation, including all the practical uses and all the nitty-gritty ins and outs, to help you understand how to create your very own animation.
Alright-y then, let's break it down a little. Here's what we will be discussing:
Uses of Web Animation: banner ads; linear and interactive animations; promotion; information; entertainment
History of Animation: from the very earliest devices right up to current motion capture methods
Animation: the optical illusion of motion; stop motion; computer generation
Digital Animation: vector; raster; compression; file formats
Web Animation Software: authoring tools and players.
USES OF WEB ANIMATION
Web animation shows up in many different forms, including banner ads, linear and interactive animations, promotion, information and entertainment.
So, let's take a closer look at how each of these works and how web animation is relevant to them. By the end of this article you will, without a doubt, be an expert on web animation and able to incorporate it into your own websites. FUN!
Banner Advertisements
So what exactly is a banner ad? Banner ads are, as you could imagine, a form of advertising delivered over the Internet, or rather from a World Wide Web server. Essentially, a banner ad embeds a specific advertisement (generally promoting a company or service) into a webpage. They were made specifically to attract 'traffic' to a website by linking directly to it. I am sure you will have come across one before: they are the little advertisements, more often than not animated, that redirect you to the advertiser's webpage when you click them. Each time the ad is displayed counts as an 'impression', and each click that takes you through to the advertiser's site is a 'click-through'.
Banner ads are constructed from an image such as a GIF or Flash file, which is often animated and may include sound or video to maximise attention and presence. I mean, how many times have you been surfing the internet and come across a really noisy advertisement on a webpage? Yes, it is annoying, but credit where it's due: it attracted your attention, right?
These images are formed in a high-aspect-ratio shape (wide and short, or tall and narrow), which, as you could imagine, is where the reference to banners comes from. Banners are often found on webpages with interesting content, such as newspaper articles or opinion pieces. (No doubt there is one on this webpage.)
So I bet you're wondering how they make money from this advertisement, right? In fact it is pretty straightforward: a method called CPC, where affiliates earn money on a Cost Per Click basis. For every single click on the advertisement, the affiliate earns money. For those of you who don't know, an affiliate is a person or organisation officially attached to a larger body, i.e. a bigger company. Banner ads can also be referred to as web banners, which reflects the fact that they are loaded into a web browser. Essentially, when an advertiser scans their log files and discovers that a web user has visited the advertiser's site by clicking the banner, the advertiser sends the content provider a small amount of money, usually around 5-10 pence.
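To make the cost-per-click idea concrete, here is a minimal sketch in TypeScript. The impression count, click-through rate and pence-per-click figure are made-up numbers purely for illustration, not real advertising rates.

```typescript
// Rough CPC revenue estimate for a banner ad.
// All of the numbers below are invented for illustration.
function estimateCpcRevenue(
  impressions: number,
  clickThroughRate: number,
  pencePerClick: number,
): number {
  const clicks = impressions * clickThroughRate; // e.g. 1% of viewers click
  return (clicks * pencePerClick) / 100;         // convert pence to pounds
}

// 50,000 impressions at a 1% click-through rate and 7p per click ≈ £35.
console.log(estimateCpcRevenue(50_000, 0.01, 7).toFixed(2));
```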
Web banners work just like any other advertisement: notifying customers of the product or service and presenting reasons why the customer should choose it, or in other words making people want to buy the product or use the service.
As you most probably already know, these forms of advertisement can be highly annoying because they distract you from the web page's actual content. Then again, that is how they make the money that they do. A problem these advertisers now face is that proxy servers and browser add-ons block pop-ups and advertisements. Examples are Privoxy (a filtering proxy) and extensions such as Adblock Plus for Mozilla Firefox, AdThwart for Google Chrome and IE7Pro for Internet Explorer, all of which were created to prevent such disturbances.
Linear and Interactive Advertising
Linear and interactive advertising can also be referred to as linear and non-linear, which describes how the advert plays out. Linear means there is no interaction between you, the viewer, and the advert itself; the advertisement simply plays out as intended. Non-linear, or interactive, animations allow users to alter the outcome of the animation. Examples of non-linear animations are the games embedded in websites using Flash or Java technology; websites of this kind include www.miniclip.com or even www.facebook.com.
The image below shows miniclip.com. Here you can see a webpage surrounded by interactive animated Flash games; some are re-released versions of classic games, and many are new and updated regularly.
Facebook (which I am sure you all know about very well) has embedded games developed by third parties for Facebook users, some of which are displayed in the image below.
Games are so much fun!
Another type of animation is informative animations...
Informative Animations
As you could imagine, informative animations provide information, typically of an educational kind. Generally such animations are cartoons developed for classrooms or for educational television programmes, but they can also be used by anyone who wishes to present information to others, of all ages, in a new, clear and accessible yet informative way. Many companies incorporate such educational animations into their meetings. Creators produce educational animations in a range of formats, from short clips to full-length presentations. Many of these cartoons are accessible over the internet and can be purchased or streamed from company websites and video-sharing services. The image below shows a company called makemegenius.com, which creates such educational animations for schools, particularly for subjects such as the sciences.
Educational animations present information broken into chunks that are easy to follow and understand. Animations often tell a story with characters who interact with each other and, in doing so, deliver the information. Some are much simpler and pair a basic spoken explanation with animation that illustrates the words, such as 'what is the water cycle'. Many animations include features such as questions and quizzes, and occasionally 'mnemonic' devices such as rhymes, songs and other memory aids, to make the information more memorable. In schools, educational animations are popular because they give students a more dynamic learning experience. Many students prefer a visual and audio representation of information, and by adding a different style of teaching into the mix, all children, including those who prefer textbook learning, are catered for.
Promotional Animations
Promotional animations are, as you would imagine, videos that promote a service or product through either type of animation (linear or non-linear). This form of animated advertising shows its content in finer detail than most other advertisements. An example is the image below.
Promotional and educational animations can also give people the ability to interact with the animation, thanks to today's fantastic technology. You can alter the settings manually and observe how the changes affect the image; this works almost like an experiment, giving the user the chance to question why the changes occur and build their own understanding, which makes the process highly effective.
Entertainment
In this context, the term entertainment refers to animations created primarily to entertain rather than to educate or promote. The majority of web animation is developed with entertainment in mind, whether that is a short animation or a fully animated game. There are websites across the world dedicated to hosting interactive Flash gaming or short animated films and clips, and animators often use sites aimed at their type of animation to promote their work. Publications involved with these animations keep growing in popularity, simply because of how easy they are to upload and view. Because the animations can be reached so easily, their popularity can spread (and if you're lucky, they go viral). Examples are the cartoon series 'Cat Face' and Simon's Cat on YouTube:
So enough about the types of animation; let's take a closer look at the processes of animation and how they came to be what they are today…
HISTORY OF ANIMATION
Back in the day… the early inventions of animation were designed as novelties for children and small party amusements. Such animation devices include the Phenakistoscope, the Zoetrope, the good old Flip Book and the Praxinoscope.
So without further ado…
The Phenakistoscope
Many attempts at this type of animation had been made before, but it wasn't until 1829 that the principles behind the phenakistoscope began to be established, by a Belgian chap named Joseph Plateau. Plateau invented the working phenakistoscope, officially, in 1832. The phenakistoscope was often referred to 'in those days' as the magic disc.
The phenakistoscope created the illusion of motion through the persistence of vision. Below is an example of such magic…
Pretty cool, right?
The phenakistoscope uses a spinning disc attached vertically on a handle. Around the center of the disc a series of pictures was drawn corresponding to frames of the animation; around its circumference were a series of radial slits. The user would spin the disc and look through the moving slits at the disc's reflection in a mirror.
The scanning of the slits across the reflected images kept them from simply blurring together, so that the user would see a rapid succession of images with the appearance of a motion picture. A variant had two discs, one with slits and one with pictures; this was slightly more unwieldy but needed no mirror. The phenakistoscope could only practically be used by one person at a time, and it was only popular for about two years before newer technology superseded it.
The Zoetrope
The zoetrope was officially produced in 1833 by a man named William George Horner. However, the earliest version of such a device is thought to have been created in China by a man named Ting Huan in (here's the scary part) 180 AD! It was made from translucent paper hung over a lamp; the rising air turned vanes at the top, from which hung panels with painted pictures, and these appeared to move provided the device was spun at the correct speed.
The modern zoetrope's creator, the British mathematician William Horner, named his adaptation the Daedalum ('the wheel of the devil' - scary). The invention was not popular until the 1860s, when Milton Bradley and William F. Lincoln named their toy the zoetrope ('the wheel of life' - ahhh).
The Flip Book
The flip book dates from 1868, when it was patented by John Barnes Linnett; it was actually called the kineograph ('moving picture'). The flip book was the first animation to use a linear sequence of images - in other words, a book containing a series of images that vary gradually from page to page, so that when the pages are flipped (turned quickly) the images appear animated, suggesting motion or change. Flip books are generally illustrated books aimed at entertaining children, though they are occasionally marketed to adults using photographs instead of drawings. It should be noted that in those days such flip books would have been created manually - hand drawn. Flip books can be books in their own right, or incorporated into other books and magazines, in the corners and such. Take a look at the flip book below:
(woooo! Flip book-y)
The German film
pioneer, Max Skladanowsky, first exhibited his serial photographic images in
flip book form in 1894, as he and his brother Emil did not develop their own
film projector until the following year. In 1894, Herman Casler invented a
mechanized form of flip book called the Mutoscope, which mounted the pages on a
central rotating cylinder rather than binding them in a book. The mutoscope
remained a popular attraction through the mid-20th century, appearing as
coin-operated machines in penny arcades and amusement parks.
The Praxinoscope
This is known as the successor to the zoetrope. It was invented in France in 1877 by Charles-Émile Reynaud. Like the zoetrope, a strip of sequential images was placed around the inner surface of a spinning cylinder. The praxinoscope improved on the zoetrope simply by replacing its narrow slits with an inner circle of mirrors, so that the reflection of the pictures appeared much smoother and steadier as the wheel spun. Looking into the mirrors of a praxinoscope, you would see a more rapid progression of images, brighter and less distorted than what the zoetrope had offered. Below is a video of the praxinoscope in action:
In 1889 Reynaud created the Théâtre Optique (optical theatre), an improved version of the praxinoscope capable of projecting images onto a screen from a longer roll of pictures. This allowed him to present longer hand-drawn animated cartoons to larger audiences. It remained the state of the art until the Lumière brothers created the Cinématographe shortly afterwards, which applied the same principle except that photographs could also be used. The newer invention was much favoured over the praxinoscope, although essentially they presented the same idea.
Animated Films
The first animation created for the cinema was by Charles-Émile Reynaud (our praxinoscope inventor). On 28 October 1892, at the Musée Grévin in Paris, he exhibited animations consisting of around 500 frames played in loops, using his Théâtre Optique system (imagine a film projector).
One of his first ever animated films was named 'Pauvre Pierrot'.
It is one of the first animated films ever made, and alongside Le Clown et ses chiens and Un bon bock it was exhibited in October 1892 when Charles-Émile Reynaud opened his Théâtre Optique at the Musée Grévin. It was the first film to demonstrate the Théâtre Optique system developed by Reynaud in 1888, and is also believed to be the first use of film perforations. The combined performance of all three films was known as Pantomimes Lumineuses. These were the first animated pictures publicly exhibited by means of picture bands, and Reynaud gave the whole presentation himself, manipulating the images by hand.
Cel Animation
A man named John R. Bray was, in 1914, the first to apply for patents on various animation techniques, in particular the process of printing the backgrounds of animations. It was also in 1914 that Earl Hurd applied for the patent behind cel animation; the technique involves drawing the moving parts of the animation on clear celluloid sheets ('cels') placed over the corresponding background. The image below explains how the method is composed:
The whole concept of celluloid sheets was a huge breakthrough in the evolution of traditional animation. It allows certain parts of each frame to be repeated from frame to frame, which is great - it saves time and money, as far less redrawing is required. For instance, if two objects or characters are present within a particular scene but only one is moving, the stationary character can be drawn into the background, while the moving character is developed on a series of celluloid sheets.
Anything within a given scene that does not move simply does not need to be animated, so it does not need to be drawn onto cels; any object that stays still for the entire scene can be drawn as part of the background. If, however, an object or character does move at any point within the scene, it should be drawn onto cels, ready for movement.
Cel paints were manufactured in shaded versions of each colour, because every extra layer of celluloid between the background and the camera slightly dulls whatever lies beneath it. For instance, if a painted cel sits beneath another cel, its colours appear darker because the cel over it dulls them. The idea is to make the colours brighter in case they are covered by another cel, so this range of shades of each colour is extremely useful. In 1934 a man named Ub Iwerks created a 'multiplane' camera (sounds cool, doesn't it?); this gadget allowed the camera to film multiple separate layers of cels at once, giving 'birth' to a real sense of depth - the first taste of 3D in animation!
Here is an example of how it works:
Computer Generated Imagery (CGI) was the next advance in the evolution of animation. Ken Knowlton, working at Bell Laboratories in the early 1960s, began developing computer software specifically for producing animated films. Work like his led to the trusted CGI software - Flash, Blender, 3ds Max - that we use today. Most computer animation software has been developed for job-specific uses; Alias software and MenV (Pixar's in-house animation system) are examples of such tools. Ed Catmull, at the University of Utah, produced one of the first smooth, shaded computer animations in 1972: a short film of a computer-animated hand. Fred Parke, also of the University of Utah, pushed the technology further by producing the first fully computer-generated facial animation.
The first full-length feature film to make extensive use of such software was the original Tron, in 1982. Here is the original trailer; you can see the scenes that were created using this software:
Many people believe the film was created mostly by computer; in fact, only 15-20 minutes of the material required CGI. The graphics were created partly by Triple-I using a computational engine called the F-1, or 'Super Foonly' (that name is much more fun!). The technology has been adapted since then and was used for the remake released in December 2010.
This is the trailer of the newer version:
Morphing technology advanced CGI even further after the 1988 film Willow. Using grid-warping technology, an image is transferred onto a computer, a grid overlay is produced, and the grid is then deformed to warp one image into another. An example is Indiana Jones and the Last Crusade (the villain's death scene):
The effects were created by Industrial Light & Magic using grid-warping technology, developed by Tom Brigham and Doug Smythe, who were later recognised by the Academy of Motion Picture Arts and Sciences (AMPAS) for the technique. Morphing technology was also used in Star Trek: Deep Space Nine to create Odo:
CGI was an emerging technology at this point and was used in films more and more frequently. However, it wasn't until 1993 that it was fully established, in Jurassic Park, where the dinosaurs were modelled and animated on a computer.
In 1995 the first ever full-length feature film composed entirely of CGI arrived: the one and only Toy Story. Each character was first sculpted in clay or modelled from computer-drawn diagrams before the animated design was built. Once the animators have a model, articulation, motion and controls have to be coded, to allow the character to move in many ways, including walking, talking and jumping. Woody was the most complex to develop; he required 723 motion controls - 212 for his face and 58 for his mouth alone. Synchronising the actors' voices to the characters took one week for every eight seconds of animation, to detail the characters' mouths and expressions. Once that was completed, the animators compiled all the scenes and developed new storyboards with the computer-animated characters.
Finally, shading, lighting and visual effects were added. In total, 300 computer processors were required to render the film into its final masterpiece! The result: 800,000 machine hours, 114,240 frames of animation, and two to fifteen hours of rendering per frame.
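Just for fun, here is a quick back-of-envelope check of those figures as a TypeScript sketch, using only the numbers quoted above.

```typescript
// Sanity-check the Toy Story rendering figures quoted above.
const frames = 114_240;        // frames of animation
const hoursPerFrameLow = 2;    // quoted rendering time per frame (low end)
const hoursPerFrameHigh = 15;  // quoted rendering time per frame (high end)
const processors = 300;
const quotedMachineHours = 800_000;

const totalLow = frames * hoursPerFrameLow;    // ~228,000 machine hours
const totalHigh = frames * hoursPerFrameHigh;  // ~1,710,000 machine hours
// The quoted 800,000 machine hours sits comfortably inside that range,
// and spread across 300 processors it is roughly 111 days of non-stop rendering.
const wallClockDays = quotedMachineHours / processors / 24;
console.log({ totalLow, totalHigh, wallClockDays: Math.round(wallClockDays) });
```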
Easy? - I think not.
Motion Capture
Motion capture technology has developed dramatically alongside the vast improvements in CGI. Motion capture quite obviously captures movement, but it also translates that information onto a digital model: movements of the human body and limbs, and even the most difficult kind of capture, subtle facial expressions, otherwise known as performance capture. Animation data like this is mapped onto 3D models so that the model performs the same actions as the actor. For the oldest fans of 1978's The Lord of the Rings, that animated film was completed with a related, earlier idea, rotoscoping: footage of an actor's motion was used as a frame-by-frame guide for the hand-drawn animated character.
Sounds like a lot of work, I know!
The camera movement itself can also be motion captured, in that a virtual camera can pan, tilt and even dolly through the scene, driven by a camera operator while the actor performs. The system does not only capture the camera and props; it also captures the actor's performance. In doing so, the digitally generated characters, images and sets take on the same perspective as the camera itself. A computer, of course, is required to process all the data and display the movements of the actor in the chosen areas of the screen, matched to all the objects within the set. Recovering the camera movement from the captured footage is referred to as 'match moving' or simply 'camera tracking'.
Motion capture is most commonly implemented with optical systems, usually based on passive markers.
Optical Systems
Optical systems work by utilising data captured from image sensors: they triangulate the 3D position of a subject seen by two or more calibrated cameras with overlapping views. Data acquisition is most commonly implemented using special markers attached to an actor, often worn on specially made costumes, although with recent technological advances some systems can now generate accurate data by tracking surface features identified dynamically for each subject. For a larger number of actors, or a larger capture area, the solution is simply to add more cameras.
These systems produce data with three degrees of freedom for each marker, so rotational information must be inferred from the relative orientation of three or more markers. In simpler terms, the combined positions of, say, the shoulder, elbow and wrist markers give you the angle of the elbow.
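As a rough illustration of that last point, here is a minimal TypeScript sketch that derives a joint angle from three marker positions; the coordinates, names and units are invented for the example.

```typescript
// Derive a joint angle from three optical markers (shoulder, elbow, wrist).
// The marker coordinates below are made up purely for illustration.
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const len = (a: Vec3): number => Math.sqrt(dot(a, a));

function jointAngleDegrees(shoulder: Vec3, elbow: Vec3, wrist: Vec3): number {
  const upperArm = sub(shoulder, elbow); // vector from elbow towards shoulder
  const forearm = sub(wrist, elbow);     // vector from elbow towards wrist
  const cos = dot(upperArm, forearm) / (len(upperArm) * len(forearm));
  return (Math.acos(Math.min(1, Math.max(-1, cos))) * 180) / Math.PI;
}

// A right angle at the elbow:
console.log(jointAngleDegrees([0, 30, 0], [0, 0, 0], [25, 0, 0])); // ≈ 90
```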
Passive marker systems use a retroreflective material that bounces light generated near the camera lens straight back at it. The camera's threshold is adjusted so that everything else - other lights, skin, fabric - is rejected and only the bright reflective markers register. The centre of each marker is then estimated as a position within the captured 2D image, using the greyscale values of the surrounding pixels to achieve sub-pixel accuracy.
To calibrate the cameras, an object with markers attached at known positions is placed in front of the lenses; the distortion of each camera is measured and corrected so that later marker-to-camera calculations are precise. A 3D fix is obtained once two or more calibrated cameras can see the same marker. A full system will generally use anything from 6 to 24 cameras; the extra cameras provide complete coverage around the subject or subjects.
Most marker-based and magnetic systems require the performer to wear wires or electronic equipment; passive systems do not. Instead, hundreds of small rubber balls covered with reflective tape (which must be replaced periodically) serve as markers. They are normally attached directly to the skin, or velcroed to a suit designed specifically for motion capture.
Most recently, the technique was used in the popular film Avatar, where motion capture was so integral to the production that the native aliens, the Na'vi, were created from actors performing in motion capture suits and markers, combined with animated graphics.
The Animation Process
Persistence of vision
Persistence of vision is the phenomenon of the eye by which an afterimage is thought to persist for approximately one twenty-fifth of a second on the retina.
The eye, however, is not a camera; vision is not as simple as light passing through a lens. The brain has to make sense of the visual data provided by the eye, and from this it creates a coherent picture of reality.
Persistence of vision is still the accepted term for this phenomenon in the realm of cinema history and theory. In the early days of film it was determined that a frame rate of less than 16 frames per second caused the mind to see flashing images. Audiences can still interpret motion at rates as low as ten frames per second or slower (as in a flip book), but the flicker caused by the shutter of a film projector becomes distracting below that 16-frame threshold.
Modern theatrical film runs at 24 frames a second. This is the case for both physical film and digital cinema systems.
In physical film systems the frame rate and the flicker rate must be distinguished: the film frame is 'pulled down' into place while a shutter conceals it, which minimises blurring, so there is at least one flicker per frame. In hand-drawn animation, moving characters are usually shot 'on twos', meaning one drawing is held for two frames of film; at 24 frames per second, only 12 drawings are needed per second. The image update rate is low, yet the result looks fine for most subjects. However, when a character has to perform quick movements, it is necessary to go from twos to ones - one drawing per frame - because only 12 drawings a second would be too slow to convey the motion successfully.
“A blend of the two techniques keeps the eye
fooled without unnecessary production cost”
Most children's early-morning cartoons are shot 'on threes' or even 'on fours' (one drawing held for three or four frames), translating to only 8 or 6 drawings per second. This means less work is required and production costs are much lower - the aim being 'as cheap as possible'!
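If you want to see the arithmetic, here is a tiny TypeScript sketch of how the shooting ratio translates into drawings per second at the standard 24 frames per second.

```typescript
// How many drawings per second does each shooting ratio need at 24 fps?
const filmFps = 24;

function drawingsPerSecond(framesPerDrawing: number): number {
  return filmFps / framesPerDrawing;
}

console.log(drawingsPerSecond(1)); // "on ones"   -> 24 drawings/sec (fast action)
console.log(drawingsPerSecond(2)); // "on twos"   -> 12 drawings/sec (typical hand-drawn work)
console.log(drawingsPerSecond(3)); // "on threes" ->  8 drawings/sec (cheap TV cartoons)
console.log(drawingsPerSecond(4)); // "on fours"  ->  6 drawings/sec
```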
Stop Motion Animation
Stop motion is a technique for making a physically manipulated object appear to move on its own. The object is moved in small increments between individually photographed frames, creating the illusion of movement when the series of frames is played as a continuous sequence. Clay figures are often used in stop motion because they are easy to reposition; stop motion animation using clay is called clay animation, or 'claymation'.
A very good example of stop-motion clay animation is Wallace and Gromit. After detailed storyboarding and the construction of sets and plasticine models, the film is shot one frame at a time, moving the character models slightly between frames to give the impression of movement in the final film. In common with other animation techniques, the stop-motion animation in Wallace and Gromit may duplicate frames where there is little motion, and in action scenes multiple exposures per frame are sometimes used to produce a faux motion blur. Because a second of film constitutes 24 separate frames, even a short half-hour film like A Close Shave takes a great deal of time to animate well. General quotes on the speed of animating a Wallace and Gromit film put the filming rate at around 30 frames per day - i.e. just over one second of film photographed for each day of production. The Curse of the Were-Rabbit is an example of how long this technique takes to produce quality animation: it took five years to make! And let's be honest here, the film was no 'Godfather'.
However, the time, patience and skill involved deserve particular credit here - and the results are pretty cool.
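Here is a quick sanity check of those production figures as a TypeScript sketch, assuming the 24 frames per second and roughly 30 frames shot per day quoted above.

```typescript
// How long does a half-hour stop-motion film take at ~30 frames shot per day?
const fps = 24;
const runtimeMinutes = 30;
const framesPerDay = 30;

const totalFrames = fps * 60 * runtimeMinutes;   // 43,200 frames
const shootingDays = totalFrames / framesPerDay; // 1,440 days of shooting
console.log(`${totalFrames} frames ≈ ${(shootingDays / 365).toFixed(1)} years of shooting`);
// ≈ 3.9 years - which is why a film like The Curse of the Were-Rabbit took about five years.
```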
As with Park's previous films, the special effects achieved within the limitations of the stop motion technique were quite pioneering and ambitious. In A Close Shave, for example, consider the soap suds in the window cleaning scene, and the projectile globs of porridge in Wallace's house. There was even an explosion in "The Auto Chef", part of the Cracking Contraptions shorts. Some effects (particularly fire, smoke, and floating bunnies) in The Curse of the Were-Rabbit proved impossible to do in stop motion and so were rendered on computer.
Computer Generated Animations
Frame Rates
This is the measure of the number of frames displayed per second of animation in order to create the illusion of motion. The higher the frame rate, the smoother the motion, because there are more frames per second to show the transition from point A to point B. In most applications, 24 frames are used for every second of animation.
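As a rough illustration, here is a minimal TypeScript sketch (assuming it runs in a browser) of how an animation loop can be locked to a chosen frame rate; the function and variable names are my own, not from any particular library.

```typescript
// Drive an animation at a fixed frame rate in the browser.
// drawFrame is a placeholder for whatever actually renders each frame.
function runAtFrameRate(fps: number, drawFrame: (frameIndex: number) => void): void {
  const frameDuration = 1000 / fps; // e.g. 24 fps -> ~41.7 ms per frame
  let frameIndex = 0;
  let lastTime = performance.now();

  function tick(now: number): void {
    if (now - lastTime >= frameDuration) {
      drawFrame(frameIndex++);
      lastTime = now;
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}

runAtFrameRate(24, (i) => console.log(`frame ${i}`));
```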
Key Frame
A key frame in animation and filmmaking is a
drawing that defines the starting and ending points of any smooth transition.
They are called "frames" because their position in time is measured
in frames on a strip of film.
A sequence of key frames defines which movement the viewer will see, whereas the position of the key frames on the film, video or animation defines the timing of the movement. Because only two or three key frames over the span of a second do not create the illusion of movement, the remaining frames are filled with in-betweens (or 'tweens').
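To make that concrete, here is a minimal TypeScript sketch of how key frames might be represented and looked up; the structure and names are hypothetical rather than taken from any particular animation package.

```typescript
// A key frame pins down a value (here, an x position) at a particular frame number.
interface Keyframe {
  frame: number; // position in time, measured in frames
  value: number; // the pose/property at that frame
}

// Find the key frames immediately before and after a given frame,
// so the in-between frames can be generated between them.
function surroundingKeyframes(keys: Keyframe[], frame: number): [Keyframe, Keyframe] {
  const sorted = [...keys].sort((a, b) => a.frame - b.frame);
  let before = sorted[0];
  let after = sorted[sorted.length - 1];
  for (const k of sorted) {
    if (k.frame <= frame) before = k;
    if (k.frame >= frame) { after = k; break; }
  }
  return [before, after];
}

const keys: Keyframe[] = [{ frame: 0, value: 0 }, { frame: 24, value: 100 }];
console.log(surroundingKeyframes(keys, 12)); // the two key frames spanning frame 12
```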
Onion Skinning
This is a 2D computer graphics term for a
technique used in creating animated cartoons and editing movies to see several
frames at once. This way, the animator or editor can make decisions on how to
create or change an image based on the previous image in the sequence.
Traditional animation incorporated the concept of onion skinning by drawing the individual frames on thin onion-skin paper over a light source or lightbox. Animators would place the previous and next drawings directly beneath the current drawing, so they could draw the 'in-between' frames needed for smooth motion. In computer software - tools such as Flash, Photoshop and After Effects are commonly used - the onion-skinning effect is achieved by making the previous frames faintly translucent and layering them on top of each other. The effect can also be used to create a blurring effect, such as in The Matrix when characters dodge bullets.
Here is a video showing the effect in action: http://www.youtube.com/watch?v=Kc4cBiSXoCs
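For the curious, here is a minimal TypeScript sketch of the onion-skinning idea on an HTML canvas, assuming the frames are already available as images; all of the names here are my own.

```typescript
// Onion skinning on an HTML canvas: draw the previous few frames faintly
// underneath the current one. `frames` is assumed to be an array of images.
function drawWithOnionSkin(
  ctx: CanvasRenderingContext2D,
  frames: HTMLImageElement[],
  current: number,
  ghostCount = 2,
): void {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

  // Older frames are drawn first and are progressively more transparent.
  for (let i = ghostCount; i >= 1; i--) {
    const idx = current - i;
    if (idx < 0) continue;
    ctx.globalAlpha = 0.15 * (ghostCount - i + 1);
    ctx.drawImage(frames[idx], 0, 0);
  }

  ctx.globalAlpha = 1; // current frame at full opacity
  ctx.drawImage(frames[current], 0, 0);
}
```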
Tweening
The term 'tween' is short for 'in-between', and refers to the creation of the successive frames of animation between key frames. The term generally applies to Flash shape tweening and motion tweening, whereby the user defines two key frames (generally the start of the motion and the end of the motion) and the software automatically generates the in-between frames, either by morphing one shape into another over a period of time or by moving a shape from one point to another over a period of time. Today's 3D animation programs follow the same tweening method.
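Here is a minimal TypeScript sketch of a motion tween between two key frames, using simple linear interpolation; a real authoring tool would also let you swap in an easing curve.

```typescript
// Motion tween: generate the in-between positions between two key frames.
interface Point { x: number; y: number; }

function tween(start: Point, end: Point, frameCount: number): Point[] {
  const inbetweens: Point[] = [];
  for (let f = 0; f <= frameCount; f++) {
    const t = f / frameCount; // 0 at the first key frame, 1 at the last
    inbetweens.push({
      x: start.x + (end.x - start.x) * t, // linear interpolation; an easing
      y: start.y + (end.y - start.y) * t, // curve could be applied to t here
    });
  }
  return inbetweens;
}

// Positions for a one-second move (24 frames) from (0,0) to (120,80).
console.log(tween({ x: 0, y: 0 }, { x: 120, y: 80 }, 24));
```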
DIGITAL ANIMATION
Vector Animation
Vector animation refers to animation in which the art and motion are controlled by vectors rather than by stored pixels. This allows for cleaner, smoother animation, because the pictures are drawn and resized from mathematical values rather than from stored pixel values. A very popular vector-based animation program is Macromedia's Flash.
In this video you will notice the strong cartoony effect these types of animation portray; this is due to the accentuation of the vector shapes.
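To see why vector art scales so cleanly, here is a tiny TypeScript sketch: the shape is just a list of coordinates, so resizing it is pure arithmetic rather than stretching pixels.

```typescript
// A vector shape is just coordinates; resizing is pure arithmetic,
// so nothing pixelates no matter how far you zoom in.
interface Pt { x: number; y: number; }

function scaleShape(points: Pt[], factor: number): Pt[] {
  return points.map((p) => ({ x: p.x * factor, y: p.y * factor }));
}

const triangle: Pt[] = [{ x: 0, y: 0 }, { x: 10, y: 0 }, { x: 5, y: 8 }];
console.log(scaleShape(triangle, 100)); // 100x bigger, still perfectly crisp
```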
Raster Animation
Raster-based images and animation frames are composed of individual pixels. Pixels, for those of you who aren't familiar, each contain information about the colour and brightness of one particular little 'spot' of the image. This is similar to the idea of pointillism in painting, where the sum of the points makes up the whole image. Raster animation is used, in contrast to vector animation, for more realistic representations of images, as opposed to the stylised, anime-like look generated by vector graphics. Raster animation is also used to create animated logos and banners based on photos or drawings.
One of the problems with creating raster-based animation on a computer is the enormous amount of data and computing power involved. For example, a single frame of animation that is 400x300 pixels in size has a total of 120,000 pixels. Each of these pixels will use (depending on the colour scheme) eight to 48 bits, meaning each frame might take as many as 5.76 million bits. At that rate, a 14-frame-per-second animation lasting 20 minutes would come to nearly 97 billion bits (roughly 12 gigabytes) of raw information - and most films are longer than this, with a higher frame rate. Another major difficulty with raster-based animation and images is that they are not infinitely enlargeable: if you create a raster-based animation at a certain size (400x300, for example), you cannot enlarge it to any significant extent without losing resolution. Vector graphics do not have this problem.
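Here is that arithmetic as a small TypeScript sketch, using the figures from the paragraph above.

```typescript
// Reproduce the raster size arithmetic from the paragraph above.
const width = 400, height = 300;
const bitsPerPixel = 48; // the high end quoted above
const fps = 14;
const minutes = 20;

const bitsPerFrame = width * height * bitsPerPixel; // 5,760,000 bits
const totalFrames = fps * 60 * minutes;             // 16,800 frames
const totalBits = bitsPerFrame * totalFrames;       // ≈ 96.8 billion bits
console.log(`${(totalBits / 8 / 1024 ** 3).toFixed(1)} GiB uncompressed`); // ≈ 11.3 GiB
```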
Compression
Compression is a computerised technique for reducing the number of bits required to represent text, data or images, in order to save storage space or reduce transmission time. There are two main types of compression where animation is concerned:
- Colour compression
- Pixel compression (pruning)
Smaller techniques, such as eliminating comments and certain technical parameters, can also be used, but are not the usual choice.
Colour Compression
A .gif file can store from 2 to 256 different colours (1-bit to 8-bit colour); the more colours, the bigger the file. If you use significantly fewer than 256 colours, you can make the file smaller by compressing the colour palette. Take a bouncing-ball animation, for example: the image is drawn using only 4 colours (black, grey, green and red), but the global palette contained 256 entries, so the file was carrying 252 colours that aren't really needed. By compressing the palette from 256 colours (8-bit) down to only the colours actually used (2-bit in this case), the file was reduced from 2622 bytes to 1455 bytes - roughly a 45% reduction in size. Having a single global palette, rather than storing a local palette for each image, also saves space.
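A small TypeScript sketch of the idea: the number of bits needed per pixel follows directly from the palette size, and the byte counts in the comments are the ones quoted above.

```typescript
// Fewer palette entries means fewer bits per pixel index in a GIF.
function bitsPerPixelForPalette(colourCount: number): number {
  return Math.max(1, Math.ceil(Math.log2(colourCount)));
}

console.log(bitsPerPixelForPalette(256)); // 8 bits per pixel - the full global palette
console.log(bitsPerPixelForPalette(4));   // 2 bits per pixel - the bouncing-ball example
// Dropping from 8-bit to 2-bit indices (plus storing a much smaller palette)
// is what shrank the example file from 2622 bytes to 1455 bytes.
```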
Pixel compression
The second, and often more significant, way to reduce the file size is to prune out redundant pixels. Often only a small section of an image changes from frame to frame, so storing the same full image for every frame is a waste of space: all the non-moving parts get redrawn for each image along with the moving parts. How great would it be if your browser could download a smaller image covering just the section that is actually animating, and simply draw it over the rest of the static image? But wait… it can! That is the exact purpose of pixel compression. There are several techniques for doing it, such as the 'minimum bounding box' and 'difference' methods.
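Here is a minimal TypeScript sketch of the 'minimum bounding box' idea, assuming each frame is a simple grid of palette indices; the function name and frame representation are my own.

```typescript
// "Minimum bounding box" pruning: find the smallest rectangle of pixels that
// actually changed between two frames, so only that patch needs to be stored.
type Frame = number[][]; // frame[y][x] = palette index

function changedRegion(prev: Frame, next: Frame) {
  let minX = Infinity, minY = Infinity, maxX = -1, maxY = -1;
  for (let y = 0; y < next.length; y++) {
    for (let x = 0; x < next[y].length; x++) {
      if (prev[y][x] !== next[y][x]) {
        minX = Math.min(minX, x); maxX = Math.max(maxX, x);
        minY = Math.min(minY, y); maxY = Math.max(maxY, y);
      }
    }
  }
  // null means nothing changed, so the whole frame can be skipped.
  return maxX < 0 ? null : { x: minX, y: minY, w: maxX - minX + 1, h: maxY - minY + 1 };
}
```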
How amazing is technology?!
ANIMATION FILE FORMATS
.fla / .swf
Flash movies are published in the SWF format, traditionally called "ShockWave Flash" movies, 'Flash movies' or 'Flash applications'. These usually have a .swf file extension (the .fla extension belongs to the editable project file saved by the Flash authoring tool), and may be used in the form of a web-page plug-in, "played" in a standalone Flash Player, or incorporated into a self-executing Projector movie.
Flash Video files have a .flv file extension and are either used from within .swf files or played through an FLV-aware player such as VLC, or QuickTime and Windows Media Player with external codecs added.
The use of vector graphics combined with program code allows Flash files to be smaller, so streams use less bandwidth than the corresponding bitmaps or video clips. For content in a single format (such as just text, video or audio), other alternatives may provide better performance and consume less CPU power than the corresponding Flash movie, for example when using transparency or making large screen updates such as photographic or text fades.
SWF originally stood for "ShockWave Flash"; the expansion was later changed to "Small Web Format" to prevent confusion with Shockwave. SWF is a format for multimedia vector graphics, supported by ActionScript and Adobe Flash. The format originated with FutureWave Software, was later transferred to Macromedia, and finally came under the control of Adobe. SWF is the main format for animated vector graphics on the web, and it can also be used for programs and browser games through ActionScript.
SWF files can be created using Flash, Flash Builder, After Effects and many other Adobe products.
.svg
SVG files belong to a family of specifications for an XML-based file format that describes two-dimensional vector graphics, both static and dynamic (interactive or animated). SVG images are defined in XML text files, which means they can be searched, indexed, scripted and, if required, compressed. Since they are XML files, SVG images can be created and edited with any text editor, but drawing programs that support the SVG format are also available.
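Because an SVG image is just XML living in the page, it can be created and animated directly from script. Here is a minimal TypeScript sketch, assuming it runs in a browser; the sizes and colours are arbitrary.

```typescript
// Build an SVG circle from script and slide it across the drawing.
const SVG_NS = "http://www.w3.org/2000/svg";

const svg = document.createElementNS(SVG_NS, "svg");
svg.setAttribute("width", "200");
svg.setAttribute("height", "100");

const circle = document.createElementNS(SVG_NS, "circle");
circle.setAttribute("cy", "50");
circle.setAttribute("r", "20");
circle.setAttribute("fill", "teal");
svg.appendChild(circle);
document.body.appendChild(svg);

// Move the circle a little on every browser frame - vector animation in a few lines.
let x = 0;
function step(): void {
  x = (x + 2) % 200;
  circle.setAttribute("cx", String(x));
  requestAnimationFrame(step);
}
requestAnimationFrame(step);
```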
All major modern web browsers support and render SVG directly, with the exception of Microsoft Internet Explorer (IE); the Internet Explorer 9 beta supports the basic SVG feature set. Currently, support in browsers running under Android is also limited. SVG files allow for vector graphics, raster graphics and text. Graphical objects, including PNG and JPEG raster images, can be grouped, styled, transformed and composited into previously rendered objects. Unlike some other vector markup languages such as VML, SVG does not directly support separating drawing order from document order for overlapping objects.
.gif
GIF stands for Graphics Interchange Format and is used for the transmission of images across data networks. GIF was originally introduced by the CompuServe Information Service in 1987, and it is supported by all web browsers, making it a very popular format.
GIF displays a maximum of 256 colours, which makes it an unsuitable file format for pictures containing continuous tones. GIF is generally a very standard format which is best for images such as clip art, black-and-white line drawings, or images with solid blocks of colour.
Uses of GIF include images with transparent areas, colours in distinct areas, buttons on websites, small icons, images containing text, and simple animations. In animations, GIF allows a separate palette of up to 256 colours for each frame. The colour limitation makes the GIF format unsuitable for reproducing colour photographs and other images with continuous colour, but it is well suited to simpler images such as graphics or logos with solid areas of colour. GIF images are compressed using a lossless data compression technique, reducing the file size without degrading the visual quality.
WEB ANIMATION SOFTWARE
Authoring
Flash
The all-famous Adobe Flash Professional multimedia authoring program is used to create content for the Adobe Engagement Platform; this includes web applications, games, movies, content for mobile phones and other embedded devices, and so on. Adobe Flash Professional is the successor of FutureSplash Animator, a vector graphics and animation program released in May 1996.
FutureSplash Animator was developed by FutureWave Software, a small software company whose debut product was SmartSketch, a vector-based drawing program for pen-based computers. It wasn't until 1995 that the company decided animation capabilities would be their next step forward, combined with a vector-based platform for the World Wide Web; this later brought us FutureSplash Animator.
In December 1996 Macromedia acquired FutureWave, and FutureSplash Animator was re-branded and released as Macromedia Flash 1.0. When 2005 came around, Adobe Systems acquired Macromedia, and in 2007 it released Adobe Flash CS3 Professional, the next version of Macromedia Flash.
SWiSH
This is a flash creation tool that is commonly used to create interactive and cross-platform movies, animations, and presentations. It is developed and distributed by Swishzone.com Pty Ltd, based in Sydney, Australia. SWiSH Max primarily outputs to the .swf format, which is currently under control of Adobe Systems.
SWiSH Max is generally considered to be a simpler and less costly Flash creation tool in comparison with Adobe Flash. SWiSH Max does not support some Adobe Flash features such as ActionScript 3.0, shape tweens, and bitmap drawing capabilities. It does, however, include general Flash creation features such as vector drawing, motion tweens, and symbol editing. In addition, SWiSH Max incorporates a number of automated effects and transitions, which make building certain Flash elements such as buttons, advanced transition effects, and interactive Flash sites simpler. One drawback of SWiSH Max is its inability to open or save .fla files, which limits exchanges between other programs to final .swf files.
Adobe Director
Adobe Director (formerly Macromedia Director) is a multimedia application authoring platform created by Macromedia, now part of Adobe Systems. It allows users to build applications on a movie metaphor, with the user as the "director" of the movie. Originally designed for creating animation sequences, the addition of a powerful scripting language called Lingo made it a popular choice for creating CD-ROMs, standalone kiosks and web content using Adobe Shockwave. Adobe Director supports both 2D and 3D multimedia projects.
The differences between Director and Flash have been the subject of much discussion, especially in the Director development community. Extensibility is one of the main differences between the two, as are some of the sundry codecs that can be imported. Director has tended to be the larger of the two, but unfortunately that footprint has become part of its weakness.
Animation Players
Flash Player
The Adobe Flash Player is software for viewing animations and movies from within computer programs such as a web browser. Flash Player is a widely distributed proprietary multimedia and application player created by Macromedia and now developed and distributed by Adobe after its acquisition of the company. Flash Player runs SWF files, which can be created by the Adobe Flash authoring tool, by Adobe Flex or by a number of other Macromedia and third-party tools. Adobe Flash, or simply Flash, refers to both the multimedia authoring program and the Adobe Flash Player, written and distributed by Adobe, which uses vector and raster graphics, a native scripting language called ActionScript, and bidirectional streaming of video and audio.
Strictly speaking, Adobe Flash is the authoring environment and Flash Player is the virtual machine used to run the Flash files, so in other words “Flash" can mean the authoring environment, the player, or the application files.
The Flash Player was
originally designed to display 2-dimensional vector animation, but has since
become suitable for creating rich Internet applications and streaming video and
audio. It uses vector graphics to minimize file size and create files that save
bandwidth and loading time. Flash is a common format for games, animations, and
GUIs embedded into web pages.
The Flash Player is available as a plug-in for recent versions of web browsers (such as Mozilla Firefox, SeaMonkey, Opera, and Safari) on selected platforms. The plug-in is no longer required for Google Chrome, since Google integrated Flash support directly into the Chrome browser. Flash Player remains backwards compatible with content made for earlier versions.
Adobe Shockwave is a
multimedia platform used to add animation interactivity to web pages. It allows
Adobe Director Applications to be published on the Internet and viewed in a web
browser on any computer which has the Shockwave plug-in installed.
It was first developed by
Macromedia, and released in 1995 and was later acquired by Adobe Systems in
2005. Shockwave movies are authored in the Adobe Director
environment. While there is support for including Flash movies inside Shockwave
files, authors often choose the Shockwave Director combination over Flash
because it offers more features and more powerful tools.
QuickTime Player
Developed by Apple Inc., QuickTime is capable of handling, as you would expect, multiple formats of digital video, pictures, audio, panoramic images, interactivity and more. QuickTime is available for Mac OS Classic, Mac OS X and some Microsoft Windows operating systems. The newest version, QuickTime X, is only available on Mac OS X, where it comes integrated with the system. QuickTime for Windows is downloadable, often bundled with an iTunes download, but it can also be installed as a separate component. It is available for FREE for both Mac OS X and Windows operating systems - great stuff!
iTunes requires the QuickTime framework in order to provide certain features that are not available through the standard QuickTime Player: for example, iTunes exports audio in WAV, AIFF, MP3, AAC and Apple Lossless, and needs QuickTime to do so.
QuickTime supports many file formats, including many audio and picture formats. Some of the video formats supported by QuickTime are listed below:
- 3GP
- Animated GIF
- AVI (Audio Video Interleave)
- DV
- MPEG-1
- MPEG-4
- QuickTime Movie
- QuickTime VR
RealPlayer
RealPlayer is a closed-source, cross-platform media player created by RealNetworks. RealPlayer plays many multimedia formats, including MP3, MP4, QuickTime, Windows Media and multiple versions of RealAudio and RealVideo. In the earlier days, back when the internet was new, RealPlayer was a popular way of streaming media; more recently, Windows Media Player and QuickTime have overtaken that reputation. Until around 2007 the BBC website, most notably, required RealPlayer to watch its streamed video or listen to its streamed audio.
RealPlayer is also a capable media library, providing that all-important organisation of your media with track tagging and editing. The premium version provides an Audio Converter function enabling conversion between RealMedia, MP3, AAC, Windows Media, WAV and other formats.
RealPlayer provides what is known as LivePause, which allows you to pause a streamed video clip without stopping the buffering, so essentially the longer the pause, the more of the file has buffered (the more it has loaded). Version 11 for Windows and OS X also enables Video Download, which allows you to download Flash Video files from websites such as the all-famous YouTube. MP4 files can also be downloaded, but often require the premium version or the free versions of other players such as Winamp or VLC. Video Sharing is another component, letting you post your videos directly online, including direct uploads to Facebook, Twitter and MySpace from within the software.
RealPlayer can play a large number of formats, including the following:
- MPEG (.mpg / .mpeg / .mpe)
- AVI (.avi / .divx)
- Windows Media (.wma / .wmv)
- QuickTime (.mov / .qt)
- Adobe (.swf / .flv)
- DVD
And so we reach the end of our media blog trilogy; together we have conquered DIGITAL GRAPHICS, INTERACTIVE MEDIA and WEB ANIMATION. Surely you are all media geniuses by now! So, in conclusion, I sincerely hope you have all found this blog super informative and useful over the last few weeks.
This was your HOOK UP TO MEDIA!
Is this the end… forever?
No way!
The Media never sleeps… who knows what’s next? Stay tuned…