Author: Laurence D. Finston.
This copyright notice applies to the text and source code of this web site, and the graphics that appear on it. The software described in this text has its own copyright notice and license, which can be found in the distribution itself.
Copyright (C) 2003, 2004, 2005, 2006 The Free Software Foundation
Permission is granted to copy, distribute, and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of this license is included in the file COPYING.TXT
Last updated: April 29, 2006
Top |
Animation |
Full and Limited Animation |
Cels |
Rotoscoping |
Effects |
Fade |
Blur |
The Birth of a Dodecahedron |
Rotating Sphere |
Titles and Text Effects |
Contact |
Animation, with cartoons and puppets as well as computer animation, is my main interest. It's also one of the reasons I started writing GNU 3DLDF in the first place. Now that I've started to use it for making animations, I will be putting examples and explanations on this web page.
Cinematic film is projected at a rate of 24 frames per second (fps), which translates to 1440 frames per minute. Video, on the other hand, generally runs at a somewhat faster rate. Given the sheer number of images required, it is easy to see why artistic animation is measured in seconds or small numbers of minutes rather than hours.
In Germany, the United Kingdom, and possibly other European countries, video runs at 25 fps, or 1500 frames per minute. In the US, I believe it runs at 30 fps. The ppmtompeg program from the Netpbm package creates an MPEG-1 video stream with a frame rate of 23.976, 24, 25, 29.97, 30, 50, 59.94, or 60 frames per second.
An animation sequence where the image changes with every frame is said to be animated "on ones". Often, however, when the motion depicted isn't too fast, the images are displayed on two subsequent frames; this is called animating "on twos". If you have a video player with single-frame advance, you can step through an animated film frame-by-frame, which can be very instructive. If you do so with a video containing full animation, such as Hollywood cartoon animation from the 1930s or 1940s, you are likely to see predominantly animation on twos interspersed with short sequences of animation on ones.
When nothing moves in a scene for longer than 1/12 of a second (at 24 fps) or 2/25 of a second (at 25 fps), then, of course, the same image will appear on more than two subsequent frames. For example, for a title to appear for 5 seconds at 25 fps, there have to be 125 frames containing that image.
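The frame arithmetic above can be sketched in a few lines of Python (the function name is my own, for illustration):

```python
def frames_needed(duration_seconds, fps):
    """Number of identical frames required to hold an image
    on screen for the given duration at the given frame rate."""
    return round(duration_seconds * fps)

# A 5-second title at 25 fps requires 125 frames of the same image.
print(frames_needed(5, 25))        # 125
# A 1/12-second hold at 24 fps spans 2 frames ("on twos").
print(frames_needed(1 / 12, 24))   # 2
```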
The greater the number of frames per second, the smoother the animation can be. 24 or 25 frames per second is not really a lot. When I started watching videos with single-frame advance, I was surprised at how far apart the drawings were, especially during drastic action. One of the advantages of computer animation is that one is not limited to a single rate of advance.
During the 1920s and '30s, animation studios in the US, primarily in Hollywood and New York City (Fleischer Studios), made high-quality animated cartoons assembly-line fashion using lots of personnel. Many animation drawings were used, producing a convincing illusion of movement. This type of animation is called full animation.
Due to a number of factors, Hollywood-style cartoon animation began to decline around 1940. By around 1950, techniques of limited animation were well established in Hollywood animation. (By this time, Fleischer Studios had left New York for Florida. Soon thereafter, Max and Dave Fleischer lost control of the studio, which became Famous Studios. The name was apt---it became famous for making lousy cartoons.)
A typical limited animation technique is to have sets of mouths, eyes, noses, etc., which are placed on top of a head and photographed. In full animation, the whole head and all of its features would be redrawn. This allows the shape of the head to change depending on the expression, whereas in limited animation, the expressions will tend to look pasted-on.
There is no fixed borderline between full and limited animation, nor is there anything wrong with limited animation techniques in themselves. The reason a lot of limited animation looks so bad is that the studios were trying to imitate the look of full animation while making cartoons on the cheap.
In traditional cartoon animation, the animation drawings were transferred to cels, which were (nearly) transparent sheets of acetate. The name comes from "celluloid", and was therefore a misnomer, since they were never made from this material. I doubt that anyone mourns the passing of the cel. They were probably not very much fun to work with. Only certain inks would stick to them, and since they were only nearly transparent, different shades of ink had to be used, depending on the position of a cel in the stack of cels being photographed. This took careful planning. In addition, only about four cels could be stacked before the combined effect became too dark. (Cels are still in use for some purposes, though. Acrylic paints do stick to them.)
Combining images using ImageMagick's composite program is analogous to stacking cels, but, theoretically, at least, there is no limit on the number of images that can be combined. Nor is there any need to use different colors depending on the layer.
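The analogy with stacking cels can be sketched as a chain of composite invocations: each layer is combined with the result of the layers beneath it. This is a minimal illustration that only builds the command strings; the file names are hypothetical.

```python
def stacking_commands(layers, output="stack.png"):
    """Return a list of ImageMagick 'composite' invocations that
    stack the given layer files bottom-to-top, like cels on an
    animation stand.  Only builds the strings; it does not run them."""
    if len(layers) < 2:
        return []
    commands = []
    current = layers[0]
    for i, layer in enumerate(layers[1:], start=1):
        # Intermediate results go to temporary files; the last
        # composite writes the final output.
        result = output if i == len(layers) - 1 else f"tmp_{i}.png"
        commands.append(f"composite {layer} {current} {result}")
        current = result
    return commands

for cmd in stacking_commands(["background.png", "character.png", "effects.png"]):
    print(cmd)
```

Unlike a physical cel stack, nothing here limits the number of layers to four, and no layer needs its colors adjusted for its depth in the stack.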
Rotoscoping is another animation technique which can be simulated by using computer programs. The rotoscope (and rotograph) were tools for tracing outlines from live-action footage. There's nothing wrong with rotoscoping per se, but however you slice it, it's not really animation. I think it can be used creatively, but I find that I lose interest in cartoons where it's used too much, and too obviously as a way of cutting costs.
Incidentally, rotoscoping is not the worst debasement of the art of animation. I remember as a child watching the cartoons Space Angel and Clutch Cargo, which were notable for having virtually no movement, except for the mouths, which were real mouths from live-action footage---an effect of singular hideousness.
L. Nobre G., the author of the FEATPOST and ANEEMATE packages, has experimented with using the programs potrace, autotrace, and pstoedit to make MetaPost code from raster images. See his Vectorization page. This technique can be used to make vector images from artwork, which wouldn't be "cheating".
The effect of text or objects fading in or out can be achieved within the 3DLDF code. A loop is used to produce an image for each stage of the fade. The smoothest results are achieved by making an image for each frame, but a smaller number of images can be produced and copied. The text or object is drawn and/or filled using one or more color variables, whose values are reset in the loop. For example:
verbatim_metapost "verbatimtex \font\large=cmbx12 scaled 5000 etex";
color c;
color d;
for i = 0 upto 30:
j := i / 30;
k := (30 - i) / 30;
set c (j, j, j);
set d (k, k, k);
beginfig(i);
label.rt("{\large Fade out}", origin shifted (0, 1.25))
with_text_color c;
label.rt("{\large Fade in}", origin shifted (0, -1.25))
with_text_color d;
endfig with_projection parallel_x_y;
endfor;
verbatim_metapost "bye;";
end;
This is the result: fade_2.pdf.
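The gray ramp computed by the loop above can be checked with a few lines of Python (the values mirror the 3DLDF loop; the function name is mine):

```python
def fade_ramp(steps=30):
    """Gray values used by the fade loop above: the first component
    fades the text out (black toward white), the second fades it
    in (white toward black)."""
    return [(i / steps, (steps - i) / steps) for i in range(steps + 1)]

ramp = fade_ramp()
print(ramp[0])    # (0.0, 1.0) -- fade-out text fully visible, fade-in text invisible
print(ramp[-1])   # (1.0, 0.0) -- the reverse at the end of the sequence
```

Making one image per frame with these values gives the smoothest result; using fewer steps and duplicating frames coarsens the fade.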
The effect of text going in or out of focus (blurring), on the other hand, can be achieved by using ImageMagick's mogrify program with the -blur option. For example:
3DLDF code:
verbatim_metapost "verbatimtex \font\large=cmbx12 scaled 5000 etex";
beginfig(1);
label("{\large Blur}", origin);
endfig with_projection parallel_x_y;
verbatim_metapost "bye;";
end;
Assume that the result of running 3dldf and MetaPost is the EPS file blur_1.1. The following Emacs-Lisp code converts it, makes progressively blurred copies of the resulting PostScript file using mogrify, and displays the results:
(progn
(let (i i-string (display-string "display blur_1.ps "))
(shell-command "cnepspng 3DLDFmp 1 1 1cm 1cm")
(setq i 2)
(while (<= i 5)
(setq i-string (number-to-string i))
(shell-command (concat "cp blur_1.ps blur_"
i-string ".ps"))
(shell-command (concat "mogrify -blur "
i-string
"x" i-string
" blur_" i-string ".ps"))
(setq display-string (concat display-string " blur_" i-string ".ps "))
(setq i (1+ i))
) ;; while
(setq display-string (concat display-string "&"))
(shell-command display-string)
) ;; let
) ;; progn
I like to use Emacs-Lisp for trivial tasks like this. Other people may prefer to write a shell script, or use some other method.
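For those who prefer another method, here is a sketch of the same loop in Python. It only builds the command strings that the Emacs-Lisp code above runs (copy the sharp image, then blur each copy with an increasing radius); actually running them would require ImageMagick to be installed.

```python
def blur_commands(basename="blur", first=2, last=5):
    """Shell commands mirroring the Emacs-Lisp loop above: copy the
    sharp PostScript image, then blur each copy progressively with
    ImageMagick's mogrify -blur.  Only builds the strings."""
    commands = []
    for i in range(first, last + 1):
        commands.append(f"cp {basename}_1.ps {basename}_{i}.ps")
        commands.append(f"mogrify -blur {i}x{i} {basename}_{i}.ps")
    return commands

for cmd in blur_commands():
    print(cmd)
```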
2005-05-09
MPEG movie:
The Birth of a Dodecahedron.
Download compressed version.
MPEG animations like this one can be created from the EPS files generated by MetaPost, following the steps described under Titles and Text Effects below.
The file generate.el contains Emacs-Lisp code for automating these steps, including the function (or "defun", in LISP jargon) generate-file. I plan to use this code as the basis for an animation controller written in C using CWEB.
Thanks to L. Nobre G. for teaching me how to generate MPEG movies from the EPS files generated by MetaPost.
2005-10-02.
The explanations that follow the animations were originally postings
to a forum on an animation website. They require some editing, which
I will do as I find time.
MPEG movie:
Scrolling Closing Titles 1
Copyright (C) 2005 The Free Software Foundation
Download compressed version.
Source code:
scroll_1.ldf
scroll_1.tex
This animation is already here; its title is Coming Soon 1.
MPEG movie:
Coming Soon 1
Copyright (C) 2005 The Free Software Foundation
Download compressed version.
Source code:
titles_2.ldf
titles_2.tex
MPEG movie:
Rotating Title 1
Copyright (C) 2005 The Free Software Foundation
Download compressed version.
Source code:
titles_1.ldf
titles_1.tex
TeX is a very powerful and flexible typesetting package. It is also Free Software. Therefore, it seems like a good choice for making titles and text effects for animations.
The short animations above demonstrate the use of TeX for text effects and titles.
While it is possible to use TeX by itself for these purposes in a limited way, I find it better to use it in combination with MetaPost. MetaPost can use TeX (or PostScript) for its labelling commands: label and dotlabel. It's much easier to position text in MetaPost than it is in TeX itself. It's also much easier to draw ruled boxes in MetaPost.
However, I generally use GNU 3DLDF instead of using MetaPost directly. It's more convenient for me, it gives me the chance to see whether I need to make any additions or changes to 3DLDF, and it makes it possible to combine titles with 3D objects.
This is how it works: 3DLDF writes MetaPost code; MetaPost calls TeX as a subprocess for the labels and writes PostScript code. If you're using PostScript fonts or no text at all, you can have MetaPost write structured PostScript (PS) right away. However, if you're using TeX fonts, you will have to have it write Encapsulated PostScript (EPS), which is what I'm describing here. In order to continue, you must convert the EPS to PS. I use a utility called cnepspng for this purpose. (Author's note: I've replaced cnepspng with conveps, as described below.) It is included in the 3DLDF package. You then convert the PS files (containing the individual images) to PNG and then to PPM. Finally, you pack them into an MPEG animation using ppmtompeg, which is part of the Netpbm package.
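The conversion chain can be sketched as a per-frame command sequence followed by a single ppmtompeg call. This is only an illustration of the shape of the pipeline; the file names and the parameter file are hypothetical, and in practice conveps (described below) wraps the per-frame steps.

```python
def mpeg_pipeline_commands(basename, n_frames):
    """One plausible command sequence for the EPS -> PNG -> PPM ->
    MPEG pipeline described above.  Only builds the command
    strings; it does not run them."""
    commands = []
    for i in range(n_frames):
        eps = f"{basename}.{i}"                          # MetaPost output file
        commands.append(f"convert {eps} {basename}_{i}.png")
        commands.append(f"convert {basename}_{i}.png {basename}_{i}.ppm")
    commands.append("ppmtompeg param_file")               # pack frames into MPEG-1
    return commands

for cmd in mpeg_pipeline_commands("3DLDFmp", 2):
    print(cmd)
```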
In order for this to work, the size of the images must be either 320×240 or 640×480 pixels. I coerce them to this size after converting them from PS to PNG.
The original images (before coercion) must be of the same size, and they should also be aligned correctly. Otherwise, your animation will not look right. The easiest way to do this is to surround them with a frame. As far as MetaPost is concerned, the frame can be drawn in the background color, and it will still contribute to the final size of the image. However, cnepspng currently uses mogrify -crop 0x0 to cut off the margins created when converting from EPS to PS. So here, if the frame is in the background color, it will be ignored, and your image will almost certainly end up in the wrong size. Recently, I've just been keeping the black frame, but I may change cnepspng to use mogrify with the -shave option with specific values instead (it used to work this way). However, it would also be possible to use a color for the frame that's not used elsewhere in your image, and change it to the background color using mogrify with the -opaque and -pen options, after it's been resized. mogrify is part of the ImageMagick package.
It's also possible to use convert (ImageMagick) to make an animated GIF or MNG out of your images. However, I haven't found this to be very practical, because the files can get very large. The compression used for MPEGs is much better.
The 3DLDF code for the examples I've uploaded is here.
I've uploaded a new MPEG movie with a Speech Balloon Test. The bottleneck in this method is the conversion of Encapsulated PostScript (EPS) files to one of the Netpbm formats (PBM == Portable Bitmap, PGM == Portable Graymap, PPM == Portable Pixelmap, or PNM == Portable Anymap) so that they can be passed to ppmtompeg.
I've renamed the program cnepspng to conveps (included in GNU 3DLDF) and revised it a bit. Its primary purpose is no longer to convert EPS files to PNG, so I wanted to give it a more suitable name ("convert EPS"). I've added options for replacing colors and making them transparent. It was a bit tricky getting this to work, because ImageMagick's convert and mogrify programs perform anti-aliasing by default, which can increase the number of colors in an image. If this happens, it spoils the result if you want to make, for example, the blue regions in your image transparent, or change the orange regions to yellow.
Another problem is that I wasn't able to get convert or mogrify to resize the structured PostScript (PS) files. I was able to resize them by hand in display (also part of ImageMagick) and GIMP, but it's not practicable to do this for more than a couple of files. However, there's no reason to use the PS files for making animations, so I don't consider this a serious problem.
Speech Balloon Test 1 has 990 unique frames and 1050 frames in total. At 30 fps, this is only 35 seconds of animation. I think it took about 15 minutes to convert all of the EPS files to PNM. I plan to use POSIX threads to call convert and mogrify. There's always a certain amount of overhead involved in using threads, but I think it will increase the speed of conveps considerably.
conveps now uses threads, which increases its speed dramatically. I haven't yet tried using it with more than 51 images, though. It seems to work, but it may require some debugging. Currently, it will only work on systems supporting threads. I may add code for systems that don't support threads, but this doesn't have a high priority for me.
There's a lot of debugging output that I plan to put into conditionals. Since terminal output in one thread will cause other running threads to block if they try to write to the terminal, eliminating output will tend to increase speed.
I've also added the options --new-filename and --renumber, so that it's possible, for example, to generate the PNM files b_12.pnm to b_20.pnm from the EPS files a.0 to a.8.
It is now possible to remove the frame using the --fill and --opaque options, as long as it's in a color not otherwise used in the drawings. It's also possible to make colors transparent using the --transparent option. This will, of course, only work when converting to formats that support transparency.
I think it would be interesting to try to use GNU 3DLDF, MetaPost, and conveps for color separation. Multiple images could be generated where one had only the cyan areas, another the yellow, a third the magenta, and a fourth the black. One could also generate grayscale images and process them in GIMP using the Hue-Saturation-Value color model.
I used the threaded version of conveps to make the new version of the speech balloon test which I've uploaded to my website: http://wwwuser.gwdg.de/~lfinsto1. This animation has 991 unique images and 1050 frames. I have found that trying to do anything with this many images is difficult and time-consuming.
I ran into the unforeseen problem that using multiple threads for this many images caused so many temporary files to pile up that I ran out of disk space. I have now added code to limit the number of active threads at any given time. The default is 100, which seems to work on my system. It can be set to a different value using the command-line option --threads-limit.
Please note that conveps can still be called with start and end values covering any number of EPS files (subject to the limits of one's system): the threads limit simply limits the number of threads that can be active at one time. If the number of files exceeds the threads limit, one set of threads is created and run, then the main() function joins with them; subsequently, a second set is created, main() joins with them, and so on.
The number of active threads permitted by the system is likely to be large enough for most purposes. However, conveps calls system(), so it also creates heavy-weight processes (by means of fork()). It is quite possible that fork() could fail, because conveps has created too many processes. A threads limit value of 100 or less will probably solve this problem, too.
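The batching scheme described above can be sketched in Python's threading module (conveps itself is written in CWEB; this is just an illustration of the create-a-batch, join-the-batch pattern):

```python
import threading

def run_in_batches(tasks, threads_limit=100):
    """Run callables in threads, with at most `threads_limit` active
    at once: start one batch, join all of its threads, then start
    the next batch -- the scheme conveps uses for its conversions."""
    for start in range(0, len(tasks), threads_limit):
        batch = [threading.Thread(target=t)
                 for t in tasks[start:start + threads_limit]]
        for th in batch:
            th.start()
        for th in batch:   # join with each batch before starting the next
            th.join()

# Demonstration with trivial tasks standing in for file conversions.
results = []
lock = threading.Lock()

def make_task(i):
    def task():
        with lock:
            results.append(i)
    return task

run_in_batches([make_task(i) for i in range(250)], threads_limit=100)
print(len(results))   # 250
```

Capping the batch size also caps the number of simultaneous child processes, which addresses the fork() concern mentioned above.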
A frame in a color different from the background color is required in order to use mogrify -crop 0x0. However, this frame can be removed by using the --fill and --opaque options. In order to get rid of it without spoiling the rest of the image, the frame must be in a color not otherwise used in it, because these options will replace all pixels having the --opaque color.
The predefined color names in ImageMagick refer to different colors than the same names in dvips. It is therefore best to use explicit values for the arguments to the --opaque and --transparent options. For example, conveps --fill white --opaque rgb(255,0,0) 3DLDFmp 0 1 will create 3DLDFmp_0.pnm and 3DLDFmp_1.pnm from 3DLDFmp.0 and 3DLDFmp.1, replacing the red pixels with white ones. This will get rid of the frame, if the frame was drawn in red in GNU 3DLDF or MetaPost. The --fill option can use ImageMagick's names, because this is the new color that will appear in the converted and/or mogrified image or images.
I believe that it would be possible to draw to the edge of the image, i.e., to draw over the frame. However, this would require some extra care. It will be necessary to add a path corresponding to the outer edge of the frame and clip the picture to this path. I've added code to GNU 3DLDF for clipping pictures to paths, namely the clip_to command.
The basic idea behind the scrolling text examples is to make a picture containing the text, output it, and use a path as a mask to clip it (picture and path are data types in GNU 3DLDF and MetaPost). Then I use a loop to shift the text picture slightly with each iteration.
The path must be closed, but needn't be rectangular. Nor does it have to be drawn or filled in order to be used for clipping.
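The per-frame shifting can be sketched as follows (a minimal illustration; the function name and the use of centimeters are my own assumptions):

```python
def scroll_offsets(total_shift_cm, n_frames):
    """Offset of the text picture in each frame of a scrolling
    sequence: the picture is shifted by an equal amount per loop
    iteration while the clipping path stays fixed."""
    step = total_shift_cm / (n_frames - 1)
    return [i * step for i in range(n_frames)]

offs = scroll_offsets(10.0, 5)
print(offs)   # [0.0, 2.5, 5.0, 7.5, 10.0]
```

In the actual 3DLDF code, each offset corresponds to one output figure: the shifted text picture is clipped to the fixed path and written out as a frame.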
The path used for clipping can be the outer frame, which is needed for aligning the images and ensuring that they're all the same size. This is the case in the example of vertically scrolling titles (Scrolling Closing Titles 2). However, it can also be some other path, as in the example of the horizontally scrolling text in Speech Balloon Test 1.
I thought it would be possible to shift the paths representing the frame or frames rather than the text picture, but this causes the images to go out of alignment. You can see this in Speech Balloon Test 1. I've uploaded a corrected version to my website (Speech Balloon Test 0, at http://wwwuser.gwdg.de/~lfinsto1).
Since the text effects, the frame, and the path used for clipping are all two-dimensional, it would have been possible to use MetaPost for the examples I've made so far. There are a few differences between MP and 3DLDF that would have to be accounted for. For example, pictures are not output in MP, and the syntax of the label command (used for including the text) is slightly different.
It would, however, be possible to combine scrolling text with 3D constructions, animated or still, moving along with the text or remaining in one place behind or in front of it. I may try to make examples of this, if I get the chance.
TeX is designed for texts that are divided neatly into lines and pages, and not for really long, unbroken ones. It is therefore possible that using such texts will exceed some memory limit or other in 3DLDF, MetaPost, and/or TeX. In the examples I've done so far, the entire text has been in a single block: a vbox, or vertical box, in Scrolling Closing Titles 2 and an hbox, or horizontal box, in Speech Balloon Test 0.
A better way of doing this would be to divide the text into several vboxes and/or hboxes and only output the ones that are wholly or partially within the clipping path. However, in order to ensure that the texts appear and disappear at the right times, and that there is always the correct distance between them, it must be possible to determine their size.
MetaPost provides a way of measuring TeX text and this is documented in the manual. I have now added a way of doing the same thing in 3DLDF. This code:
numeric_vector nv;
nv := measure_text "ABCQabcq";
show nv;
produces the following output:
size: 3 (0) : 1.762184 (1) : 0.244069 (2) : 0.068339
The text "ABCQabcq" is put into an hbox and measured. Boxes in TeX have width, height, and depth. \hbox{ABCQabcq} has width 1.762184cm, height 0.244069cm, and depth 0.068339cm.
The measurements of the text boxes can be used in the loop controlling the motion of the text picture to include or exclude text boxes according to their positions with respect to the clipping path. I haven't had a chance to try this out yet, though.
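The selection step could look something like this (a sketch under the assumption that boxes and the clipping window are described by vertical intervals; none of these names come from 3DLDF):

```python
def visible_boxes(boxes, window_bottom, window_top):
    """Given (bottom, height) pairs for text boxes and the vertical
    extent of the clipping path, return the indices of the boxes
    that are wholly or partially inside the window.  Only these
    boxes would need to be included in the output figure."""
    visible = []
    for idx, (bottom, height) in enumerate(boxes):
        top = bottom + height
        if top > window_bottom and bottom < window_top:
            visible.append(idx)
    return visible

# Hypothetical vbox positions during one frame of a scroll.
boxes = [(-3.0, 1.0), (0.0, 1.0), (2.5, 1.0)]
print(visible_boxes(boxes, 0.0, 3.0))   # [1, 2]
```

Re-evaluating this test on each loop iteration, as the boxes shift, would make the texts appear and disappear at the right times.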
The MPEG movie MND Titles 0 doesn't contain any rotations or translations (shifting), but it is a realistic example that I plan to actually use, rather than just a test. I also discovered a couple of things while working on it.
It is possible to write in white on a black background using TeX (and MetaPost and 3DLDF), but I used the -negate argument to mogrify instead. I added a --negate argument to conveps in order to make this possible. I've also added one or two other options, and documented the options at the beginning of conveps.web.
When making the empty (black) image, --negate seemed not to work, unless I made some other mistake. I also had a problem with cropping, which I think may have to do with the way ImageMagick determines the background color. I use -crop 0x0 to remove the part of the image in the background color outside of the frame. In an earlier version, the program measured the image and used explicit values to remove the unwanted edges. However, I could only measure in dimensions of printer's points, and I determined that converting these to pixels was unreliable. I believe the problem is connected with the relationship between image size and resolution, but I haven't had time to look into this.
The order in which options are passed to mogrify and convert also affects the results. I had to make some adjustments, and it may be that I'll have to do so again.
At any rate, I was ultimately able to crop the image, and it did work to negate the pixels in the empty image with an additional, explicit call to mogrify (i.e., not via conveps).
This time, I built the movie up gradually by creating GOPs (GOP == Group of Pictures) and then calling ppmtompeg with the --combine-gops option. This allows more flexibility, because you can save the GOPs, rearrange them, insert new ones, etc., and then regenerate the movie. For example, if you make a GOP with 30 frames of the same image, and you decide you want to show it for two seconds instead of one, all you have to do is copy the GOP and call ppmtompeg --combine-gops ... again. This is much faster than regenerating the whole movie. The GOPs are compact enough to save.
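The hold-extension trick above amounts to simple frame arithmetic, sketched here (the file name and function are illustrative, not part of ppmtompeg):

```python
def gop_frame_list(image, seconds, fps=30):
    """Frame list for one Group of Pictures that holds a single
    image on screen for the given duration."""
    return [image] * (seconds * fps)

# One GOP holding a title card for one second at 30 fps.
gops = [gop_frame_list("title.ppm", 1)]
# To show the title for two seconds instead of one, duplicate the
# GOP and recombine -- no need to regenerate the whole movie.
gops.append(gops[0])
print(sum(len(g) for g in gops))   # 60
```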
I did have a couple of problems with the entries in the parameter file (required by ppmtompeg) used for combining the GOPs. It didn't work to specify a different input directory or to list the names of the GOPs. However, I haven't yet read very far into the documentation for ppmtompeg or the MPEG format, so I may yet figure this out.