Film editing is part of the creative post-production process of filmmaking. The term film editing
is derived from the traditional process of working with film, but now increasingly involves the
use of digital technology.
The film editor works with the raw footage, selecting shots and combining them
into sequences to create a finished motion picture. Film editing is described as an art or skill, the
only art that is unique to cinema, separating filmmaking from other art forms that preceded it,
although there are close parallels to the editing process in other art forms
like poetry or novel writing. Film editing is often referred to as the "invisible art" because when it
is well-practiced, the viewer can become so engaged that he or she is not even aware of the
editor's work. On its most fundamental level, film editing is the art, technique, and practice of
assembling shots into a coherent sequence. The job of an editor isn't simply to mechanically put
pieces of a film together, cut off film slates, or edit dialogue scenes. A film editor must creatively
work with the layers of images, story, dialogue, music, pacing, as well as the actors'
performances to effectively "re-imagine" and even rewrite the film to craft a cohesive whole.
Editors usually play a dynamic role in the making of a film. Sometimes, auteur film directors edit
their own films. Notable examples are Akira Kurosawa and the Coen brothers.
With the advent of digital editing, film editors and their assistants have become responsible for
many areas of filmmaking that used to be the responsibility of others. For instance, in past years,
picture editors dealt only with just that—picture. Sound, music, and (more recently) visual
effects editors dealt with the practicalities of other aspects of the editing process, usually under
the direction of the picture editor and director. However, digital systems have increasingly put
these responsibilities on the picture editor. It is common, especially on lower budget films, for
the assistant editors or even the editor to cut in music, mock up visual effects, and add sound
effects or other sound replacements. These temporary elements are usually replaced with more
refined final elements by the sound, music, and visual effects teams hired to complete the
picture.
Film editing is an art that can be used in diverse ways. It can create sensually provocative
montages; become a laboratory for experimental cinema; bring out the emotional truth in an
actor's performance; create a point of view on otherwise obtuse events; guide the telling and pace
of a story; create an illusion of danger where there is none; give emphasis to things that would
not have otherwise been noted; and even create a vital subconscious emotional connection to the
viewer, among many other possibilities. Through such choices, editors exert enormous control over how the audience feels throughout a film.
Chronology
Before Editing
Like almost every basic idea about movies, the idea of editing has its precursors. Flashbacks had
existed in novels; scene changes were already part of live theater; even narrated sequences had
been a part of visual culture from medieval altar triptychs to late nineteenth-century comic
strips. But the very earliest filmmakers were afraid to edit film shots together because they
assumed that splicing together different shots of different things from different positions would
simply confuse audiences.
In 1895 the Lumière brothers invented the Cinématographe, a three-in-one device that recorded, printed, and projected motion pictures. The Lumière brothers' films were single, unedited shots: they found a subject they wanted to film, set up their camera, and ran it until the film stock ran out.
Although the Lumière brothers' device was a great invention, Edwin S. Porter showed in 1903 that a film did not have to be a single, continuous shot. Porter also re-used existing footage to tell a story unrelated to what that footage had originally been meant to portray. His one-reel film The Great Train Robbery, with a running time of twelve minutes, was assembled from twenty separate shots, along with a startling close-up of a bandit firing at the camera. It used as many as ten different indoor and outdoor locations and was groundbreaking in its use of cross-cutting to show simultaneous action in different places. No earlier film had created such swift movement or variety of scene, and The Great Train Robbery was enormously popular.
Georges Méliès was an illusionist who worked in the theater. He stumbled onto trick editing when his camera jammed while filming on the streets of Paris, and the accidental splice that resulted evolved into deliberate techniques such as the double exposure and the first dissolves.
In 1908, D. W. Griffith's film For Love of Gold featured what is often called the first continuity cut, carrying the action from one full scene directly into the next. Griffith realised that emotion could also be conveyed through camera angles and the pace of editing, and was not all down to the actors. He was credited with developing the narrative language of film, producing the first American feature film, and popularising the close-up. For Love of Gold was among the first one-reel films he directed.
The Birth of a Nation included camera techniques such as panoramic long shots, the iris effect, still shots, cross-cutting, and panning shots. These techniques are widely used today. The film's musical score, performed live in theaters, made it more engaging and helped the audience feel more involved.
The Kuleshov effect was discovered by Soviet director and film theorist Lev Kuleshov, who believed audiences would respond more strongly to this kind of montage. Between 1910 and 1920, Kuleshov and V. I. Pudovkin experimented with editing and emotional response. They juxtaposed a single shot of an actor's face with different images: a bowl of soup (hunger), a woman (lust), and a child in a coffin (sadness). Although the actor's expression does not change, when it is juxtaposed with the three shots the audience may be convinced that it does.
Russian filmmaker Sergei Eisenstein believed that film montage could create ideas with a greater impact on an audience, allowing filmmakers to manipulate real time to a far greater degree than single shots could. Remarkably, Eisenstein was cutting with nothing more than the naked eye and a pair of scissors.
Editing Today
Even in an era of incredibly advanced special effects, some filmmakers are still enamored of the
photographic realism in sustained shots. Perhaps the most conspicuous is Jim Jarmusch, who will
hold his camera on his subjects for an agonizingly hilarious amount of time. But the past 20 or so years have also seen the rise of "digital editing" (also called nonlinear editing), which makes any
kind of editing easier. The notion of editing film on video originated when films were transferred
to video for television viewing. Then filmmakers used video to edit their work more quickly and
less expensively than they could on film. The task of cleanly splicing together video clips was then taken over by computers running advanced graphics programs that could also perform various special-effects functions. Finally, computers convert the digital images back into film or
video. These digital cuts are a very far cry from Méliès's in-camera editing.
Digital video
Digital video is a representation of moving visual images in the form of encoded digital data, in contrast to analog video, which represents moving visual images with analog signals.
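To make that contrast concrete, here is a toy Python sketch, purely illustrative rather than any real video standard, in which a continuous "analog" brightness curve is sampled and quantized into discrete 8-bit values:

```python
import math

def analog_brightness(t):
    """A stand-in for a continuously varying analog signal."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

# Digitizing = sampling at discrete instants and quantizing each
# sample to an 8-bit value (0-255). Ten samples are taken here.
samples = [round(analog_brightness(t / 10) * 255) for t in range(10)]
print(samples)  # the signal represented as encoded digital data
```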
Standard film stocks such as 16 mm and 35 mm record at 24 frames per second. For video, there
are two main frame rate standards: NTSC, which runs at 30/1.001 (about 29.97) frames per second or 59.94 fields per second, and PAL, which runs at 25 frames per second or 50 fields per second.
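Because the NTSC rate is the exact fraction 30000/1001 rather than a round 29.97, frame counts are best computed with exact arithmetic. A short illustrative Python sketch (the one-minute frame counts are just examples):

```python
from fractions import Fraction

# Frame rates from the text, kept as exact fractions to avoid drift.
NTSC = Fraction(30000, 1001)  # about 29.97 frames per second
PAL = Fraction(25)            # 25 frames per second
FILM = Fraction(24)           # standard 16 mm / 35 mm film rate

def whole_frames(seconds, rate):
    """Whole frames captured in `seconds` at `rate` frames per second."""
    return int(seconds * rate)

for name, rate in [("NTSC", NTSC), ("PAL", PAL), ("film", FILM)]:
    print(f"{name}: {float(rate):.3f} fps -> {whole_frames(60, rate)} frames/minute")
```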
Digital video cameras come in two different image capture formats: interlaced and progressive scan.
Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are
scanned, and then the even-numbered lines are scanned, then the odd-numbered lines are
scanned again, and so on. One set of odd or even lines is referred to as a "field", and a
consecutive pairing of two fields of opposite parity is called a frame. Progressive-scan cameras record each frame as distinct, with all scan lines captured at the same moment in time. Thus, interlaced video samples the scene motion twice as often as progressive video
does, for the same number of frames per second. Progressive-scan camcorders generally produce
a slightly sharper image. However, motion may not be as smooth as in interlaced video, which uses 50 or 59.94 fields per second, particularly when the camcorder employs the 24 frames per second standard of film.
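As a rough illustration of that field structure, the sketch below (plain strings stand in for scan lines, an assumption made for brevity) splits a frame into its two fields and weaves them back together:

```python
# One progressive frame, modeled as a list of eight scan lines.
frame = [f"scan line {i}" for i in range(1, 9)]

# An interlaced camera captures the odd-numbered lines as one field
# and the even-numbered lines as the other.
odd_field = frame[0::2]    # lines 1, 3, 5, 7
even_field = frame[1::2]   # lines 2, 4, 6, 8

# Weaving the two fields back together reconstructs the full frame;
# in real interlaced video each field is captured at a different
# moment, which is why motion is sampled twice per frame period.
rewoven = [None] * len(frame)
rewoven[0::2] = odd_field
rewoven[1::2] = even_field
assert rewoven == frame
```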
Digital video can be copied with no degradation in quality: no matter how many generations removed a copy is from the original source, it will be as clear as the first generation of digital footage. However, changing parameters such as the frame size, or converting to a different digital format, can decrease the quality of the video, because the data must be recalculated (re-encoded). Digital video can be manipulated and edited into sequence on an NLE, or non-linear editing workstation, a computer-based system designed to edit video and audio. More and more, videos are edited on readily available, increasingly affordable consumer-grade computer hardware and software. However, such editing systems require ample disk space for video footage, and the many video formats and parameters involved make it impossible to give a single figure for how much disk space a given number of minutes of footage requires.
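A back-of-the-envelope calculation does show why the answer depends entirely on the chosen parameters. The resolutions and bytes-per-pixel figures below are illustrative assumptions, not fixed standards:

```python
def gigabytes_per_minute(width, height, bytes_per_pixel, fps):
    """Uncompressed storage needed for one minute of video."""
    bytes_per_frame = width * height * bytes_per_pixel
    return bytes_per_frame * fps * 60 / 1e9

# 1080p at roughly 2 bytes per pixel (8-bit 4:2:2), 25 fps:
print(gigabytes_per_minute(1920, 1080, 2, 25))  # ~6.2 GB per minute
# The same minute in SD (720x576) needs far less:
print(gigabytes_per_minute(720, 576, 2, 25))    # ~1.2 GB per minute
```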
Digital video has a significantly lower cost than 35 mm film; the recording media themselves are very inexpensive. Digital video also allows footage to be reviewed on location without the expensive chemical processing required by film, and physical delivery of tapes is no longer necessary for distribution. Digital television (including higher-quality HDTV) started to spread in most developed countries in the early 2000s. Digital video is also used in modern mobile phones and video conferencing systems, and for Internet distribution of media, including streaming video and peer-to-peer movie distribution. Even within Europe, however, many TV stations do not broadcast in HD, because of the restricted budgets available for new HD-capable equipment.
Many types of video compression exist for serving digital video over the internet and on optical
disks. The file sizes of digital video used for professional editing are generally not practical for
these purposes, and the video requires further compression with codecs such as Sorenson, H.264, and more recently Apple ProRes, especially for HD. Probably the most widely used formats for delivering video over the Internet are MPEG-4, QuickTime, Flash, and Windows Media, while MPEG-2 is used almost exclusively for DVDs, providing an excellent image at a small size but demanding considerable CPU power to decompress.
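A rough calculation makes the scale of that compression clear; the bitrates below are illustrative assumptions, not codec specifications:

```python
# Uncompressed 1080p25 at ~2 bytes per pixel, in bits per second,
# versus a typical delivery bitrate for web video.
raw_bps = 1920 * 1080 * 2 * 8 * 25   # ~829 Mbit/s uncompressed
delivery_bps = 5_000_000             # assumed 5 Mbit/s delivery stream

print(f"raw: {raw_bps / 1e6:.0f} Mbit/s, delivered: {delivery_bps / 1e6:.0f} Mbit/s")
print(f"compression ratio ~ {raw_bps / delivery_bps:.0f}:1")
```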
Transmission of Color
Color television is a television transmission technology that includes information on the color of
the picture, so the video image can be displayed in color on the television screen. It is an
improvement on the earliest television technology, monochrome or black and white television, in
which the image is displayed in shades of grey.
In its most basic form, a color broadcast can be created by broadcasting three monochrome
images, one each in the three colors of red, green, and blue (RGB). When displayed together or
in rapid succession, these images will blend together to produce a full color image as seen by the
viewer.
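In code, the idea amounts to taking the brightness value at the same position in each of the three monochrome images and treating the triple as one color pixel. The tiny 2x2 "images" below are purely illustrative:

```python
# Three monochrome images, one per primary, as grids of 0-255 values.
red   = [[255, 0], [0, 0]]
green = [[0, 255], [0, 0]]
blue  = [[0, 0], [255, 255]]

# The full-color image: each pixel is the (R, G, B) triple formed
# from the same position in the three monochrome images.
color_image = [
    [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
    for row_r, row_g, row_b in zip(red, green, blue)
]
print(color_image[0][0])  # (255, 0, 0): a pure red pixel at top-left
```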
The basic idea of using three monochrome images to produce a color image had been
experimented with almost as soon as black-and-white televisions had first been built.
Among the earliest published proposals for television was one by Maurice Le Blanc in 1880 for a
color system, including the first mentions in television literature of line and frame scanning,
although he gave no practical details.[5] Polish inventor Jan Szczepanik patented a color
television system in 1897, using a selenium photoelectric cell at the transmitter and an
electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his
system contained no means of analyzing the spectrum of colors at the transmitting end, and could
not have worked as he described it.[6] An Armenian inventor, Hovannes Adamian, also experimented with color television as early as 1907; the first color television project is claimed for him,[7] patented in Germany on March 31, 1908.
Scottish inventor John Logie Baird demonstrated the world's first color transmission on July 3,
1928, using scanning discs at the transmitting and receiving ends with three spirals of apertures,
each spiral with filters of a different primary color; and three light sources at the receiving end,
with a commutator to alternate their illumination. Baird also made the world's first color
broadcast on February 4, 1938, sending a mechanically scanned 120-line image from
Baird's Crystal Palace studios to a projection screen at London's Dominion Theatre.[11]
Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929
using three complete systems of photoelectric cells, amplifiers, glow-tubes, and color filters, with
a series of mirrors to superimpose the red, green, and blue images into one full color image.
Linear vs Non-linear Editing
In the past, film editing was done in a linear fashion, where the film was literally cut into long
strips divided by scene and take, and then glued or taped back together to create a film in logical
sequence. This was time-consuming, tedious and highly specialized work. While linear editing is
still relevant today, there is a newer and more user-friendly system available for editors:
nonlinear editing. Curious about what these systems can and can't do, and the pros and cons each system has? Well, let's take a look…
Linear Video Editing Method
Linear video editing is a process of selecting, arranging and modifying images and sound in a
pre-determined, ordered sequence – from start to finish. Linear editing is most commonly used
when working with videotape. Unlike film, videotape cannot be physically cut into pieces to be
spliced together to create a new order. Instead, the editor must dub or record each desired video
clip onto a master tape.
For example, let's say an editor has three source tapes: A, B and C, and he decides to use tape C first, B second and A third. He would start by cueing up tape C to the beginning of the clip he wants to use; then, as he plays tape C, the clip is simultaneously recorded onto a master tape. When the desired clip from tape C is done, the recording is stopped, and the whole process is repeated with tapes B and A.
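A toy model of this tape-to-tape assembly (hypothetical clip names, with a Python list standing in for the master tape) shows both the workflow and its central limitation: clips can only be laid down in order, and inserting one earlier forces everything after it to be re-dubbed:

```python
master = []  # the master tape being assembled

def dub(master, clip):
    """Record a clip onto the end of the master (a real-time pass)."""
    master.append(clip)

for clip in ["C", "B", "A"]:  # the order the editor chose above
    dub(master, clip)

# Inserting a new clip "D" between C and B means the tail of the
# master (B and A) must be re-recorded after the insert point.
insert_at = 1
tail = master[insert_at:]               # everything that gets re-dubbed
master = master[:insert_at] + ["D"] + tail
print(master)                           # ['C', 'D', 'B', 'A']
```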
Pros vs Cons
There are a couple of disadvantages one would come across when using the linear video editing
method. First, it is not possible to insert or delete scenes from the master tape without re-copying
all the subsequent scenes. As each piece of video clip must be laid down in real time, you would
not be able to go back to make a change without re-editing everything after the change.
Secondly, because of the overdubbing that has to take place if you want to replace a current clip
with a new one, the two clips must be of the exact same length. If the new clip is too short, the
tail end of the old clip will still appear on the master tape. If it's too long, it'll roll into the next scene. The solution is either to make the new clip fit the length of the current one, or to rebuild the project from the edit point to the end, neither of which is very pleasant. Meanwhile, all that overdubbing also causes the image quality to degrade.
However, linear editing still has some advantages:
It is simple and inexpensive. There are very few complications with formats, hardware conflicts,
etc.
For some jobs linear editing is better. For example, if all you want to do is add two sections of
video together, it is a lot quicker and easier to edit tape-to-tape than to capture and edit on a hard
drive.
Learning linear editing skills increases your knowledge base and versatility. According to many
professional editors, those who learn linear editing first tend to become better all-round editors.
Nonlinear Video Editing Method
The nonlinear video editing method is a way of random access editing, which means instant
access to whatever clip you want, whenever you want it. So instead of going in a set order, you
are able to work on any segment of the project at any time, in any order you want. In nonlinear
video editing, the original source files are not lost or modified during editing. This is done
through an edit decision list (EDL), which records the decisions of the editor and can also be
interchanged with other editing tools. As such, many variations of the original source files can exist without needing to store many different copies, allowing for very flexible editing. It is also
easy to change cuts and undo previous decisions simply by editing the EDL, without having to
have the actual film data duplicated. Loss of video quality is also avoided due to not having to
repeatedly re-encode the data when different effects are applied.
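A minimal sketch of the idea, with hypothetical file names and a deliberately simplified event record, might look like this: the edit is just a list of in/out references into untouched source files:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str    # which untouched source file the clip comes from
    src_in: float  # in-point within the source, in seconds
    src_out: float # out-point within the source, in seconds

# The edit decision list: the program is described, not the media.
edl = [
    Event("tape_c.mov", 12.0, 20.5),
    Event("tape_b.mov", 3.2, 9.0),
    Event("tape_a.mov", 0.0, 14.8),
]

# Re-ordering or trimming changes only the list; the footage itself
# is never duplicated or re-encoded.
edl[0], edl[1] = edl[1], edl[0]  # swap the first two cuts
edl[2].src_out = 12.0            # shorten the last clip
total = sum(e.src_out - e.src_in for e in edl)
print(f"program length: {total:.1f} s")
```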
Nonlinear editing differs from linear editing in several ways.
First, video from the sources is recorded to the editing computer's hard drive or RAID array
prior to the edit session.
Next, rather than laying video to the recorder in sequential shots, the segments are assembled
using a video editing software program. The segments can be moved around at will in a drag-and-drop fashion.
Transitions can be placed between the segments. Also, most of the video editing programs have
some sort of CG or character generator feature built in for lower-thirds or titles.
The work-in-progress can be viewed at any time during the edit in real time. Once the edit is
complete, it is finally laid to video.
Non-linear video editing removes the need to lay down video in real time. It also allows the
individual doing the editing to make changes at any point without affecting the rest of the edit.
Pros vs Cons
A nonlinear video editing system presents many advantages. First, it allows you access
to any frame, scene, or even groups of scenes at any time. Also, as the original video footage is
kept intact when editing, you are able to return to the original take whenever you like. Secondly, nonlinear video editing systems offer great flexibility: you can change your mind a hundred times over, and changes can be made a hundred times over, without having to start all over again with each change. Thirdly, it is possible to edit both standard definition (SD) and high definition (HD) broadcast-quality video very quickly on normal PCs that do not have the power to process the huge full-quality, high-resolution data in real time.
The biggest downside to nonlinear video editing is the cost. While the dedicated hardware and software don't cost much, the computers and hard drives do: from two to five times more than the gear itself. However, as nonlinear technology pushes forward, count on big gains in digital video storage and compression, as well as lower prices on computers and hard disks, in the very near future.