T351 Summer - Week 4
Lecture & lab this week:
- Review Midterm quiz
- RGB/Color Difference - Color Sampling / Codecs
- We'll finish editing
the Interview/Feature story projects and review them.
- Lighting Exercise - Everyone has turned in QuickTime movies, logs & critiques, yes?
- Final Projects: You are scheduled to shoot these next week and should already have turned in a treatment. Looking through your Final Project materials, I see that not everyone has done this. Please note that a treatment is required: all Final Projects require a treatment (1st) and then a script (2nd). I want to meet with each of you about your final project this week and review your treatment with you, so please have it uploaded by tomorrow. Your scripts are due by 5 pm Friday (emailed Word docs, PDFs, or RTFs are fine). You can also leave them in my 2nd floor mailbox. In order to receive a grade for the Pre-Production components of your Final Project, you must turn in your materials this week.
- Please do not begin shooting unless you have met with me about your project and have turned in your pre-production materials.
- Drama/Storytelling projects: How is your planning coming along? (Get status from each group.)
- You should shoot these this week and edit Thursday & Friday. We'll look at these Tuesday - in one week. Please remember that your projects don't have to be long - but they do need to have a clear story, beginning and ending. And remember the #1 ingredient for a story is conflict followed by resolution. It's important to show believable motivation and action.
Each team should have turned in one script (group grade). I expect everyone to have their own edit. They can work together *if* there is a compelling reason for them to do so. You need to tell me this week if anyone plans to work together on the edit.
- Wednesday - Just a short lab to show you the jib and to meet about Final Projects. You have the rest of the day to work on Drama/Storytelling projects
- Thursday - No Lab - The time is for completing and editing the Drama/Storytelling projects.
- Advanced Production & Editing Concepts
- Off-line / On-line editing
- Time code
- Review Interview/Feature Story projects. NOTE: Students must review and critique everyone's project (this goes into the "Peer Critique" Oncourse folder). For each story, write down a few things that are working and the weakest aspects of the production. Also note your impression and any advice. This will go into your In-class Critique grade.
Stories have a beginning, middle and end.
An interesting scenario is NOT a story. (E.g. a man wakes up in a rowboat.) Once you introduce the prime character and the conflict, you need to resolve it.
Keep your storytelling projects short & sweet! It's much better to have 4 minutes of gold than 10 minutes of yuck.
A good goal is to always try to make the viewer wonder, "What's going to happen next?"
[Look at examples]
Story & character - A story usually involves one or more individuals and the conflict they face. Characters often transform (have a character arc). If there is just a situation spiraling in one direction (I feel sad because my boyfriend died and this makes me drink....), that's not really a story. Similarly, just having people fight or make love is not storytelling. You need to make us care about the characters and the story. We need to know the basic motivation of our characters and why they are doing what they are doing. It's not what happens to us that defines us or our characters - it's how we deal with it.
Clearly draw your characters - There's a concept called "first action" that addresses the very first time the audience sees a character. The idea is that we find out something about the person that identifies who they are and that provides insight into their character. Maybe the first time we see "Joe" (the hero in our short story) he is coming out of a building and holds the door open for someone coming in. It takes 4 seconds to show this and establishes the fact that Joe is probably an alright guy. Maybe the first time we see "Pete" (the bad guy in our short story) he is honking at a homeless person slowly pushing her shopping cart across the street.
All shows can be scripted. A treatment is the way to work up a story and refine the flow. Once you've worked out a treatment, the next step is to create a script. Here are a few examples of my documentary treatments:
What makes a first-rate video? (What will we be looking at when we grade these?)
- Video serves a clear purpose. Viewers have many choices on TV and the web. Why should they watch your program? People want content with a point.
- The message and storyline are clear. There should be a clear and logical introduction and a clear and concise ending. (Jim's 10-minute test.)
- Every scene, shot and sound is included for a reason. (Advance the story or
build the character.)
- Technically & aesthetically sound
- Strong composition (depth of frame, rule of thirds, etc.). All shots should be nicely composed. To get more depth, remember the Z space. Think of different camera angles and ways to heighten the sense of depth. Try to block action and compositional features along the Z axis.
- No long (unmotivated) pans/tilts, shaky shots, bumps or awkward or unnecessary zooms
- Good exposure and tonal balance
- Thoughtful lighting - Good lighting is expected, and not just used to make
your subjects visible- it is to draw attention to what's important,
evoke a feeling, make a statement. One of the worst locations is a small room with light walls. If you just turn on the softbox or bounce light off of the ceiling, it'll light everything and blow out (overpower) your subject. It is usually better to use a small spot in situations like these (E.g. the 150-watt Fresnel).
- Audio - Don't rely on the camera mic (unless you are just shooting B-roll).
- Consider the "mis en scene" Everything in the frame should be in
there for a reason.
- Edits are motivated
- Care should be taken to make professional-looking graphics. It helps to have a consistent visual treatment and to use them appropriately.
- Sound design - take time to finesse the soundtrack. Good sound quality with consistent audio levels. You can get many interesting audio elements from the Library / Audio / Apple loops folder.
Video Production Tips and Techniques
- Understand the point of your video. Why are you making it and why will a viewer want to watch it? Understand the context of how your video will be viewed.
- Keep the objective and target
audience in mind when developing the treatment and then the script.
- A strong script and a well-developed preproduction plan are your keys to a successful production. When working on any large-budget video or industrial for corporate clients, always work from an approved shooting script - one that has been carefully scrutinized and checked off on again and again. If it isn't in the script, it isn't in the video! This way any changes after the script has been approved are billable.
Children's Museum example (script change after it was approved)
- Know the S.O.P. and conventions of the trade. These exist for a reason. This includes using conventional formats for proposals, treatments and the various types of scripts.
- On scripting non-fiction: See Jim's super secret (NOT) production planning form. Note the "ingredient list". These are the essential ideas that will be embedded into the video. Not all have to be verbally articulated.
- Technical: know your delivery requirements (E.g. 1080i ProRes).
- Graphics - Don't wait until after production! Plan your graphics along with your other shots and imagery. It helps to have a graphic style sheet that you will use
when making graphics. Just as you don't want to search for lost shots while
editing, you don't want to spend time pondering what graphics to have or what fonts to use during an edit session. Prepare graphics in advance. Do they have to be centercut safe?
The image above shows a 4x3 centercut mask placed underneath FCP's 16x9 safe action and safe text guides. Many video cameras have a variety of viewfinder guides you can toggle through.
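If you're curious where the 4x3 centercut falls, the math is simple enough to sketch in a few lines of Python (the 1920x1080 frame size and the 90%/80% safe-area fractions below are assumptions based on common practice, not a spec):

```python
# Where does a 4x3 centercut fall inside a 16x9 frame?
# Assumes a 1920x1080 frame; 90%/80% are the classic safe-action
# and safe-title fractions.

def centercut_4x3(width, height):
    """Width of a 4:3 region at full height, centered horizontally."""
    cut_w = height * 4 // 3            # 1080 * 4/3 = 1440
    x_off = (width - cut_w) // 2       # pixels trimmed from each side
    return cut_w, x_off

def safe_area(width, height, fraction):
    """Centered rectangle covering `fraction` of each dimension."""
    w, h = int(width * fraction), int(height * fraction)
    return w, h, (width - w) // 2, (height - h) // 2

print(centercut_4x3(1920, 1080))       # (1440, 240)
print(safe_area(1440, 1080, 0.8))      # safe-title area within the centercut
```

So on a 1920x1080 frame, anything in the outer 240 pixels on either side disappears on a 4x3 centercut broadcast.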
Here's a link to a 15-second spot that had to fit inside the 4x3 safe action and safe text areas:
- Plan, plan, plan...... There is never too much pre-production.
- It's also important to develop detailed production schedules. Consider
the most efficient way to shoot a production. It's typically not in the
order of the script. Shot sheets tell you what shots you need
at any given location.
When you have multiple locations, talent, a crew and two trucks of people and gear to haul around, it's important to have a good production plan. Otherwise you're going to waste money and people's time. (not good)
Check out this example for a special episode of The Friday Zone, which was all about cicadas:
Establish your scene - We need to know where
we are, what time of day it is, where objects and people are, and the
layout of the space. Always be sure to establish this or you will confuse the viewer.
- Continuity - Unless you have a very good reason for doing so, follow
the rules of continuity.
- Film Style - Repeat the action capturing it with different (cut-able) shots. Works great for fiction and non-fiction (capturing B-roll)
- Rule of threes - When shooting think of three things: the shot we see right before the one you are about to take, what you are about to take, and the shot that will come immediately after.
- It's Art! - Try to make every shot well composed and interesting (a piece of art in itself).
- Think Deep Thoughts - To get the maximum sense of depth, try composing your shot with 3 distinct layers: foreground, mid-ground and background. If you're shooting an establishing shot of the exterior of a building, you can set up the camera with something in the foreground (E.g. a branch). You can also frame strong lines so they lead into the frame as opposed to running perpendicular to it.
- Be shallow - One of the most definable traits of film is its shallow depth of field. You can get the most out of a small-format camera by using a large aperture (small f-stop) and using the telephoto end (longer focal length). If there is plenty of light, I'll turn on the ND filters until I'm able to shoot with something close to an f-2.
- Warm things up - Slide a warming card into your camera case. Warming cards are slightly blue. When you white balance on one you trick the camera into shifting the hue just a little. They make everything look a little rosier (warmer). Alternatively you can cool things down (tint blue) by white balancing on a slightly warmer/rose-colored card. Sure you can color correct in post, but this saves rendering.
Warming WB card
- Get a move on - Move the camera or move the subject. Dollies require a lot of gear and setup time. Often the move we need is only a few feet. Small jibs and sliders offer an inexpensive, simple, and effective solution. Slide rails are inexpensive, transportable, and provide an easy way to move a camera in one direction. They can also get into places where a dolly can't (E.g. on a tabletop). Small jibs can provide both horizontal and vertical movement. Small video cameras and DSLRs are lightweight enough to use the inexpensive variety of a Steadicam (without the counterbalanced/articulating arm). Even a monopod has enough weight to stabilize a very light video camera. For these to be most effective, you want to set up your shots with foreground, mid-ground and background elements.
Lighting - Except for shooting
B-roll in run & gun situations, always plan on lighting your subject.
I usually try to make the subject about 2 stops brighter
than the background. Yes, there are exceptions, but almost always we
need to make our subject stand out. This is especially true with interviews.
This is why you usually don't want to place subjects against a light
wall- the wall would be brighter than the subject. If nothing else, always
try to get some soft key light on
your subject. (This can sometimes be done with a reflector providing
there's a good light source to use.)
Audio - remember that we often
have to make the audio perspective match the camera perspective. In other
words when we are far from our subject, we expect the sound to sound
like it's further away. When we get close to a subject we expect the
sound to sound like it's closer.
Room tone - Your timeline should always
have something in it - as almost all spaces have some amount of ambient
noise. This includes fans, air conditioners, electronics, outside background
noises, etc. When shooting interviews or storytelling pieces with
conversation, always record 30-60 seconds of "room tone." This
gives you audio filler to use that can bridge
gaps between exchanges in conversations, or put under additional dialog recorded after the scene has wrapped.
After Production / before Post-Production Activities-
- Log & transcribe footage (it's almost always worth the time)
Post-Production / Advanced Editing Concepts
- Apple Final Cut Pro or Avid? http://vimeo.com/22029233
- Practice, practice, practice - Know your editing app and become comfortable and proficient using it.
- Sometimes it's good to give scenes a distinct look, such as when illustrating dreams or flashbacks. Plug-ins are 3rd party software modules that function within a host
application. For instance Boris Continuum offers a wide assortment of
effects for After Effects, AVID, and FCP. Once you purchase the plug-in
package and install it, you'll find the Boris Effects accessible through
the standard video and audio effects menus within the host application.
- Plug-ins such as Red Giant Looks and Boris Continuum are worth noting. Some can be purchased with an Academic Discount. Don't have a discount? Get one while you're a student through B&H or JourneyEd.
The concept of advanced editing implies an
inherent need to work efficiently. High-end editing suites can
cost a thousand dollars an hour to rent. Both producer and editor are
under the gun to work quickly and efficiently.
The producer shouldn't be in an edit session wondering what shot to
use. Most major decisions can be planned in advance and made outside
of costly studios or editing suites.
Always be sure to copy any media you want to use in your project into
your master media folder. Do this before importing the media.
If you import directly over the network or from a mounted disc,
your editing software will expect that drive or disc to be mounted in order to access your media.
So if your client hands you a disc containing their new logo, what will
you do with it?
Using vectorscopes, waveform monitors and TBCs:
Video signals can be broken down into two components: luminance and
chrominance. Luminance is the brightness component & chrominance
is the color component.
A good editing suite will have a vectorscope and waveform monitor set
up, so that the video levels and color can be objectively monitored.
It's easy to make graphics in Photoshop too bright, but if you keep your
eyes on the waveform monitor, you can tell when the signal reaches 100 IRE.
Waveform Monitor - A device used to examine the luminance portion of the video signal and its synchronizing pulses. The scale starts at -40, goes to 0, then up to 120 IRE units (IRE = Institute of Radio Engineers). One f-stop translates into about 20 IRE units. The major setting to be aware of is black level: digital black should register at 0 IRE. (Analog/NTSC black is 7.5 IRE.)
The brightest white shouldn't be any hotter than 100 IRE on a waveform monitor.
Be sure you know how to monitor & change luma levels in your editing application.
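As a rough sketch of how IRE relates to the numbers your editing app shows, here's the mapping assuming 8-bit Rec. 601 "studio swing" levels, where black sits at code 16 and 100 IRE white at code 235 (check the spec for your actual format before relying on this):

```python
# IRE <-> 8-bit code values, assuming Rec. 601 "studio swing":
# 0 IRE (black) -> code 16, 100 IRE (white) -> code 235.

def ire_to_code(ire):
    return round(16 + ire * (235 - 16) / 100)

def code_to_ire(code):
    return (code - 16) * 100 / (235 - 16)

print(ire_to_code(0), ire_to_code(100))   # 16 235
print(round(code_to_ire(255), 1))         # ~109 IRE: "superwhite", not broadcast legal
```

This is why a full-white Photoshop graphic (code 255) reads over 100 IRE on the scope.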
Vectorscope - A vector display measuring device that allows visual
checking of the phase and amplitude of the color components of a video
signal. They are especially useful when used with color bars, as the
display face has targets that show both proper phase and saturation.
TBC (time base corrector) - A piece of equipment used
to correct instabilities in analog video signals, provide synchronization
between video signals, and adjust phase differences in signals to correct
color or make them consistent with other signals. TBCs usually have a "proc
amp" which lets you "tweak" or adjust the video's
brightness, hue, saturation and setup.
- Basic proc amp adjustments include
- Chroma (amount of color)
- Phase / Hue (actual color)
- Brightness (amount of gain or brightness)
- Contrast (on some)
- Setup (aka pedestal) - A signal that elevates the black level and all other portions of the video signal
If you have a copy-protected VHS tape or DVD that you need to dub, you can run the video through a TBC. It strips out the old sync, which has been modified to create dubbing problems, and replaces it with new, clean sync.
FCP, Premiere and Avid provide internal waveform monitors and vectorscopes.
This provides an excellent way to check levels for graphics and when
applying video effects (these are often too bright for legal video).
Color Bars are electronic reference signals generated by cameras
or post-production equipment. They should always be recorded at the head
of a videotape to provide a consistent reference in post production.
They can be used for matching the output of two cameras in a multi-camera
shoot and to set up video monitors. In general there are two types of bars: full field and SMPTE (split). The SMPTE bars are more useful.
When digitizing source footage, it's always a good idea to capture some
of the bars from the beginning of each reel. This lets you check the
digitized footage to ensure color accuracy.
Strive to get consistent audio- especially with dialog
and narration. Don't just trust your ears, but use the audio meter
to make sure all of your clips reach the same level. For example, you
might choose -14 as the average level to reach for your spoken narration.
As you add or edit narration, make sure it reaches -14 consistently.
Use markers to edit video to the beat. Play your video in real time and press the m key to set markers (both Premiere & FCP). These act as visual guides to edit clips to. This is also very useful for cutting out sections of songs (E.g. verses and choruses).
Audio normalizing - Sometimes you end up with an audio track that's too soft. You've cranked the gain up and it's still too low. What can you do? Normalize the audio tracks. Normalizing lets you boost the audio signal, increasing the overall amplitude (loudness)
of the track. However it will also bring extraneous noise (background
hums, etc.) up as well.
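The gain a peak-normalize applies is just the difference between your target level and the loudest peak in the clip. A minimal sketch of the math (the sample values are made up; real apps work on whole audio files):

```python
import math

# How much boost does a peak-normalize apply? The gain is simply the
# target level minus the clip's loudest peak, in dBFS.

def peak_dbfs(samples):
    peak = max(abs(s) for s in samples)   # samples in the -1.0..1.0 range
    return 20 * math.log10(peak)

def normalize_gain_db(samples, target_dbfs=-14.0):
    return target_dbfs - peak_dbfs(samples)

quiet_clip = [0.05, -0.1, 0.08, -0.02]    # peaks at 0.1 (-20 dBFS)
print(round(normalize_gain_db(quiet_clip), 1))   # 6.0 dB of boost needed
```

Note the same 6 dB of boost applies to the noise floor too, which is the downside mentioned above.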
Audio sweetening - FCP and Avid are great video editing
applications but are not optimized for audio. Sweetening audio is the
process of adding sounds and manipulating the signal in complex ways
that our video application is simply not suited for. For example, sweetening allows audio engineers to add effects, make a surround sound mix, or add Foley sounds.
Even simple audio edits are better off carried out in an audio editor.
Consider the time base of video. We have a frame approximately every 1/30 of a second. When we cut from one image to another, the cut can only happen at intervals of about 1/30 of a second.
If we cut audio along with our frame of video we run into problems. The reason is that audio is a waveform oscillating back and forth, sampled at a much finer resolution than video frames (typically 44.1 kHz or 48 kHz). Ideally we need to cut the audio at a zero crossing - that is, when the waveform is at the 0 point as it crosses over from positive to negative or vice versa. Otherwise, if we cut the audio in the middle of a peak, we will get a pop. So if we cut the audio along with the video (on a steady interval of 1/30 of a second), we are likely to get a pop, as chances are slim that the audio will happen to be at a zero crossing.
Audio editing applications let you cut audio on the 0 crossings.
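The zero-crossing search is easy to picture in code. This is just a sketch of the idea - the tiny sample list stands in for a real audio buffer:

```python
# Find the zero crossing nearest a desired cut point so the edit
# lands where the waveform crosses zero and won't pop.

def nearest_zero_crossing(samples, cut_index):
    best = None
    for i in range(1, len(samples)):
        # a crossing: consecutive samples differ in sign
        if (samples[i - 1] < 0) != (samples[i] < 0):
            if best is None or abs(i - cut_index) < abs(best - cut_index):
                best = i
    return best

wave = [0.3, 0.5, 0.2, -0.1, -0.4, -0.2, 0.1, 0.3]
print(nearest_zero_crossing(wave, 5))    # 6 -- the sign flip closest to index 5
```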
Check sequence settings - these can differ from the
clip settings. If they are different, you'll have to render your clips.
Know how to add tracks. Adding tracks is easy. Just control click on the background of the timeline (not
in a track) and select "add track". You can also add and
delete tracks in the drop down sequence menu.
Linking and Unlinking tracks - You sometimes want
to link or unlink tracks. Select the tracks and choose link or unlink
from the drop down "Modify" menu.
Split edits, also known as J or L edits, can be made in many ways in FCP. The easiest way is to select the Roll Tool (press R), and while holding down the Option key, drag the edit point left or right.
Extend edit is a quick way to bring a clip to wherever
the timeline is parked. To use it first highlight the transition you
wish to extend, place the time indicator where you want to extend it
to, then press the e key.
Match frame - Have a frame in your timeline that
you want to find the original clip for? Put the time indicator over
the highlighted frame in the timeline and press the f key. Voila - your clip will load into the viewer. (This also works in Avid.) Also, if
you want to locate the clip in its bin, place the time indicator over
the highlighted clip and press the F key. Your clip should be shown
in the Browser.
Applying effects to an entire sequence - There are
many reasons why you might want to apply an effect to an entire sequence.
For instance you might want to add a letterbox, or apply a color treatment
to give it a particular "look". Simply nest your edited sequence into
another, and apply the effect to the new sequence.
EDL (edit decision list) - Most editing applications can import and export EDLs. In FCP, once your program is complete, you can generate an EDL through File -> Export EDL. (More on EDLs below.)
Remember the CyberCollege Editing Guidelines!
Additional reminders, tips and things to consider:
- Always start and end in black
- Spend time with the graphics and visual design aspect of your videos. Come up with a "look" or visual treatment for your graphics. Consider the colors and textures you might use for time and place identifiers in a science fiction story. Or the treatment you might give a documentary on a haunted Victorian mansion. The look and feel of your graphics should be consistent throughout the entire video.
- Red Giant Magic Bullet "Looks" is an inexpensive plugin for editing systems that provides enhanced looks. It can be applied to a single shot or an entire scene.
- Spend time with the sound design aspect of your videos. Be sure your audio levels are consistent throughout your sequence(s).
- I usually suggest editing with audio in mind first. In other words, listen carefully to how the interviews or other elements flow from one to the next. I also lay down music in my first pass, as I'll often cut to the beat and use it to set important moods.
- Edit B-roll to the natural phrases of dialog, or to the beat of the soundtrack
- Sounds can be used to "justify" video edits. [Pellucid, Gesi's piece, or Run Lola Run are good examples]
- Be sure to avoid jump cuts & flash frames
Basic Color Correction & Finishing
Color correction is the process of enhancing the visual appearance of an image with regard to hue, saturation, and contrast.
If you want to create videos that can be broadcast, it's important to understand at least the basics of color and gamma correction.
For TV, it's imperative that luma levels don't go above 100 IRE and that the audio is consistent with the delivery specifications of the station. Here are some sample technical delivery specifications for broadcast TV:
Be sure you know how to perform basic color correction in whatever software you use.
Sadly, applying effects to a group of clips is a weakness of Premiere. One can copy and paste attributes, but this is clunky.
RGB (true component) versus the Color Difference System
Color difference signals are one way to break down the information in a video signal. (Other ways include composite video, Y/C or S-Video, and RGB.) The color difference signals can be expressed as R-Y, B-Y or Cr, Cb or sometimes U, V. These color difference signals are used in the digitizing process. What the heck is a color difference signal?
Color difference signals: TV uses an additive color system based on RGB as the primary colors. Mix red, green and blue together and you get white, right? Well, if the RGB data were stored as three separate signals (plus sync), it would take a lot of room to store it all. Fortunately some great technical minds figured out a way to pack this information into a smaller box (figuratively speaking), devising a way to convert the RGB information into two new video signals that take up less room, with minimal loss in perceived picture quality. These are the color difference signals, typically represented by UV or Cr Cb. So when you see YUV, it is referring to Y (luminance) and UV (the two color difference signals).
Combining the RGB signals according to the original NTSC broadcast
system standards creates a monochrome luminance signal (Y). So you
can basically pull out the blue and red signals and subtract them from
the total luminance to get the green info.
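Here's that luminance equation and the round trip in a few lines of Python (the coefficients are the standard NTSC/Rec. 601 weights; RGB values assumed normalized to 0-1):

```python
# The NTSC/Rec. 601 luminance equation and the two color difference
# signals. Given Y, R-Y and B-Y, the green info falls back out.

def rgb_to_ydiff(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # monochrome luminance (Y)
    return y, r - y, b - y                   # Y, R-Y, B-Y

def ydiff_to_rgb(y, ry, by):
    r, b = ry + y, by + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # recover green from Y
    return r, g, b

y, ry, by = rgb_to_ydiff(1.0, 1.0, 1.0)      # pure white
print(round(y, 3))      # ~1.0 -- white is full luminance
r, g, b = ydiff_to_rgb(y, ry, by)
print(round(g, 3))      # ~1.0 -- green recovered without being transmitted
```

Notice white produces color difference values of (nearly) zero - there's no color to describe, which is part of why the scheme compresses so well.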
Interframe versus Intraframe
Only the highest end video is uncompressed. Almost all video (especially HD) uses some sort of compression. When looking at the characteristics of various video recording gear, it's important to understand the basic differences between two general types of compression.
Most standard definition production TV codecs use some type of intraframe compression. This is where we take each individual frame and squeeze it so it all fits onto tape or disk. Examples of intraframe codecs include:
- Apple ProRes
- Avid DNxHD
- Panasonic D5
However many new HD recording formats use interframe compression. The important thing to understand about interframe compression is that it compresses over time as well as space. In interframe compression we divide the picture into smaller rectangles called macroblocks. These macroblocks are compressed and tracked over time and placed into a GOP (Group of Pictures). Examples of interframe codecs include:
- HDV (MPEG-2)
- XDCAM (MPEG-2)
MPEG-2 is a popular interframe codec. It is very efficient in that it can squeeze a high definition video image into the same amount of space that a standard DV stream can occupy. (That's why we can record HDV onto a miniDV tape.) The other interesting thing about MPEG-2 is that it's scalable - we can make the frame dimensions varying sizes (720 x 480, 1440 x 1080, etc.). The downside is that GOP-based codecs can be a bit more taxing to edit. Deconstructing the GOPs during the edit process tasks the computer to a greater degree than intraframe codecs.
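A quick back-of-envelope check on why HDV fits on a miniDV tape - both streams run at roughly 25 Mbit/s (a ballpark figure, not an exact spec):

```python
# Why does HDV fit on a miniDV tape? Both DV and HDV streams run at
# roughly 25 Mbit/s, so an hour takes about the same space either way.

mbps = 25
gb_per_hour = mbps * 1e6 * 3600 / 8 / 1e9   # bits -> bytes -> gigabytes
print(gb_per_hour)    # 11.25 GB per hour, either format
```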
Off-line & On-line
Traditionally one of the purposes of off-line systems was to create EDLs that could be brought into higher-end on-line systems. The first non-linear editors (D/Vision and early Avids) were sophisticated off-line systems that could not only generate an EDL, but let editors work with VHS-like quality. With advances in technology, non-linear editing systems got steadily better, and today off-the-shelf PCs or Macs are capable of editing on-line video.
Time code is an electronic numerical signal recorded or embedded into
the signal, which allows video and audio to
be synchronized with frame accuracy. With time code, each frame or
location on a tape is assigned a unique number. This allows us to access
that specific frame or location in the media precisely, again and again with frame accuracy.
Here in the US with our NTSC standard, we've been taught that video runs at 30 frames per second - actually it's 29.97. While we count it on a 30 frames per second basis, video runs at 29.97 frames per second.
During recording, a unique timecode number is assigned to each frame of video (metadata). Its format is like a 24-hour clock: xx:xx:xx:xx. "Hours" range from 00 to 23, "minutes" range from 00 to 59, "seconds" range from 00 to 59, and "frames" range from 00 to 29 (NTSC).
There are two ways to count or create timecode (which can usually be selected on the VTR): basic (non-drop) frame and drop frame.
In basic (non-drop) timecode, each new frame of video is assigned
the next higher number (06:01:00:29 becomes 06:01:01:00)
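The non-drop counting scheme is easy to sketch in code (just a counting exercise, not how any editing app actually implements it):

```python
# Non-drop frame counting: every frame gets the next higher number,
# 30 frames per labeled second (NTSC).

def frames_to_ndf(total, fps=30):
    f = total % fps
    s = (total // fps) % 60
    m = (total // (fps * 60)) % 60
    h = total // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

n = ((6 * 3600) + (1 * 60)) * 30 + 29      # frame number of 06:01:00:29
print(frames_to_ndf(n))                     # 06:01:00:29
print(frames_to_ndf(n + 1))                 # 06:01:01:00 -- the next frame
```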
The problem with basic non-drop timecode is that the frame numbers
drift from the actual elapsed time of a program.
Imagine you've been asked to assemble one day's worth of programming
for a TV station. You could set your timecode display to start at 0,
then assemble your programming. When you got to 24 hours you could
call it a day (har har) & go home. If video actually ran at 30
frames per second you'd be fine & you'd have a job waiting for
you the next day.
Let's assume a 30 frame per second rate, as our basic timecode readout leads us to believe, and look at a day:
24 x 60 minutes = 1,440 minutes.
1,440 minutes contains 86,400 seconds.
86,400 seconds x 30 = 2,592,000 frames
But video actually runs at 29.97 frames per second - that's 3/100ths of a frame per second less than 30.
There are actually 2,589,408 frames of video in a 24-hour period.
24 x 60 minutes = 1,440 minutes.
1,440 minutes contains 86,400 seconds.
86,400 seconds x 29.97 = 2,589,408
2,589,408 frames of video in a 24 hour period
2,592,000 frames (counting at 30)
- 2,589,408 frames (actual, at 29.97)
= 2,592 frames of video difference / 29.97 (frames per second) = 86.5 seconds.
That's almost a minute and a half too much programming!
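You can verify the arithmetic in a few lines:

```python
# Check the drift arithmetic: counting at 30 fps vs. the actual
# 29.97 fps over a 24-hour broadcast day.

seconds_per_day = 24 * 60 * 60            # 86,400 seconds
counted = seconds_per_day * 30            # 2,592,000 frames if video ran at 30
actual = seconds_per_day * 29.97          # 2,589,408 frames really displayed
drift_seconds = (counted - actual) / 29.97

print(counted, round(actual))             # 2592000 2589408
print(round(drift_seconds, 1))            # 86.5 -- almost a minute and a half
```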
If you went by your counter and stopped when it reached 24 hours, your program would run almost a minute and a half long. However if you were smart, you would use drop frame time code, totally bypassing these real-time issues.
Drop frame time code is harder to calculate, but it provides a numbering
system that is more accurate, timewise.
In drop frame time code, the frame numbers 00 and 01 are dropped from the start of each minute, except for every tenth minute. That is, minutes 00, 10, 20, 30 and so on do not have any frame numbers dropped, but all other minutes do.
You can tell when something is drop frame because the time code display uses semicolons (E.g. 06;00;59;29 becomes 06;01;00;02).
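The drop-frame numbering rule can be sketched as a frame-count-to-timecode conversion (a counting sketch built on the standard 17,982 real frames per ten minutes; not production code):

```python
# Drop-frame numbering: skip frame numbers 00 and 01 at the start of
# every minute except multiples of ten. 17,982 real frames pass per
# ten-minute block, during which 18 numbers get skipped.

def frames_to_df(n):
    d, m = divmod(n, 17982)                   # whole ten-minute blocks
    if m < 2:
        n += 18 * d
    else:
        n += 18 * d + 2 * ((m - 2) // 1798)   # 1798 frames per dropped minute
    f = n % 30
    s = (n // 30) % 60
    mi = (n // 1800) % 60
    h = n // 108000
    return f"{h:02d};{mi:02d};{s:02d};{f:02d}"

print(frames_to_df(1799))    # 00;00;59;29 -- last frame of minute 0
print(frames_to_df(1800))    # 00;01;00;02 -- numbers ;00 and ;01 were skipped
```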
EDL (edit decision list)
Edited programs often need to be rebuilt or re-edited from the raw ingredients
or source files they were created from. When a program is edited, you
can create and save an EDL corresponding to the master tape or master
sequence in the timeline.
An EDL is a simple ASCII text file that describes the events. Once created, an EDL can be used to re-edit the project. A good EDL allows sequences to be recreated with frame accuracy, including the placement and types of transitions.
Most editing systems - both linear and non-linear - create an EDL as you assemble or create the project. Each edit made adds a decision to the list.
Title: Johnny's Big Adventure
REM: Format: CMX 340/3400
FCM: Non-drop frame
REM: Record times are non-drop
What’s in an EDL? (Structure)
The EDL usually starts with a title, date and the time code information
(drop-frame, non drop frame). Comments are preceded by the REM flag.
Each edit becomes an event in the list. A single event describes the
reel names or source, (BL means black), track type (V, A1, A2), and transition
(cut, wipe, dissolve or key) along with the source tape time code in
and out points, followed by the record tape time code in and out points.
Note that events with dissolves take two lines to describe the event.
They might start with a cut, then the dissolve that takes place.
Remember that semicolons (;) denote drop-frame and regular colons (:)
signify non-drop frame.
The EDL created by an editing system is typically unique to that system,
but follows one of a handful of standard formats:
- CMX 340 (2 audio tracks)
- CMX 3600 (4 audio tracks)
- Sony 2000 (4 audio tracks)
- Sony 5000 (2 audio tracks)
- Sony 9000 (4 audio tracks)
- Sony 9100 (4 audio tracks)
- Grass Valley (4 audio tracks)
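To see what an event line boils down to, here's a sketch that splits a CMX 3600-style line into its fields (the sample line and the field names are hypothetical, for illustration only - real EDLs vary by system):

```python
# Split a CMX 3600-style event line into named fields:
# event number, reel, track, transition, then the source and
# record in/out timecodes.

def parse_event(line):
    keys = ("event", "reel", "track", "transition",
            "src_in", "src_out", "rec_in", "rec_out")
    return dict(zip(keys, line.split()))

line = "001  TAPE01  V  C  01:00:10:00 01:00:15:00 00:00:00:00 00:00:05:00"
ev = parse_event(line)
print(ev["reel"], ev["transition"])   # TAPE01 C -- a cut from reel TAPE01
print(ev["rec_in"], ev["rec_out"])    # where the shot lands on the record side
```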
Vocabulary (Know and be able to define these terms)
- Audio normalizing
- Audio sweetening
- Color bars
- Media Manager
- Pedestal (aka setup)
- Proc Amp
- Setup (aka pedestal)
- Timecode (DF vs NDF)
- Waveform Monitor
- Window dub