
T351 Summer - Week 4

Lecture & lab this week:

  • Review Midterm quiz
  • Cables and connectors (WooHoo!)
  • Digitizing / Color Sampling / Codecs
  • We'll finish editing the Interview/Feature story projects and review them.

Reality Check

  • Lighting Exercise - Everyone has turned in QuickTime movies, logs & critiques, yes?
  • Final Projects: You are scheduled to shoot these next week and should already have turned in a treatment. Looking through your Final Project materials, I see that not everyone has done this. Please note that a treatment is required: all Final Projects require a treatment (first) and then a script (second). I want to meet with each of you about your final project this week and review your treatment, so please have it uploaded by tomorrow. Your scripts are due by 5 pm Friday (emailed Word docs, PDFs, or RTFs are fine; you can also leave them in my 2nd-floor mailbox). In order to receive a grade for the Pre-Production components of your Final Project, you must turn in your materials this week.
    • Please do not begin shooting unless you have met with me about your project and have turned in your pre-production materials.
  • Drama/Storytelling projects: How is your planning coming along? (Get status from each group.)
    • You should shoot these this week, and edit Thursday & Friday. We'll look at these Tuesday - in one week. Please remember that your projects don't have to be long, but they do need a clear story with a beginning and an ending. And remember the #1 ingredient of a story is conflict, followed by resolution. It's important to show believable motivation and action.
    • Each team should have turned in one script (group grade). I expect everyone to have their own edit. They can work together *if* there is a compelling reason for them to do so. You need to tell me this week if anyone plans to work together on the edit.
  • Wednesday - Just a short lab to show you the jib and to meet about Final Projects. You have the rest of the day to work on Drama/Storytelling projects.
  • Thursday - No Lab - The time is for completing and editing the Drama/Storytelling projects.

Agenda:

  • Storytelling
  • Scripting
  • Advanced Production & Editing Concepts
  • Off-line / On-line editing
  • Time code
  • EDLs
  • Review Interview/Feature Story projects. NOTE: Students must review and critique everyone's project (these go into the "Peer Critique" Oncourse folder). For each story, write down a few things that are working and the weakest aspects of the production. Also note your impression and any advice. This will go into your In-class Critique grade.

Readings:

Storytelling

Stories have a beginning, middle and end.

An interesting scenario is NOT a story. (E.g. a man wakes up in a rowboat.) Once you introduce the prime character and the conflict, you need to resolve it.

Keep your storytelling projects short & sweet! It's much better to have 4 minutes of gold than 10 minutes of yuck.

A good goal is to always try to make the viewer wonder, "What's going to happen next?"

[Look at examples]

Story & character - A story usually involves one or more individuals and the conflict they face. Characters often transform (have a character arc). If there is just a situation spiraling in one direction (I feel sad because my boyfriend died and this makes me drink...), that's not really a story. Similarly, just having people fight or make love is not storytelling. You need to make us care about the characters and the story. We need to know the basic motivation of our characters and why they are doing what they are doing. It's not what happens to us that defines us or our characters - it's how we deal with it.

Clearly draw your characters - There's a concept called "first action" that addresses the very first time the audience sees a character. The idea is that we find out something about the person that identifies who they are and that provides insight into their character. Maybe the first time we see "Joe" (the hero in our short story) he is coming out of a building and holds the door open for someone coming in. It takes 4 seconds to show this and establishes the fact that Joe is probably an alright guy. Maybe the first time we see "Pete" (the bad guy in our short story) he is honking at a homeless person slowly pushing her shopping cart across the street.

Scripting

All shows can be scripted. A treatment is the way to work up a story and refine the flow. Once you've worked out a treatment, the next step is to create a script. Here are a few examples of my documentary treatments:

What makes a first-rate video? (What will we be looking at when we grade these?)

  • Video serves a clear purpose. Viewers have many choices on TV and the web. Why should they watch your program? People want content with a point.
  • The message and storyline are clear. There should be a clear and logical introduction and a clear and concise ending. (Jim's 10-minute test.)
  • Every scene, shot and sound is included for a reason. (Advance the story or build the character.)
  • Technically & aesthetically sound
    • Strong composition (depth of frame, rule of thirds, etc.). All shots should be nicely composed. To get more depth remember the Z space (depth). Think of different camera angles and ways to heighten the sense of depth. Try to block action and compositional features along the Z axis.
    • No long (unmotivated) pans/tilts, shaky shots, bumps or awkward or unnecessary zooms
    • Good exposure and tonal balance
    • Thoughtful lighting - Good lighting is expected, and not just to make your subjects visible - it should draw attention to what's important, evoke a feeling, make a statement. One of the worst locations is a small room with light walls. If you just turn on the softbox or bounce light off of the ceiling, it'll light everything and blow out (overpower) your subject. It is usually better to use a small spot in situations like these (e.g. the 150-watt fresnel).
    • Audio - Don't rely on the camera mic (unless you are just shooting B-roll).
    • Consider the mise en scène - everything in the frame should be there for a reason.
    • Edits are motivated
  • Care should be taken to make professional-looking graphics. It helps to have a consistent visual treatment and to use them appropriately.
  • Sound design - take time to finesse the soundtrack. Good sound quality with consistent audio levels. You can get many interesting audio elements from the Library / Audio / Apple loops folder.

Video Production Tips and Techniques

Pre-production

  • Understand the point of your video. Why are you making it and why will a viewer want to watch it? Understand the context of how your video will be viewed.
  • Keep the objective and target audience in mind when developing the treatment and then the script.
  • A strong script and a well-developed preproduction plan are your keys to a successful production. When working on any large-budget video or industrial for corporate clients, always focus on making a bulletproof script. This is a script that has been carefully scrutinized and has been signed off on again and again. If it isn't in the script, it isn't in the video! This way any changes after the script has been approved are billable.

    Children's Museum example (script change after it was approved)

  • Know the S.O.P. and conventions of the trade. These exist for a reason. This includes using conventional formats for proposals, treatments and the various types of scripts.
  • On scripting non-fiction: See Jim's super secret (NOT) production planning form. Note the "ingredient list". These are the essential ideas that will be embedded into the video. Not all have to be verbally articulated.
  • Technical: know your delivery requirements (E.g. 1080i ProRes).
  • Graphics - Don't wait until after production! Plan your graphics along with your other shots and imagery. It helps to have a graphic style sheet that you will use when making graphics. Just as you don't want to search for lost shots while editing, you don't want to spend time pondering what graphics to have or what fonts to use during an edit session. Prepare graphics in advance. Do they have to be centercut safe?

    The image above shows a 4x3 centercut mask placed underneath FCP's 16x9 safe action and safe text guides. Many video cameras have a variety of viewfinder guides you can toggle through.

    Here's a link to a 15-second spot that had to fit inside the 4x3 safe action and safe text areas:

    http://www.youtube.com/watch?v=drGvfSs0cfg

  • Plan, plan, plan...... There is never too much pre-production.
  • It's also important to develop detailed production schedules. Consider the most efficient way to shoot a production. It's typically not in the order of the script. Shot sheets tell you what shots you need at any given location.

    When you have multiple locations, talent, a crew and two trucks of people and gear to haul around, it's important to have a good production plan. Otherwise you're going to waste money and people's time. (not good)

    Check out this example for a special episode of The Friday Zone, which was all about cicadas:

Production

Establish your scene - We need to know where we are, what time of day it is, where objects and people are, and the layout of the space. Always be sure to establish this or you will confuse the viewer.

Shooting Techniques:

  • Continuity - Unless you have a very good reason for doing so, follow the rules of continuity.
  • Film Style - Repeat the action capturing it with different (cut-able) shots. Works great for fiction and non-fiction (capturing B-roll)
  • Rule of threes - When shooting, think of three shots: the one that will come right before the shot you are about to take, the shot itself, and the one that will come immediately after.
  • It's Art! - Try to make every shot well composed and interesting (a piece of art in itself).
  • Think Deep Thoughts - To get the maximum sense of depth, try composing your shot with 3 distinct layers: foreground, mid-ground and background. If you're shooting an establishing shot of the exterior of a building, you can set up the camera with something in the foreground (e.g. a branch). You can also frame strong lines so they lead into the frame as opposed to running perpendicular to it.
  • Be shallow - One of the most definable traits of film is its shallow depth of field. You can get the most out of a small-format camera by using a large aperture (small f-stop) and a telephoto (longer focal length). If there is plenty of light, I'll turn on the ND filters until I'm able to shoot with something close to an f/2.
  • Warm things up - Slide a warming card into your camera case. Warming cards are slightly blue. When you white balance on one you trick the camera into shifting the hue just a little. They make everything look a little rosier (warmer). Alternatively you can cool things down (tint blue) by white balancing on a slightly warmer/rose-colored card. Sure you can color correct in post, but this saves rendering.
    • [Images: white balance comparison - normal white vs. warming WB card]

  • Get a move on - Move the camera or move the subject. Dollies require a lot of gear and setup time, and often the move we need is only a few feet. Small jibs and sliders offer an inexpensive, simple, and effective solution. Slide rails are inexpensive, transportable, and provide an easy way to move a camera in one direction. They can also get into places where a dolly can't (e.g. on a tabletop). Small jibs can provide both horizontal and vertical movement. Small video cameras and DSLRs are lightweight enough to use with the inexpensive variety of Steadicam (without the counterbalanced/articulating arm). Even a monopod has enough weight to stabilize a very light video camera. For these to be most effective, you want to set up your shots with foreground, mid-ground and background elements.

Lighting - Except for shooting B-roll in run & gun situations, always plan on lighting your subject. I usually try to make the subject about 2 stops brighter than the background. Yes, there are exceptions, but almost always we need to make our subject stand out. This is especially true with interviews. This is why you usually don't want to place subjects against a light wall- the wall would be brighter than the subject. If nothing else, always try to get some soft key light on your subject. (This can sometimes be done with a reflector providing there's a good light source to use.)

Audio - remember that we often have to make the audio perspective match the camera perspective. In other words when we are far from our subject, we expect the sound to sound like it's further away. When we get close to a subject we expect the sound to sound like it's closer.

Room tone - Your timeline should always have something in it, as almost all spaces have some amount of ambient noise. This includes fans, air conditioners, electronics, outside background noise, etc. When shooting interviews or storytelling pieces with conversation, always record 30-60 seconds of "room tone." This gives you audio filler that can bridge gaps between exchanges in conversations, or sit under additional dialog recorded after the scene has wrapped.

After Production / before Post-Production Activities-

  • Log your footage
  • Transcribe interviews (it's almost always worth the time)
  • Make a window Dub / Timecode print - This is a copy of the program or footage which has the timecode numbers superimposed on the screen. I often give my producers a window dub of each reel so they can pick their favorite takes.

Post-Production / Advanced Editing Concepts

  • Apple Final Cut Pro or Avid? http://vimeo.com/22029233
  • Practice, practice, practice - Know your editing app and become comfortable and proficient using it.
  • Sometimes it's good to give scenes a distinct look, such as when illustrating dreams or flashbacks. Plug-ins are 3rd party software modules that function within a host application. For instance Boris Continuum offers a wide assortment of effects for After Effects, AVID, and FCP. Once you purchase the plug-in package and install it, you'll find the Boris Effects accessible through the standard video and audio effects menus within the host application.
    • Plug-ins such as Red Giant Looks and Boris Continuum are worth noting. Some can be purchased with an Academic Discount. Don't have a discount? Get one while you're a student through B&H or JourneyEd.

http://www.campusmoviefest.com/movies/8741-sparks

The concept of advanced editing implies an inherent need to work efficiently. High-end editing suites can cost a thousand dollars an hour to rent. Both producer and editor are under the gun to work quickly and efficiently.

The producer shouldn't be in an edit session wondering what shot to use. Most major decisions can be planned in advance and made outside of costly studios or editing suites.

Remember: It's possible (and highly recommended) to plan every shot in advance. This is what scripts and storyboards are for.

Importing media

Always be sure to copy any media you want to use in your project into your master media folder. Do this before importing the media. If you import directly over the network or from a mounted disc, your editing software will expect that drive or disc to be mounted in order to access your media.

So if your client hands you a disc containing their new logo, what will you do with it?

 

Drag & drop to Import footage - On most Mac applications you can simply drag your media files over a bin to import them directly into that bin.

Using vectorscopes, waveform monitors and TBCs:

Video signals can be broken down into two components: luminance and chrominance. Luminance is the brightness component & chrominance is the color component.

A good editing suite will have a vectorscope and waveform monitor set up, so that the video levels and color can be objectively monitored. It's easy to make graphics in Photoshop too bright, but if you keep your eyes on the waveform monitor, you can tell when the signal reaches 100 IRE.

Waveform Monitor - A device used to examine the luminance portion of the video signal and its synchronizing pulses. The scale starts at -40, goes to 0, then up to 120 IRE units (IRE = Institute of Radio Engineers). One f-stop translates into about 20 IRE units. The major settings to be aware of are:

Digital black should register at 0 IRE. Analog/NTSC black should be 7.5 IRE.
The brightest white shouldn't be any hotter than 100 IRE on a waveform monitor.

Be sure you know how to monitor & change luma levels in your editing application.
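The luma-legality check can be sketched in code. This is a minimal illustration, assuming 8-bit Rec. 601 "video" levels, where code 16 is black (0 IRE) and code 235 is reference white (100 IRE); the linear mapping and function names are my own, not any editing application's API.

```python
def luma_to_ire(y):
    """Map an 8-bit luma code (0-255) to approximate IRE units.
    Assumes Rec. 601 video levels: 16 = black (0 IRE), 235 = white (100 IRE)."""
    return (y - 16) / (235 - 16) * 100.0

def is_legal(y):
    """True if the luma code falls within the 0-100 IRE broadcast range."""
    return 0.0 <= luma_to_ire(y) <= 100.0

print(round(luma_to_ire(235)))  # 100
print(is_legal(255))            # False: hotter than 100 IRE
```

A waveform monitor effectively performs this mapping for every pixel column in the frame, which is why an over-bright Photoshop graphic shows up immediately.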

Vectorscope - A vector display measuring device that allows visual checking of the phase and amplitude of the color components of a video signal. They are especially useful when used with color bars, as the display face has targets that show both proper phase and saturation.

NOTE: You can't adjust or manipulate a video signal with just a waveform monitor and vectorscope. They simply let you examine the signal. You must use a TBC, a camera control unit or other device to modify the signal.

TBC (time base corrector) - A piece of equipment used to correct instabilities in analog video signals, provide synchronization between video signals, and adjust phase differences in signals to correct color or make them consistent with other signals. TBCs usually have a "proc amp" which lets you "tweak" or adjust the video's brightness, hue, saturation and setup.

Basic proc amp adjustments include:
  • Chroma (amount of color)
  • Phase / Hue (actual color)
  • Brightness (amount of gain or brightness)
  • Contrast (on some)
  • Setup (aka pedestal) - a signal elevating the black level and all other portions of the video signal

If you have a copy-protected VHS tape or DVD that you need to dub, you can run the video through a TBC. It strips the old sync, which has been modified to create dubbing problems, and replaces it with new sync.

FCP, Premiere and Avid provide internal waveform monitors and vectorscopes. These provide an excellent way to check levels for graphics and when applying video effects (both of which are often too bright for legal video).

Color Bars are electronic reference signals generated by cameras or post-production equipment. They should always be recorded at the head of a videotape to provide a consistent reference in post production. They can be used for matching the output of two cameras in a multi-camera shoot and for setting up video monitors. In general there are two types of bars: full field and SMPTE (split). The SMPTE bars are more useful.

When digitizing source footage, it's always a good idea to capture some of the bars from the beginning of each reel. This lets you check the digitized footage to ensure color accuracy.

Timeline Techniques

Audio

Strive to get consistent audio - especially with dialog and narration. Don't just trust your ears; use the audio meter to make sure all of your clips reach the same level. For example, you might choose -14 as the average level to reach for your spoken narration. As you add or edit narration, make sure it reaches -14 consistently.

Use markers to edit video to the beat. Play your video in real time and press the M key to set markers (both Premiere & FCP). These act as visual guides to cut clips to. This is also very useful for cutting out sections of songs (e.g. verses and choruses).

Audio normalizing - Sometimes you end up with an audio track that's too soft. You've cranked the gain up and it's still too low. What can you do? Normalize the audio track. Normalizing lets you boost the audio signal, increasing the overall amplitude (loudness) of the track. However, it will also bring up extraneous noise (background hums, etc.).
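The idea can be sketched with a simple peak normalizer. This is an illustration of the concept, not how any particular editor implements it; samples are assumed to be floats in the -1.0 to 1.0 range.

```python
def normalize(samples, target_peak=1.0):
    """Boost a track so its loudest sample reaches target_peak.
    Note: any background noise is boosted by the same factor."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)          # silent track: nothing to boost
    gain = target_peak / peak
    return [s * gain for s in samples]

quiet = [0.0, 0.1, -0.25, 0.2]
loud = normalize(quiet)
print(max(abs(s) for s in loud))      # 1.0
```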

Audio sweetening - FCP and Avid are great video editing applications, but they are not optimized for audio. Sweetening audio is the process of adding sounds and manipulating the signal in complex ways that our video application is simply not suited for. For example, sweetening allows audio engineers to add effects, make a surround sound mix, or add foley sounds.

Even simple audio edits are better off carried out in an audio editor. Consider the time base of video: we get a new frame approximately every 1/30 of a second, so when we cut from one image to another, the cut can only happen at intervals of about 1/30 of a second.

If we cut audio along with our frame of video, we run into problems. Audio is a waveform oscillating back and forth, sampled at a high rate (typically 44.1 kHz or 48 kHz). Ideally we need to cut the audio at a zero crossing - the point where the waveform crosses from positive to negative or vice versa. If we cut the audio in the middle of a peak, we will get a pop. So if we cut the audio along with the video (on a steady interval of 1/30 of a second), we are likely to get a pop, as chances are slim that the audio will happen to be at a zero crossing.

Audio editing applications let you cut audio on the 0 crossings.
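As a sketch of what those applications do under the hood (assumed logic, not any particular editor's algorithm), finding the zero crossing nearest a desired cut point looks roughly like this:

```python
def nearest_zero_crossing(samples, cut_index):
    """Return the sample index of the zero crossing closest to cut_index,
    so the edit doesn't land mid-peak and pop."""
    best = None
    for i in range(1, len(samples)):
        # A zero crossing: the sign flips between adjacent samples.
        if samples[i - 1] == 0 or (samples[i - 1] < 0) != (samples[i] < 0):
            if best is None or abs(i - cut_index) < abs(best - cut_index):
                best = i
    return best

wave = [0.5, 0.3, -0.2, -0.6, -0.1, 0.4, 0.7]
print(nearest_zero_crossing(wave, 4))  # 5 (sign flips between -0.1 and 0.4)
```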

Check sequence settings - these can differ from the clip settings. If they are different, you'll have to render your clips.

Know how to add tracks. Adding tracks is easy: just control-click on the background of the timeline (not in a track) and select "Add Track". You can also add and delete tracks from the drop-down Sequence menu.

Linking and Unlinking tracks - You sometimes want to link or unlink tracks. Select the tracks and choose Link or Unlink from the drop-down "Modify" menu.

Split edits, also known as J or L edits, can be made in many ways in FCP. The easiest way is to select the Roll Tool (press R) and, while holding down the Option key, drag the edit point left or right.

Extend edit is a quick way to bring an edit point to wherever the timeline is parked. To use it, first highlight the transition you wish to extend, place the time indicator where you want to extend it to, then press the E key.

Match frame - Have a frame in your timeline that you want to find the original clip for? Put the time indicator over the highlighted frame in the timeline and press the f key. Voilà - your clip will load into the viewer. (This also works in Avid.) Also, if you want to locate the clip in its bin, place the time indicator over the highlighted clip and press the F key. Your clip should be shown in the Browser.

Applying effects to an entire sequence - There are many reasons why you might want to apply an effect to an entire sequence. For instance you might want to add a letterbox, or apply a color treatment to give it a particular "look". Simply nest your edited sequence into another, and apply the effect to the new sequence.

EDL (edit decision list) - Most editing applications can import and export EDLs. In FCP, once your program is complete, you can generate an EDL through File -> Export EDL. (More on EDLs below.)

File Management and Archiving

We can't keep all of our data forever. There are several scenarios you need to know how to deal with:

  • Deleting the media when finished with a project
  • Freeing up space by deleting unused media
  • Archiving your project so you can access it later.

Become familiar with how to back up and archive your project. It is a good way to condense projects by getting rid of unused media. I've got a short article on how to use FCP's Media Manager on my website.

------------------------------------------------------------------------

Cables and Connectors

How does one get video from one device to another? Typically through a cable and a connector. For analog video, the choices include:

  • Composite (single cable) This is the "lowest common denominator." Every video recording & playback unit has this. Try to avoid it, as the composite video signal is prone to a variety of artifacts. Connectors: RCA & BNC.
  • Y/C (a.k.a. S-Video) The idea is to keep the signal broken down into the luminance & chrominance components. This is much better quality than composite. You can find Y/C ports on everything from consumer camcorders to high-end digital VTRs. Connectors: multi-pin connector.
  • *Color difference - The signal is split into three components: Y, R-Y, B-Y (also written YUV or Y'Pb'Pr'). The Y is luminance, U (B-Y) is the blue color difference, and V (R-Y) is the red color difference. Most high-end VTRs (Beta, MII, DV, Digi Beta, etc.) have these connectors. This is the norm for getting Beta SP footage into an editor, and it's a tad better than the Y/C system. Connectors: usually three BNC connectors on each end; RCA connectors are often used on DVD players and projectors.
  • RGB (true component) The Red, Green & Blue signals are kept separate. Not as common as the color difference system. Connectors: BNC & custom multipin connectors (triax).

For digital video you can use:

  • Firewire & USB - These computer buses have been adopted by most equipment manufacturers.
  • SDI (Serial Digital Interface) Found on high-end digital video devices. Can include embedded audio along with the video.
  • HD-SDI - The high-definition version of the SDI digital interface
  • Fibre, ethernet, wi-fi, etc... Anything that is digital can be transferred over computer networks. Bandwidth is a limiting factor.

*Color difference signals are one way to break down the information in a video signal. (Other ways include composite video, Y/C or S-Video, and RGB.) The color difference signals can be expressed as R-Y, B-Y; or Cr, Cb; or sometimes U, V. These color difference signals are used in the digitizing process. What the heck is a color difference signal?

Color difference signals: TV uses an additive color system based on RGB as the primary colors. Mix red, green and blue together and you get white, right? Well, if the RGB data were stored as three separate signals (plus sync), it would take a lot of room to store all the information. Fortunately some great technical minds figured out a way to pack this information into a smaller box (figuratively speaking), devising a way to convert the RGB information into two new video signals that take up less room, with minimal loss in perceived picture quality. These are the color difference signals, typically represented as U, V or Cr, Cb. So when you see YUV, it is referring to Y (luminance) and UV (the two color difference signals).

Combining the RGB signals according to the original NTSC broadcast standard creates a monochrome luminance signal (Y). Given the luminance plus the blue and red color difference signals, you can recover the green information from what remains.
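A minimal sketch of that conversion, using the standard Rec. 601/NTSC luminance weights (0.299, 0.587, 0.114); the function name and value ranges are illustrative:

```python
def rgb_to_color_difference(r, g, b):
    """r, g, b in 0.0-1.0. Returns (Y, B-Y, R-Y)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luminance
    return y, b - y, r - y

# Pure white carries full luminance and zero color difference.
y, by, ry = rgb_to_color_difference(1.0, 1.0, 1.0)
print(round(y, 6), round(by, 6), round(ry, 6))  # 1.0 0.0 0.0
```

Given Y, B-Y and R-Y, a decoder can rebuild B and R directly and then solve the luminance equation for G.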

Video Codecs

Interframe versus Intraframe

Only the highest end video is uncompressed. Almost all video (especially HD) uses some sort of compression. When looking at the characteristics of various video recording gear, it's important to understand the basic differences between two general types of compression.

Most standard definition TV production codecs use some type of intraframe compression. This is where we take each individual frame and squeeze it so it all fits onto tape or disk. Examples of intraframe codecs include:

  • Apple ProRes
  • Avid codecs (AVR25, AVR 50, etc.)
  • DV
  • DVCProHD
  • Panasonic D5

However, many new HD recording formats use interframe compression. The important thing to understand about interframe compression is that it compresses over time as well as space. In interframe compression we divide the picture into smaller rectangles called macroblocks. These macroblocks are compressed and tracked over time and placed into a GOP (Group of Pictures). Examples of interframe codecs include:

  • HDV (MPEG-2)
  • XDCAM (MPEG-2)
  • MPEG-4
  • H.264

MPEG-2 is a popular interframe codec. It is very efficient in that it can squeeze a high definition video image into the same amount of space that a standard DV stream occupies. (That's why we can record HDV onto a miniDV tape.) The other interesting thing about MPEG-2 is that it's scalable - we can make the frame dimensions various sizes (720 x 480, 1440 x 1080, etc.). The downside is that GOP-based footage can be more taxing to edit: deconstructing the GOPs during the edit process taxes the computer to a greater degree than intraframe codecs do.

Off-line & On-line

Traditionally, one of the purposes of off-line systems was to create EDLs that could be brought into higher-end on-line systems. The first non-linear editors (D Vision and early Avids) were sophisticated off-line systems that could not only generate an EDL, but let editors work with VHS-like quality. With advances in technology, non-linear editing systems got steadily better, and today off-the-shelf PCs and Macs are capable of editing on-line-quality video.

Time code

Time code is an electronic numerical signal recorded or embedded into the signal, which allows video and audio to be synchronized with frame accuracy. With time code, each frame or location on a tape is assigned a unique number. This allows us to access that specific frame or location in the media precisely, again and again with frame accuracy.

Here in the US with our NTSC standard, we’ve been taught that video runs at 30 frames per second- actually it’s 29.97. While we count it on a 30 frames per second basis, video runs at 29.97 frames per second.

During recording, a unique timecode number is assigned to each frame of video (metadata). Its format is like a 24-hour clock: xx:xx:xx:xx. Hours range from 00 to 23, minutes from 00 to 59, seconds from 00 to 59, and frames from 00 to 29 (NTSC).

There are two ways to count or create timecode (usually selectable on the VTR): basic (non-drop) frame and drop frame.

In basic (non-drop) timecode, each new frame of video is assigned the next higher number (06:01:00:29 becomes 06:01:01:00)
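Non-drop counting is simple enough to sketch directly (assuming the NTSC 30-frame count; the function name is illustrative):

```python
def frames_to_ndf(frame):
    """Convert a raw frame count to a non-drop timecode string."""
    f = frame % 30
    s = (frame // 30) % 60
    m = (frame // 1800) % 60        # 1,800 frames per counted minute
    h = (frame // 108000) % 24      # 108,000 frames per counted hour
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

start = 6 * 108000 + 1 * 1800 + 29  # the frame counted as 06:01:00:29
print(frames_to_ndf(start + 1))     # 06:01:01:00
```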

The problem with basic non-drop timecode is that the frame numbers drift from the actual elapsed time of a program.

Imagine you've been asked to assemble one day's worth of programming for a TV station. You could set your timecode display to start at 0, then assemble your programming. When you got to 24 hours you could call it a day (har har) & go home. If video actually ran at 30 frames per second you'd be fine & you'd have a job waiting for you the next day.

Let's assume a 30 frames per second rate, as our basic timecode readout leads us to believe, and look at a day:

24 hours
24 x 60 minutes = 1,440 minutes
1,440 minutes contains 86,400 seconds
86,400 seconds x 30 = 2,592,000 frames

But video actually runs at 29.97 frames per second - 3/100ths of a frame per second slower than 30.

There are actually 2,589,408 frames of video in a 24-hour period.

24 hours
24 x 60 minutes = 1,440 minutes
1,440 minutes contains 86,400 seconds
86,400 seconds x 29.97 = 2,589,408 frames in a 24-hour period

  2,592,000 frames (counting at 30 fps)
- 2,589,408 frames (at 29.97 fps)
= 2,592 frames difference

2,592 frames / 29.97 frames per second ≈ 86.5 seconds

That's almost a minute and a half too much programming!

If you went by your counter and stopped when it reached 24 hours, your programming would actually run almost a minute and a half over. However, if you were smart, you would use drop frame time code, sidestepping these real-time issues entirely.
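The arithmetic above can be checked directly:

```python
seconds_per_day = 24 * 60 * 60       # 86,400 seconds
counted = seconds_per_day * 30       # frames, if video really ran at 30 fps
actual = seconds_per_day * 29.97     # frames that actually play in 24 hours
surplus = counted - actual

print(counted)                       # 2592000
print(round(actual))                 # 2589408
print(round(surplus / 29.97, 1))     # 86.5 seconds of overrun
```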

Drop frame time code is harder to calculate, but it provides a numbering system that stays much closer to actual elapsed time.

In drop frame time code, the frame numbers 00 and 01 are skipped at the start of every minute except each tenth minute. That is, minutes 00, 10, 20, 30 and so on do not have any frame numbers dropped, but all other minutes do.

You can tell when something is drop frame because the time code display uses semicolons (06;01;00;29 becomes 06;01;01;02).
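The skipping rule can be sketched as a conversion from a raw 29.97 fps frame count to a drop-frame display. This is the standard algorithm in outline: figure out how many frame numbers have been skipped so far, then format the adjusted count. (There are 17,982 real frames in every 10-minute block, and 18 numbers are skipped per block.)

```python
def frames_to_df(frame):
    """Convert a raw 29.97 fps frame count to a drop-frame timecode string.
    Frame numbers 00 and 01 are skipped each minute, except every tenth."""
    d, m = divmod(frame, 17982)       # 17,982 real frames per 10 minutes
    if m < 2:
        f = frame + 18 * d            # 9 minutes x 2 skipped numbers per block
    else:
        f = frame + 18 * d + 2 * ((m - 2) // 1798)
    return "{:02d};{:02d};{:02d};{:02d}".format(
        f // 108000, (f // 1800) % 60, (f // 30) % 60, f % 30)

print(frames_to_df(1800))    # 00;01;00;02 (frames 00 and 01 skipped)
print(frames_to_df(17982))   # 00;10;00;00 (tenth minute: nothing skipped)
```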

EDL (edit decision list)

Edited programs often need to be rebuilt or re-edited from the raw ingredients or source files they were created from. When a program is edited, you can create and save an EDL corresponding to the master tape or master sequence in the timeline.

An EDL is a simple ASCII text file that describes the edit events. Once created, an EDL can be used to re-edit the project. A good EDL allows sequences to be recreated with frame accuracy, including the placement and types of transitions.

Most editing systems - both linear and non-linear - create an EDL as you assemble the project. Each edit you make adds a decision to the list.

EDL example:

Title: Johnny's Big Adventure
REM: Format: CMX 340/3400
FCM: Non-drop frame
REM: Record times are non-drop
001 BL V C   00:00:00:00 00:00:01:26 00:01:00:00 00:01:01:26
002 BL V C   00:00:01:26 00:00:01:26 00:01:01:26 00:01:01:26
002 017 V D 020 17:06:27:21 17:06:34:06 00:01:01:26 00:01:08:11
003 017 V C   17:06:34:06 17:06:34:06 00:01:08:11 00:01:08:11
003 BL V D 030 00:00:00:00 00:00:01:12 00:01:08:11 00:01:09:23
004 BL V C   00:00:01:12 00:00:01:12 00:01:09:23 00:01:09:23
004 017 V D 020 17:21:20:29 17:21:27:23 00:01:09:23 00:01:16:17

What’s in an EDL? (Structure)

The EDL usually starts with a title, a date, and the time code mode (drop frame or non-drop frame). Comments are preceded by the REM flag. Each edit becomes an event in the list. A single event describes the reel name or source (BL means black), the track type (V, A1, A2), and the transition (cut, wipe, dissolve or key), along with the source tape time code in and out points, followed by the record tape time code in and out points.

Note that events with dissolves take two lines to describe the event. They might start with a cut, then the dissolve that takes place.
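As an illustration, event lines like the ones above can be picked apart with a small parser. This is a sketch against the example's field layout only; real EDL readers handle many more variations (audio tracks, key events, comments).

```python
import re

# event, reel, track, transition (C/D/W/K), optional duration, then
# source in/out and record in/out timecodes.
EVENT = re.compile(
    r"(\d+)\s+(\S+)\s+(\S+)\s+([CDWK])\s*(\d*)\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})")

line = "002 017 V D 020 17:06:27:21 17:06:34:06 00:01:01:26 00:01:08:11"
event, reel, track, trans, dur, s_in, s_out, r_in, r_out = EVENT.match(line).groups()
print(reel, trans, dur)      # 017 D 020
print(r_in, r_out)           # 00:01:01:26 00:01:08:11
```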

Remember that semicolons (;) denote drop-frame and regular colons (:) signify non-drop frame.

The EDL created by an editing system is typically unique to that system, but follows one of a handful of standard formats:

  • CMX 340 (2 audio tracks)
  • CMX 3600 (4 audio tracks)
  • Sony 2000 (4 audio tracks)
  • Sony 5000 (2 audio tracks)
  • Sony 9000 (4 audio tracks)
  • Sony 9100 (4 audio tracks)
  • Grass Valley (4 audio tracks)

Vocabulary (Know and be able to define these terms)

  • Audio normalizing
  • Audio sweetening
  • Color bars
  • EDL
  • Media Manager
  • Off-line
  • On-line
  • Pedestal (aka setup)
  • Proc Amp
  • Setup (aka pedestal)
  • TBC
  • Timecode (DF vs NDF)
  • Vectorscope
  • Waveform Monitor
  • Window dub
Back to Jim Krause's Summer T351 Home Page