T351 - Summer Week 5
Agenda / Reality check
- This week:
- Any missing Final Project pre-production work MUST be turned in today/ASAP to receive any credit. This is 20 points of your Final Project grade. Don't forget to turn in your Storytelling Project critiques by Thursday. I'm interested to read how the group dynamics worked.
- This week there is no lab Wednesday or Thursday. That time is dedicated to working on your Final Projects. However, I will be available to meet with you individually. Please contact me today to schedule one-on-one time if you need it.
- Don't forget about the Multimedia Exercise, which is
due by Monday, June 11. I'll cover exporting for this today.
- Next week (week 6) is the last week of class!
- The Final Quiz will be next Tuesday (6/12). The lab will be devoted to editing.
- Wednesday - View Final Projects. There are no camera checkouts next week (Sunday is the last day to check out a camera).
Lecture & lab today:
- Codecs & Multimedia Architecture
- Format Conversion
- Film to Video transfer
- Digital Video & High Definition Broadcasting
- Metadata & subtitles
- Resources for Post-production
- Final Exam Review
Final Projects: review
All of your written materials (except for storyboards) should be typed. The
amount of detail, along with the appearance and presentation of your materials & packet,
affects your grade.
- These were due last week, but I extended the time for some of you.
- Yes, you CAN script feature stories and documentaries. Scripts are expected for all projects.
- Talent release forms can be found on the T351 website.
Video & Multimedia
Video from a non-linear editor often needs to be converted for uses
other than television.
Other uses include:
- Video games
- Video conferencing
Each of these applications uses different codecs. It's
important to understand the difference between multimedia
systems or architectures (the
container) and codecs, the specific schemes used to
compress and decompress the video and audio. Codecs are designed for specific purposes: some are optimized for delivery, some are best suited for editing.
Two popular multimedia systems are
Windows Media and Apple QuickTime. Both of these systems support a number of different codecs.
The Internet supports many competing architectures: Windows Media, QuickTime & Real.
Websites often offer video at varying bandwidths depending on your connection speed, but often use just
one of the three architectures. You've discovered this if you've ever
found that you couldn't play video from a website until installing the
right plug-in for your browser:
- MTV uses Windows Media.
- CNN uses Real.
- Apple uses QuickTime.
- YouTube uses Flash.
Software such as Apple Compressor, Adobe Media Encoder, Sorenson Squeeze, or Flip4Mac can take a movie and output it in
a variety of formats and resolutions. Most allow for batch processing. You could use one to make three different
versions of a movie, each for a different purpose (web, SD DVD & Blu-ray). In
many cases you can export your video directly from your editing software
into the appropriate codec. Sometimes you need specialized or proprietary
hardware or software in order to encode to a codec in a different multimedia architecture.
Exporting for mass consumption: DVD & web.
Popular delivery codecs:
- Standard-definition Video DVDs use the MPEG-2 codec.
- H.264 is a wonderful multipurpose delivery codec.
- Blu-ray discs can use MPEG-2, MPEG-4 AVC (H.264), and SMPTE VC-1.
If you are exporting movies for non-broadcast uses (E.g. for DVD or YouTube), always add at least a half second of black at the beginning before the program fades up from black. This gives a chance for the player to lock onto the audio. If your audio starts instantaneously, the first few milliseconds of audio will likely be cut off.
Fade to black at the end and add at least a few more seconds of black. This way your DVD won't immediately jump back to the menu. It gives just a moment to conclude the ending.
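For those who like to think in frames rather than seconds, here's a quick illustrative Python sketch of what that padding works out to at NTSC's 29.97 fps (the specific padding lengths are just the guideline amounts above, not hard rules):

```python
# Convert the recommended black padding from seconds to frames at NTSC rate.
FPS = 29.97

head_frames = round(0.5 * FPS)   # half second of black before the fade-up
tail_frames = round(2.0 * FPS)   # a couple of seconds of black after the fade-out

print(head_frames)  # 15
print(tail_frames)  # 60
```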
In your sequence/timeline, set an in-point at the beginning and an out-point a few seconds after the fade out at the end.
From the "File" menu, choose "Export -> Using Quicktime Conversion"
For the web:
- The iPod preset works OK; it usually seems to get the pixel aspect ratio automatically. But to be safe, I use a custom size, putting it in square pixels.
- I usually use MPEG-4 (or H.264) at a high quality setting. If the file size is too large, I dial the data rate back.
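When dialing the data rate back, it helps to know roughly how big the file will be. A simple illustrative sketch (the function name and the sample bitrates are my own examples, not class requirements):

```python
# File size is approximately (video + audio data rate) x duration.

def estimated_size_mb(video_kbps, audio_kbps, duration_sec):
    """Approximate export file size in megabytes."""
    total_kbps = video_kbps + audio_kbps        # kilobits per second
    total_kilobits = total_kbps * duration_sec
    return total_kilobits / 8 / 1000            # kilobits -> kilobytes -> megabytes

# A 3-minute web clip at 2000 kbps video + 128 kbps audio:
print(round(estimated_size_mb(2000, 128, 180), 1))  # 47.9 MB
```

If that's too large for your delivery target, lowering the video data rate (or shortening the program) shrinks the file proportionally.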
Video Codecs - Interframe vs. Intraframe
Only the highest end video is uncompressed. Almost all video (especially
HD) uses some sort of compression. When looking at the characteristics
of various video recording gear, it's important to understand the basic
differences between two general types of compression.
Most standard-definition production TV codecs use some type of intraframe compression.
This is where we take each individual frame and squeeze it so
it all fits onto tape or disk. Examples of intraframe codecs include:
- Apple ProRes
- Avid codecs (AVR25, AVR 50, etc.)
- Panasonic D5
The nice thing about most intraframe recording is that it can be compressed
and played back in real-time using inexpensive hardware.
However, many new HD recording formats use interframe
or Group of Pictures (GOP) compression. The
important thing to understand about interframe compression is that it
compresses over time as well as space. In interframe compression we
divide the picture into smaller rectangles called macroblocks. These
macroblocks are compressed and tracked over time and placed into a GOP
(Group of Pictures). Examples of interframe codecs include:
- HDV (MPEG-2)
- XDCAM (MPEG-2)
MPEG-2 is a popular
interframe codec. It is very efficient in that it can squeeze a high-definition
video image into the same amount of space that a standard
DV stream can occupy. (That's why we can record HDV onto a miniDV tape.)
The other interesting thing about MPEG-2 is that it's scalable: we can
make the frame dimensions varying sizes (720 x 480, 1440 x 1080, etc.).
The downside is that GOPs can be difficult to edit. Deconstructing the
GOPs during the edit process taxes the computer to a greater degree
than intraframe codecs do.
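To make the GOP idea concrete, here's an illustrative Python sketch (my own example, not from the course materials) of a common 15-frame MPEG-2 GOP layout. Only the "I" (intra) frame is a complete picture; "P" and "B" frames store differences relative to neighboring frames, which is exactly why a GOP has to be deconstructed before individual frames can be edited:

```python
# Build a toy IBBP frame-type sequence like those used in MPEG-2 GOPs.

def gop_pattern(length=15):
    """Return a frame-type list: one I-frame, then repeating B, B, P."""
    frames = ["I"]                        # every GOP starts with an I-frame
    while len(frames) < length:
        for frame_type in ("B", "B", "P"):
            if len(frames) < length:
                frames.append(frame_type)
    return frames

print("".join(gop_pattern()))  # IBBPBBPBBPBBPBB
```

Note that only 1 frame in 15 can be decoded on its own; cutting anywhere else forces the editing system to reconstruct the frame from its neighbors.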
4:4:4 vs. 4:2:2 vs. 4:1:1
Today’s digital technology provides us with several ways to digitize
video, mainly 4:2:2 and 4:1:1. What do they refer to?
Quite simply, they refer to the ratio of the
number of luminance (Y) samples to the samples of each of the two color
difference signals.
In the video signal, the most important component is the luminance as
it gives us all the detail absolutely necessary in the picture. As a
result, we must sample luminance at a very high rate, 13.5 Megahertz
(million times per second).
Given that the luminance portion is sampled at 13.5 MHz, let's apply
the aforementioned ratios: 4:2:2 and 4:1:1. In a 4:1:1 component digital
sample, the color information is sampled at 1/4 the luminance rate: 3.375 MHz.
In a 4:2:2 system, the color is sampled at 1/2 the rate of the luminance: 6.75 MHz.
What about 4:2:0?
4:2:0 sampling is used in MPEG-2. The two color difference signals
are sampled on alternating lines.
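The chroma rates follow directly from the 13.5 MHz luminance rate, and a small illustrative Python sketch (function name is mine) makes the arithmetic explicit. Note that this simple horizontal formula doesn't capture 4:2:0, where the "0" means the chroma lines alternate vertically rather than that nothing is sampled:

```python
# Horizontal chroma sample rate derived from the luma rate and the x:y ratio.
LUMA_MHZ = 13.5

def chroma_rate_mhz(ratio):
    """Sample rate of each color-difference signal, in MHz, from an x:y:z ratio."""
    x, y, _ = ratio           # z describes line alternation (4:2:0), ignored here
    return LUMA_MHZ * y / x

print(chroma_rate_mhz((4, 2, 2)))  # 6.75 MHz
print(chroma_rate_mhz((4, 1, 1)))  # 3.375 MHz
```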
What does this mean?
Quite simply, the color depth of a 4:2:2 component digital signal is
twice that of a 4:1:1 signal and, from the standpoint of color bandwidth,
is twice that of today's popular component analog formats. This
means better color performance, particularly in areas such as special
effects, chromakeying, alpha keying (transparencies) and computer-generated graphics.
Editors will have to deal with a number of different media formats and
need to understand the physical distinction between them.
Some of the different formats include film, standard definition and
high definition versions of NTSC, PAL, 16 x 9 and 4 x 3.
NTSC (National Television Systems Committee) is the standard-definition
TV broadcast standard used in North America, some of South America,
Japan, etc. It uses a frame rate close to 30 (actually 29.97) frames per second.
There are 525 scan lines; approximately 480 of these are visible.
The ATSC (Advanced Television Systems Committee) is a group formed at the urging
of the FCC to help create standards for the new high definition digital broadcast. HD TV sets with digital tuners need to be ATSC-compliant.
PAL (Phase Alternate Line) is used in most of Europe,
Australia, & Asia and runs at 25 frames per second using 625 lines.
SECAM (Sequential Color and Memory) is used in France and its territories and is similar to PAL.
If possible it’s best to edit in the media’s native format.
If you have high-quality PAL footage, it’s best to try to keep
it in PAL. If you have 24 fps footage, it’s best to keep it in
24 fps. That way you won’t get conversion artifacts from changing
frame rates and generation losses. But while ideal, we can’t always
practice this. Often we'll get a tape from another country, or one
that contains another type of media, that must be integrated into our program.
Film to Video Conversion (or 24P to 60i)
When converting film to video we use a 3:2 Pulldown (or a 2:3 Pulldown)
Film and 24p video runs at 24 frames per second.
Since film or 24p video runs at 24 fps and video runs at about 30 fps, the two aren't
directly interchangeable, at least on a frame-for-frame basis. (To be
more precise, 23.976 film frames become 29.97 video frames.) In order
to transfer film to 30 fps video, the film frames must be precisely sequenced
into a combination of video frames and fields.
A telecine is a piece of hardware containing a film
projector sequenced with a video capture system. Telecine also
describes the process of converting film to video, also
called a 3:2 pulldown. In the 3:2 pulldown, each frame of film gets converted
to 2 or 3 fields of video.
Note how 4 film frames (at 24 fps) are converted to 5 interlaced video frames (at 30 fps).
The problem with converting film frames to fields is that some video
frames contain fields from two different film frames. If you think about
it, you'll see that this can present all types of problems.
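The cadence is easy to see in a short illustrative Python sketch (my own example): four film frames A, B, C, D contribute 2, 3, 2, and 3 fields respectively, and pairing the resulting ten fields into interlaced frames shows exactly which video frames mix two film frames:

```python
# Model the 2:3 pulldown: 4 film frames become 10 fields = 5 video frames.

def pulldown_fields(film_frames=("A", "B", "C", "D")):
    cadence = (2, 3, 2, 3)   # fields contributed by each film frame
    fields = []
    for frame, count in zip(film_frames, cadence):
        fields.extend([frame] * count)
    return fields

fields = pulldown_fields()
# Pair consecutive fields into interlaced video frames:
video_frames = [fields[i] + fields[i + 1] for i in range(0, len(fields), 2)]
print(video_frames)  # ['AA', 'BB', 'BC', 'CD', 'DD']
```

The 'BC' and 'CD' frames are the troublesome ones: each interlaces fields from two different film frames, which is why pulldown material can look odd on freeze-frames and why editors often remove the pulldown before cutting.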
DTV (Digital TV broadcasting)
TV Broadcasters are supposed to broadcast totally in digital by 2007,
so the analog spectrum can be reclaimed for other purposes. DTV doesn't
necessarily mean high-definition.
DTV broadcasts can be either HD (High Definition) or SD (Standard Definition).
You can squeeze 4 SD programs into the same space used to broadcast one HD program.
Both use MPEG-2 compression.
SD vs. HD
SD works in both 4:3 and 16:9. Its video pixel dimensions include:
720 x 486, 720 x 480
HD is 16:9. Its video pixel dimensions include:
1280 x 720, 1920 x 1080
Video frame rates: 24p, 30p, 30i, 60p, or 60i.
HD has 4 discrete channels of audio.
Besides having more pixel resolution, HD can display much more information
in terms of color and brightness. HD has a much larger contrast ratio
than SD. It's not film, but it's getting close.
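A quick illustrative Python sketch (my own arithmetic, using the pixel dimensions listed above) shows just how large the resolution jump is:

```python
# Compare raster sizes for the common SD and HD frame dimensions.
formats = {"SD": (720, 480), "720p HD": (1280, 720), "1080 HD": (1920, 1080)}

for name, (w, h) in formats.items():
    print(f"{name}: {w * h:,} pixels")

# 1080-line HD carries 6 times the pixels of SD:
print((1920 * 1080) / (720 * 480))  # 6.0
```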
Cybercollege reading: http://www.cybercollege.com/dtv_stans.htm
Metadata & Closed Captioning
Metadata (data about the data) is embedded text and numeric information about the clip or program. It can include, but is not limited to:
- clip name
- running time / duration
- audio levels
- DRM (digital rights management)
It can be stored and accessed via XMP (short for Extensible Metadata Platform, which is based on XML). While XMP data can be embedded directly in many media files, some formats do not allow for this, so the data is written to a separate sidecar file.
This is why it's important to keep the directory structure found on Canon DSLRs and in Sony XDCAM storage devices intact. Key information (such as timecode) is often stored in a separate file.
Closed captioning is one type of metadata that can be displayed on screen for the hearing-impaired. It is carried
in the vertical blanking interval, and the FCC mandates that all stations transmit this
data. If you watch closed-captioned programming, you'll see a variety of
variations in readability, placement and duration.
Companies like Soft NI create stand-alone subtitler systems that let
you integrate subtitles into a video stream. Adding subtitles involves
proper placement on the screen. Softel-USA also makes products for subtitling.
Good metadata overview by Philip Hodgetts: http://www.youtube.com/watch?v=GnPzpPvoyLA
Adobe has speech search capabilities built into SoundBooth. It analyzes the audio and transcribes it into text (speech analysis): http://www.youtube.com/watch?v=5CLqspcNWw0
Capture One photo metadata editor: http://www.youtube.com/watch?v=DbtfBAliHTw
Focus Enhancements, Firestore is capable of setting up and recording custom metadata in the field. http://www.youtube.com/watch?v=FUcAQQyz_Mg
Avid makes MetaSync, a product which lets editors work with data
right in the timeline. It can be used to link to other types of evidence and forensic documentation: http://www.youtube.com/watch?v=NGp0dc6yVWQ
Vocabulary (Know these terms)
- Closed Captioning
- Codec (short for coder/decoder or compressor/decompressor)
- DTV - Digital Television
- HD - High Definition
- SD - Standard Definition
- Color bars
- Pedestal (aka Setup)
- Proc Amp
- Setup (aka Pedestal)
- Waveform Monitor
- Window dub
Final Test Review
The Final Exam is worth 70 points! The best way to review for it is to study
the class notes and the midterm (expect everything
you got wrong on the midterm to be on the final). The final will be true/false,
multiple choice, and short answer. It will cover the following areas:
- Shooting/Editing Techniques
- Cybercollege editing guidelines
- Edits work best when motivated
- Whenever possible cut on subject movement.
- Keep in Mind the Strengths and Limitations of the Medium
(TV is a close-up medium)
- Cut away from the scene the moment the visual statement
has been made.
- Emphasize the B-Roll
- If in doubt, leave it out
- Technical continuity vs production (shooting/editing) continuity
- Continuity editing
- Acceleration editing
- Expanding time
- Causality & Motivation (Must have in order to be successful)
- Relational editing (Shots gain meaning when juxtaposed with other
images. Pudovkin's experiment)
- Thematic editing (montage)
- Parallel editing
- Imaging devices: CCDs and CMOS
- resolution - How do we determine horizontal resolution
- zebra stripes - What are they good for? What would you set them to?
- viewfinders: LCD/color vs B&W
- Gain - What does this do?
- Shutter speeds - what is this good for (2 good uses)
- Depth of Field - what affects this?
- Rack focus - How can you achieve this?
- Angle of view & focal length - How are they related?
- f-stops - Know your f-stops & what they mean
- ND filters - What are they good for? (2 good uses)
- types of mics
- cabling & connectors
- balanced v unbalanced
- line v mic level
- +4 v -10
- Graphics (Review Jim's Graphic Tips)
- types of lighting instruments
- color temp
- Lux vs footcandles
- soft vs hard key
- broad vs narrow lighting
- Video signal / technology
- ATSC, NTSC v DTV, SDTV & HDTV
- HDTV pixel dimensions (1920 x 1080 or 1280 x 720)
- progressive v interlace
- Ways to transfer video digitally (cabling/connectors)
- Ways to transfer analog video (cabling/connectors)
- Color difference v RGB
- timecode (difference between drop & non-drop)
- waveform monitors & vectorscopes
(What do they show?)
- what are the important IRE levels?
- TBC - What is this? What does it do?
- Video architecture vs.video codecs
- Color sampling (4:2:2 v 4:1:1)
- 3-2 pulldown
- Know the main principles of troubleshooting & how to go about
finding problems (not guessing but deducing)
Back to Jim Krause's Summer T351 Home Page