T351 - Summer Week 5
Agenda / Reality check
- This week:
- For the Drama/Storytelling critiques, I'm interested to read how the group dynamics worked.
- This week there is no lab Wed or Thursday. The time is dedicated for you to work on Final Projects. However I will be available to meet with you individually. Please contact me today to schedule one-on-one time if you need it.
- Don't forget about the Multimedia Exercise, which is
due by Monday, June 16. I'll cover exporting for this today.
- Next week (week 6) is the last week of class!
- Final Quiz will be
next Tuesday (6/17). The lab will be devoted to editing.
- Wednesday - View Final Projects. There are no camera checkouts next week (Sunday is the last day to check out a camera).
Lecture & lab today:
- Review timecode (from last week): understand the difference between DF & NDF, and Free Run & Record Run
- Codecs & Multimedia Architecture
- Format Conversion
- Film to Video transfer
- Digital Video & High Definition Broadcasting
- Metadata & subtitles
- 2K, 4K & Ultra HD
- Resources for Post-production
- Final Exam Review
- Cybercollege unit 16 (Waveform monitors and vectorscopes)
- Cybercollege Module 9 (Part 1 and 2. World TV Standards)
- Cybercollege DTV
- Also check out the embedded links in the text below (not on quiz)
Final Projects: review
All of your written materials (except for storyboards) should be typed.
The amount of detail, along with the appearance and presentation of your materials & packet,
affects your grade.
- These were due last week, but I extended the time for some of you.
- Yes, you CAN script feature stories and documentaries. Scripts are expected for all projects.
- Talent release forms can be found on the T351 website.
Video Codecs - Interframe versus Intraframe
Only the highest end video is uncompressed. Almost all video (especially
HD) uses some sort of compression. When looking at the characteristics
of various video recording gear, it's important to understand the basic
differences between two general types of compression.
Intraframe compression - This is where we take each individual frame and squeeze it so
it all fits onto tape or disk. Examples of intraframe codecs include:
- Apple ProRes
- Avid codecs (DNxHD)
- Panasonic D5
Interframe or Group of Pictures (GOP) compression - The
important thing to understand about interframe compression is that it
compresses over time as well as space. In interframe compression we
divide the picture into smaller rectangles called macroblocks. These
macroblocks are compressed and tracked over time and placed into a GOP
(Group of Pictures). Examples of interframe codecs include:
- HDV (MPEG-2)
- XDCAM (MPEG-2)
MPEG-2 is a popular
interframe codec. It is very efficient in that it can squeeze a high
definition video image into the same amount of space that a standard
DV stream can occupy. (That's why we can record HDV onto a miniDV tape.)
The other interesting thing about MPEG-2 is that it's scalable - we can
make the frame dimensions varying sizes (720 x 480, 1440 x 1080, etc.).
The downside is that GOPs can be difficult to edit. Deconstructing the
GOPs during the edit process taxes the computer to a greater degree
than intraframe codecs do.
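If it helps to picture how a GOP is organized, here's a rough Python sketch (just an illustration, not any particular codec) of one common 15-frame MPEG-2 style "IBBP" pattern. Real encoders choose their own GOP lengths and orderings.

```python
# Rough illustration of a GOP: a common 15-frame MPEG-2 style "IBBP" pattern.

def gop_pattern(gop_length=15):
    """Return a list of frame types for one Group of Pictures."""
    frames = []
    for i in range(gop_length):
        if i == 0:
            frames.append("I")   # intra-coded: compressed entirely on its own
        elif i % 3 == 0:
            frames.append("P")   # predicted from the previous I or P frame
        else:
            frames.append("B")   # predicted from frames before AND after it
    return frames

print(" ".join(gop_pattern()))
# I B B P B B P B B P B B P B B
```

Only the I-frame is a complete picture; the P and B frames store differences, which is why the editing system has to reconstruct the GOP before it can cut in the middle of one.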
4:4:4 vs. 4:2:2 vs. 4:1:1 & 4:2:0
Today’s digital technology provides us with several ways to digitize
video, mainly 4:2:2 and 4:1:1. What do they refer to?
Quite simply, they refer to the ratio of the
number of luminance (Y) samples to the samples of each of the two color
difference signals (Cb and Cr).
In the video signal, the most important component is the luminance, as
it gives us all the detail absolutely necessary in the picture. As a
result, we must sample luminance at a very high rate: 13.5 MHz
(13.5 million samples per second).
Given that the luminance portion is sampled at 13.5 MHz, let's apply
the aforementioned ratios: 4:2:2 and 4:1:1. In a 4:1:1 component digital
sample, the color information is sampled at 1/4 the luminance rate: 3.375 MHz.
In a 4:2:2 system, the color is sampled at 1/2 the rate of the luminance (6.75 MHz).
What about 4:2:0?
4:2:0 sampling is used in MPEG-2. The two color difference signals
are sampled on alternating lines.
What does it all mean?
Quite simply, the color depth of a 4:2:2 component digital signal is
twice that of a 4:1:1 signal. This
means better color performance, particularly in areas such as special
effects, chromakeying, alpha keying (transparencies) and computer-generated graphics.
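Here's a quick Python sketch of the arithmetic (using the 13.5 MHz luminance rate from above) showing what each ratio works out to for the color-difference signals:

```python
# The subsampling ratio tells you how many chroma samples are taken for every
# 4 luminance samples. Applying it to the 13.5 MHz luminance sampling rate:

LUMA_RATE_MHZ = 13.5

schemes = {"4:4:4": 4, "4:2:2": 2, "4:1:1": 1}   # chroma samples per 4 luma samples

for name, chroma_samples in schemes.items():
    chroma_rate = LUMA_RATE_MHZ * chroma_samples / 4
    print(f"{name}: each color-difference signal sampled at {chroma_rate} MHz")

# 4:4:4: each color-difference signal sampled at 13.5 MHz
# 4:2:2: each color-difference signal sampled at 6.75 MHz
# 4:1:1: each color-difference signal sampled at 3.375 MHz
```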
Analog World TV Standards
There are three analog world standards, still in use today:
NTSC (National Television Systems Committee) is the standard-definition
TV broadcast standard used in North America, parts of South America,
Japan, etc. It uses a frame rate close to 30 (actually 29.97) frames per second.
There are 525 scan lines; approximately 480 of these are visible.
The ATSC (Advanced Television Systems Committee) is a group formed at the urging
of the FCC that helped create standards for the new HD digital broadcast. HD TV sets with digital tuners need to be ATSC-compliant.
PAL (Phase Alternate Line) is used in most of Europe,
Australia, & Asia and runs at 25 frames per second using 625 lines.
SECAM (Sequential Color and Memory) is used in France and its territories and is similar to PAL.
If possible it’s best to edit in the media’s native format.
If you have high-quality PAL footage, it’s best to try to keep
it in PAL. If you have 24 fps footage, it’s best to keep it in
24 fps. That way you won’t get conversion artifacts from changing
frame rates and generation losses. But while this is ideal, we can't always
practice it. Often we'll get a tape from another country, or one
that contains another type of media, that must be integrated into our project.
Film to Video Conversion (or 24P to 60i)
When converting film to video we use a 3:2 Pulldown (or a 2:3 Pulldown)
Film and 24p video run at 24 frames per second.
Since film or 24p video runs at 24 fps and video runs at about 30 fps, the two aren't
directly interchangeable, at least on a frame-for-frame basis. (To be
more precise, 23.976 film frames become 29.97 video frames.) In order
to transfer film to 30 fps video, the film frames must be precisely sequenced
into a combination of video frames and fields.
A telecine is a piece of hardware containing a film
projector synchronized with a video capture system. Telecine is also
the term used to describe the process of converting film to video, also
called a 3:2 pulldown. In the 3:2 pulldown each frame of film gets converted
to 2 or 3 fields of video.
Note how 4 film frames (24 fps) are converted to 5 interlaced video frames (30 fps).
The problem with converting film frames to fields is that some video
frames have fields from two different film frames. If you think about
it you'll see that this can present all types of problems.
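Here's a small Python sketch (purely illustrative) of the 3:2 cadence, which makes the field-mixing problem easy to see:

```python
# Four film frames (A, B, C, D) become ten video fields, paired into five
# interlaced video frames. Two of the video frames mix fields from two
# different film frames.

film_frames = ["A", "B", "C", "D"]
cadence = [2, 3, 2, 3]                 # how many fields each film frame produces

fields = []
for frame, count in zip(film_frames, cadence):
    fields.extend([frame] * count)

video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(video_frames)
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

The ('B', 'C') and ('C', 'D') frames are the ones built from two different film frames, which is what causes trouble when you later try to edit or deinterlace the transferred footage.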
Exporting for DVD and the web
If you are exporting movies for non-broadcast uses (e.g., for DVD or YouTube), always add at least a half second of black at the beginning, before the program fades up from black. This gives the player a chance to lock onto the audio. If your audio starts instantaneously, the first few milliseconds of audio will likely be cut off.
Fade to black at the end and add at least a few more seconds of black. This way your DVD won't immediately jump back to the menu. It gives just a moment to conclude the ending.
In your sequence/timeline, set an in-point at the beginning and an out-point a few seconds after the fade out at the end. Alternatively in Premiere you can set a work area.
From the "File" menu, choose "Export -> Media". Be sure to select the range, work area, or in to out points.
For the web:
- iPod works OK- usually it seems to automatically get the pixel aspect ratio. But to be safe, I use a custom size, putting it in square pixels.
- I usually use H.264 at the "good" quality setting. If the file size is too large, I dial the data rate back or scale it down to 1280 x 720 (see the sketch below).
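If you'd rather do the compression outside of Premiere, here's one possible way to make that kind of H.264 web file with ffmpeg - a rough sketch assuming ffmpeg is installed; the filename and data rate are just placeholders, not required settings:

```python
# Rough sketch: encode an exported master to H.264 for the web with ffmpeg.
# "master.mov" and the 5 Mb/s data rate are placeholders - adjust to taste.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "master.mov",        # master export (with black at the head and tail)
    "-c:v", "libx264",         # H.264 video
    "-b:v", "5M",              # dial this back if the file is too big
    "-vf", "scale=1280:720",   # scale down to 720p
    "-c:a", "aac",             # AAC audio
    "web_version.mp4",
], check=True)
```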
For Standard definition DVD:
Dedicated Signal Monitoring
Professional editors used to have to spend $5,000-$10,000 for dedicated monitoring gear. Now it can be had for about $700 (plus the price of a PC) with Blackmagic Design's UltraScope. There are two versions: a PCI card version and a dongle which you can use with a laptop in the field. The display looks like this:
- RGB Parade
- waveform monitor
- audio levels & spectrum
- video monitor
- error logging
Error logging is a huge feature. It automatically looks for non-legal video and audio elements. You essentially: turn on the logging, start playing your footage, and go to lunch (let the entire program play and be logged). It records the errors and the time they happened.
An inexpensive alternative to this is getting a monitor that has this. I use an ikan D7w. It's a 7" portable monitor with a built-in waveform monitor and vectorscope. It has both HDMI and HD-SDI loop-through inputs. It's large enough to provide critical info for focusing but small enough that you can attach it to your camera.
Jim's Portable Edit Setup:
When I'm traveling light and need to edit, I rely on my MacBook Pro with an additional monitor- in this case the ikan D7w.
Premiere lets you add additional monitors. Once one is plugged in go to Premiere Preferences / Playback. You'll want to check the box under "video device" next to your monitor.
Under the "Window" menu you can choose "Reference Monitor" to open an additional monitor if needed.
Remember the CyberCollege Editing Guidelines!
Additional reminders, tips and things to consider:
- Always start and end in black
- Spend time with the graphics and visual design aspect of your videos. Come up with a "look" or visual treatment for your graphics. Consider the colors and textures you might use for time and place identifiers in a science fiction story. Or the treatment you might give a documentary on a haunted Victorian mansion. The look and feel of your graphics should be consistent throughout the entire video.
- Red Giant Magic Bullet "Looks" is an inexpensive plugin for editing systems that provides enhanced looks. It can be applied to a single shot or an entire scene.
- Spend time with the sound design aspect of your videos. Be sure your audio levels are consistent throughout your sequence(s).
- I usually suggest editing with audio in mind first. In other words, listen carefully to how the interviews or other elements flow from one to the next. I also lay down music in my first pass, as I'll often cut to the beat and use it to set important moods.
- Edit B-roll to the natural phrases of dialog, or to the beat of the soundtrack
- Sounds can be used to "justify" video edits. [Pellucid, Gesi's piece, or Run Lola Run are good examples]
- Be sure to avoid jump cuts & flash frames
Basic Color Correction & Finishing
Color correction is the process of enhancing the visual appearance in regard to hue, saturation, and contrast.
If you want to create videos that can be broadcast, it's important to understand at least the basics of color and gamma correction.
For TV, it's imperative that luma levels don't go above 100 IRE and that the audio is consistent with the delivery specifications of the station. Here are some sample technical delivery specifications for broadcast TV:
Be sure you know how to perform basic color correction in whatever software you use.
Sadly, applying effects to a group of clips is a weakness of Premiere. One can copy and paste attributes, but this is clunky.
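If you want to sanity-check levels numerically rather than on a scope, here's a tiny Python sketch (my own illustration, not a delivery spec) using the fact that in 8-bit video-range Rec. 709, 100 IRE corresponds to luma code value 235:

```python
# Flag "super white" pixels: anything whose luma lands above 8-bit code 235
# (roughly 100 IRE in video-range Rec. 709) is not broadcast legal.

def luma_709(r, g, b):
    """Luma from 8-bit R'G'B' values using Rec. 709 weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def is_broadcast_legal(r, g, b, white_level=235):
    return luma_709(r, g, b) <= white_level

print(is_broadcast_legal(235, 235, 235))  # True  - right at 100 IRE
print(is_broadcast_legal(255, 255, 255))  # False - super white
```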
2K, 4K & Ultra HD
HD is great, but there's something even better: 2K and 4K. Check out the Wikipedia entry on it.
Here's a pretty good visual comparison of the various formats: http://www.manice.net/index.php/glossary/34-resolution-2k-4k
2K provides only slightly more information than HD: 2048 pixels per line compared with 1920. But the format was embraced by the digital cinema industry. The Phantom Menace introduced the world to Digital Cinema. Digital Cinema is not about production, but the distribution of theatrical content. Digital Cinematography is about film production using digital tools.
Most have ignored 2K and focused on 4K, which essentially provides 4 times as much information as HD.
Just as HD comes in varying pixel dimensions for broadcast and recording, 4K comes in different sizes as well. Most variations of 4K have 4096 pixels per line.
Blackmagic Design unveiled a few new cameras at NAB, which created some excitement.
4K might already be heading towards obsolescence. 8K is right around the corner.
Ultra HD - Defined as any of the "ultra" high-definition formats, which currently include 4K and 8K.
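The pixel math behind those claims is easy to check. Here's a quick sketch; the dimensions are the common broadcast/cinema variants:

```python
# Compare total pixel counts of the common frame sizes against 1080 HD.

formats = {
    "HD 1080":  (1920, 1080),
    "2K (DCI)": (2048, 1080),
    "Ultra HD": (3840, 2160),
    "4K (DCI)": (4096, 2160),
}

hd_pixels = 1920 * 1080
for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / hd_pixels:.2f}x HD)")

# HD 1080:  2,073,600 pixels (1.00x HD)
# 2K (DCI): 2,211,840 pixels (1.07x HD)
# Ultra HD: 8,294,400 pixels (4.00x HD)
# 4K (DCI): 8,847,360 pixels (4.27x HD)
```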
Metadata (data about the data) is embedded text and numeric information about the clip or program. It can include, but is not limited to:
- clip name
- running time / duration
- audio levels
- DRM (digital rights management)
It can be stored and accessed in XMP (short for Extensible Metadata Platform, which is based on XML). While XMP data can be embedded directly in the media file, some media formats do not allow for this, so the data is written to a separate sidecar file.
This is why it's important to keep the directory structure found on Canon DSLRs and in Sony XDCam storage devices. Key information (such as timecode) is often stored in a separate file.
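As a toy illustration of the sidecar idea (this is just JSON, not real XMP), the metadata simply lives in a small companion file that travels with the clip:

```python
# Toy example of a metadata "sidecar": when the format can't hold embedded
# metadata, write it to a companion file next to the media. (Real XMP sidecars
# are XML files with an .xmp extension; the clip name here is made up.)
import json

clip_metadata = {
    "clip_name": "interview_tk3.mov",
    "duration": "00:02:14:12",
    "audio_level": "-12 dBFS average",
    "drm": "none",
}

with open("interview_tk3.json", "w") as sidecar:
    json.dump(clip_metadata, sidecar, indent=2)
```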
Closed captioning is one type of metadata that can be displayed on screen for the hearing-impaired. It is carried
in the vertical blanking interval, and the FCC mandates that all stations carry closed captioning
data. If you watch closed-captioned programming, you'll see a variety of
variations in readability, placement and duration.
Companies like Soft NI create stand-alone subtitler systems that let
you integrate subtitles into a video stream. Adding subtitles involves
proper placement on the screen. Softel-USA also makes products for subtitling.
Good metadata overview by Philip Hodgetts: http://www.youtube.com/watch?v=GnPzpPvoyLA
Adobe has speech search capabilities built into SoundBooth. It analyzes the audio and transcribes it into text (speech analysis): http://www.youtube.com/watch?v=5CLqspcNWw0
Capture One photo metadata editor: http://www.youtube.com/watch?v=DbtfBAliHTw
Focus Enhancements, Firestore is capable of setting up and recording custom metadata in the field. http://www.youtube.com/watch?v=FUcAQQyz_Mg
Avid makes MetaSync, a product which lets editors work with data
right in the timeline. It can be used to link to other types of evidence and forensic documentation right in the timeline: http://www.youtube.com/watch?v=NGp0dc6yVWQ
Vocabulary (Know these terms)
- Closed Captioning
- Codec (short for coder/decoder or compressor/decompressor)
- Color bars
- DTV - Digital Television
- Digital Cinema
- HD - High Definition
- SD - Standard Definition
- Pedestal (aka Setup)
- Proc Amp
- Setup (aka Pedestal)
- Waveform Monitor
- Window dub
Final Test Review
Final Exam is worth 70 points! The best way to review for it is to study
the class notes and the midterm (expect everything
you got wrong on the midterm to be on the final). The final will be true/false,
multiple choice, and short answer. It will cover the following areas:
- Shooting/Editing Techniques
- Cybercollege editing guidelines
- Edits work best when motivated
- Whenever possible cut on subject movement.
- Keep in Mind the Strengths and Limitations of the Medium
(TV is a close-up medium)
- Cut away from the scene the moment the visual statement
has been made.
- Emphasize the B-Roll
- If in doubt, leave it out
- Technical continuity vs production (shooting/editing) continuity
- Continuity editing
- Acceleration editing
- Expanding time
- Causality & Motivation (Must have in order to be successful)
- Relational editing (Shots gain meaning when juxtaposed with other
images. Pudovkin's experiment)
- Thematic editing (montage)
- Parallel editing
- Imaging devices: CCDs and CMOS
- resolution - How do we determine horizontal resolution?
- zebra stripes - What are they good for? What would you set them to?
- viewfinders: LCD/color vs B&W
- Gain - What does this do?
- Shutter speeds - what is this good for (2 good uses)
- Depth of Field - what affects this?
- Rack focus - How can you achieve this?
- Angle of view & focal length - How are they related?
- f-stops - Know your f-stops & what they mean
- ND filters - What are they good for? (2 good uses)
- types of mics
- cabling & connectors
- balanced v unbalanced
- line v mic level
- +4 v -10
- Graphics (Review Jim's Graphic Tips)
- types of lighting instruments
- color temp
- Lux vs footcandles
- soft vs hard key
- broad vs narrow lighting
- Video signal / technology
- ATSC, NTSC v DTV, SDTV & HDTV
- HDTV pixel dimensions (1920 x 1080 or 1280 x 720)
- progressive v interlace
- Ways to transfer video digitally (cabling/connectors)
- Ways to transfer analog video (cabling/connectors)
- Color difference v RGB
- timecode (difference between drop & non-drop)
- waveform monitors & vectorscopes
(What do they show?)
- what are the important IRE levels?
- TBC - What is this? What does it do?
- Video architecture vs. video codecs
- Color sampling (4:2:2 v 4:1:1)
- 3-2 pulldown
- Know the main principles of troubleshooting & how to go about
finding problems (not guessing but deducing)