T351 - Summer Week 5

Agenda / Reality check

  • This week:
    • Cover new material, review Drama/Storytelling Projects, meet with you individually about your final projects.
    • For the Drama/Storytelling critiques, I'm interested to read how the group dynamics worked.
    • Wednesday's lab will be very short (if at all). There is no lab Thursday. The time is dedicated for you to work on Final Projects. I want to meet with you individually Tuesday or Wednesday to walk through your Final Projects. Please contact me today to schedule one-on-one time if you need it.
    • Don't forget about the Multimedia Exercise, which is due by Monday, June 15. I'll cover exporting for this today.
  • Next week (week 6) is the last week of class!
    • Final Quiz will be next Tuesday (6/16). The lab will be devoted to editing.
    • Wednesday - View Final Projects. There are no camera checkouts next week (Sunday is the last day to check out a camera).

Lecture & lab today:

  • Timecode: understand the difference between DF & NDF, and between Free Run & Record Run
  • Codecs & Multimedia Architecture/Wrappers
  • Format Conversion
  • Film to Video transfer
  • Digital Video & High Definition Broadcasting
  • Metadata & subtitles
  • 2K, 4K & Ultra HD
  • Resources for Post-production
  • Final Exam Review

Readings:

NOTE: Some are doing good production work but are not getting a good grade due to missing paperwork.

Final Projects: review criteria

All of your written materials (except for storyboards) should be typed. The amount of detail along with the appearance and presentation of your materials & packet affects your grade.

  • These were due last week, but I extended the time for some of you.
  • Yes, you CAN script feature stories and documentaries. Scripts are expected for all projects.
  • Talent release forms can be found on the T351 website.

Codec vs. Container (aka Wrapper)

Know the difference between a container (e.g., QuickTime or Windows Media) and a codec (e.g., ProRes or H.264).

Containers (also known as multimedia wrappers) are designed to be multipurpose, serving a variety of different users with different needs. A few popular containers include:

  • QuickTime (Apple)
  • ASF (Advanced Systems Format) & WMV (Windows Media)
  • MXF (Material Exchange Format)

Codec is an acronym that stands for coder/decoder or compressor/decompressor. A codec is a method for compressing and decompressing digital information. It can use specialized hardware, software or a combination of both.

Containers support a variety of different codecs.

When you see a _____.mov it could be anything from an AIFF audio file to a feature film in ProRes HQ. All you know from the .mov is that it's a QuickTime movie. You need to take a closer look at the file in order to see what it really is.

Who knows how to determine what the codec is in QuickTime Player?
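
(One way is the Movie Inspector under QuickTime Player's Window menu.) If you have FFmpeg installed, its ffprobe tool can also report the codec inside any container. Here's a minimal Python sketch, assuming ffprobe is on your PATH and using a hypothetical file name, clip.mov:

    # Ask ffprobe to report the video codec inside a container file.
    # Assumes FFmpeg/ffprobe is installed; "clip.mov" is a hypothetical name.
    import subprocess

    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "v:0",                # first video stream
         "-show_entries", "stream=codec_name",    # just the codec field
         "-of", "default=noprint_wrappers=1:nokey=1",
         "clip.mov"],
        capture_output=True, text=True)
    print(result.stdout.strip())  # e.g. "prores" or "h264"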

Here's a pretty good explanation of the difference between containers and codecs from Videomaker.

In order to play back multimedia files, you need the matching player, which is sometimes referred to as a "component". For instance, if you have a new PC with a fresh version of Vista, you'll need to buy an MPEG-2 decoder component in order to play back DVDs.

Which Codec is Best?

Videographers have more potential production codecs on hand than ever before. Some codecs are optimized for efficient capture and distribution (e.g., H.264). Others provide better color depth and smoother editing. Only the highest-end video is uncompressed. Almost all video uses some sort of compression. It's the only way we can reasonably store and edit it.

The more we compress the file, the more quality we lose.

Essentially, the highest-quality codecs come with a price: a higher bit rate gives you better quality but requires more storage space.
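
To make the trade-off concrete, here's a back-of-the-envelope Python sketch converting nominal bit rates into storage per hour. The rates below are approximate, video-only figures, so treat the output as ballpark numbers:

    # Rough storage-per-hour math for a few nominal codec bit rates.
    # Bit rates are approximate, video-only figures.
    NOMINAL_MBPS = {
        "DV": 25,              # standard-definition DV
        "HDV (MPEG-2)": 25,    # HD squeezed to fit the same miniDV tape
        "ProRes 422 HQ": 220,  # approximate rate for 1080 footage
    }

    SECONDS_PER_HOUR = 3600

    for codec, mbps in NOMINAL_MBPS.items():
        gigabytes = mbps * SECONDS_PER_HOUR / 8 / 1000  # Mb/s -> GB/hour
        print(f"{codec}: ~{gigabytes:.0f} GB per hour")

At roughly 99 GB per hour for ProRes 422 HQ versus about 11 GB for DV, it's easy to see why bit rate drives storage planning.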

Most DSLRs capture video in H.264. While it's an efficient acquisition codec, it is NOT optimized for editing. This is why it's sometimes a good idea to convert the footage to something else before you edit. If you are editing on an Avid, you might want to convert to DNxHD. If you're editing on an Apple with Premiere or Final Cut, you might want to use Apple ProRes.

 

Video Codecs - Interframe versus Intraframe (review)

Only the highest end video is uncompressed. Almost all video (especially HD) uses some sort of compression. When looking at the characteristics of various video recording gear, it's important to understand the basic differences between two general types of compression.

Intraframe compression - Each individual frame is compressed on its own so it all fits onto tape or disk. Examples of intraframe codecs include:

  • Apple ProRes
  • Avid codecs (DNxHD)
  • DV
  • DVCProHD
  • Panasonic D5

Interframe or Group of Pictures (GOP) compression - The important thing to understand about interframe compression is that it compresses over time as well as space. In interframe compression we divide the picture into smaller rectangles called macroblocks. These macroblocks are compressed, tracked over time, and placed into a GOP (Group of Pictures). Examples of interframe codecs include:

  • HDV (MPEG-2)
  • XDCAM (MPEG-2)
  • MPEG-4
  • H.264

MPEG-2 is a popular interframe codec. It is very efficient in that it can squeeze a high-definition video image into the same amount of space that a standard DV stream occupies. (That's why we can record HDV onto a miniDV tape.) The other interesting thing about MPEG-2 is that it's scalable: we can make the frame dimensions varying sizes (720 x 480, 1440 x 1080, etc.). The downside is that GOPs can be difficult to edit. Deconstructing the GOPs during the edit process taxes the computer to a greater degree than intraframe codecs do.
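
To see why deconstructing GOPs is extra work, here's a short Python sketch of one common MPEG-2 long-GOP pattern (15 frames with two B frames between anchors; real encoders vary):

    # Sketch of a common MPEG-2 long-GOP layout: I = intra-coded frame,
    # P = predicted from the previous I/P, B = built from surrounding anchors.
    GOP_LENGTH = 15  # frames per GOP
    B_FRAMES = 2     # B frames between each I/P anchor

    pattern = []
    for i in range(GOP_LENGTH):
        if i == 0:
            pattern.append("I")
        elif i % (B_FRAMES + 1) == 0:
            pattern.append("P")
        else:
            pattern.append("B")

    print("".join(pattern))  # IBBPBBPBBPBBPBB
    # Cutting mid-GOP forces the editor to rebuild B and P frames from
    # their anchors -- work an intraframe codec never has to do.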

4:4:4 vs. 4:2:2 vs. 4:1:1 & 4:2:0

Today’s digital technology provides us with several ways to digitize video, mainly 4:2:2 and 4:1:1. What do they refer to?

Quite simply, they refer to the ratio of the number of luminance (Y) samples to the number of samples of each of the two color difference signals (Cb and Cr).

In the video signal, the most important component is the luminance, as it gives us all the detail absolutely necessary in the picture. As a result, we must sample luminance at a very high rate: 13.5 MHz (13.5 million samples per second).

Given that the luminance portion is sampled at 13.5 MHz, let's apply the aforementioned ratios: 4:2:2 and 4:1:1. In a 4:1:1 component digital sample, the color information is sampled at 1/4 the luminance rate: 3.375 MHz. In a 4:2:2 system, the color is sampled at 1/2 the rate of the luminance, or 6.75 MHz.

What about 4:2:0?

4:2:0 is used in MPEG-2 sampling. The two color difference signals are sampled on alternating lines.

What does it all mean?

Quite simply, the color depth of a 4:2:2 component digital signal is twice that of a 4:1:1 signal. This means better color performance, particularly in areas such as special effects, chromakeying, alpha keying (transparencies) and computer generated graphics.
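
The arithmetic behind these labels is simple enough to sketch in Python, assuming the 13.5 MHz luminance rate discussed above:

    # Chroma sample rates implied by the sampling labels, relative to a
    # 13.5 MHz luminance (Y) rate.
    LUMA_MHZ = 13.5

    # (label, horizontal chroma fraction, vertical chroma fraction)
    SCHEMES = [
        ("4:4:4", 1.0,  1.0),   # full-bandwidth chroma
        ("4:2:2", 0.5,  1.0),   # chroma at half the luma rate
        ("4:1:1", 0.25, 1.0),   # chroma at a quarter the luma rate
        ("4:2:0", 0.5,  0.5),   # half-rate chroma on alternating lines
    ]

    for label, h, v in SCHEMES:
        print(f"{label}: chroma sampled at {LUMA_MHZ * h:.3f} MHz, "
              f"{h * v:.0%} of the luma samples per chroma channel")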

Film to Video Conversion (or 24P to 60i)

When converting film to video we use a 3:2 Pulldown (or a 2:3 Pulldown)

Film and 24p video run at 24 frames per second.

Since film or 24p video runs at 24 fps and video runs at about 30 fps, the two aren't directly interchangeable, at least on a frame-for-frame basis. (To be more precise, 23.976 film frames per second become 29.97 video frames per second.) In order to transfer film to 30 fps video, the film frames must be precisely sequenced into a combination of video frames and fields.

A telecine is a piece of hardware containing a film projector sequenced with a video capture system. Telecine also describes the process of converting film to video, which uses a 3:2 pulldown. In the 3:2 pulldown, each frame of film gets converted to 2 or 3 fields of video.

Note how 4 film frames (24 fps) are converted to 5 interlaced video frames (30 fps).

3-2 Pulldown

The problem with converting film frames to fields is that some video frames end up with fields from two different film frames. If you think about it, you'll see that this can present all kinds of problems when you later edit or process the footage.
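
Here's a small Python sketch of the cadence, labeling the four film frames A through D and pairing fields into interlaced video frames:

    # 2:3 (aka 3:2) pulldown: 4 film frames -> 10 fields -> 5 video frames,
    # which is how 24 fps becomes 30 fps (24 x 5/4 = 30).
    FILM_FRAMES = ["A", "B", "C", "D"]
    CADENCE = [2, 3, 2, 3]  # fields contributed by each film frame

    fields = []
    for frame, count in zip(FILM_FRAMES, CADENCE):
        fields.extend([frame] * count)

    # Pair consecutive fields into interlaced video frames.
    video_frames = [fields[i] + fields[i + 1] for i in range(0, len(fields), 2)]
    print(video_frames)  # ['AA', 'BB', 'BC', 'CD', 'DD']
    # 'BC' and 'CD' mix two different film frames -- exactly the problem
    # frames described above.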

 

Exporting for DVD, Blu-ray, and the web

If you are exporting movies for non-broadcast uses (e.g., for DVD or YouTube), always add at least a half second of black at the beginning before the program fades up from black. This gives the player a chance to lock onto the audio. If your audio starts instantaneously, the first few milliseconds of audio will likely be cut off.

Fade to black at the end and add at least a few more seconds of black. This way your DVD won't immediately jump back to the menu; the program gets a moment to conclude.

In your sequence/timeline, set an in-point at the beginning and an out-point a few seconds after the fade out at the end. Alternatively in Premiere you can set a work area.

From the "File" menu, choose "Export -> Media". Be sure to select the range, work area, or in to out points.

For the web:

  • H.264 is a good web delivery format. The "best" setting provides the highest quality, but sometimes the file size is too large. If so, try using about 80% compression at the "good" quality setting. If the file size is still too large, dial the data rate back even more or scale the movie down to 1280 x 720. (See the sketch below.)
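
If you're working outside Premiere, FFmpeg can produce a comparable H.264 web export. Here's a hedged Python sketch; the file names and the 8 Mb/s target are placeholders, so adjust the data rate to suit your footage:

    # Sketch: H.264 web export via FFmpeg (assumes FFmpeg is installed;
    # "master.mov" and "web.mp4" are hypothetical file names).
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "master.mov",
        "-c:v", "libx264",        # H.264 video
        "-b:v", "8M",             # target data rate; dial back if too large
        "-vf", "scale=1280:720",  # optional: scale down to 720p
        "-c:a", "aac", "-b:a", "192k",
        "web.mp4",
    ], check=True)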

For standard-definition DVD:

  • Use MPEG-2

For Blu-ray:

  • Use H.264

Dedicated Signal Monitoring

Professional editors used to have to spend $5,000-$10,000 for dedicated monitoring gear. Now it can be had for about $700 (plus the price of a PC) with Blackmagic Design's UltraScope. There are two versions: a PCI card version and a dongle you can use with a laptop in the field. The display looks like this:

UltraScope

It provides:

  • RGB Parade
  • waveform monitor
  • vectorscope
  • histogram
  • audio levels & spectrum
  • video monitor
  • error logging

Error logging is a huge feature. It automatically looks for non-legal video and audio elements. You essentially turn on the logging, start playing your footage, and go to lunch (let the entire program play and be logged). It records the errors and the times they happened.

An inexpensive alternative is a monitor with these scopes built in. I use an ikan D7w: a 7" portable monitor with a built-in waveform monitor and vectorscope. It has both HDMI and HD-SDI loop-through inputs. It's large enough to provide critical info for focusing but small enough that you can attach it to your camera.

ikan D7w image

Jim's Portable Edit Setup:

When I'm traveling light and need to edit, I rely on my MacBook Pro with an additional monitor, in this case the ikan D7w.

Premiere lets you add additional monitors. Once one is plugged in, go to Premiere Preferences / Playback. You'll want to check the box under "video device" next to your monitor.

Under the "Window" menu you can choose "Reference Monitor" to open an additional monitor if needed.

 

2K, 4K & Ultra HD

HD is great, but there's something even better: 2K and 4K. Check out the Wikipedia entry on them.

Here's a pretty good visual comparison of the various formats: http://www.manice.net/index.php/glossary/34-resolution-2k-4k

2K provides only slightly more information than HD: 2048 pixels per line compared with 1920. But the format was embraced by the digital cinema industry. The Phantom Menace introduced the world to Digital Cinema. Digital Cinema is not about production but about the distribution of theatrical content. Digital Cinematography is about film production using digital tools.

Most have ignored 2K and focused on 4K, which essentially provides 4 times the information of HD.

Just as HD comes in varying pixel dimensions for broadcast and recording, 4K comes in different sizes as well. Most variations of 4K have 4096 pixels per line.
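
The pixel math behind "slightly more" and "4 times" is easy to check. Here's a quick Python sketch comparing common frame sizes (the 3840-pixel variant is the Ultra HD broadcast flavor):

    # Pixel counts of common frame sizes, relative to 1920 x 1080 HD.
    FORMATS = {
        "HD 1080":     (1920, 1080),
        "2K (cinema)": (2048, 1080),
        "4K Ultra HD": (3840, 2160),
        "4K (cinema)": (4096, 2160),
        "8K":          (7680, 4320),
    }

    hd_pixels = 1920 * 1080
    for name, (w, h) in FORMATS.items():
        print(f"{name}: {w * h / 1e6:.1f} megapixels "
              f"({w * h / hd_pixels:.1f}x HD)")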

Blackmagic Design unveiled a few new cameras at NAB, which created some excitement.

4K might already be heading towards obsolescence. 8K is right around the corner.

Ultra HD - Defined as any "ultra" high-definition format; the term currently includes 4K and 8K.

 

Metadata

Metadata (data about the data) is embedded text and numeric information about the clip or program. It can include, but is not limited to:

  • timecode
  • camera
  • exposure
  • gain
  • clip name
  • running time / duration
  • latitude/longitude
  • audio levels
  • DRM (digital rights management)

Metadata can be stored and accessed in XMP (Extensible Metadata Platform, a standard based on XML). While XMP data can be embedded in many media files, some formats do not allow for this, so the data is written to a separate sidecar file.

This is why it's important to keep the directory structure found on Canon DSLRs and Sony XDCAM storage devices intact. Key information (such as timecode) is often stored in a separate file.
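
As a simple illustration of why the card's directory structure matters, here's a Python sketch that pairs each clip with the sidecar files sitting next to it. The "DCIM" folder and the extension list are illustrative assumptions, not an exhaustive survey of camera formats:

    # Sketch: find metadata sidecar files that live alongside each clip.
    # Folder name and extensions are illustrative examples.
    from pathlib import Path

    SIDECAR_EXTENSIONS = [".thm", ".xml", ".xmp"]

    def find_sidecars(clip_path):
        clip = Path(clip_path)
        # Sidecars usually share the clip's base name in the same folder.
        return [clip.with_suffix(ext) for ext in SIDECAR_EXTENSIONS
                if clip.with_suffix(ext).exists()]

    # Copying only the movie files off the card would strand these
    # sidecars -- and the timecode or camera settings stored in them.
    for clip in Path("DCIM").rglob("*.MOV"):
        print(clip, "->", find_sidecars(clip))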

Closed captioning is one type of metadata that can be displayed on screen for the hearing-impaired. In analog television it is carried in the vertical blanking interval, and the FCC mandates that all stations broadcast programming with closed captioning data. If you watch closed-captioned programming, you'll see a variety of variations in readability, placement, and duration.

Companies like Soft NI create stand-alone subtitler systems that let you integrate subtitles into a video stream. Adding subtitles involves proper placement on the screen. Softel-USA makes products for subtitling HD programming.

Good metadata overview by Philip Hodgetts: http://www.youtube.com/watch?v=GnPzpPvoyLA

Adobe has speech search capabilities built into Soundbooth. It analyzes the audio and transcribes it into text (speech analysis): http://www.youtube.com/watch?v=5CLqspcNWw0

Capture One photo metadata editor: http://www.youtube.com/watch?v=DbtfBAliHTw

Focus Enhancements' FireStore is capable of setting up and recording custom metadata in the field. http://www.youtube.com/watch?v=FUcAQQyz_Mg

Avid makes MetaSync, a product which lets editors work with data right in the timeline. It can be used to link to other types of evidence and forensic documentation right in the timeline: http://www.youtube.com/watch?v=NGp0dc6yVWQ

 

Vocabulary (Know these terms)

  • 2K
  • 4K
  • ATSC
  • Closed Captioning
  • Codec (short for coder/decoder or compressor/decompressor)
  • Color bars
  • DTV - Digital Television
  • Digital Cinema
  • Digital Cinematography
  • EDL
  • HD - High Definition
  • SD - Standard Definition
  • Telecine
  • Off-line
  • On-line
  • Pedestal (aka Setup)
  • Proc Amp
  • Setup (aka Pedestal)
  • TBC
  • Vectorscope
  • Waveform Monitor
  • Window dub

 

 

Final Test Review

Final Exam is worth 70 points! The best way to review for it is to study the class notes and the midterm (expect everything you got wrong on the midterm to be on the final). The final will be true/false, multiple choice, and short answer. It will cover the following areas:

  • Shooting/Editing Techniques
    • Cybercollege editing guidelines
      • Edits work best when motivated
      • Whenever possible cut on subject movement.
      • Keep in Mind the Strengths and Limitations of the Medium (TV is a close-up medium)
      • Cut away from the scene the moment the visual statement has been made.
      • Emphasize the B-Roll
      • If in doubt, leave it out
    • Technical continuity vs production (shooting/editing) continuity
    • Continuity editing
    • Acceleration editing
    • Expanding time
    • Causality & Motivation (Must have in order to be successful)
    • Relational editing (Shots gain meaning when juxtaposed with other images. Pudovkin's experiment)
    • Thematic editing (montage)
    • Parallel editing
  • Cameras
    • Imaging devices: CCDs and CMOS
    • resolution - How do we determine horizontal resolution
    • zebra stripes - What are they good for? What would you set them for?
    • viewfinders: LCD/color vs B&W
    • Gain - What does this do?
    • Shutter speeds - what is this good for (2 good uses)
  • Lenses
    • Depth of Field - what affects this?
    • Rack focus - How can you achieve this?
    • Angle of view & focal length - How are they related?
    • f-stops - Know your f-stops & what they mean
    • ND filters - What are they good for? (2 good uses)
  • Audio
    • types of mics
    • Hz
    • cabling & connectors
    • balanced v unbalanced
    • line v mic level
    • +4 v -10
  • Graphics (Review Jim's Graphic Tips)
  • Lighting
    • types of lighting instruments
    • color temp
    • HMI
    • Lux vs footcandles
    • soft vs hard key
    • broad vs narrow lighting
  • Video signal / technology
    • ATSC, NTSC v DTV, SDTV & HDTV
    • HDTV pixel dimensions (1920 x 1080 or 1280 x 720)
    • progressive v interlace
    • Ways to transfer video digitally (cabling/connectors)
    • Ways to transfer analog video (cabling/connectors)
    • Color difference v RGB
    • timecode (difference between drop & non-drop)
    • waveform monitors & vectorscopes (What do they show?)
      • what are the important IRE levels?
    • TBC - What is this? What does it do?
    • Video architecture vs. video codecs
    • Color sampling (4:2:2 v 4:1:1)
    • 3-2 pulldown
    • Know the main principles of troubleshooting & how to go about finding problems (not guessing but deducing)

 

 
