T351 Week 12 - Spring 2015

Misc Announcements

  • Lab this week: Review storytelling exercises and meet with students regarding final projects.
  • No labs next week - This time is dedicated for YOU to work on Final Projects.
  • Final Projects. Your completed Final Project pre-production materials are due this week (via Oncourse). This includes:
    • Proposal & treatment (due a few weeks ago)
    • Script - Due by Friday AT THE VERY LATEST.
  • 2 full weeks left to work on final projects
  • Please try to have the field production component finished next week and try to have a rough cut for me/Todd to review by Week 14. We'll review final projects promptly at the start of our week 15 lab.
  • FINAL EXAM: The registrar has scheduled our Spring 2015 T351 class with a final exam time of 12:30 - 2:30 Friday, May 8th.

Agenda:

  • Video codecs (cont.)
  • Format Conversion (including 3:2 pulldown, etc.)
  • Digital Video & HD Broadcasting
  • Metadata (timecode, closed-captioning, etc.)

Readings:

  • Cybercollege Module 9 (part 1 and 2)
  • Cybercollege DTV standards
  • Also check out the embedded links in the text below (not on quiz)

Codec vs. Container

First and foremost, understand the difference between a container (E.g. Quicktime or Windows Media) and a codec (E.g. ProRes or H.264).

Containers (also known as multimedia architectures) are designed to be multipurpose, serving a variety of different users with different needs. A few popular containers include:

  • Quicktime (Apple)
  • ASF (Advanced Systems Format) & WMV (Windows Media)
  • MXF (Material Exchange Format)

Codec is short for coder/decoder (or compressor/decompressor). A codec is a method for compressing and decompressing digital information. It can use specialized hardware, software, or a combination of both.

Containers support a variety of different codecs.

When you see a _____.mov it could be anything from an AIFF audio file to a feature film in ProRes HQ. All you know from the .mov is that it's a Quicktime Movie. You need to take a closer look at the file in order to see what it really is.

Who knows how to determine what the codec is in Quicktime Player?
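
(In QuickTime Player, check the Movie Inspector under the Window menu.) If you have FFmpeg installed, its ffprobe command-line tool will also report what codec is inside any container. Here's a minimal Python sketch; the file name is just a placeholder:

```python
# Ask ffprobe (part of FFmpeg) what video codec is inside a container.
# "myclip.mov" is a placeholder; point it at any movie file.
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error",
     "-select_streams", "v:0",              # first video stream only
     "-show_entries", "stream=codec_name",  # report just the codec field
     "-of", "default=noprint_wrappers=1:nokey=1",
     "myclip.mov"],
    capture_output=True, text=True, check=True)

print("Video codec:", result.stdout.strip())  # e.g. "prores" or "h264"
```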

Here's a pretty good explanation of the difference between containers and codecs from Videomaker.

In order to play back multimedia files, you need the matching player, which is sometimes referred to as a "component". For instance, if you have a new PC with a fresh version of Vista, you'll need to buy the MPEG-2 decoder component in order to play back DVDs.

Which Codec is Best?

Videographers have more potential production codecs on hand than ever before. Some codecs are optimized for efficient capturing and distribution (E.g. H.264). Others preserve more color depth and are better suited for editing. Only the highest-end video is uncompressed. Almost all video uses some sort of compression. It's the only way we can reasonably store and edit it.

The more we compress the file, the more quality we lose.

Essentially the highest-quality codecs come with a price: a higher bit-rate gives you better quality but requires more storage space.
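
To put that trade-off in numbers, here's a quick back-of-the-envelope sketch. The bit rates are rough, commonly cited figures for 1080 material, not exact specs:

```python
# Rough storage needed for one hour of footage at a given bit rate.
def gb_per_hour(mbps):
    bits = mbps * 1_000_000 * 3600       # megabits/sec -> bits per hour
    return bits / 8 / 1_000_000_000      # bits -> bytes -> gigabytes

# Approximate bit rates for 1080 material:
for name, mbps in [("DSLR H.264", 45),
                   ("Apple ProRes 422 HQ", 220),
                   ("Uncompressed 10-bit 4:2:2", 1300)]:
    print(f"{name:26s} ~{gb_per_hour(mbps):6.1f} GB/hour")
```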

Most DSLRs capture video in H.264. While an efficient acquisition codec, it is NOT optimized for editing. This is why it's sometimes a good idea to convert the footage to something else before you edit. If you are editing on an Avid, you might want to convert to DNxHD. If you're editing on an Apple with Premiere or Final Cut, you might want to use Apple ProRes.

Transcoding is when we convert from one codec to another.
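
For example, the free FFmpeg tool can transcode a DSLR H.264 file into ProRes. A minimal sketch (the file names are placeholders, and prores_ks is one of FFmpeg's ProRes encoders):

```python
# Transcode an H.264 camera file to ProRes 422 HQ for editing.
# File names are placeholders; prores_ks is FFmpeg's ProRes encoder.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "dslr_clip.mov",   # H.264 source from the camera
     "-c:v", "prores_ks",               # encode video as ProRes
     "-profile:v", "3",                 # profile 3 = ProRes 422 HQ
     "-c:a", "pcm_s16le",               # uncompressed 16-bit audio
     "edit_ready.mov"],
    check=True)
```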

Here's a short video primer on codecs.

Intraframe vs. Interframe (GOP)-based codecs

Codecs like DV, DNxHD, and Apple ProRes compress every frame individually. These are known as intraframe codecs. These types of codecs generally take up more room than an interframe (GOP-based) codec, but they are often a better choice for editing.

Examples of intraframe codecs include:

  • Apple ProRes
  • Avid DNX codecs (and SD AVR codecs)
  • DV
  • DVCProHD
  • Panasonic D5

Codecs such as MPEG-2 (used in HDV and SD DVD-Video) break the image down into macroblocks and compress over time as well as spatially. These are known as interframe codecs. The important thing to understand about interframe compression is that it compresses over time as well as space. The picture is divided into smaller rectangles called macroblocks, which are compressed and tracked over time and placed into a GOP (Group of Pictures). These GOP-based formats use I, P & B frames.

Examples of interframe codecs include:

  • HDV (MPEG-2)
  • XDCAM (MPEG-2)
  • MPEG-4
  • H.264

Generally speaking, intraframe codecs are easier for non-linear editors to process. Interframe codecs such as HDV do a nice job of compression, but the GOP-based structure is more taxing for non-linear editing systems.
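
To make the difference concrete, here's a small sketch of one common 15-frame MPEG-2-style GOP layout (the exact pattern varies between encoders):

```python
# One common 15-frame MPEG-2 GOP: I BB P BB P BB P BB P BB
gop = "IBBPBBPBBPBBPBB"

notes = {"I": "complete picture; decodable on its own",
         "P": "predicted from the previous I or P frame",
         "B": "interpolated from frames on both sides"}

for i, frame in enumerate(gop):
    print(f"frame {i:2d}: {frame}  ({notes[frame]})")

# An intraframe codec (DV, ProRes, DNxHD) is effectively all I-frames,
# which is why an NLE can cut anywhere without decoding neighbors.
```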

Some popular video codecs:

Apple ProRes (variable compression, intraframe codec)

Avid DNxHD (variable compression, intraframe codec)

DV - Uses 5:1 compression. Other variants of DV include DVCAM (Sony) and DVCPRO (Panasonic).

Flash – This is an excellent video codec optimized for distribution.

H.261 & H.263 - Video-conferencing codecs

H.264 - A version of MPEG-4 used for mass distribution and Blu-ray

MPEG - (Moving Picture Experts Group) uses interframe compression and can store audio, video, and data. The MPEG standard was originally divided into four different types, MPEG-1 through MPEG-4.

MPEG-2 is widely used and is found in standard definition DVD-Video and in HDV.

MPEG-4 is a good all-purpose multimedia codec. The H.264 variant is also used on Blu-ray discs.

XDCAM – Developed by Sony

SMPTE VC-1 - Developed by Microsoft; one of the codecs approved for Blu-ray authoring

Sorenson – Well supported by a number of platforms.

 

Visit http://www.adamwilt.com/DV-FAQ-tech.html#colorSampling for a more detailed explanation of the color sampling topics below.

Digitizing & Color Sampling:

When we digitize video, we sample it. We take a digital snapshot and convert it into 0s and 1s. This is true for audio, video or a combination of the two.

  • Sample rate is how many times per second we take a snapshot.
  • Quantizing (bit depth/color depth) is how many variations within the sample we have. (How good the snapshot is.)

Here are a few different ways audio can be digitized:

  • 8-bit at 22 kHz (low end, computer alert sounds)
  • 16-bit at 44.1 kHz (DAT, CD, MP3 at high quality)
  • 16-bit at 48 kHz (DV, DAT/MiniDisc)
  • 24-bit at 48 kHz (high-end DATs & workstations)

The higher (faster) the sampling rate and the larger (deeper) the bit depth, the better the quality.
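
These two numbers translate directly into data rate. A quick sketch for the formats listed above, assuming stereo (2 channels):

```python
# Uncompressed audio data rate = sample rate x bit depth x channels.
def kb_per_sec(sample_rate, bit_depth, channels=2):
    return sample_rate * bit_depth * channels / 8 / 1000

print(kb_per_sec(22_050, 8))    # ~44 KB/s  (alert-sound quality)
print(kb_per_sec(44_100, 16))   # ~176 KB/s (CD quality)
print(kb_per_sec(48_000, 16))   # ~192 KB/s (DV audio)
print(kb_per_sec(48_000, 24))   # ~288 KB/s (high-end workstation)
```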

RGB vs Color Difference System

Video cameras capture in RGB, but we often convert to the color difference system before broadcasting or editing. Color difference is an alternate way to break down the brightness and color information in a video signal. The color difference signals can be expressed as R-Y, B-Y or Cr, Cb or sometimes U, V. These color difference signals are used in the digitizing process. What the heck is a color difference signal?

Color difference signals: TV uses an additive color system based on RGB as the primary colors. If the RGB data were stored as three separate signals (plus sync), it would take a lot of room to store all the information. Fortunately, some great technical minds figured out a way to pack this information into a smaller box (figuratively speaking), devising a way to convert the RGB information into two new video signals that take up less room, with minimal loss in perceived picture quality. These color difference signals are typically represented by U V or Cr Cb. So when you see YUV, it is referring to Y (luminance) and UV (the two color difference signals).

Combining the RGB signals according to the original NTSC broadcast system standards creates a monochrome luminance signal (Y). So you can basically send the red and blue difference signals along with the luminance, and the green info can be derived from what's left.

So instead of three component color signals (R, G, B), we process video as a luminance signal and two color difference signals (Y, Cb, Cr).
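
A minimal sketch of the conversion, using the Rec. 601 luma weights that trace back to NTSC (real digital encodings also scale and offset the Cb/Cr values, which this skips):

```python
# Split an RGB pixel into luminance plus two color difference signals
# using the Rec. 601 luma weights. Real digital encodings also scale
# and offset Cb/Cr; this sketch skips that step for clarity.
def rgb_to_y_color_diff(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (Y)
    return y, b - y, r - y                  # Y, B-Y (Cb), R-Y (Cr)

print(rgb_to_y_color_diff(255, 255, 255))  # white: (255.0, 0.0, 0.0)
# Green is never transmitted; the decoder recovers it from Y, B-Y, R-Y.
```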

4:4:4 vs. 4:2:2 vs. 4:1:1 (Chroma subsampling)

Today’s digital technology provides us with several ways to digitize video, mainly 4:2:2 and 4:1:1. What do they refer to?

Quite simply, they refer to the ratio of the number of luminance (Y) samples to the samples of each of the two color difference signals.

In the video signal, the most important component is the luminance, as it gives us all the detail absolutely necessary in the picture. As a result, we must sample luminance at a very high rate: 13.5 MHz (13.5 million times per second).

Given that the luminance portion is sampled at 13.5 MHz, let's apply the aforementioned ratios: 4:2:2 and 4:1:1. In a 4:1:1 component digital sample, the color information is sampled at 1/4 the luminance rate: 3.375 MHz. In a 4:2:2 system, the color is sampled at 1/2 the rate of the luminance, or 6.75 MHz.
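
The same arithmetic, worked out as a quick sketch:

```python
# Sample rates implied by each subsampling ratio, relative to the
# 13.5 MHz luminance (Y) rate used for standard-definition video.
Y_RATE_MHZ = 13.5

for name, chroma in [("4:4:4", 4), ("4:2:2", 2), ("4:1:1", 1)]:
    c_rate = Y_RATE_MHZ * chroma / 4        # rate of EACH Cb/Cr signal
    print(f"{name}: Y = {Y_RATE_MHZ} MHz, Cb = Cr = {c_rate} MHz")

# 4:2:2 carries twice the color information of 4:1:1 (6.75 vs 3.375 MHz).
```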

What about 4:2:0?

4:2:0 sampling is used in MPEG-2. The two color difference signals are sampled on alternating lines.

What does this mean?

Quite simply, the color depth of a 4:2:2 component digital signal is twice that of a 4:1:1 signal and, from the standpoint of color bandwidth, is twice that of today’s popular component analog formats. This means better color performance, particularly in areas such as special effects, chromakeying, alpha keying (transparencies) and computer generated graphics.

Importing & Exporting for other Applications & Purposes

Outputting for DVD & Blu-ray optical discs

Most DVD/Blu-ray authoring software comes with built-in encoding software.

Standard Definition Video DVDs require the MPEG-2 codec.

Blu-ray discs can use MPEG-2, MPEG-4/H.264, and SMPTE VC-1.

Outputting for use Overseas - World Analog TV & Color Encoding Standards

NTSC (National Television Systems Committee) standard definition TV (used in North America, some of South America, Japan, etc.) uses a frame rate close to 30: roughly 29.97 frames per second. There are 525 scan lines; approximately 480 of these are visible. The HD (high definition) broadcast standard was created by the ATSC (Advanced Television Systems Committee), which was formed at the urging of the FCC to establish standards for the new high-definition formats.

PAL (Phase Alternating Line) is used in most of Europe, Australia, & Asia and runs at 25 frames per second using 625 lines.

SECAM (Sequential Color with Memory) is used in France, Russia, and parts of Africa and the Middle East; like PAL, it runs at 25 frames per second with 625 lines.

If possible it’s best to shoot and edit in the native delivery format. If you are working on a PAL project, shoot and edit in PAL. If you have 24 fps footage, it’s best to keep it in 24 fps. That way you won’t get conversion artifacts from changing frame rates and generation losses. While ideal, we can’t always practice this. Often we’ll get a tape from another country, or media in another format, that must be integrated into our existing content.

Converting 24p video & film to interlaced 60i video

When converting film or 24p video to 30/60i (29.97) video, we use a 3:2 pulldown.

Film runs at 24 frames per second.

24p refers to video shot at 24 frames per second progressive- that means there are no fields.

Since film runs at 24 fps and video runs at about 30 interlaced fps, the two aren't directly interchangeable, at least on a frame-for-frame basis. (To be more precise, 23.976 film frames become 29.97 video frames.) In order to transfer film to 30 fps video, the film frames must be precisely sequenced into a combination of video frames and fields.
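
Those odd-looking rates come from the NTSC color standard, which slowed the nominal frame rates by a factor of 1000/1001:

```python
# NTSC color standards run 1000/1001 slower than their nominal rates.
print(30 * 1000 / 1001)   # 29.97002997... fps ("29.97")
print(24 * 1000 / 1001)   # 23.97602397... fps ("23.976")
```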

A telecine is a piece of hardware containing a film projector synchronized with a video capture system. "Telecine" also describes the process of converting film to video, which is accomplished with a 3:2 pulldown. In the 3:2 pulldown, each frame of film gets converted to 2 or 3 fields of video.

Note how four film frames (24 fps) are converted to five interlaced video frames (30 fps).

The problem with converting film frames to fields is that some video frames have fields from two different film frames. If you think about it, you'll see that this can present all types of problems.
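
Here's a small sketch of the cadence (the 2:3 pattern shown is the common one), flagging the video frames that end up with fields from two different film frames:

```python
# 2:3 pulldown: four film frames (A-D) become ten video fields,
# which pair up into five interlaced video frames.
film_frames = ["A", "B", "C", "D"]
cadence = [2, 3, 2, 3]               # fields taken from each film frame

fields = []
for frame, count in zip(film_frames, cadence):
    fields.extend([frame] * count)   # A,A,B,B,B,C,C,D,D,D

for i in range(0, len(fields), 2):
    f1, f2 = fields[i], fields[i + 1]
    flag = "  <- fields from two film frames!" if f1 != f2 else ""
    print(f"video frame {i // 2 + 1}: {f1}/{f2}{flag}")
```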

Apple makes a nice product that works with Final Cut Pro called Cinema Tools, which includes a number of tools that can help convert 24 fps to 30 fps and back.

Another method is to transfer film to 24p video.

Metadata & Closed Captioning

Metadata (data about the data) is embedded text and numeric information about the clip or program. It can include, but is not limited to:

  • timecode
  • camera
  • exposure
  • gain
  • clip name
  • running time / duration
  • latitude/longitude
  • audio levels
  • DRM (digital rights management)

Metadata can be stored and accessed via XMP (Extensible Metadata Platform, which is based on XML). While metadata can be embedded directly in many media files, some formats do not allow for this, so the data is written to a separate sidecar file.
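
As a rough illustration, here's what a bare-bones XMP sidecar might look like, written from Python. The clip name and timecode are made up for this example, and real XMP files use richer, standardized schemas:

```python
# Write a bare-bones XMP sidecar next to a clip. The clip name and
# timecode are made up; real XMP files use richer standardized schemas.
xmp = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
      xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:title>Interview_Take_03</dc:title>
   <dc:description>Start timecode 01:02:14:08</dc:description>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
"""

with open("Interview_Take_03.xmp", "w") as f:
    f.write(xmp)
```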

This is why it's important to keep the directory structure found on Canon DSLRs and in Sony XDCam storage devices. Key information (such as timecode) is often stored in a separate file.

Closed-captioning is one type of metadata that can be displayed on screen for the hearing-impaired. Caption data is carried in the vertical blanking interval, and the FCC mandates that all stations broadcast programming with closed captioning data. If you watch closed-captioned programming, you'll see a variety of variations in readability, placement and duration.

Companies like Soft NI create stand-alone subtitler systems that let you integrate subtitles into a video stream. Adding subtitles involves timing them and placing them properly on the screen. Softel-USA makes products for subtitling HD programming.

Good metadata overview by Philip Hodgetts: http://www.youtube.com/watch?v=GnPzpPvoyLA

Interesting Adobe/Lynda video about speech search capabilities: http://www.youtube.com/watch?v=HO5SyTPz6ZY

Focus Enhancements' Firestore is capable of setting up and recording custom metadata in the field. http://www.youtube.com/watch?v=FUcAQQyz_Mg

Adobe Prelude - Pre-editing software that lets you work with clips and their metadata. Sample video

Vocabulary (Know these terms)

  • ATSC
  • Closed Captioning
  • Codec
  • Container
  • DTV
  • GOP (I, P & B frames)
  • HD
  • interframe compression
  • intraframe compression
  • Macroblock
  • Metadata
  • NTSC
  • PAL
  • SD
  • SECAM
  • Sidecar file
  • Telecine

Back to Jim Krause's T351 Home Page