
T351 Week 12 - Spring 2014

Misc Announcements

  • Lab this week: Review storytelling exercises and (for Friday's lab) meet with students regarding final projects.
  • Final Projects. According to the syllabus, your completed Final Project pre-production materials are due this week (via Oncourse). This includes:
    • Proposal & treatment (due a few weeks ago)
    • Script - Due by Friday AT THE VERY LATEST.
  • 2 full weeks left to work on final projects
  • Final Projects should be finished by the end of week 14. We will review final projects promptly at the start of our week 15 lab.
  • Note: The registrar has assigned our Spring 2014 T351 class with a final exam time of 5 PM Wednesday, May 7th.

Agenda:

  • Lighting techniques
  • Video codecs
  • Format Conversion (including 3:2 pull down, etc)
  • Digital Video & High Definition Broadcasting
  • Metadata (timecode, closed-captioning, etc.)

Readings:

  • Cybercollege Module 9 (part 1 and 2)
  • Cybercollege DTV standards
  • Also check out the embedded links in the text below (not on quiz)

 

Video Codecs

Interframe versus Intraframe

Only the highest end video is uncompressed. Almost all video (especially HD) uses some sort of compression. When looking at the characteristics of various video recording gear, it's important to understand the basic differences between two general types of compression.

Most standard definition production TV codecs use some type of intraframe compression, where each individual frame is compressed on its own so that it fits onto tape or disk. Examples of intraframe codecs include:

  • Apple ProRes
  • Avid codecs (AVR25, AVR 50, etc.)
  • DV
  • DVCProHD
  • Panasonic D5

However, many newer HD recording formats use interframe compression. The important thing to understand about interframe compression is that it compresses over time as well as space. The picture is divided into smaller rectangles called macroblocks; these macroblocks are compressed, tracked over time, and organized into a GOP (Group of Pictures). Examples of interframe codecs include:

  • HDV (MPEG-2)
  • XDCAM (MPEG-2)
  • MPEG-4
  • H.264

MPEG-2 is a popular interframe codec. It is very efficient, squeezing a high definition video image into roughly the same amount of space that a standard DV stream occupies. (That's why we can record HDV onto a miniDV tape.) The other interesting thing about MPEG-2 is that it's scalable: the frame dimensions can vary (720 x 480, 1440 x 1080, etc.). The downside is that GOP-based footage can be more taxing to edit; deconstructing the GOPs during the edit process taxes the computer to a greater degree than intraframe codecs do.
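
To make the GOP idea concrete, here's a minimal Python sketch (my own illustration, not from the course materials) that prints a hypothetical MPEG-2 style frame-type pattern: an I-frame, followed by P-frames, with B-frames in between. The 15-frame GOP length and the two-B-frame spacing are assumed values for illustration, not settings from any particular camera or encoder.

    # Minimal sketch: print a hypothetical MPEG-2 style GOP pattern.
    # GOP length and B-frame spacing are illustrative assumptions.

    def gop_pattern(gop_length=15, b_frames=2):
        """Return a string of frame types (I, P, B) for one Group of Pictures."""
        frames = []
        for i in range(gop_length):
            if i == 0:
                frames.append("I")      # full, independently decodable frame
            elif i % (b_frames + 1) == 0:
                frames.append("P")      # predicted from the previous I or P frame
            else:
                frames.append("B")      # predicted from frames before and after
        return "".join(frames)

    print(gop_pattern())   # IBBPBBPBBPBBPBB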

Converting Video for Multiple Purposes

Video from an editing system often needs to be converted to a different format before it can be used, even for broadcast. Besides television, other destinations include:

  • DVD (SD and HD/Blu-ray)
  • Internet (websites, YouTube, Vimeo, etc.)
  • iPods, iPhones and other personal media devices
  • Video games
  • Software
  • Video conferencing

Some of these applications use specialized codecs and technologies that have been developed specifically for them. In many cases you can convert your video through Apple Compressor or Adobe Media Encoder to the appropriate codec. Flip-for-Mac provides an alternate solution that can help with Mac to Windows media conversion.
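
If you don't have Compressor or Media Encoder handy, ffmpeg (a free command-line encoder, not covered in this course) can perform many of the same conversions. Here's a minimal Python sketch that shells out to ffmpeg to make a web-friendly H.264/AAC MP4; the file names and quality settings are placeholder assumptions, not recommended course settings.

    # Minimal sketch (assumes ffmpeg is installed and on the PATH).
    import subprocess

    def encode_for_web(source="master.mov", output="web.mp4"):
        subprocess.run([
            "ffmpeg",
            "-i", source,         # input file from the editing system
            "-c:v", "libx264",    # H.264 video
            "-crf", "20",         # constant-quality setting (lower = better/larger)
            "-preset", "medium",  # speed vs. efficiency tradeoff
            "-c:a", "aac",        # AAC audio
            "-b:a", "192k",       # audio bitrate
            output,
        ], check=True)

    encode_for_web()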

Outputting for DVDs

Most DVD authoring software comes with encoding software.

Standard Definition Video DVDs require the MPEG-2 codec.

Blu-ray discs can use MPEG-2, MPEG-4/H.264, or SMPTE VC-1.

Outputting for the Web (YouTube, Vimeo, etc.)

Most video sharing sites accept a variety of formats, which are outlined on their websites. In general, I've found H.264 is an excellent codec for delivering web-based videos. For HD videos, I've found that lightly compressed (large) files can hiccup during playback, so instead of using the "best" quality setting, try "good" instead.

Sometimes you need a smaller video to fit inside a blog or a website. A little math can help make an appropriately-sized movie.

  • For 4x3 (1.33) video you can use 720 x 540, 640 x 480 or 400 x 300 (or any dimensions you like, as long as the width divided by the height comes out close to 1.33)
  • For 16x9 (1.78) video you can use 1920 x 1080, 1280 x 720 or 640 x 360 (or any dimensions you like, as long as the width divided by the height comes out close to 1.78)
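
Here's a small Python sketch of that math: given a width and a target aspect ratio, it computes a matching height (rounded to the nearest even number, since many codecs prefer even dimensions). The function name and the rounding choice are my own assumptions, not part of the course materials.

    # Minimal sketch: compute frame dimensions for a target aspect ratio.

    def frame_size(width, aspect=16 / 9):
        """Return (width, height) with the height rounded to an even number."""
        height = round(width / aspect / 2) * 2
        return width, height

    print(frame_size(640, 4 / 3))    # (640, 480)
    print(frame_size(640, 16 / 9))   # (640, 360)
    print(frame_size(400, 4 / 3))    # (400, 300)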

Importing & Exporting for other Applications

In many communication projects, video is only one part of a media production plan. It may be used concurrently with print, web, CD-ROM or other media.

If you're an editor, sooner or later you'll be asked to import or export media files from or for other types of applications. Be sure you know how to export stills, audio files and video files.

Typical scenarios include:

  • Outputting still images for print or Web applications
  • Outputting audio files for CD, radio broadcast, or the Web
  • Normalizing and processing audio tracks
  • Audio sweetening
  • Animation & Special effects
  • Exporting videos for the Web, videoconferencing, or DVD

Format Conversion

Editors will have to deal with a number of different media formats and need to understand the technical distinctions among them.

Some of the different formats include film; standard definition and high definition video; NTSC and PAL; and 16 x 9 and 4 x 3 aspect ratios.

NTSC (National Television Systems Committee) standard definition TV, used in North America, parts of South America, Japan, etc., runs at a frame rate close to 30, roughly 29.97 frames per second. There are 525 scan lines, approximately 480 of which are visible. The HD (high definition) broadcast standard was created by the ATSC (Advanced Television Systems Committee), which was formed at the urging of the FCC to establish standards for the new high definition formats.

PAL (Phase Alternating Line) is used in most of Europe, Australia, and much of Asia, and runs at 25 frames per second using 625 lines.

SECAM (Sequential Color with Memory) is used in France, parts of Eastern Europe, Russia, and parts of Africa. Like PAL, it runs at 25 frames per second with 625 lines.

If possible, it's best to edit in the media's native format. If you have high-quality PAL footage, it's best to keep it in PAL. If you have 24 fps footage, it's best to keep it at 24 fps. That way you won't get conversion artifacts from changing frame rates, or generation losses. While that's the ideal, we can't always practice it. Often we'll get a tape from another country, or footage in another format, that must be integrated into our existing content.

Converting 24p video & film to interlaced 60i video

When converting film or 24p video to 30/60i (29.97) video, we use a 3:2 pulldown.

Film runs at 24 frames per second.

24p refers to video shot at 24 frames per second, progressive, which means there are no fields.

Since film runs at 24 fps and video runs at about 30 interlaced fps, the two aren't directly interchangeable, at least on a frame-for-frame basis. (To be more precise, 23.976 film frames become 29.97 video frames.) In order to transfer film to 30 fps video, the film frames must be precisely sequenced into a combination of video frames and fields.

A telecine is a piece of hardware containing a film projector sequenced with a video capture system. The term is also used to describe the process of converting film to video, which relies on a 3:2 pulldown. In the 3:2 pulldown, each frame of film gets converted to either two or three fields of video.

Note how every four film frames (24 fps) are converted to five interlaced video frames (30i).
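
Here's a minimal Python sketch (my own illustration, not from the course materials) of that cadence: four film frames A, B, C and D are spread across ten video fields in a 2:3:2:3 pattern, producing five interlaced video frames. Notice that two of the resulting frames mix fields from different film frames.

    # Minimal sketch of a 3:2 (2:3) pulldown cadence.
    # Each film frame is repeated for either 2 or 3 video fields.

    FILM_FRAMES = ["A", "B", "C", "D"]   # four consecutive film frames
    CADENCE = [2, 3, 2, 3]               # fields contributed by each film frame

    fields = []
    for frame, count in zip(FILM_FRAMES, CADENCE):
        fields.extend([frame] * count)

    # Pair up the fields into interlaced video frames.
    video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
    print(video_frames)
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
    # The third and fourth video frames mix fields from two different film frames.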

The problem with converting film frames to fields is that some video frames end up containing fields from two different film frames. This can cause all kinds of problems later, for example when editing, matching back to film, or pulling clean still frames.

Apple makes a nice product that works with Final Cut Pro, Cinema Tools, which includes a number of tools that can help convert between 24 and 30 fps and back.

Another method is to transfer film to 24p video.

DTV (Digital TV broadcasting)

DTV broadcasts can be either HD (High Definition) or SD (standard definition).

You can squeeze roughly four SD programs into the same space used to broadcast one HD program.

Both use MPEG-2 compression.
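
The arithmetic behind that works out roughly as follows. The bitrates below are ballpark figures I'm assuming (an ATSC channel carries about 19.4 Mbps of MPEG-2 data; a typical SD program stream runs around 4 Mbps), not numbers from the course readings.

    # Back-of-the-envelope sketch using assumed ballpark bitrates (Mbps).
    ATSC_CHANNEL = 19.4   # approximate MPEG-2 payload of one ATSC broadcast channel
    SD_PROGRAM = 4.0      # a typical standard definition MPEG-2 program stream

    print(int(ATSC_CHANNEL // SD_PROGRAM), "SD programs fit in one channel")  # 4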

SD vs. HD

SD works in both 4:3 and 16:9. Its video pixel dimensions include:
720 x 486, 720 x 480

HD is 16:9. Its video pixel dimensions include:
1280 x 720, 1920 x 1080

Video frame rates: 24p, 30p, 30i, 60p, or 60i.

Want to edit in HD?

Editing - Almost all professional editing software (Media Composer, Final Cut Pro, Premiere, etc.) can work in a variety of HD formats. However don't expect to be able to view your work in true high-definition unless you have a dedicated HD monitor connected to your system.

Audio/video hardware interfaces provide input and output options (HD-SDI, RGB, HDMI, etc.). Using these, one can hook up an external HD display to monitor the editing or compositing software output in real-time. A few systems out there include:

  • AJA's Kona
  • Blackmagic Decklink

Storage - Low data rate formats such as DVCPro100 and HDV can readily be edited using common internal and external hard drives. Higher data rate formats (e.g., Apple ProRes HQ or HDCAM) require more storage and much faster access (e.g., a RAID accessed via SATA).
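
To get a feel for the difference, here's a quick Python sketch converting nominal data rates into storage per hour. The rates are approximate values I'm assuming (HDV around 25 Mbps, DVCProHD around 100 Mbps, ProRes 422 HQ at 1080 roughly 220 Mbps); check the manufacturers' spec sheets for exact figures.

    # Rough sketch: approximate storage needed per hour of footage.
    # Data rates are assumed nominal values in megabits per second.
    RATES_MBPS = {
        "HDV": 25,
        "DVCProHD": 100,
        "ProRes 422 HQ (1080)": 220,
    }

    for name, mbps in RATES_MBPS.items():
        gb_per_hour = mbps * 3600 / 8 / 1000   # Mbps -> gigabytes per hour
        print(f"{name}: ~{gb_per_hour:.0f} GB per hour")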

Metadata & Closed Captioning

Metadata (data about the data) is embedded text and numeric information about the clip or program. It can include, but is not limited to:

  • timecode
  • camera
  • exposure
  • gain
  • clip name
  • running time / duration
  • latitude/longitude
  • audio levels
  • DRM (digital rights management)

Metadata can be stored and accessed via XMP (Extensible Metadata Platform), which is based on XML. While metadata can be embedded directly inside many media files, some formats don't allow this, so the data is written to a separate sidecar file instead.

This is why it's important to keep the directory structure found on Canon DSLRs and Sony XDCAM storage devices intact. Key information (such as timecode) is often stored in a separate file.
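
Since timecode comes up so often in metadata, here's a minimal Python sketch (my own illustration) that converts a frame count into an HH:MM:SS:FF timecode string. It assumes a whole-number frame rate with no drop-frame compensation; real 29.97 drop-frame timecode is more involved.

    # Minimal sketch: frame count -> non-drop-frame timecode (HH:MM:SS:FF).

    def frames_to_timecode(frame_count, fps=30):
        frames = frame_count % fps
        total_seconds = frame_count // fps
        seconds = total_seconds % 60
        minutes = (total_seconds // 60) % 60
        hours = total_seconds // 3600
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    print(frames_to_timecode(1800))     # 00:01:00:00 (one minute at 30 fps)
    print(frames_to_timecode(107892))   # 00:59:56:12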

Closed captioning is one type of metadata that can be displayed on screen for the hearing-impaired. Traditionally carried in the vertical blanking interval, closed captioning is mandated by the FCC for broadcast programming. If you watch closed-captioned programming, you'll notice a variety of variations in readability, placement and duration.

Companies like Soft NI create stand-alone subtitler systems that let you integrate subtitles into a video stream; adding subtitles involves timing them and placing them properly on the screen. Softel-USA makes products for subtitling HD programming.

Good metadata overview by Philip Hodgetts: http://www.youtube.com/watch?v=GnPzpPvoyLA

Interesting Adobe/Lynda video about speech search capabilities: http://www.youtube.com/watch?v=HO5SyTPz6ZY

Focus Enhancements' FireStore is capable of setting up and recording custom metadata in the field. http://www.youtube.com/watch?v=FUcAQQyz_Mg

Adobe Prelude - Pre-editing software that lets you work with clips and their metadata. Sample video

Vocabulary (Know these terms)

  • ATSC
  • Closed Captioning
  • DTV
  • GOP (I, P & B frames)
  • HD
  • interframe compression
  • intraframe compression
  • Macroblock
  • Metadata
  • NTSC
  • PAL
  • SD
  • SECAM
  • Sidecar file
  • Telecine

 

 
