T351 Week 11 - Spring 2013
- Review art videos
- Legal Issues
- Codecs / Digitizing / Color Sampling
- Production tips
Cybercollege.com units 66 & 67
Announcements / Reality Check
- The production application deadline for fall 400-level classes is this Thursday. Fill out an application if you want to take a 400-level production class this coming fall.
- The MultiVisions Media Showcase deadline has been extended! To apply visit the MultiVisions page.
- Art Video Critiques - Remember to submit one this week.
- Storytelling exercises: Your pre-production materials (mainly your script) are due by the start of lab. We will not accept these after this week. (That is why they are pre-production.) You will shoot your projects this week. Completed projects are due next week before 10:30 AM during your lab time. If your project is edited and already in the Dropbox, you can sleep in and come in by 10:30 AM. We're all going to review and critique the Storytelling Videos, just like we did with the Interview Feature Stories.
- Final Projects: Your final project pre-production materials are due next week in lab. This includes a program proposal, treatment, script and any other supporting material, such as storyboards, shot sheets, etc. This is about 30% of your Final Project Grade. It's a good idea to put thought and time into this. I also suggest putting them into a Final Project Packet. I'm happy to meet with you individually to discuss your project if you'd like some help.
We had some creative and outstanding Art Videos. Here's a nice one by Carter Ross.
In terms of the law, news and entertainment programming are viewed differently and subject to different restrictions.
Laws and cases are constantly getting challenged. The lines are fuzzy and constantly getting re-drawn.
For instance, in shooting news, if a popular song happens to be playing on someone's radio while they are giving a sound bite, or if a copyrighted piece of artwork is visible in the background of a shot, chances are very slim that legal action will be taken.
However, if you were shooting a movie and your scene had a popular song playing on the radio, you would be opening yourself up to legal action if you didn't clear the synch rights.
As producers, videographers, or editors, it's important to understand
the basics of law as it pertains to TV.
Privacy - Everyone is entitled to this. However, those in the public spotlight are given less protection.
Intrusion - There are varying limits on the level to which you can intrude into a person's private life.
Access - Generally, shooting on public property is OK. Shooting news on private property is another matter.
Commercial appropriation - It is NOT OK to use someone else's likeness
to further your own cause.
Staging - You can't "stage" or reenact events in a way that misrepresents what actually happened in news or documentary work. Be careful when using comparable footage as well.
Fair Use - Allows existing intellectual property to be used in teaching,
news and other applications. (Not well defined)
Shield Laws - Protect sources. States offer different protection than the federal government does.
Defamation (libel & slander) - Presenting content that lowers the public's estimation of a person. Negligence (not bothering to check facts) can get you into trouble here.
Public Domain - Copyright has expired.
There are three types of legal contracts you should be familiar with:
- Model Releases
- Location Releases
- License Agreements
Model/Talent Releases: These agreements outline the conditions under which the talent will appear in a program. In order to be legally binding, they must specify a time period (duration) and some form of compensation.
Location Releases: These agreements outline the conditions under which a certain location is used in a program.
License Agreements provide for the limited use of someone else's copyrighted material (intellectual property). Anything that has been created, written, composed, etc. is given some level of protection by federal copyright law. Music is usually the easiest thing to procure a clearance for (most TV & radio stations have blanket licenses with BMI and ASCAP). Prints, photos, paintings & other visual items are much trickier.
You must be very careful with what you have as a backdrop behind an interview or on a set. Avoid any identifiable artwork or commercial products.
Liability - This is the basic insurance all videographers should have if they are doing professional work.
E & O Insurance - Errors and Omissions insurance is a sort of "catch-all" type of insurance that protects you against many unforeseeable issues. All producers should consider carrying it.
Video Codecs vs. architecture/container
A few popular multimedia systems include:
- QuickTime (Apple)
- Windows Media (Microsoft)
Don't confuse codecs with the container or architecture. QuickTime is a multimedia architecture, or container, created by Apple. It supports many different file types and codecs. Similarly, Windows Media is Microsoft's audio/video architecture. Another popular Internet architecture is Real Systems.
A file that ends in ".mov" is supported by QuickTime, but you have to take a closer look to see what it actually is and what codec it uses. It might be an AIF audio file or a DV video clip. All you can determine from the ".mov" is that it's a QuickTime file of some sort that uses the QuickTime architecture. If you see media ending in .asf, it's supported by Windows Media.
In order to play back multimedia files, you need the matching player, which is sometimes referred to as a "component". For instance, if you have a new PC with a fresh version of Vista, you'll need to buy the MPEG-2 decoder component in order to play back DVDs.
Each architecture supports a variety of different codecs.
What is a codec?
Codec is short for coder/decoder or compressor/decompressor. A codec is a method for compressing and decompressing digital information. It can use specialized hardware, software or a combination of the two.
Videographers have more potential production codecs on hand than ever before. Some codecs are optimized for efficient capture and distribution (e.g., H.264). Others provide for better color handling and editing.
Most DSLRs capture video in H.264. While it is an efficient acquisition codec, it is NOT optimized for editing. This is why it's good to convert these files to something else before you edit. If you are editing on an Avid, you likely should convert these files on import to DNxHD. If you are editing on an Apple with Premiere or Final Cut, you might want to use Apple ProRes.
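Here's a rough sketch of how that conversion step could be scripted with Python and the free ffmpeg tool (just an illustration, not the class workflow; the folder and file names are hypothetical, and ffmpeg must be installed). In practice your editing software can usually do this transcode for you on import.

    # sketch: batch-transcode H.264 DSLR clips to Apple ProRes before editing
    # assumes ffmpeg is installed and available on the system path
    import subprocess
    from pathlib import Path

    for clip in Path("card_footage").glob("*.MOV"):      # hypothetical folder of camera files
        out = clip.with_name(clip.stem + "_prores.mov")
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-c:v", "prores",        # intraframe editing codec
            "-c:a", "pcm_s16le",     # uncompressed 16-bit audio
            str(out),
        ], check=True)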
Here's a short video primer on codecs.
Intraframe vs. Interframe (GOP)-based codecs
Codecs like DV, DNxHD, and Apple ProRes compress and treat every frame individually. These are known as intraframe codecs. These types of codecs generally take up more room than an interframe (GOP)-based codec, but they are better for editing.
Codecs such as MPEG-2 (used in HDV and SD DVD-Video) break the image down into macro-blocks and compress over time as well as spatially. These are known as interframe codecs. The data is compressed into a group of pictures (GOP).
Generally speaking, intraframe codecs are much easier for non-linear editors to process. Interframe codecs such as HDV do a nice job of compression, but the GOP-based structure is taxing for non-linear editing systems.
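A toy way to picture the difference (just an illustration, not a real codec): an intraframe approach keeps every frame whole, while an interframe approach keeps one full frame and then only the frame-to-frame changes.

    # toy illustration of intraframe vs. interframe storage (not a real codec)
    frames = [[10, 10, 10], [10, 12, 10], [10, 12, 11]]        # three tiny 3-pixel "frames"

    intraframe = [list(f) for f in frames]                     # every frame stored in full

    interframe = [list(frames[0])]                             # one full "I-frame"...
    for prev, cur in zip(frames, frames[1:]):
        interframe.append([c - p for p, c in zip(prev, cur)])  # ...then only the differences

    print(intraframe)   # [[10, 10, 10], [10, 12, 10], [10, 12, 11]]
    print(interframe)   # [[10, 10, 10], [0, 2, 0], [0, 0, 1]]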
Some popular video codecs:
Apple ProRes (variable compression, intraframe codec)
Avid DNxHD (variable compression, intraframe codec)
DV - Uses 5:1 compression. Other variants of DV include DVCAM (Sony) and DVCPRO (Panasonic).
Flash – This is an excellent video codec optimized for distribution.
H.261 & H.263 - Video-conferencing codecs
H.264 - A version of MPEG-4 used for mass distribution and Blu-ray
MPEG - (Moving Picture Experts Group) uses interframe
compression and can store audio, video, and data. The MPEG standard was
originally divided into four different types, MPEG-1 through MPEG-4.
MPEG-2 is widely used and is found in standard definition DVD-Video and in HDV.
MPEG-4 is a good all-purpose multimedia codec. The H.264 variant is also used on Blu-ray Discs.
RealVideo – Developed by Real, it's a streaming video codec for use on the web. It has lost ground to other codecs (e.g., Flash). For more info visit: http://www.real.com/
SMPTE VC-1 - Developed by Microsoft for Blu-ray authoring
Sorenson – Well supported by a number of platforms.
To convert video into a digital signal for any of the above-mentioned codecs, we first need to digitize it.
Digitizing is the process of converting an analog signal into digital form. We do this to create digital video. Digital video is video that has been digitized and is now represented by binary code: 1s and 0s.
When we digitize video, we have to store the data somewhere: onto magnetic tape, disk, or solid state memory. While it is possible to record forms of raw, uncompressed video data, compression using a specialized encoding method (codec) is usually employed in order to make the data small enough to write it to storage.
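To get a sense of why compression is needed, here's a quick back-of-the-envelope sketch in Python (just an illustration, using common 8-bit 4:2:2 standard-definition numbers):

    # rough uncompressed data rate for 8-bit 4:2:2 standard-definition video
    width, height = 720, 480        # NTSC SD frame size
    bytes_per_pixel = 2             # 8-bit 4:2:2 averages 2 bytes per pixel (Y plus shared color)
    fps = 30                        # approximate NTSC frame rate

    bytes_per_second = width * height * bytes_per_pixel * fps
    print(bytes_per_second / 1000000.0, "MB per second")            # about 20.7 MB/s
    print(bytes_per_second * 60 / 1000000000.0, "GB per minute")    # about 1.2 GB/min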
Visit http://www.adamwilt.com/DV-FAQ-tech.html#colorSampling for
a more detailed explanation of this.
How It Works
A video signal consists of luminance (black and white) and chrominance
(color) information. While the luminance and chrominance are combined
to create a TV display, the two signals are treated differently. TV works
sort of like a coloring book. The luminance draws the outlines (defining darks and lights) and then the color is applied.
You can see the luminance portion of the signal on a TV monitor by turning
the color (chrominance) all the way down.
Most of the important information is in the luminance portion of the signal.
Sampling (frequency) & Quantizing (bit-depth):
When we digitize video, we sample it. We take a digital snapshot and convert it into 0s and 1s. This is true for audio, video or a combination of the two.
- Sample rate is how many times per second we take a sample (snapshot).
- Quantizing is how many variations (levels) each sample can have.
Here are a few different ways audio can be digitized:
- 8-bit at 22 kHz (low end, computer alert sounds)
- 16-bit at 44.1 kHz (DAT, CD, MP3 at high quality)
- 16-bit at 48 kHz (DV, DAT/MiniDisc)
- 24-bit at 48 kHz (high-end DATs & workstations)
The higher (faster) the sampling rate, the better the quality. The larger
or deeper the bit depth, the better the quality is.
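As a quick worked example of what those numbers mean for storage (just an illustration):

    # data rate for 16-bit, 48 kHz stereo audio (the DV-style format above)
    sample_rate = 48000    # samples per second
    bit_depth = 16         # bits per sample
    channels = 2

    bytes_per_second = sample_rate * bit_depth // 8 * channels
    print(bytes_per_second)           # 192000 bytes per second
    print(bytes_per_second / 1024)    # 187.5 KB per second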
The digitizing process:
1. Capture the original signal from an analog source (tape or live)
2. Sample the input signal. This approximates the analog signal in the digital domain.
3. Quantize the signal. This gives each sample a numeric value.
4. Compress the signal. The overall amount of data is reduced to a more manageable size.
5. Record the signal. Once digitized, the signal may be recorded on
a tape, RAM, optical disk or computer disk.
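To make steps 2 and 3 concrete, here's a tiny Python sketch (just an illustration, with an absurdly low sample rate) that samples one cycle of a sine wave and quantizes each sample into an 8-bit number:

    # toy sketch of sampling (step 2) and quantizing (step 3)
    import math

    sample_rate = 8                 # samples per second (far too low for real audio)
    bit_depth = 8                   # 8 bits per sample -> 256 possible levels
    levels = 2 ** bit_depth

    samples = [math.sin(2 * math.pi * t / sample_rate) for t in range(sample_rate)]   # step 2: sample
    quantized = [round((s + 1) / 2 * (levels - 1)) for s in samples]                   # step 3: quantize
    print(quantized)                # each sample is now just a number from 0 to 255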
Color difference signals are one way to break down the information in a video signal. (Other ways include composite video, Y/C or S-Video, and RGB.) The color difference signals can be expressed as R-Y, B-Y or Cr, Cb or sometimes U, V. These color difference signals are used in the digitizing process.
What the heck is a color difference signal?
Color difference signals: TV uses an additive color system based on RGB as the primary colors. Mix red, green and blue together and you get white, right? Well, if the RGB data were stored as three separate signals (plus sync) it would take a lot of room to store it all. Fortunately, some great technical minds figured out a way to pack this information into a smaller box (figuratively speaking) by devising a way to convert the RGB information into two new video signals that take up less room, with minimal loss in perceived picture quality. These are the color difference signals, typically represented by UV or Cr Cb. So when you see YUV, it is referring to Y (luminance) and UV (the two color difference signals).
Combining the RGB signals according to the original NTSC broadcast
system standards creates a monochrome luminance signal (Y). So you
can basically pull out the blue and red signals and subtract them from
the total luminance to get the green info.
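The standard NTSC weighting used to build that luminance signal is Y = 0.299R + 0.587G + 0.114B (a widely published formula, not spelled out above). Here's a small Python sketch showing Y and the two color difference signals:

    # luminance (Y) and color difference signals from RGB, using the NTSC weights
    def y_and_color_difference(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b    # monochrome luminance signal
        return y, r - y, b - y                    # Y, R-Y, B-Y

    print(y_and_color_difference(1.0, 1.0, 1.0))  # white: Y is about 1.0, both differences are about 0
    print(y_and_color_difference(1.0, 0.0, 0.0))  # pure red: Y = 0.299, R-Y = 0.701, B-Y = -0.299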
4:4:4 vs. 4:2:2 vs. 4:1:1
Today’s digital technology provides us with several ways to digitize
video, mainly 4:2:2 and 4:1:1. What do they refer to?
Quite simply, they refer to the ratio of the number of luminance (Y) samples to the number of samples of each of the two color difference signals.
In the video signal, the most important component is the luminance as
it gives us all the detail absolutely necessary in the picture. As a
result, we must sample the luminance at a very high rate: 13.5 MHz (13.5 million times per second).
Given that the luminance portion is sampled at 13.5 MHz, let's apply the aforementioned ratios: 4:2:2 and 4:1:1. In a 4:1:1 component digital sample, the color information is sampled at 1/4 the luminance rate: 3.375 MHz. In a 4:2:2 system, the color is sampled at 1/2 the rate of the luminance: 6.75 MHz.
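Spelled out as a quick Python check (just an illustration of the arithmetic above):

    # chroma sample rates derived from the 13.5 MHz luminance rate
    luma_rate_mhz = 13.5

    for name, luma, chroma in [("4:4:4", 4, 4), ("4:2:2", 4, 2), ("4:1:1", 4, 1)]:
        print(name, "-> color sampled at", luma_rate_mhz * chroma / luma, "MHz")
    # 4:4:4 -> 13.5 MHz, 4:2:2 -> 6.75 MHz, 4:1:1 -> 3.375 MHz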
What about 4:2:0?
4:2:0 sampling is used in MPEG-2. The two color difference signals are sampled on alternating lines.
What does this mean?
Quite simply, the color resolution of a 4:2:2 component digital signal is twice that of a 4:1:1 signal and, from the standpoint of color bandwidth, is twice that of today's popular component analog formats. This means better color performance, particularly in areas such as special effects, chromakeying, alpha keying (transparencies) and computer-generated graphics.
Quantizing / Bit Depth
The next step in the digitizing process is quantizing. In digital video we do not record the picture as we do in the analog world; rather, we record a series of numbers which give us a reference as to what the initial analog video signal was.
- 8-bit: DV, DVCAM, DVCPRO, Digital S
- 10-bit: Digital Betacam, D1
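More bits per sample means more possible levels for each sample, which is what the extra quality buys you (quick arithmetic, just an illustration):

    # number of levels available per sample at common video bit depths
    for bits in (8, 10):
        print(bits, "bit ->", 2 ** bits, "levels")   # 8-bit -> 256 levels, 10-bit -> 1024 levels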
Here are a few
general production tips:
- Establish your location early in the scene. Viewers
need to know where we are (who and when too).
- When shooting B-roll or any footage you plan to drop into a piece, shoot continuity style. Shoot every action (e.g., someone on the phone) with a WS, a MS, an OTS, and a CU, following the 180-degree rule. This way you can provide meaningful B-roll which edits nicely.
- When using B-roll, use a sequence of 3 or more shots. In other words, don't just drop in one shot; use a sequence of at least 3 shots.
- Always start and end your sequence in black. You can start with a
fade up from black and end your sequence with a fade down to black.
(Some audio should accompany the video as it does this. Don't cut your
audio off at the end - fade it out or backtime it so the song ends.)
- Don't use hand-held shots unless you have a good reason to. The worst thing you can do is hand-held zooms; they look terrible. If you are shooting hand-held, keep the lens wide and move closer to your subject; don't zoom.
- TV is a close-up medium. Shoot close-ups! Avoid relying on excessive long and medium shots to tell your story. While they are useful for establishing scenes, embrace medium close-ups and close-ups.
- Lenses: Occasionally check to see if the lens has water drops on it, and check your back focus. You can see both through the viewfinder. Never touch the lens with anything other than special lens cleaning paper.
Back to Jim Krause's T351 Home Page