Avid Editing Guide (User Manual) & Camera Formats

Click here to download the Avid 6.5 Editing Guide (User Manual)

Camera Controls & Settings

  1. Before You Pick Up the Camera

Recording Format

  1. Resolution
  2. Frame Rate
  3. Progressive & Interlaced
  4. File Format

Click here to download a list of SEDT Camera formats

Click here to download a (complicated) list of all video formats supported by Avid Media Composer


Know Your Panasonic Codecs! DVCPRO HD, AVC Intra & AVC Ultra

To understand these codecs, a brief history helps:

DVCPRO HD: The Panasonic DVCPRO codec was developed as a standard-definition competitor to Sony DVCAM. When Sony stepped into high definition it did so with its HDCAM codec, but Panasonic decided not to rebrand, so it called its high-definition codec DVCPRO HD.

AVC-Intra: Panasonic developed the AVC-Intra codec for two main reasons. First, it wanted a codec branded specifically for tapeless workflows: AVC-Intra can only be recorded to P2 cards, whereas DVCPRO HD can be recorded to P2 cards as well as tape. Second, it wanted a new codec built on the H.264/MPEG-4 AVC compression standard.

AVC-Ultra: Panasonic made a major display of AVC-Ultra at the 2012 NAB Show. Like many in the video production world, I find that the thought of memorizing one more codec makes my stomach churn. But with the rise of cinema-quality video, Panasonic developed AVC-Ultra to handle 2K and 4K shooting, and to create a codec that could handle more information with better compression.

Now that the brief history of the modern Panasonic codec is out of the way, you, like many corporate producers and freelance directors of photography, may ask, “What’s the difference?” Before tackling that question, we need to cover the terminology that matters when talking about codecs.

Resolution: This is the most common term thrown around the video world. Pop terms such as 1080p and 720p are part of the vocabulary of retail salespeople and highly experienced DPs alike. When looking at codecs, it’s extremely important to know the FULL resolution numbers rather than just the shorthand 720p, 1080i, 1080p, etc., because 720p can mean either 960 x 720 or 1280 x 720, and more pixels per line means better resolution.
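To see why the full numbers matter, a quick sketch comparing total pixel counts (the rasters below are the standard ones; DVCPRO HD’s 720p raster is 960 x 720, while full-raster 720p is 1280 x 720):

```python
# Compare total pixel counts for common HD rasters.
formats = {
    "DVCPRO HD 720p": (960, 720),     # horizontally subsampled raster
    "Full-raster 720p": (1280, 720),
    "Full-raster 1080p": (1920, 1080),
}

for name, (width, height) in formats.items():
    print(f"{name}: {width} x {height} = {width * height:,} pixels")
```

Two formats can both be sold as “720p” while one records roughly 25% fewer pixels per frame.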

Chroma Subsampling: Here is where it can get a little complicated, so I’ll simplify as much as possible. Chroma subsampling is written as a ratio that looks like 4:4:4: the first number relates to luminance (brightness), and the final two numbers to chrominance (the color). The human eye detects brightness better than it detects color, so your luminance will always stay at four, but you can subsample the color for better compression. This ratio can go from 4:4:4, which is digital cinema quality, down to 4:1:1, which is DV quality.

Megabits per Second (Mbps): The data rate of a codec, i.e., how much information it records each second. Higher bitrates generally mean higher image quality and larger files.
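Bitrate translates directly into storage. A minimal sketch, assuming decimal units (1 GB = 8000 megabits) and ignoring container overhead:

```python
def storage_gb(mbps: float, minutes: float) -> float:
    """Approximate recording size: bitrate (Mbps) x duration -> gigabytes."""
    megabits = mbps * minutes * 60   # total megabits recorded
    return megabits / 8000           # 8 bits per byte, 1000 MB per GB

# One hour at DVCPRO HD's 100 Mbps vs. a 50 Mbps AVC-Intra class rate.
print(f"100 Mbps for 60 min: {storage_gb(100, 60):.1f} GB")  # 45.0 GB
print(f" 50 Mbps for 60 min: {storage_gb(50, 60):.1f} GB")   # 22.5 GB
```

This is why a more efficient codec at half the bitrate can matter so much on a card-based shoot.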

Now to the question at hand, what’s the difference in the codecs?

To the naked eye the differences seem minute, but in the video codec world, the more each aspect of a codec improves on its predecessor, the more versatile that codec becomes for video crews. For Panasonic, DVCPRO HD has been a workhorse for many years, but it doesn’t give you true 1080p and it has become a relatively rigid codec. AVC-Intra gets you true 1080p without consuming more Mbps, so you get a better image at a similar size, but until recently it was available only in ENG cameras. Panasonic has now released prosumer AVC-Intra cameras like the Panasonic HPX-250, which gives you the size of its prosumer DVCPRO HD cameras, like the Panasonic HVX-200, with the updated, higher-quality codec. AVC-Ultra’s biggest selling point is its versatility: at digital-cinema-quality 4K resolution it competes with the recent wealth of 4K cameras, and it can also give you 25 Mbps with 4:2:2 sampling when you want good resolution with low data consumption.

When someone mentions Panasonic codecs, don’t feel like the words are foreign; know what you need and you will get the best image possible.

Chroma (color) Sub Sampling (4:2:0 vs 4:1:1 vs 4:2:2 vs 4:4:4)

For a nice chart of different formats and their chroma subsampling schemes, Click here to see the list maintained by Wikipedia

Written Version

Alex, can you explain the difference between 4:4:4, 4:2:2 and 4:2:0?

What we’re talking about here is called Chroma Subsampling, and if your eyes glaze over when you hear that, you’re not alone. There’s a LOT of confusion about this topic, and most of it stems from the fact that there have been two different approaches to chroma subsampling, and both of them are written out the same way: 4:x:x. However, I’m only going to cover the more modern and prevalent system.

Let’s start at the beginning. An electronic image is composed of little squares called pixels. Each pixel can have luminance – luma – which tells the pixel how bright or dark to be, and chrominance – chroma – which tells the pixel what color to be. If you don’t have any chroma data, your image will be grayscale – black and white. But if you don’t have any luma data, you won’t have any image at all.

Now, to have a reasonably good picture, every pixel needs to have its own luma data. But some clever engineers figured out a long time ago that every pixel does NOT need to have its own chroma data. You can save a lot of space by forcing chunks of pixels to share the same chroma sample – basically, to be the same color. And that process is called chroma subsampling. Now, let’s look at how this is written out.

The first number, “J,” tells us how many pixels wide the reference block for our sampling pattern is going to be. Sometimes it’s eight or three, but usually it’s four pixels wide.

The second number, “a,” tells us how many pixels in the top row get chroma samples.

And the third number, “b,” tells us how many pixels in the bottom row get chroma samples.
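The three numbers above can be turned into a simple calculation. A sketch of how much chroma data each scheme keeps, relative to 4:4:4 (the function name is my own, not standard terminology):

```python
def chroma_data_fraction(j: int, a: int, b: int) -> float:
    """Fraction of full (4:4:4) chroma data kept by a J:a:b scheme.

    A J-wide, 2-row reference block holds 2*J pixels; full chroma
    would take 2*J samples per channel, but the scheme keeps only
    a (top row) + b (bottom row).
    """
    return (a + b) / (2 * j)

for j, a, b in [(4, 4, 4), (4, 2, 2), (4, 2, 0), (4, 1, 1)]:
    print(f"{j}:{a}:{b} keeps {chroma_data_fraction(j, a, b):.0%} of the chroma data")
```

Note that 4:2:0 and 4:1:1 keep the same amount of chroma data (25%); they just distribute the shared samples differently, 4:2:0 vertically and 4:1:1 horizontally.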

As you can see here, if every pixel in the 4×2 grid gets a chroma sample, there’s actually no subsampling going on, and the scheme is 4:4:4. This is what’s used in high end HD cameras like the Panavision Genesis and Sony F35.

Now let’s take a look at 4:2:2. Every two pixels on the top row share a chroma sample, and every two pixels on the bottom row share a chroma sample.

We’ve definitely lost a lot of detail, but we can still get an idea of the original image. This is the subsampling used in Panasonic cameras that record in DVCPRO HD, and Sony cameras that record in XDCAM HD422, as well as in editing codecs like Apple ProRes 422.

Now let’s take another step down and look at 4:2:0.

Our “a” number is still 2, so every two pixels on the top row still share a chroma sample. But the “b” number is zero, which means that the pixels in the bottom row don’t get anything of their own. So, they have to share with whatever’s above them.

You can see how much information is lost here. This is the subsampling used in DVCam, HDV, Apple Intermediate Codec, and most flavors of MPEG, including the ones generated by Canon DSLRs.

Looking at this diagram, you can see one of the main reasons why formats with heavy chroma subsampling give you blocky artifacts. What you’re seeing is actually chunks of pixels that are sharing chroma data and being forced to be the same color, to save space. And, of course, this isn’t even taking into account the other aspects of image compression, which can make this blockiness even worse.
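The sharing described above can be simulated on a toy strip of pixels. This is a simplified sketch of 4:2:0, using one chroma value per pixel (real video carries two chroma channels, Cb and Cr) and averaging each 2x2 block into one shared sample:

```python
def subsample_420(top_row, bottom_row):
    """Average each 2x2 block of chroma values into one shared sample,
    then assign it to all four pixels -- the 'forced to be the same
    color' effect that produces blocky edges."""
    out_top, out_bottom = [], []
    for i in range(0, len(top_row), 2):
        block = top_row[i:i+2] + bottom_row[i:i+2]
        shared = sum(block) // len(block)
        out_top += [shared, shared]
        out_bottom += [shared, shared]
    return out_top, out_bottom

# Eight pixels with eight distinct chroma values...
top, bottom = [10, 20, 30, 40], [50, 60, 70, 80]
# ...collapse to just two distinct values after 4:2:0 subsampling.
print(subsample_420(top, bottom))  # ([35, 35, 55, 55], [35, 35, 55, 55])
```

Smooth chroma gradients survive this reasonably well; hard color edges, like the fringe of a greenscreen, do not.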

This really becomes an issue when you talk about pulling a chromakey. Think about trying to pull the green pixels out of a shot of smoke, or wispy hair. It would be fairly easy if each pixel had its own chroma sample.

But it gets much harder when pixels are sharing samples, because the green pixels aren’t necessarily at the exact edge anymore. This is why you get those jagged lines around the edges of chromakeys with subsampled footage.

Now, there are a lot of other factors that figure into the quality of an image, and chroma subsampling is only one of them. I’ll address some of those other issues in future tutorials.