AVCHD Interlacing and Frame Rates Revisited (ad nauseam)
Mizamook
14 Jan 2014 08:11
All this talk of 4K is great, but some of us are still stuck with good old HD. And in-camera compression. Make the best of it, eh?
I was looking through some older stuff of mine shot a couple years ago, and I'm kinda mad at myself. Then I look at other stuff, and I give myself credit for sticking it out and learning things. The regret I've felt at not fully understanding settings or technique, or not having the gear to match my vision, or vice versa - it can all be chalked up to learning. OK... I don't need to keep kicking myself, but I do need to learn to recognize when to step back and question what I think I know, so I don't miss more opportunities.
This is one of those times.
Interlacing. AVCHD. Frame Rates
Questions and Test Results
Abstract: I've got two cams that shoot AVCHD, which is, for all intents and purposes, a pretty awesome codec. If the bit rate were doubled it would be scary good, and it doesn't eat hard drives. But it is what it is. On the RX100m2 and the FS700, it comes in flavors: 60p, 60i, 24p (FS700 only: 30p).
Concept: Because 60p is rated at 28 Mb/s, and 60i, 30p, and 24p are at 24 Mb/s, the amount of actual data recorded per frame is greater at the lower frame rates, so, theoretically (motion blur and reality-interpretation arguments notwithstanding), quality should be better at those lower frame rates.
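The per-frame arithmetic behind that assumption works out like this - a quick sketch using the nominal AVCHD rates quoted above (real encoders allocate bits adaptively, so these are only ceilings; 60i is counted as 30 full frames per second):

```python
# Nominal AVCHD rates quoted above: (megabits per second, frames per second)
modes = {"60p": (28, 60), "60i": (24, 30), "30p": (24, 30), "24p": (24, 24)}

for name, (mbps, fps) in modes.items():
    bits_per_frame = mbps / fps  # megabits available per frame, on average
    print(f"{name}: {mbps} Mb/s / {fps} fps = {bits_per_frame:.2f} Mb per frame")
```

So 24p has roughly twice the bit budget per frame that 60p does - which is exactly why the test results below are surprising.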
However, every test I've done - precisely shot frames compared at 400% crop, for example - shows 60p yielding the same or better results, plus it always leaves the option of conforming to lower frame rates for 50% smooth slow motion without flicker or judder.
Why?
All my tests with interlaced footage yielded a softer image. Less noise, too, but only because the noise was lost in the softness: I could make interlaced clips look as noisy as progressive, but only by letting the interlace artifacts shine through. This held with motion and without. I have promoted progressive as superior, but I'm willing to question things. I've hated interlacing ever since I learned of it... it really has no place in my world. You can interlace progressive (for broadcast, ostensibly), but you can't properly deinterlace without losing half your temporal resolution - am I wrong?
(edit: I am fully aware that 60i is basically equivalent to 30p)
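To make that temporal-resolution trade-off concrete, here's a toy sketch of the two basic deinterlace strategies, with plain labels standing in for real field data (the field counts are the point, not the image processing):

```python
def weave(fields):
    """Pair consecutive fields into frames: 60 fields/s -> 30 frames/s.
    Keeps full vertical detail, but halves temporal resolution
    (and combs on motion, since the two fields are 1/60 s apart)."""
    return list(zip(fields[0::2], fields[1::2]))

def bob(fields):
    """Treat each field as its own frame: 60 fields/s -> 60 frames/s.
    Keeps temporal resolution, but each frame carries only half the
    vertical detail (the missing lines must be interpolated)."""
    return list(fields)

fields = [f"field{i}" for i in range(60)]  # one second of 60i
print(len(weave(fields)))  # 30 frames out
print(len(bob(fields)))    # 60 frames out
```

Either way something is lost: weave gives up time, bob gives up vertical resolution - which is the trade the post is describing.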
Now, this is NOT an opportunity for enthusiasts of 24/25 fps to chime in... that's a different argument. The Lord of the Rings/Hobbit films et al. were not unsatisfying, in my opinion, because of the frame rate, but because of the direction... but I digress. All I know is that I've never seen an improvement in image quality per frame in any test I've done, and I like being able to track fast-moving objects without worrying about judder. I like compelling content at almost any frame rate. That's a different argument. I'm talking absolute, measurable image quality.
It makes sense that 60i shot at 24 Mb/s "should" be better than 60p shot at 28 Mb/s. Why is it the other way around? Sharpness and compression artifacts seem either equal or worse in interlaced (at high crops).
Could be my tests need revamping. But I came back from what I thought was a cool shoot yesterday - lots of detail, movement, and disparate light - and I was disappointed in the results. I want to know why, so I can avoid repeating that experience.
Discussion, please!
danielschweinert
14 Jan 2014 13:17
I ran the same tests two years ago for my new camera and found that 50p was far superior to 25p. I can only suppose that in 50p mode the effective data rate is much higher than in 25p mode.
I've recorded two clips with same settings and same pattern. Results:
10sec clip with 50p is approx. 33MB
10sec clip with 25p is approx. 7MB
Update:
Update: I've also found that the sensor natively works at 50p! But if you set the camera to 25p (or interlaced), it uses in-camera processing to derive that stream from the 50p readout. This results in a soft image. I checked at 400%, and the 25p stream looks soft, without details, while the native 50p stream is full of detail!
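Converting those file sizes to effective bit rates makes the gap explicit (a sketch assuming exactly 10-second clips and 8 bits per byte; container overhead and audio are ignored):

```python
# Approximate file sizes from the 10-second test clips above, in megabytes
clips_mb = {"50p": 33, "25p": 7}
duration_s = 10

for mode, size_mb in clips_mb.items():
    mbps = size_mb * 8 / duration_s  # megabits per second, on average
    print(f"{mode}: {size_mb} MB over {duration_s} s ≈ {mbps:.1f} Mb/s")
```

So the 50p clip lands near its nominal cap (≈26.4 Mb/s), while the 25p clip comes in far below it (≈5.6 Mb/s) - consistent with the "effective data rate" supposition.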
Normstock
14 Jan 2014 13:31
It goes further than just progressive vs. interlaced: there's also the quality of the sensor and how the signal is processed in camera - whether it's line-skipping or pulling the whole frame of data off the sensor. Back when I shot Nikon stills cameras, Sony made noisier sensors; then Canon came out with the 5D Mk II, which had an almost noiseless sensor. Then look at the Canon 7D, which had poor noise off the sensor. I like Panasonic and the 60p they put in most of their cameras, where you get two full frames for every interlaced frame instead of half a frame.
Interesting discussion.
danielschweinert
14 Jan 2014 14:08
@Normstock
I have a Panasonic 50p camera, and if it's set to 25p it does indeed do "some sort of" line skipping rather than scaling. "Usually" you won't notice it in normal use, only when zoomed to 400%. It seems they use a slower, cheaper CPU, so they have to skip data instead of doing high-quality downscaling.
My conclusion: always shoot in 50p (if the camera is natively 50p) and use software like After Effects to convert it to high-quality 25p or whatever else you need.
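As a toy illustration of the simplest possible conform - plain frame decimation (NLEs like After Effects offer smarter options such as frame blending, but the frame accounting is the same):

```python
def conform(frames, src_fps=50, dst_fps=25):
    """Conform by dropping frames: keep every (src_fps // dst_fps)-th frame.

    Assumes src_fps is an integer multiple of dst_fps (e.g. 50 -> 25, 60 -> 30).
    """
    assert src_fps % dst_fps == 0, "only integer-ratio conforms supported here"
    return frames[:: src_fps // dst_fps]

one_second_50p = list(range(50))     # stand-ins for 50 captured frames
print(len(conform(one_second_50p)))  # 25 frames out
```

Every kept frame is an untouched native 50p frame, which is why this route avoids the in-camera softness described above. Interpreting the same 50p frames at 25 fps without dropping any gives 50% slow motion instead.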
Normstock
14 Jan 2014 14:11
All my Panasonic cameras are set to shoot 60p (Canada) and the images are very sharp; it's also the highest data rate.
Mizamook
14 Jan 2014 22:09
Thanks for your input, gentlemen. I was wondering if I was missing something or doing something wrong - your comments corroborate my findings.
I'll stick with 60p. With the RX100m2 the same holds true: softness at higher crops in both interlaced and lower-frame-rate progressive. Kinda sad, considering the bit rate per frame is higher, which, at least in the case of progressive, "should" yield sharper images - but it doesn't.
Glad I don't have to go back and reshoot all my footage done at 60p!
OverheadProductions
16 Jan 2014 01:39
Avoiding interlacing at all costs, I use Panasonic at 50p. Where I'm still not quite sorted in my head is the conversion from the AVCHD H.264 codec to QuickTime using the Photo JPEG codec, or other options from QuickTime. Current conventions suggest it's the choice of consumers. As I'm sure everyone realizes, you end up with a significantly larger clip than the original. However, this doesn't reduce or increase the quality of the clip - that depends on the settings you use during export. A clip recorded in H.264 and converted to QuickTime can't be improved: rubbish in, rubbish out; excellent in, excellent out.
danielschweinert
16 Jan 2014 06:50
I've worked for a major broadcast company, and there they avoided H.264 material at all costs! So I encode all my clips to ProRes 422 or Photo JPEG. I know the file sizes are bigger, but they're also more editing-friendly.
Mizamook
16 Jan 2014 07:19
Clarification? They avoided H.264 material, or H.264-sourced material (which is what AVCHD is, right?)
Also, speaking of ProRes: I've always found that ProRes (any flavor) raises luma values across the board. I just saw this again while messing with the transcoding program 5dtoRGB. Nothing I did changed this. Similar transcodes to DNxHD don't exhibit this disturbing phenomenon.
danielschweinert
16 Jan 2014 12:33
They avoided H.264 material. I saw it myself: with H.264 you get hiccups on the timeline and the application is not very stable. Imagine hundreds of stacked video layers. These are just examples from Google Images to show what I mean - literally hundreds of video layers stacked. It looked monstrous.
http://blog.georgiew.de/wp-content/uploads/2012/07/Screen_Luca_Schnitt-653x408.jpg
https://s3.amazonaws.com/pbblogassets/uploads/2013/08/Full-Timeline-1024x4131.jpg
Only then will you see that an editing-friendly codec is a must. Everything was transcoded to an editing-friendly codec. Of course, natively broadcast-certified footage (from the Canon XF305, etc.) was favoured.
If your luma increases, check your output settings (DATA vs. VIDEO levels - 0-255 vs. 16-235).
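A sketch of why a stray levels conversion produces exactly the "raised luma" symptom described above - this uses the standard full-range to video-range mapping; whether a given transcoder actually applies it is the thing to check in its settings:

```python
def full_to_video(y):
    """Map full-range luma (0-255) to video/legal range (16-235)."""
    return round(16 + y * 219 / 255)

def video_to_full(y):
    """Inverse mapping: video range (16-235) back to full range (0-255)."""
    return round((y - 16) * 255 / 219)

# A correct round trip is (nearly) lossless:
print(video_to_full(full_to_video(128)))  # 128

# But applying full_to_video to footage that is ALREADY video-range
# lifts black from 16 to about 30 - luma raised across the board:
print(full_to_video(16))  # 30
```

A double-applied (or single, mismatched) conversion like this shifts every value upward, which would explain seeing it in one transcoder's output and not another's.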