I know that the idea of proxy recordings and how to create them in editing software such as Adobe Premiere and DaVinci Resolve has been talked about a hundred times, but did you know you can generate them in-camera for even more time savings?
There’s a feature, well known in the film and broadcast world for some time now, that more and more camera manufacturers are adding to modern mirrorless cameras, and it shouldn’t be ignored. Used correctly, it can save you a great deal of time in your workflow, and it might even save you from feeling like you need a whole new computer just because your current one struggles with the files from your newer mirrorless camera.
What is a Proxy?
Proxies are versions of video clips generated at a lower resolution, or in a format that is otherwise easier for a computer to process, allowing editors to work with video files that might not play back smoothly otherwise.
As Sony explains, proxy files require far fewer system resources to view and mark up in an editing program’s timeline, which makes them convenient for smooth review, and most popular video editors can also generate proxies themselves when importing the original files.
Why Should You Use Proxies Anyway?
Let me quickly explain the problem to those who might not be familiar.
If your computer is having a hard time playing back or “scrubbing” video footage from your mirrorless camera, the problem likely isn’t your computer, it’s the codec. Most modern mirrorless cameras use codecs like H.264 and H.265. While these codecs allow you to shoot extremely high-quality footage, they have a dark side: compression.
H.265, especially, is a very compressed codec. The more compressed a codec is, the more work your computer must do to decompress it, especially when the footage is 4K or higher. That’s part of the reason professionals love codecs like the different flavors of ProRes and Blackmagic RAW: those codecs produce very large files but are much less compressed. The difference is so drastic, in fact, that it would be easier for your NLE (non-linear editing software) to play back 12K Blackmagic RAW footage than to decode 4K H.265 footage from a Canon R5 or Sony Alpha 1.
Mirrorless camera manufacturers use these compressed codecs primarily for two reasons: storage space and processing power/heat dissipation. We’ve actually seen in recent years what happens when camera manufacturers push this to the limit: the Canon R5 is a prime example, plagued by well-documented overheating issues at release. Canon redesigned the R5 C with better cooling to counteract the problem.
Concerning storage space, if you’re used to H.264 and H.265 file sizes, where bitrates often fall between 50 Mbps and 600 Mbps, then the likes of higher-bitrate ProRes and RAW can be quite shocking. The 8K RAW of the Canon R5 runs at a staggering 2,600 Mbps. That means you’d fill up a 128GB card in about six and a half minutes.
For all of the wedding filmmakers out there who like to record an entire ceremony, that translates to over 1TB of footage per hour, per camera. Run multiple cameras and capture the reception speeches too, and you could end up with over 10TB of footage per wedding.
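If you want to sanity-check these numbers for your own camera and cards, the math is simple. Here’s a minimal sketch (assuming decimal gigabytes and terabytes, as card manufacturers advertise them):

```python
# Convert a camera's video bitrate (in megabits per second) into
# practical storage figures. Decimal GB/TB assumed (1 GB = 10^9 bytes),
# which is how memory cards are marketed.

def card_fill_minutes(bitrate_mbps: float, card_gb: float) -> float:
    """Minutes of footage that fit on a card at the given bitrate."""
    bytes_per_second = bitrate_mbps * 1_000_000 / 8
    return card_gb * 1_000_000_000 / bytes_per_second / 60

def tb_per_hour(bitrate_mbps: float) -> float:
    """Terabytes consumed per hour of recording at the given bitrate."""
    return bitrate_mbps * 1_000_000 / 8 * 3600 / 1_000_000_000_000

# Canon R5 8K RAW at roughly 2,600 Mbps:
print(card_fill_minutes(2600, 128))  # ~6.6 minutes on a 128GB card
print(tb_per_hour(2600))             # ~1.17 TB per hour, per camera
```

Plug in your own camera’s bitrate from its spec sheet to see why compressed codecs exist in the first place.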
Numbers like that are perfectly acceptable, and even expected, on larger commercial and film sets, especially when dealing with motion tracking or ultra slow motion workflows (like the Phantom Flex cameras), but for indie filmmakers, content creators, documentary shooters, and even many small to medium-sized corporate and commercial productions, dealing with file sizes that large just isn’t practical.
When you factor those things in, H.264 and H.265 make a great deal of sense for mirrorless cameras, but that doesn’t mean they’re easy to work with in post-production. All the space you save by not shooting ProRes or RAW, you pay for in time while editing.
NLE providers like Adobe have made progress in recent years and can decode H.265 more quickly, and even Apple has made moves to handle H.264 and H.265 more efficiently on its computers, but that doesn’t mean the experience is anywhere near perfect.
That loops us back around to proxies and why many editors create them in their software of choice. Unfortunately, even that can take a lot of time, and the proxies most people create are too low in resolution to tell whether a shot is properly in focus.
In an industry where time is often money, the time lost creating proxies or waiting for footage to scrub in a timeline can mean a decent amount of money lost.
That’s why the ability to do in-camera proxy recording is so important. Unfortunately, at the moment, not every mirrorless camera offers in-camera proxy generation, and the ones that do often have frame rate or resolution limitations. Nikon offers in-camera proxies with the newly released version 2.0 firmware for the Z9, but only when recording RAW. Canon is a similar story. Fujifilm and Panasonic currently don’t offer it at all.
But Sony does, and if you shoot video with your Sony camera, you should be using the feature.
Generating Proxy Footage In-Camera
Sony has by far my favorite option for in-camera proxy recording on the market at the time of publication. The company offers 10-bit and 8-bit proxy options at up to 1080p resolution. Sony’s proxies are still H.264 and H.265, but their file sizes are remarkably small (between 6 Mbps and 16 Mbps depending on what configuration you choose), and since they’re either 720p or 1080p, they are much easier for your computer to process.
The proxies coming out of the Alpha 7S III, Alpha 7 IV, and Alpha 1 have completely changed my workflow. Being 1080p 10-bit, the proxies have a high enough resolution and color depth to check focus and even do some quick color grading. In truth, there have been a few projects (destined primarily for social media) with such a quick turnaround that I not only edited the entire project with the proxies but actually exported the proxies themselves in the final 1080p deliverable. I’m not encouraging you to do this; I only mention it to make a case for the quality of the proxies themselves.
When working with remote editors, these in-camera proxies allow me to quickly and efficiently send them the footage right after the shoot so they can start working immediately.
No more waiting overnight to upload footage just to have them wait to download it again. No more having to create proxies myself and then send them those while the other footage is still uploading.
As a Director of Photography, I’ve introduced these in-camera proxies to other production companies, directors, and agencies, and since then, the majority have begun to transition their workflows to this in-camera proxy system when applicable.
The consensus among them is unanimous: it’s a game-changer. Some of them have literally cut their deliverable timelines by several days because of how much time they save. Others went from being on the verge of spending money on difficult-to-find graphics cards to feeling like they can easily edit on the machines they already have.
Companies Need to Invest More in this Outstanding Feature
Unfortunately, even on Sony cameras, the experience is still not perfect: on cameras like the Alpha 1, Sony doesn’t support proxy generation in the 8K 24p or 4K 120p modes, arguably the video formats that need in-camera proxies the most.
In addition, Sony appends a combination of three letters and numbers to the end of each proxy file name to signify that it’s a proxy, which makes sense from an engineering perspective but not a real-world usability one. For example, if the original 4K file is “20220328_C4284,” the proxy file (which resides in a separate folder called “Sub”) is named “20220328_C4284S03.” Your mileage may vary, but depending on your NLE of choice and its version, the software may not allow you to link your proxies due to this naming discrepancy. To get around this, you’ll have to use something like the Bulk Rename Utility software or rely on the macOS bulk rename feature in Finder. It’s an extra step that I wish could have been avoided.
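If you’d rather script the rename than do it by hand, a few lines of Python can strip the suffix from every file in the “Sub” folder. This is a minimal sketch that assumes the suffix always matches the pattern “S” plus two digits (e.g. “S03”) right before the file extension; your camera may append a different variation, so adjust the pattern accordingly:

```python
import re
from pathlib import Path

# Assumed suffix pattern: "S" + two digits just before the extension,
# e.g. "20220328_C4284S03.MP4" -> "20220328_C4284.MP4".
PROXY_SUFFIX = re.compile(r"S\d{2}(?=\.[^.]+$)")

def strip_proxy_suffix(name: str) -> str:
    """Remove the proxy suffix from a file name, if present."""
    return PROXY_SUFFIX.sub("", name)

def rename_proxies(sub_folder: str) -> None:
    """Rename every proxy file in the 'Sub' folder to match its original."""
    for f in Path(sub_folder).iterdir():
        new_name = strip_proxy_suffix(f.name)
        if new_name != f.name:
            f.rename(f.with_name(new_name))

# Example: rename_proxies("/Volumes/CARD/Sub")
```

Run it on a copy of the folder first, and keep the proxies separate from the originals so the identically named files don’t collide.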
But these issues aside, in-camera proxies have made such an impact on my workflow that I now have a hard time recommending any camera that doesn’t offer them. This is not because cameras that don’t offer in-camera proxies can’t produce great results, but because I believe the workflow to achieve those results to be vitally important to the creative process as a whole.
If I can produce more content for my clients, or get the final deliverable to them sooner, simply because I or my editors can edit more effectively and efficiently, then I would be hard-pressed to go back to a camera that took away that option.
To anyone looking to buy into a camera system for video, I highly recommend checking to see if that camera offers in-camera proxies.
To camera manufacturers, I appreciate the effort that you are putting in to increase the quality of mirrorless cameras. Adding internal RAW options is great. Having multiple levels of codec and bitrate options is great. But please don’t forget about the importance of the overall workflow.
It’s one of the bigger distinctions between “video” cameras and “cinema” cameras in 2022: nearly every camera that comes out these days is capable of producing “cinematic” content, but the list of cameras capable of cinema-level workflows is much smaller. I understand that “workflow” isn’t a great marketing pitch, but it is something that makes someone want to stay loyal to a brand, and I really hope more emphasis is placed on it going forward.
Image credits: Header image via Sony.
Author: Michael Marrah