It’s one of the easiest things to test, and yet I rarely see it being done.
I always try to test the actual ND filters I’m using, or might be using, on every new show. The combination of camera, sensor and lens, not to mention shooting circumstances, can mean an ND filter you were happy with previously might not behave the way you expect.
The more I do these kinds of tests, the more I also realise how much VARIATION there can be even from the same manufacturer. Both in colour AND in density. By the time you get to 6 stops, there can often be a 1/2 stop variation.
Typically you also see more variation at the deeper densities, like the 1.8 or 2.1, because that’s where it’s much harder to be consistent.
Anyhow, here is one of my latest ND filter tests, shot on an Arri Alexa Mini ahead of the feature film “Taurus”. You can see there’s a huge difference between these two brands of ND filter.
It’s been a few weeks since Blackmagic Design announced their radical new sensor design packaged in the form of the Ursa Mini Pro 12K camera.
I’m in pre-production on a film shooting in northern Ontario, Canada. In the story, weather plays a big part, and I’ve started gathering shots of weather and sunsets to use in the film.
There are some pretty spectacular sunsets in North Bay, the town I’m based in, and I thought this would be a good example of the spectacular images and colour this sensor is capable of in the natural environment and into low light.
But then I realised it’s also a pretty cool case study of just how fast the BMD 12K RAW workflow can be.
A lot of the comments I’ve seen online have been scoffing at the need for 12K and making comments about needing a massive super computer to work with the files and insane amounts of storage.
Let me blow those misconceptions entirely out of the water.
I spent about 2 hours shooting this one sunset. My 150Wh AB battery was still at 30% when I wrapped.
Shooting 12K at Q3, I shot about 57 mins of footage, and that footage took up about 227GB across two old 128GB CFAST cards that I have.
I copied those files onto a 2TB T5 SSD in about 12 mins, and I started cutting on my 2017 MacBook Pro. After about 3 hours I had this cut, which is about 5 mins long. The shots are shown in the same order they were shot. I was able to cut this before the battery from the shoot finished recharging.
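If you’re curious, those figures imply a surprisingly gentle data rate. Here’s a quick back-of-envelope check in Python, using the numbers from this post (decimal gigabytes assumed):

```python
# Rough rates implied by the shoot: ~227 GB of 12K Q3 footage in 57 minutes,
# then copied to the T5 SSD in about 12 minutes. Decimal GB assumed.
footage_gb = 227
shoot_min = 57
copy_min = 12

record_rate_mb_s = footage_gb * 1000 / (shoot_min * 60)
copy_rate_mb_s = footage_gb * 1000 / (copy_min * 60)

print(f"12K Q3 record rate: ~{record_rate_mb_s:.0f} MB/s")  # ~66 MB/s
print(f"T5 copy rate:       ~{copy_rate_mb_s:.0f} MB/s")    # ~315 MB/s
```

Around 66 MB/s of recording is in the same ballpark as plenty of 4K codecs, which is rather the point.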
If you want to see some spectacular colour, then skip to about 2 mins in.
Here’s another thing. These aren’t graded. This is what came out of camera with “extended video” applied to them in the RAW tab. No other grade.
It took about 1 hour to render the 4K H264 file from Resolve, and then however long my slow internet needed to upload it.
2 hours of shooting, for 57 mins of footage totalling 227GB in 12K Q3, using a single battery.
Edited on a 3 year old MacBook and with the default grade applied.
So yeah. No huge hard drives, no massive computer required. If you can edit 4K now, you can also edit 12K BRAW.
Shot with Zeiss Compact Primes.
So remember, this is cut in the same order it was shot. If 5 mins of sunset isn’t that interesting to you, the sun sets at about 2 mins and the colour starts to get really interesting at about the 4 min mark.
On with the show.
OH. Also, related to the same film but unrelated footage, for anyone looking for some more RAW files, here you go.
One of the amazing developments with this new sensor is the ability to shoot the same full sensor raster or size at different resolutions without having to crop.
Typically when shooting at different resolutions we’d expect that the sensor would have to window or crop.
Here are some examples shot using the in-camera scaling, at 8K and 4K, with some frame rate variations as well.
I also included a 4K Super 16 crop shot at 220 FPS.
Apologies for my grading skills; there’s a little bit of difference shot to shot, but that’s down to my lack of skill in these things more than anything.
Like any new camera, it does take a while to learn how to get the most from the grading of these shots. Remember that this was shot with an early pre-production prototype, using older boards so the noise and colour isn’t final.
I also tried a couple of different looks with the last few sequences. I personally think the 8K 60 FPS material was especially lovely.
I really enjoyed the colour and nuance. I feel like I’m seeing more colour variation in the actors’ faces, the blues and pinks of the late sky, the colours of their clothes and environments.
Let me know your thoughts, ask some questions and enjoy!
You probably know the sketch from Spinal Tap? Where Nigel Tufnel, lead guitarist of Spinal Tap, shows off his specially made Marshall guitar amp where all the dials go to 11?
You may then have had a similar reaction to mine when I heard that Blackmagic Design were planning to make a camera that could record 12K files.
My first reaction was wow.
Quickly followed by the thoughts….
Why do I need 12K? Can I even work with 12K files? Why would I ever want 12K?
After picking my jaw up from the floor and asking a bunch of questions, I’ve started to realise it’s not really such a silly idea at all. It’s all in how you USE those pixels and photosites, and the entire way that BMD have gone about designing this. The meme is out there, but it’s about having better pixels.
I encourage you, reader, to see beyond the headline-grabbing 12K number and take a deeper dive with me. I’ve come to understand the pixel count is just part of a bigger story here. There is so much more to this than that seemingly ridiculous number.
First though, let’s do the numbers.
Yes, it has a sensor that’s 12288 x 6480.
This is a 79.6MP cinema camera.
It can do 8K Blackmagic RAW at up to 110 FPS (soon to be 120) with the same full sensor field of view. Or 4K. No cropping or windowing with resolution or frame rate changes.
Your 18mm lens still looks like an 18mm lens whether you shoot at 12K, 8K or 4K, and independent of the frame rate, which is 60 FPS, 110 (120 soon) and 120 respectively.
It also has a Super 16 window mode at 6K RAW with frame rates up to 120 FPS.
It can also do 4K RAW up to 220 FPS in a Super 16 window, or, as I mentioned, 4K RAW of the whole sensor up to 120 FPS.
Its native ISO is 800.
It’s a Super 35 sized sensor. The sensor is identical in height to the G2 but a little wider to accommodate 17:9 DCI sizes. The sensor is 27.034mm x 14.256mm.
It has an interchangeable mount.
It has internal NDs: clear, 2 stops, 4 stops and 6 stops.
It will record Blackmagic RAW only.
It has dual CFAST cards, dual SD cards or a USB-C connected drive. Some very high data rate modes will require dual card recording (such as above 60 FPS in 8K).
The 12K camera is the result of three years of work on a custom sensor design. Think about that for a moment. BMD threw millions of dollars at developing their own 12K sensor three years ago, from scratch, when you would have struggled to even find an 8K TV.
Somehow they managed to keep this a secret for all that time.
While BMD have highly customised and adapted other sensors previously, this is the first sensor that BMD have designed completely from scratch with their own IP and design. It’s the culmination of a strategy that unifies a few other technologies BMD have also been quietly working on, and I think it shows their ongoing maturation as an imaging company.
This sensor and what it can do wouldn’t be possible without the Blackmagic RAW codec. They go hand in hand. When the sensor development started three years ago, so did the development of Blackmagic RAW. The sensor and codec are highly integrated.
Firstly, let’s consider the 12K sensor itself.
Here’s a radical thought I’ll leave with you for a moment… it’s not a Bayer sensor.
The Bayer filter on a CMOS sensor has driven digital photography and cinematography sensor development for the last 15-20 years. It’s almost ubiquitous.
A CMOS sensor without a CFA or Bayer filter would be black and white. Leica made an exotic M series rangefinder camera without any CFA. The upsides? Way more resolution, far less aliasing and also much more sensitivity, or a better native ISO. The downside? The image is black & white!
So, a CMOS sensor is a grid of photosites or pixels that are only sensitive to light, but have no colour. It’s black and white until you add some coloured filters on top of those photosites.
In a Bayer sensor, what we do is put an array of coloured filters over the photosites, where each grid of 2×2 pixels gets two green filters, a blue filter and a red filter. This is called the Colour Filter Array, or the CFA. Every photosite is now either a green, red or blue pixel, and there are twice as many green photosites as any other colour.
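For the code-minded, the pattern is easy to sketch. This little Python snippet (illustrative only) tiles the 2×2 GRBG pattern across a small patch and confirms the 2:1 green weighting:

```python
from collections import Counter

# A Bayer CFA repeats a 2x2 GRBG tile across the whole sensor.
def bayer_colour(row, col):
    """Colour of the photosite at (row, col) in a GRBG Bayer mosaic."""
    tile = [["G", "R"],
            ["B", "G"]]
    return tile[row % 2][col % 2]

# Over any even-sized patch, green sites outnumber red and blue 2:1.
counts = Counter(bayer_colour(r, c) for r in range(8) for c in range(8))
print(counts)  # Counter({'G': 32, 'R': 16, 'B': 16})
```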
You can vary the intensity of the colour or alter the quality of the filter itself to customise the colour result. Part of the colour science story. Using a deeper dye colour means you get less sensitivity (a lower ISO), for example, so it’s always a bit of a balance between the size of the pixels or photosites themselves and the spectral design of those coloured filters.
Green is the part of the spectrum the human visual system is most sensitive to, and this tends to be why the green channel is considered the most important and gets the extra photosites.
Using maths based on clever algorithms, called a de-Bayer or de-mosaic algorithm, each pixel can be mathematically “averaged”, or interpolated, to have a colour based on the photosite itself AND the mix of photosite colours surrounding it.
Depending on how clever the algorithm is, the maths behind this means we can pretty accurately “guess” or interpolate the colour of each pixel. So each photosite starts as either green, red or blue, and gets transformed into an average of its own value and its nearby neighbours’. It’s not 100% accurate though, and there are some issues that come up.
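As a toy illustration (my own sketch, not any camera’s actual algorithm), here’s the simplest possible version of that interpolation: estimating green at a red or blue site by averaging its green neighbours. In GRBG, a red or blue site’s 4-neighbours are all green.

```python
# Toy bilinear demosaic step. Real demosaic algorithms are far smarter
# (edge-aware, frequency-aware), but the core idea is this averaging.
def green_at(mosaic, r, c):
    """Interpolate the green value at a non-green site (r, c)."""
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    vals = [mosaic[y][x] for y, x in neighbours
            if 0 <= y < len(mosaic) and 0 <= x < len(mosaic[0])]
    return sum(vals) / len(vals)

# A GRBG mosaic where every green site happened to read 100:
mosaic = [[100,  40, 100,  40],
          [ 20, 100,  20, 100],
          [100,  40, 100,  40],
          [ 20, 100,  20, 100]]
print(green_at(mosaic, 0, 1))  # 100.0 -- green interpolated at a red site
```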
One downside is around colour. We now have twice as many green photosites as red and blue, so there’s some discussion to be had around how colour accurate this model is, because the photosites are weighted so heavily to the green channel.
Bayer sensors can also have a lot of issues around aliasing, the stair-step pattern you see on a diagonal edge or fine detail when you zoom into high magnification. Sometimes when the detail you’re capturing is so fine that it optically approaches the same size as the pixels, you can get colour errors on the edges and transitions, and another effect called false colour moiré, a kind of orange / cyan halo around some of these very fine details. If something is the size of a single photosite, then it has to be either a green, red or blue colour. Sometimes those edges are so fine, because of distance, that the edge colours can get confused and voila, you get false colour aliasing on edges.
Traditional logic, and the way the maths works out here, is that you can reduce this by supersampling, or using higher resolutions in the sensor than what you’re targeting for output. More pixels means less chance of that fine detail being only a pixel-sized artifact. You can also use an optical filter, called an OLPF, that specifically tries to target those frequencies that are beyond what the sensor can resolve. Kind of a diffusion filter that only works on very fine details.
For a really nice 1920 image with the “right” amount of pixels and colour, you often want to add some overhead to the resolution as oversampling, to make up for the shortfall of each pixel being made up by “maths”. The very first BMD cinema camera was designed to make a nicely oversampled 1920 HD image by using a nearly 2400 pixel wide sensor. To make a nice 4K DCI image with a Bayer sensor, you really want closer to 5K worth of Bayer pixels to be able to cover any gaps in the sensor maths.
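Sketching that arithmetic (assuming “5K” means 5120 pixels wide, and using the approximate 2400 figure above), the linear oversampling ratios look like this:

```python
# Linear oversampling ratio = source width / delivery width.
ratios = {
    "HD from the first BMD camera (~2400 px)": 2400 / 1920,
    "4K DCI from a 5K Bayer sensor":           5120 / 4096,
    "8K from the 12K sensor":                  12288 / 8192,
    "4K DCI from the 12K sensor":              12288 / 4096,
}
for label, ratio in ratios.items():
    print(f"{label}: {ratio:.2f}x")
```

At 12K the delivery resolutions get a very generous margin: 1.5x linear (2.25x the pixels) for 8K, and a full 3x linear (9x the pixels) for 4K DCI.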
But what if we didn’t adhere to the “Bayer” pattern?
BMD’s 12K sensor doesn’t use a Bayer pattern CFA. It uses a brand new custom filter array that has an equal number of red, green and blue photosites, as well as the addition of clear or “white” photosites.
If you create a sensor with a LOT MORE photosites at a much smaller physical size (say, 12288 x 6480 worth), and then you start to alter the ratio of filter colours, and maybe even leave some of them with no filter at all, you can start to manipulate the maths in different ways.
Now we have a sensor with a massively increased number of pixels, many of which have no colour filter at all. BMD call these “W pixels” for white.
Using white pixels in addition to the RGB pixels isn’t exactly a new idea, but what’s new here is the way they are read and mathematically arranged, something that BRAW is turning out to be very good at doing because of the way the codec profiles the sensor itself.
Here’s the important difference though. The 12K sensor has an EQUAL number of photosites in each colour channel and not only that, there’s now the addition of clear or W photosites that are much more light sensitive.
Kind of like the way HDR images can be created by combining two brightness values, we can now combine the brightness and extra sensitivity of the W photosites with the colour pixels to get an extended dynamic range, helping to overcome the cost of making these new, better pixels so much smaller.
Instead of the typical 2×2 GRBG grid that you have on a Bayer sensor, we now have in the same space a 6×6 grid that has 6 G, 6 R and 6 B as well as 18 W photosites.
Equal numbers of photosites for each colour and the addition of clear photosites. This is something very new and very not Bayer.
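The exact spatial layout of the array hasn’t been published, but the stated counts are easy to sanity-check. A quick sketch, assuming the 6×6 tile described above:

```python
# Per 6x6 tile: 6 R, 6 G, 6 B and 18 W photosites (counts as described above;
# the actual spatial arrangement of the sites isn't public).
r = g = b = 6
w = 18
tile = r + g + b + w
assert tile == 6 * 6  # the stated counts fill the tile exactly

print(f"unfiltered (W) sites: {w / tile:.0%}")  # 50%
print(f"each colour channel:  {r / tile:.1%}")  # 16.7% (vs 25/50/25 in Bayer)
```

So half the photosites are unfiltered, and the remaining half is split perfectly evenly between the three colour channels.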
Yes, that does mean the pixel pitch is a dramatically small 2.2 microns. The G2 has a 5.5 micron pixel pitch. Alexa has always had very large pixels at 8 microns, part of the reason ARRI haven’t been able to make a 4K S35 camera using that sensor design.
To give you an idea of how small 2.2 microns is: coronavirus is stopped with an N95 mask, which blocks particles down to 0.3 microns.
These are very small pixels.
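Those numbers are all self-consistent, by the way. Multiplying the 2.2 micron pitch by the photosite counts reproduces the sensor dimensions quoted earlier:

```python
# Cross-check: pixel pitch x photosite count should equal the sensor size.
pitch_um = 2.2
width_px, height_px = 12288, 6480

width_mm = width_px * pitch_um / 1000
height_mm = height_px * pitch_um / 1000
megapixels = width_px * height_px / 1e6

print(f"sensor: {width_mm:.3f} x {height_mm:.3f} mm")  # 27.034 x 14.256 mm
print(f"photosites: {megapixels:.1f} MP")              # 79.6 MP
```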
Normally, reducing the pixel pitch or pixel size to be so small means sacrificing light sensitivity, but here this is more than compensated for by the unfiltered W pixels. Combining the brightness values of those W photosites with the coloured photosites also greatly increases the dynamic range compared to a standard Bayer array.
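To make the HDR analogy concrete, here’s a deliberately simplified sketch. This is my illustration only, not BMD’s actual processing, and the 3x sensitivity figure is an assumption: the sensitive W site gives a clean reading in the shadows, and the dimmer filtered sites take over once the W site clips.

```python
FULL_WELL = 1.0  # normalised clip point of a photosite
W_GAIN = 3.0     # assumed: an unfiltered site collects roughly 3x the light

def fused_luma(scene_luma):
    """Toy dual-sensitivity fusion, analogous to two-exposure HDR."""
    w = min(scene_luma * W_GAIN, FULL_WELL)  # W site: sensitive, clips early
    filtered = min(scene_luma, FULL_WELL)    # filtered site: dimmer, linear longer
    if w < FULL_WELL:
        return w / W_GAIN   # W unclipped: use its cleaner (less noisy) reading
    return filtered         # W clipped: fall back to the filtered sites

print(fused_luma(0.25))  # 0.25 -- shadows come from the W reading
print(fused_luma(0.5))   # 0.5  -- highlights survive via the filtered sites
```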
Instead of the de-mosaic maths working on 2×2 GRBG groups, it now works on 6×6 groups of 6 G, 6 R, 6 B and 18 W pixels.
By so massively increasing the pixel density and then having an equal number of RGB pixels, we start to see far fewer of the artefacts of a Bayer sensor, such as the aliasing and moiré I mentioned earlier, as well as a more evenly balanced chroma resolution in each channel.
I don’t want to come off like a shill. I think the best way to judge what BMD have done is to download some clips and try them for yourself. That 12K number is a big hurdle to get past and creates expectations. I wasn’t looking forward to what 12K would look like on a face.
Here’s what I see.
Smoothness. Despite the terrifying number, this camera is still incredibly flattering. It’s the opposite of what I thought would happen before I started shooting with the camera. Paired with lenses that can keep up with this sensor, it feels like someone just came and cleaned my glasses. And polished out all the tiny scratches and blemishes that I couldn’t even see and didn’t even know I wasn’t seeing. There’s a kind of transparency that’s hard to explain. A clarity that is still flattering, yet revealing. There’s a wonderful subtleness in the way tone is rendered in the human face with a lot of nuanced skin tones that are readily discernible but not in a distracting way or with artefact. I’ve found myself really scrutinising the colours of the human face, noticing the very intricate nuance and details, especially around lips and eyes.
I did an extensive test with a few popular diffusion filters, assuming we’re going to need some kind of softening to take the edge off, but I feel like all I see when I apply most of the commonly used diffusion filters is the obvious effect of using a diffusion filter!
I don’t think they’re needed. I’ll try to post that test shortly for you to judge.
It’s hard to describe these 12k images. The first images I shot with this sensor were quite a while ago and when I knew I’d be testing this brand new ultra high resolution sensor I figured I should get some good lenses.
At first I was thinking Master Primes would be good, but my local vendor CVP in London offered me some Zeiss Supremes and I figured they would do the job nicely. I hadn’t shot them before, but my understanding was that they were like Master Primes for larger formats.
That was wrong.
They are something very special, and nothing at all like Master Primes. I’m still not sure if it’s just these lenses on this sensor that makes such a nice combination, but there’s a kind of transparency to these lenses that is a joy to behold. Right away, even on the small built-in screen of the camera, I could tell they were something special.
They are very straight and neutral, but they also have a lovely way of drawing a face. I found they looked really pretty and was able to squeeze a few shots while I was shooting on The Great in the UK.
When it came to shooting with the next build of the camera, I again reached for those Supremes. There’s an internal joke nickname at BMD about this camera. They call it the lens checker.
It’s a somewhat apt description. I’ve tried a few different lenses and here’s what I’ve learned: this camera shows you very clearly the difference between the higher price point lenses and the lower price point lenses. On other cameras these lens differences are far less obvious.
I had an interesting discussion too with my very experienced focus puller, who happens to own the Supremes I was shooting with, so he knows them very well. He was using the regular 13” SmallHD monitor he’s been using for the past couple of years, and he felt like he could kind of see a difference, or more that he could see there was more information there that his monitor WASN’T showing. I think we’re going to want more 4K field monitors soon.
So focus becomes a bit more of a critical issue. I guess there will be many who see 12K as a chance to crop and zoom the image a lot more than they used to be able to, but if you’re planning to do that, you’d better have some decent glass on the front. And it needs to be 1000% in focus.
Many of these demo shots I actually captured at a much deeper stop than I usually would. The wider shots were typically shot at T2.8 or T4 using the internal NDs, and only in the very extreme close ups did I use them wide open.
You tell me. I think this camera looks pretty damn special.
I edited and graded this material using my 2017 15” MacBook Pro in Resolve 16.2.2 and a custom version of the BRAW SDK, which is now in the currently available download of Resolve.
I have a Drobo that I use for archiving my photos, and it’s probably one of the slowest RAID systems around, and yet it kept up just fine on my system editing 12K material.
The codec is smart enough to dynamically adjust itself on playback so you can still work with it even on an underperforming machine like mine. I did have a BMD eGPU Pro as well, which I mainly use as a way of having an HDMI out for grading on a television.
My instinct is that this codec is smart enough, that if you already have a system that can do 4K, then you should be able to edit 12K as well, without proxies or caching.
The result?
The intent here was to make better pixels. The best way to address the mathematical issue of Bayer sensors having less chroma information was to even out the information across the colour channels and combine it with the clear pixels.
When combined with a smart codec like Blackmagic RAW, you can also be really clever in the way the pixels are grouped, and this enables super smart in-camera scaling that avoids the usual side effects of line skipping and pixel binning.
My takeaway is that using this sensor I can make really awesome 8K or 4K files, supersampled from a 12K sensor with a great excess of colour fidelity. 12K recording too.
Though BRAW has gotten some criticism from some for “not being RAW” because of the partial de-mosaic step that’s performed in-camera, it’s because of this very fact that BMD have solved a major problem that has plagued other RAW-shooting cameras, and have incredible compression efficiency.
You can shoot 8K, 6K or 4K supersampled RAW images without windowing or losing any field of view, and it can do 12K at 60 FPS, with 8K eventually able to hit 120 FPS, and again, those frame rates don’t incur a cropping penalty. It’s all still the exact same field of view.
So, now we have a 12K sensor, that can record BRAW files to 8K, 6K and 4K as well as 12K of course.
So far I’ve been very impressed. With the lockdown, I’ve not been able to investigate this footage in a “real” HDR environment, but I can already see in my home REC 709 / 2020 setup that things are looking great.
Here are some more numbers. My entire day of shooting with these 4 models in a few scenarios was contained on 6 x 256GB Angelbird CFAST cards. I actually shot mostly in 8K at 60 FPS and a few takes of each setup in 12K. Total space on my drive? 1.06 TB for the whole day, shooting indiscriminately.
For another project yesterday, I had to shoot an interview of myself. I set up the 12K, used the compression setting of Q5 and got nearly 70 mins on a 256GB card in 12K.
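By my rough maths (decimal GB assumed), that works out to a remarkably gentle data rate for 12K:

```python
# Implied 12K Q5 data rate: a 256 GB card holding nearly 70 minutes.
card_gb = 256
minutes = 70
print(f"~{card_gb / minutes:.1f} GB/min, "
      f"~{card_gb * 1000 / (minutes * 60):.0f} MB/s")  # ~3.7 GB/min, ~61 MB/s
```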
I can’t recommend Angelbird cards enough, by the way. They are the fastest cards currently available for the way this camera records, with the 256GB size having a slight edge over the 512GB cards. I paid for my own cards, just reporting as a happy user.
I think it’s remarkable what BMD have done with their first ever from scratch custom sensor. So far the images have been really impressive considering the camera is still very young and in development. I’ve really liked where the look sits straight out of camera and there’s plenty of depth to move the images around.
So in summary, we have a 12K 60 FPS RAW shooting camera that has a similar workflow to anyone doing 4K today, offering in-camera scaling and resolution independence that isn’t tied to windows and crops, built using a revolutionary new sensor design that I think has already blitzed it.
I love the subtlety and nuance I’m seeing. I was worried about what 12K resolution on someone’s face would do, but I actually love the transparency it brings. It’s still an incredibly flattering image on faces and in fact it doesn’t enhance blemishes at all. There’s a kind of silky smoothness to the image and the motion; it’s still very filmic and cinematic, but there’s a lot of complexity available as well.
In the end though, the pictures do the talking, right?