It goes to 12K

This was the prototype URSA Mini Pro 12K camera I shot the publicly available footage with. It was “disguised” as an URSA Mini Pro G2, and even used G2 hardware inside. The shipping camera will have totally different innards. There are also some other small changes, like a more convenient position for the USB-C port.

You probably know the sketch from Spinal Tap? The one where Nigel Tufnel, lead guitarist of Spinal Tap, shows off his specially made Marshall guitar amp where all the dials go to 11?

You may then have had a similar reaction to mine when I heard that Blackmagic Design were planning to make a camera that could record 12K files.

Same simple and elegant menu layout.

My first reaction was wow. 

Quickly followed by the thoughts….

Why do I need 12K? Can I even work with 12K files? Why would I ever want 12K?

After picking my jaw up from the floor and asking a bunch of questions, I've started to realise it's not such a silly idea at all. It's all in how you USE those pixels and photosites, and in the entire way BMD have gone about designing this. The meme is out there, but it's about having better pixels.

I encourage you, reader, to see beyond the headline-grabbing 12K number and take a deeper dive with me. I've come to understand the pixel count is just part of a bigger story here. There is so much more to this than that seemingly ridiculous number.

First though, let's do the numbers.

Blackmagic RAW 12K files

Yes, it has a sensor that’s 12288 x 6480.

This is a 79.6MP cinema camera.

It can do 8K Blackmagic RAW at up to 110 FPS (soon to be 120) with the same full sensor field of view. Or 4K. No cropping or windowing with resolution or frame rate changes.

Your 18mm lens still looks like an 18mm lens whether you shoot at 12K, 8K or 4K, independent of the frame rate, which maxes out at 60 FPS, 110 (120 soon) and 120 respectively.

It also has a Super 16 window mode at 6K RAW with frame rates up to 120 FPS.

It can also do 4K RAW up to 220 FPS in a Super 16 window, or as I mentioned, 4K RAW of the whole sensor up to 120 FPS.

Its native ISO is 800.

It's a Super 35-sized sensor. The sensor is identical in height to the G2's but a little wider to accommodate 17:9 DCI sizes. The sensor is 27.034mm x 14.256mm.

It has an interchangeable mount.

It has internal NDs: clear, 2 stops, 4 stops and 6 stops.

It will record Blackmagic RAW only.

It has dual CFAST cards, dual SD cards or a USB-C connected drive. Some very high data rate modes will require dual card recording (such as above 60 FPS in 8K).

The 12K camera is the result of three years of work on a custom sensor design. Think about that for a moment. BMD threw millions of dollars at developing their own 12K sensor from scratch three years ago, when you would have struggled to even find an 8K TV.

Somehow they managed to keep this a secret for all that time.

While BMD have highly customised and adapted other sensors previously, this is the first sensor BMD have designed completely from scratch with their own IP and design. It's the culmination of a strategy that unifies a few other technologies BMD have also been quietly working on, and I think it shows their ongoing maturation as an imaging company.

This sensor and what it can do wouldn’t be possible without the Blackmagic RAW codec. They go hand in hand. When the sensor development started three years ago, so did the development of Blackmagic RAW. The sensor and codec are highly integrated.

Firstly, let’s consider the 12K sensor itself. 

Here’s a radical thought I’ll leave with you for a moment….It’s not a BAYER sensor.

50mm Supreme

Sensor primer

The Bayer filter is named for the Kodak scientist Bryce Bayer who created it in the first place.

The Bayer filter on CMOS sensors has driven digital photography and cinematography sensor development for the last 15-20 years. It's almost ubiquitous.

A CMOS sensor without a CFA or Bayer filter would be black and white. Leica made an exotic M series rangefinder camera without any CFA. The upsides? Way more resolution, far less aliasing and also much more sensitivity, or a better native ISO. The downside? The image is black and white!

So, a CMOS sensor is a grid of photosites or pixels that are sensitive only to light intensity, with no colour. It's black and white until you add some coloured filters on top of those photosites.

In a Bayer sensor, we place an array of coloured filters over the photosites, where each 2×2 grid of pixels gets two green filters, a blue filter and a red filter. This is called the Colour Filter Array, or CFA. Every photosite is now either a green, red or blue pixel, and there are twice as many green photosites as red or blue.
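To make that concrete, here's a tiny sketch (in Python, purely illustrative) that tiles the classic 2×2 GRBG pattern and confirms that half of all photosites end up green:

```python
# A Bayer CFA tiles the sensor with 2x2 blocks of
#   G R
#   B G
# so green occupies half of all photosites.
def bayer_pattern(height, width):
    tile = [["G", "R"], ["B", "G"]]
    return [[tile[y % 2][x % 2] for x in range(width)]
            for y in range(height)]

cfa = bayer_pattern(4, 4)
counts = {c: sum(row.count(c) for row in cfa) for c in "RGB"}
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```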

You can vary the intensity of the colour or alter the quality of the filter itself to customise the colour result. That's part of the colour science story. Using a deeper dye colour means you get less sensitivity (lower ISO), for example, so it's always a balance between the size of the pixels / photosites themselves and the spectral design of those coloured filters.

21mm Supreme

Green is the part of the spectrum the human visual system is most sensitive to, and this tends to be why the green channel is considered the most important and gets the extra photosites.

Using maths based on clever algorithms, called a debayer or demosaic algorithm, each pixel can be mathematically "averaged" or interpolated to get a colour based on the photosite itself AND the mix of photosite colours surrounding it.

Depending on how clever the algorithm is, the maths behind this means we can pretty accurately "guess" or interpolate the colour of each pixel. So each photosite starts as either green, red or blue, and gets transformed into an average of its own value and its nearby neighbours. It's not 100% accurate though, and there are some issues that come up.
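As a rough illustration of that "averaging", here's a toy bilinear interpolation in Python: a red photosite has no green sample, so we estimate its green value from the four green neighbours. Real debayer algorithms are far more sophisticated (edge-aware, frequency-based and so on); this only shows the basic idea.

```python
# Estimate the missing green value at a red photosite by averaging
# its four green neighbours (None marks sites with no green sample).
def interp_green(green, y, x):
    neighbours = [green[y - 1][x], green[y + 1][x],
                  green[y][x - 1], green[y][x + 1]]
    return sum(neighbours) / len(neighbours)

g = [[None, 10,  None],
     [20,   None, 30],
     [None, 40,  None]]
print(interp_green(g, 1, 1))  # 25.0
```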

One downside is around colour. We now have twice as many green photosites as red and blue, so there’s some discussion to be had around how colour accurate this model is, because the photosites are weighted so heavily to the green channel.

Bayer sensors can also have a lot of issues around aliasing, the stair-step pattern you see on a diagonal edge or fine detail when you zoom in to high magnification. When the detail you're capturing is so fine it's approaching the size of the pixels optically, you can get colour errors on edges and transitions, and another effect called false colour moiré, a kind of orange / cyan halo around some of those very fine details. If something is the size of a single photosite, then it has to be either a green, red or blue colour. Sometimes those edges are so fine, because of distance, that the edge colours get confused, and voila, you get false colour aliasing on edges.

Traditional logic, and the way the maths works out here, is that you can reduce this by supersampling, or using a higher resolution in the sensor than what you're targeting for output. More pixels means less chance of that fine detail being only a pixel-sized artifact. You can also use an optical filter, called an OLPF, that specifically tries to target those frequencies beyond what the sensor can resolve. Kind of a diffusion filter that only works on very fine details.

50mm Supreme

For a really nice 1920 image with the "right" amount of pixels and colour, you often want to add some resolution overhead as oversampling, to make up for the shortfall of each pixel being made up by "maths". The very first BMD cinema camera was designed to make a nicely oversampled 1920 HD image by using a nearly 2400-pixel-wide sensor. To make a nice 4K DCI image with a Bayer sensor, you really want closer to 5K worth of Bayer pixels to cover any gaps in the sensor maths.
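A quick aside on those numbers: if we take "nearly 2400" as 2400 pixels and "closer to 5K" as 5120 pixels (my round-number assumptions, not official figures), both examples work out to the same roughly 25% oversampling factor:

```python
# Oversampling ratios implied by the examples above. 2400 and 5120
# are my assumed round numbers for "nearly 2400" and "closer to 5K".
hd_ratio = 2400 / 1920    # first BMD cinema camera sensor vs 1920 HD
dci_ratio = 5120 / 4096   # ~5K sensor vs 4K DCI output
print(round(hd_ratio, 2), round(dci_ratio, 2))  # 1.25 1.25
```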

But what if we didn't adhere to the "Bayer" pattern?

BMD's 12K sensor doesn't use a Bayer pattern CFA. It uses a brand new custom filter array that has an equal number of red, green and blue photosites, as well as the addition of clear or "white" photosites.

If you create a sensor with a LOT more photosites at a much smaller physical size, say 12288 x 6480 worth, and then you start to alter the ratio of filter colours, and maybe even leave some of them with no filter at all, you can start to manipulate the maths in different ways.

65mm Supreme

Now we have a sensor with a massively increased number of pixels, many of which have no colour filter at all. BMD call these “W pixels” for white.

Using white pixels in addition to the RGB pixels isn’t exactly a new idea, but what’s new here is the way they are read and mathematically arranged, something that BRAW is turning out to be very good at doing because of the way the codec profiles the sensor itself.

Here’s the important difference though. The 12K sensor has an EQUAL number of photosites in each colour channel and not only that, there’s now the addition of clear or W photosites that are much more light sensitive.

Kind of like how HDR images can be created by combining two exposures, we can now combine the brightness and extra sensitivity of the W photosites with the colour pixels to get an extended dynamic range, helping to overcome the penalty of making these new, better pixels so much smaller.

Instead of the typical 2×2 GRBG grid you have on a Bayer sensor, we now have in the same space a 6×6 grid that has 6 G, 6 R and 6 B as well as 18 W photosites.

Equal numbers of photosites for each colour and the addition of clear photosites. This is something very new and very not Bayer.
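The photosite budget is easy to sanity-check. BMD haven't published the actual spatial arrangement of the tile, so this verifies only the arithmetic, not the layout:

```python
# A 6x6 RGBW tile: equal red, green and blue counts, the remainder
# unfiltered "white" photosites. Checks the counts quoted above.
tile_sites = 6 * 6             # photosites per tile
r = g = b = 6                  # equal colour channels
w = tile_sites - (r + g + b)   # everything else is white
print(w, w / tile_sites)       # 18 white sites: half the tile
```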

35mm Supreme

Yes, that does mean the pixel pitch is a dramatically small 2.2 microns. The G2, by comparison, has a 5.5 micron pixel pitch. Alexa has always had very large pixels at 8 microns, part of the reason they've not been able to make a 4K S35 camera using that sensor design.
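That pitch cross-checks nicely against the sensor dimensions quoted earlier: 2.2 microns times the pixel counts lands almost exactly on the stated 27.034mm x 14.256mm.

```python
# Pitch x pixel count should reproduce the stated sensor dimensions.
pitch_um = 2.2
width_mm = round(12288 * pitch_um / 1000, 3)
height_mm = round(6480 * pitch_um / 1000, 3)
print(width_mm, height_mm)  # 27.034 14.256
```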

To give you an idea of how small 2.2 microns is, coronavirus is stopped by an N95 mask, which blocks particles down to 0.3 microns.

These are very small pixels.

Normally reducing the pixel pitch or pixel size to be so small means the light sensitivity is sacrificed, but this is more than compensated for by using the unfiltered W pixels. By combining the brightness values of those W photosites with the coloured photosites, it also greatly increases the dynamic range compared to a standard Bayer array.

Instead of the processed, demosaiced pixel grid being built from 2×2 GRBG tiles, it's now built from 6×6 tiles of 6 G, 6 R, 6 B and 18 W pixels.

By so massively increasing the pixel density and then having an equal number of RGB pixels, we start to see a lot less of the artefacts of a Bayer sensor, such as the aliasing and moire I was mentioning earlier, as well as more evenly balancing the chroma resolution in each channel.

100mm Supreme

The Look

I don't want to come off like a shill. I think the best way to judge what BMD have done is to download some clips and try them for yourself. That 12K number is a big hurdle to get past and creates expectations. I wasn't looking forward to what 12K would look like on a face.

Here’s what I see.

Smoothness. Despite the terrifying number, this camera is still incredibly flattering. It’s the opposite of what I thought would happen before I started shooting with the camera. Paired with lenses that can keep up with this sensor, it feels like someone just came and cleaned my glasses. And polished out all the tiny scratches and blemishes that I couldn’t even see and didn’t even know I wasn’t seeing. There’s a kind of transparency that’s hard to explain. A clarity that is still flattering, yet revealing. There’s a wonderful subtleness in the way tone is rendered in the human face with a lot of nuanced skin tones that are readily discernible but not in a distracting way or with artefact. I’ve found myself really scrutinising the colours of the human face, noticing the very intricate nuance and details, especially around lips and eyes.

I did an extensive test with a few popular diffusion filters, assuming we'd need some kind of softening to take the edge off, but all I see when I apply most of the commonly used diffusion filters is the obvious effect of using a diffusion filter!

I don’t think they’re needed. I’ll try to post that test shortly for you to judge.


It’s hard to describe these 12k images. The first images I shot with this sensor were quite a while ago and when I knew I’d be testing this brand new ultra high resolution sensor I figured I should get some good lenses.

At first I was thinking Master Primes would be good, but my local vendor CVP in London offered me some Zeiss Supremes and I figured they would do the job nicely. I hadn't shot with them before, but my understanding was that they were like Master Primes for a larger format.

That was wrong.

They are something very special, and nothing at all like Master Primes. I'm still not sure if it's just these lenses on this sensor that makes such a nice combination, but there's a kind of transparency that goes with these lenses that is a joy to behold. Right away, even on the small built-in screen of the camera, I could tell they were something special.

They are very straight and neutral, but they also have a lovely way of drawing a face. I found they looked really pretty, and I was able to squeeze in a few shots while I was shooting The Great in the UK.

When it came to shooting with the next build of the camera, I again reached for those Supremes. There's an internal joke nickname at BMD for this camera. They call it the lens checker.

It's a somewhat apt description. I've tried a few different lenses and here's what I've learned. This camera shows you very clearly the difference between higher price point lenses and lower price point lenses. On other cameras these lens differences are far less obvious.

I had an interesting discussion too with my very experienced focus puller, who happens to own the Supremes I was shooting with, so he knows them very well. He was using the regular 13" SmallHD monitor he's been using for the past couple of years, and he felt like he could kind of see a difference, or more that he could see there was more information there that his monitor WASN'T showing. I think we're going to want more 4K field monitors soon.

So focus becomes a bit more of a critical issue. I guess there will be many who see 12K as a chance to crop and zoom the image a lot more than they used to be able to, but if you're planning to do that, you'd better have some decent glass out front. And it needs to be 1000% in focus.

Many of these demo shots I actually captured at a much deeper stop than I usually would. The wider shots were typically shot at T2.8 or T4 using the internal NDs, and only in the very extreme close ups did I use them wide open.

You tell me. I think this camera looks pretty damn special.


I edited and graded this material using my 2017 15” Macbook Pro in Resolve 16.2.2 and a custom version of the BRAW SDK which is now in the currently available download of Resolve.

Here’s my modest editing setup.
2017 MacBook Pro, Drobo 5 and an eGPU, mostly so I can use my home Samsung monitor.

I have a Drobo that I use for archiving my photos. They are probably one of the slowest RAID systems around, and yet it kept up just fine on my system editing 12K material.

The codec is smart enough to dynamically adjust itself on playback so you can still work with it even on an underperforming machine like mine. I also had a BMD eGPU PRO, which I mainly use as a way of having an HDMI out for grading on a television.

My instinct is that this codec is smart enough, that if you already have a system that can do 4K, then you should be able to edit 12K as well, without proxies or caching.

The result ?

The intent here was to make better pixels. The best way to address the mathematical issues of Bayer sensors having less chroma information was to even out the colour channel information and combine it with the clear pixels.

When combined with a smart codec like Blackmagic RAW, you can also be really smart in the way the pixels are grouped and this then enables you to create a super smart in-camera scaling that avoids the usual side effects of line skipping and pixel binning.

My takeaway is that using this sensor I can make really awesome 8K or 4K files, supersampled from a 12K sensor, with a great excess of colour fidelity. 12K recording too.

Though BRAW has drawn criticism from some for "not being RAW" because of the partial demosaic step performed in-camera, it's because of this very fact that BMD have solved a major problem that has plagued other RAW-shooting cameras, and achieved incredible compression efficiency.

You can shoot 8K, 6K or 4K supersampled RAW images without windowing or losing any field of view, and it can do 12K @ 60 FPS, with 8K eventually able to hit 120 FPS. Again, those frame rates don't incur a cropping penalty. It's all still the exact same field of view.

So, now we have a 12K sensor, that can record BRAW files to 8K, 6K and 4K as well as 12K of course.

So far I’ve been very impressed. With the lockdown, I’ve not been able to investigate this footage in a “real” HDR environment, but I can already see in my home REC 709 / 2020 setup that things are looking great.

Here are some more numbers. My entire day of shooting with these four models in a few scenarios was contained on 6 x 256GB Angelbird CFAST cards. I actually shot mostly in 8K @ 60 FPS, with a few takes of each setup in 12K. Total space on my drive? 1.06 TB for the whole day, shooting indiscriminately.

For another project yesterday, I had to shoot an interview of myself. I set up the 12K, used the compression setting of Q5 and got nearly 70 minutes on a 256GB card in 12K.
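Those figures imply a surprisingly modest average data rate. Blackmagic RAW Q-levels are variable bitrate, so treat this strictly as a back-of-envelope estimate (GB taken as 10^9 bytes):

```python
# ~70 minutes of 12K Q5 on a 256 GB card: implied average write speed.
card_gb = 256
minutes = 70
mb_per_s = card_gb * 1000 / (minutes * 60)
print(round(mb_per_s))  # roughly 61 MB/s on average
```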

I can't recommend Angelbird cards enough, by the way. They are the fastest cards currently available for the way this camera records, with the 256GB size having a slight edge over the 512GB cards. I paid for my own cards, just reporting as a happy user.

I think it’s remarkable what BMD have done with their first ever from scratch custom sensor. So far the images have been really impressive considering the camera is still very young and in development. I’ve really liked where the look sits straight out of camera and there’s plenty of depth to move the images around.

My friends at Keslow helped out with some of the minimal equipment I had with me. The Ronin 2 gimbal here, which I used mainly as a remote head, sits atop a CamTram slider, carrying the mighty 65mm Supreme.
Here's a shot of the 12K body fitted on a Ronin 2 as a remote head with a Zeiss Supreme 21mm out front, also riding on a CamTram slider, designed to use almost anything as a track. In this case, an extension ladder. That's me blurring through the background with a mask on. We tried to work as safely as possible shooting this.

So in summary, we have a 12K 60 FPS RAW-shooting camera with a workflow similar to anyone doing 4K today, offering in-camera scaling and resolution independence that isn't tied to windows and crops, built on a revolutionary new sensor design that I think has already blitzed it.

I love the subtlety and nuance I'm seeing. I was worried about what 12K resolution would do on someone's face, but I actually love the transparency it brings. It's still an incredibly flattering image on faces, and in fact it doesn't enhance blemishes at all. There's a kind of silky smoothness to the image and the motion. It's still very filmic and cinematic, but there's a lot of complexity available as well.

In the end though, the pictures do the talking, right?

Download your own 12k frames from here and here

Special thanks to the models here:

Robert Hamilton  @roberthamilton_official 

Stephen Mascoe

Miriam Miano  @miriamiano 

Kat Green @katharinegreen

Also to my friends at Keslow for helping me out with many accessories in a tough time. I believe we were the first job to prep from Keslow Atlanta since lockdown.

About johnbrawley

Director Of Photography striving to create compelling images

24 Responses to It goes to 12K

  1. dfarris2013 says:

    Link goes to “page not found”.

    I look forward to your thoughts on the new camera.


  2. jake carvey says:

    Always awesome to see new posts from you, John. Love the way you approached this breakdown. Some great work you’ve been doing lately. Cheers!

  3. Chris Agrance says:

    Who were the three models/ actors? Please give them credit. Fantastic images sir.

  4. Rizibo says:

    Probably the best color, especially the skin tones, I have seen out of a camera. People focus on the resolution but it is the color rendition which makes this the best camera at any price.

  5. Zak Ray says:

    “it also greatly increases the dynamic range compared to a standard Bayer array”

    But according to the specs, it has a stop less DR than the G2. Am I missing something?

    • johnbrawley says:

      Well, if there wasn’t the clear or white pixel, then the DR would be limited to the sensitivity of the coloured pixels ONLY, which is what you get in a regular Bayer CFA. In this new CFA design, the white or clear pixels have no filter, and so have a different brightness sensitivity. When you take those values and COMBINE them with the brightness values of say the RGB pixels, you get a larger DR, by combining the brightness response of both.

      • Zak Ray says:

        So it’s greatly increased compared to a Bayer sensor in general, but because this sensor has such a small pixel pitch, it sort of ends up leveling out. Is that accurate?

      • jake carvey says:

        This loss of 1 stop of DR has some of my DP friends snorting snidely, even after I explained that the 12K isn't really about 12K, it's about new sensor tech more than anything.

        Would help to have some imagery to make the case that this isn’t just more of the same “blackmagic look” that so many pros seem to hate.

      • johnbrawley says:

        Kind of ? I think BMD would say it adds more because they can use BRAW to take the exposure of BOTH the W and the combined RGB pixels like a dual gain / HDR exposure and combine these values to INCREASE the DR. As I said above.

      • Morgan Gold says:

        …but we’re still measuring DR by the same metrics? 14 stops is still lower than other high end cameras, correct?

  6. Gareth Stack says:

    Thanks for the post. Really great to hear your hands on experience.
    Wondering if there’s any chance you could post some 12k frame grabs / DaVinci exports. Currently the only footage available (presumably from your shoots) won’t play back on the Windows version of DaVinci. Would be fantastic to actually get eyes on with some full resolution stills.

  7. Robert says:

    John, firstly congratulations on “The Great”, fantastic job.
    How did you watch the 12k footage from Ursa? Shouldn’t it be at least an 8k monitor ? Say, Resolve on MacPro with the BMD 8k Decklink card and some converter to hdmi2.1 for the 8k LG OLED (properly calibrated)? Anything lesser would be a bottleneck, right?
    How would you rate the resolution of the Zeiss Superprimes? At 100lp/mm they would still be seriously underresolving the Ursa 12K sensor. Is there a piece of glass that can match the 455 pixels/mm?

    • johnbrawley says:

      I “watch” using my 1920 TV.

      I’m not so much smitten by the resolution, don’t need to watch it at 1:1, but I AM SMITTEN with how this sensor makes colour. That’s what excites.

      I guess your proposed monitor setup would work if you wanted to watch in 8k. It’s hard to test these things in these corona lockdown times.

      I also wouldn’t get so caught up in the resolution stuff. But I did see a post from the very credible M. Duclos and he calculated the max resolution of the camera being 227 lp/mm and he says most “sharp” lenses can hit 200 lp/mm but he ALSO said, lenses are an analog device that don’t have “limits’ like that.

      I’d say, there’s MANY ways to judge a lens and sharpness / resolution is only one of many ways.

  8. Pietro says:

    The sensor “resolution” is indeed 227 lp/mm which comes from the pixel density of 455 pixels/mm.
    Some lenses resolve up to 200 lp/mm in the center, albeit with poor contrast (MTF20-30). That would make for blurry details. The film format IMAX resolves 12K (IMAX claims 18K), but with the huge frame of 70×48.5mm and matching lenses. I saw "Dunkirk" in that format and the picture was amazing.
    In your opinion, how does the color from URSA12k compare to that from Alexa LF? That’s my personal benchmark.

    • Merlin Kramer says:

      Well, the legendary Carl Zeiss Planar 50mm f/0.7 was basically a 70mm f/1 with a 0.7x telecompressor between it and the film, trading image circle for aperture.
      Theoretically one could use such a device. Metabones apparently made a 0.64x one for using EF lenses on MFT bodies.

      Given that the telecompressor used in the 50mm f/0.7 had just two lenses, it doesn’t sound too expensive to get a small prototype batch from someone like Samyang or Kamlan. I don’t think they’d take more than 10~30k$ for like 5~10 prototypes and rights to manufacture and sell more of them.

  9. Hey John, thanks for the carefully explained article here. Would you say there is a compromise when using the lower resolutions like 6K/4K? Given that the sensor is rearranging stuff to make it happen, maybe a loss in DR or colour accuracy as opposed to 8K/12K recording using more of a native pixel arrangement? I agree this is the real triumph from BMD with this sensor, and even though I understand what you explained, I can't help but wonder about the differences between the resolutions (FOV aside, since that does not change).


  10. Andrew says:

    Nice work Mr Brawley! Stay Safe

  11. abogomolov says:

    Looks like your Drobo has one disk failed or removed? The upper bay has a yellow warning indicator. Maybe that is the reason for the slowness?

    • johnbrawley says:

      Ha. Thanks for the concern. It wasn't slow, I was able to grade and edit the clips you see above. It is getting close to full though. I use it for archiving the many thousands of still photos I've taken over the years. The yellow light comes on to show you that you should add more capacity.


  12. Dennis says:

    Hi John. Amazing work as always.

    So a monochrome sensor is higher resolving, as it is 1:1 pixel for detail. Nothing is interpolated and every pixel is working to bring dynamic range and detail to the image.

    But RGBW sensors, aren't those just an in-betweener? Sure, you get more tonal (luminance) range (dynamic range) and detail, but you also get less colour information as you have fewer colour filters to add to the pool.

    I think it is very exciting what BMD has made here, but I am also a bit worried. Sure, the 8K and 4K coming out of this will be amazing as you have enough colour information for these modes, but for a 12K image, I would imagine it would be worse than a Bayer 12K image (viewed 1:1) from a colour information perspective.

