Every once in a while, you take on an ambitious project, bite off more than you can chew, and learn a lot along the way… 😅
I’m excited to release Nuke Survival Toolkit v2.2.0, now featuring a brand-new Online Wiki, replacing the previous Google Docs setup. Each tool now has its own dedicated page with embedded videos and images, along with a fully functional search bar.
For folks working in internet-restricted environments, there’s also a downloadable Offline Wiki (~70MB). It mirrors the online version (minus video embeds), so you can browse everything locally without internet access. The PDF documentation is now generated directly from the wiki, ensuring it always stays in sync.
Converting the Google Docs to a full wiki, packaging an efficient offline version, and maintaining a dynamic PDF pipeline was… a challenge. But it is now a well-oiled system that is easy to update. This process also gave me a deeper appreciation for the effort behind maintaining high-quality documentation, credits, references, and educational resources. Kudos to all you Wiki warriors out there.
Hope the new wiki helps you find tools faster or discover ones you didn’t know were there.
New Documentation System – Live Online Wiki (replacing the previous Google Docs) – Dedicated page for each tool – Embedded videos, GIFs, images, and links – Full search functionality for quickly finding tools – Downloadable Offline Wiki (~70MB) for offline use (no video embeds) – PDF documentation generated directly from the wiki (always in sync) – Updated NST menu documentation launcher – Auto mode: Online Wiki → Offline Wiki → PDF
What else is new since v2.1.0? – 27 new tools added – 44 tool updates, including a Nuke 13+ Python 2 to Python 3 sweep – Nuke 16 & PySide 6 compatibility for all tools needing an upgrade – New CardToTrack v9 release supported (works on Nuke 16) – 11 targeted issue fixes & cleanup, e.g. improved Windows path handling
In this final installment of the CG Compositing Series, we focus on using LightGroups and Material AOVs together in a single workflow, and solving the paradox that comes with it.
Why do these 2 rebuild methods seem to clash?
We cover the following topics in the video and in this blog post:
The complications of splitting LightGroups per Material AOV
A method for transferring changes between setups using a Difference Map
The pitfalls of using Subtraction and the advantages of using Division
A comparison of math operations: Add/Subtract vs Multiply/Divide
A stress test of the Division-based setup
Template layout strategies and rules to keep your rebuilds stable
Carrying changes across the template from the 1st rebuild to the passes of the 2nd for the most interactive user experience.
Ideas and techniques you can apply in your own CG Templates.
We could decide to brute force split out each pass even further, into Material AOVs per Light Group.
When we rebuild it, we could prioritize it as larger buckets of Material AOVs, each made up of every LightGroup.
Or prioritize it as larger buckets of LightGroups, each made up of every Material AOV, like a mini Beauty Rebuild per light.
There are many problems with this workflow however:
There are many more layers and channels rendered, making file sizes larger and Nuke slower to process and more difficult to work with.
There is often a need to clone or expression-link grades and color correction changes across different parts of the setup in order to affect all the lights at once, or all the Material AOVs at once, creating a clone and expression hellscape.
There are also cases where you will see a master control and expression links, so the user does not get lost in the linked/cloned nodes.
You may also see the entire setup in a Group Node, to hide it and only expose necessary controls.
Compositing is never that straightforward, however, and we should not be compositing from within a Group node. We often need to pull masks, rotos, elements, etc. from other parts of the main node graph, and if everything is in a Group, it becomes difficult to get that information inside the Group to use.
Most Compositing should happen exposed in the main node graph to avoid any headache, and not hidden away in a Group that a user needs to jump in and out of.
This extra split workflow has many cons; let’s look at some other workflows to solve our paradox problem.
Transferring Changes from 1st Setup to the 2nd Setup
Another workflow is trying to capture and transfer the changes from the 1st Rebuild Setup to the 2nd Rebuild Setup. This is the basic idea of the workflow at its core:
An example of this technique could be illustrated from Machine Learning or Generative AI workflows, and is called Style Transfer.
In the below image, I start with an image of a bearded man. I have 2 separate models that are making changes. The first might be for facial expressions and shaves, and the second is for applying makeup. On the left side, I make a change to make the man beardless, and with an angry expression. On the right side, I’ve told it to apply clown makeup. If we want to combine the 2, I might want to package the “Beardless Angry” Changes, and apply that over to the clown makeup side. My result would be a Beardless Angry Clown.
This is a silly example but illustrates the workflow we want to use in Nuke to capture our first changes and apply them to our second changes for a combined change.
But how can we capture and package those changes from the first setup?
Subtractive (Absolute) Difference Method
We can find the difference between the 1st Rebuild and the Beauty Render using Subtraction
Temporarily store the changes in a subtractive difference map
Apply the 1st changes to the 2nd Rebuild Setup
Taking one of your rebuilds, either Material AOV comp or LightGroup comp, and subtracting the original Beauty Render will give you the Subtractive Difference Map, as seen below:
Subtractive Difference Map
The image itself is a map of positive and negative values, telling us how much we would need to add/subtract from the Beauty Render in order to get the result of our changed Rebuild.
Values of Zero will have No Change
Positive Values will get Brighter
Negative Values will get Darker
Let’s get into some equations to help us understand the math behind this workflow.
First let’s define a helpful math symbol: Delta, which stands for “The Change” or “The Difference”
First we’ll do a basic inverse operation with subtraction and addition.
Material AOVs – Beauty = Difference
Beauty + Difference = Material AOVs
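As a toy numeric sketch of this inverse operation (single hypothetical pixel values, not real render data), it looks like this:

```python
# Toy single-pixel values (hypothetical), illustrating the inverse operation.
beauty = 1.0          # original Beauty Render pixel
material_aovs = 1.5   # Material AOVs comp after a brightening grade

difference = material_aovs - beauty   # the Subtractive Difference Map value
restored = beauty + difference        # adding it back returns the rebuild

assert difference == 0.5              # positive, so this pixel got brighter
assert restored == material_aovs
```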
Instead of adding the difference back to the Beauty, let’s swap the Beauty out for the result of our LightGroups comp. So I am adding the difference of the Material AOVs comp onto the LightGroups comp, to hopefully get the combined changes.
It’s important to realize that we do not need to start with the Material AOVs and transfer to the LightGroups; we could just as easily start with the LightGroups and transfer those changes over to the Material AOVs. It’s a matter of preference, and the result will be the same.
Let’s try this in Nuke, by taking the Material AOVs output, minusing the Beauty Render, and then applying our Subtractive Difference Map on top of the LightGroups comp:
The resulting image kind of works, but is also full of problems, with odd colors and seemingly black-hole areas:
Subtractive Method Failure
Let’s take a look at what is going wrong with the Subtraction Difference Method.
Subtractive (Absolute) Difference Problems
The Subtractive Difference Map represents Absolute Values
This tells you the exact values to add/subtract to bring the Beauty Render to the Changed Rebuild
The Subtractive Method (Absolute) only works well if you Brighten values in the Rebuilds, or only Darken them slightly
Brightening both setups will be fine, as the results will only increase.
Darkening both setups however, runs the risk of going below zero and into negative values when the change is applied to the 2nd Setup. The darker the changes on both sides, the higher the risk of negative values.
Remember that the Rebuild passes are embedded in each other’s setups. If we darken some lights, and then darken the Specular, since the specular also contains all the lights, we are essentially subtracting those light groups twice and getting negative values.
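A toy single-pixel sketch (hypothetical values) makes the double-subtraction failure concrete:

```python
# Hypothetical single-pixel sketch of the double-subtraction problem.
beauty = 1.0

# 1st Rebuild: a LightGroup grade darkens the pixel by 0.75
rebuild_1 = beauty - 0.75            # 0.25
diff_map = rebuild_1 - beauty        # -0.75 (absolute change captured)

# 2nd Rebuild: a Specular grade (which also contains that light)
# darkens the same pixel by 0.5
rebuild_2 = beauty - 0.5             # 0.5

combined = rebuild_2 + diff_map      # 0.5 + (-0.75) = -0.25
assert combined < 0                  # negative pixel: the black-hole artifact
```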
So if this Subtractive Difference Method is giving us issues, let’s look at any other ways to get the difference map.
Division (Relative) Difference Method
Let’s ask ourselves: How can I go from 8 to 4?
Obviously we could subtract 4, and 8 – 4 = 4
But if we had a new, lower number, such as 2, and we also minused 4, we’d get -2.
We could also divide 8 by 2, therefore halving it, and we’d also arrive at 4.
Then trying to divide 2 by 2 will get us 1, it is also halved.
The amount of change from 8 was -4, but from 2 it was only -1. This amount of change is Relative to the input number. It is a ratio or a percentage of the starting number, so it adapts to our input.
Of course, this could also be represented as multiplication: dividing by 2 is the same as multiplying by 0.5.
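The arithmetic above can be checked directly:

```python
# Absolute change: subtracting 4 works for 8, but overshoots for 2.
assert 8 - 4 == 4
assert 2 - 4 == -2          # goes negative

# Relative change: halving adapts to the input.
assert 8 / 2 == 4.0
assert 2 / 2 == 1.0         # stays positive

# Dividing by 2 is the same as multiplying by 0.5.
assert 8 * 0.5 == 4.0
```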
So instead of trying subtraction and addition, let’s now try divide and multiply.
The result is a Division Difference Map that looks very different from our Subtraction Difference Map.
Division Difference Map
Now let’s multiply this with our 2nd Rebuild, the LightGroups side:
Side Note: Since Nuke’s Merge node does not have a native B / A operation, if you ever wanted to swap the A and B inputs and have the disable default to the Rebuild instead of the Beauty (for Templating reasons), then you would need a special MergeDivide.
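The per-channel logic such a node would need is just a guarded division. Here is a minimal sketch in plain Python (the MergeDivide name and the MergeExpression snippet in the comment are illustrative, not an official node):

```python
def safe_divide(b, a):
    """Per-channel B / A with a divide-by-zero guard, the core logic a
    MergeDivide-style tool could apply (e.g. via a MergeExpression such as
    'Ar != 0 ? Br / Ar : 0' inside Nuke)."""
    return b / a if a != 0 else 0.0

assert safe_divide(3.0, 2.0) == 1.5
assert safe_divide(1.0, 0.0) == 0.0   # black Beauty pixels don't blow up
```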
The Result from applying the Division Difference below looks a lot better than the Subtraction Method, and there are no longer any Negative Values in the image.
Division Difference Method
So why does this suddenly work? And what is going on with that Division Difference Map?
Division (Relative) Difference Map
This new Difference map is answering a different question than the subtraction difference map was:
How much do we need to Multiply the Beauty Render by in order to end up with the Rebuild Output?
What Percent do I need to increase or decrease this Beauty Render by to get to the Rebuild Output?
Multiplication / Percentage will not get us Negative values
That Division Difference Map appears all white, but in fact it has values over 1, superwhites, that we cannot see by default. Let’s darken it a bit so we can see the pixels over the value of 1.
Darkened Division Difference Map – for Visualization
Let’s break it down:
Values above 1 will get brighter
Values between 0 and 1 will get darker
Value of 1 means No Change
So any number multiplied by 1 is itself and does not change. That is why the map is mostly white.
Multiplication can also be represented as a percentage:
So we could express the pixels on this map in a percentage:
So our new map will be increasing or decreasing our 2nd Rebuild input by a specific percent.
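Reading a Division Difference Map pixel as a percentage is a one-liner (toy values):

```python
# A Division Difference Map pixel, read as a percentage change (toy values).
div_value = 1.25
percent = (div_value - 1.0) * 100     # +25% brighter
assert percent == 25.0

div_value = 0.5
percent = (div_value - 1.0) * 100     # -50% darker
assert percent == -50.0
```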
Let’s go over the math equation to see how it works. Once again we have our inverse operation, Starting and returning to Material AOVs using division and multiplication:
Then we are swapping out the Beauty Render, in the second step, with our LightGroup output. So we are applying our Division Difference Changes on top of the LightGroup Changes.
It’s worth mentioning again, that just like before, it does not matter which order you divide or multiply the Rebuilds, Material AOV 1st & LightGroup 2nd or LightGroup 1st & Material AOV 2nd, will yield the same result.
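A toy single-pixel sketch (hypothetical values) confirms the order doesn’t matter:

```python
# Hypothetical single-pixel values for the Beauty and the two Rebuilds.
beauty = 2.0
material_aov_comp = 3.0   # Material AOVs Rebuild output
lightgroup_comp = 1.5     # LightGroups Rebuild output

# Capture the Material AOV changes, apply them to the LightGroups comp:
combined_a = lightgroup_comp * (material_aov_comp / beauty)

# Or capture the LightGroup changes, apply them to the Material AOVs comp:
combined_b = material_aov_comp * (lightgroup_comp / beauty)

assert combined_a == combined_b   # same result either way
```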
So why does the Division Difference work so much better than the Subtractive Difference?
Below is an animation showing the difference between add/subtract and multiply/percentage.
Notice that the subtraction will go past zero towards negative values, while multiplication will only approach zero or be zero, but never go negative. We don’t really ever see a negative percent.
Going back to that embedded layers image. This time, instead of subtracting the pass on both sides, we are multiplying to zero on both sides, but we don’t run into negatives, because if you multiply something by zero twice, it is still only zero. 4 x 0 x 0 = 0. So we are actually still safe.
I encourage you to stress test this Division Difference Method with your own renders and unique cases. You are able to push the limits to an extreme level without noticing anything breaking or feeling off.
Template Layout Options
We have to decide if we want to set up our template with our 2 Rebuilds:
side by side
top to bottom
We also need to decide which Rebuild will be first and which will be second, the first will be the one captured in the change map. So either Material AOVs or LightGroups.
We could also go right to left instead of left to right, on the side by side, if we so choose:
Here are some possible template layouts in the node graph:
One thing that is a bit annoying is that while using these Templates and making changes, we can really only see the effect of our changes by looking at the very bottom, after the changes are combined and both setups are taken into consideration. Is there any way for us to have a more interactive experience, by seeing some of the changes affecting different parts of the Template? Let’s explore that idea.
Interactive Changes throughout the Template
Instead of considering the Rebuild as 1 whole output, like our Beauty, we need to remember that it is made up of individual pieces, like our piechart from before. The passes were split and adjusted and added all up to equal the Beauty.
So instead of multiplying the Division Difference Change Map to the output of the 2nd Rebuild, we could multiply it to each individual pass separately. This would give us the same result once we add all the passes together.
Let’s explore the math of this; it makes things a little easier to understand.
If we split the Output into smaller components, we can apply the multiply to each component and then add them up after. This would be the same result as us just multiplying the whole.
The Equation for use would look something like this (Delta being the Difference, and T being Total Changes):
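This is just the distributive law. A toy sketch with hypothetical pass values:

```python
# Toy LightGroup pass values (hypothetical) that sum to the 2nd Rebuild.
passes = [0.5, 0.25, 0.125, 0.125]   # e.g. key, fill, rim, practical
div_diff = 1.5                        # Division Difference Map pixel

# Multiplying the whole Rebuild output...
whole = sum(passes) * div_diff

# ...equals multiplying each pass, then summing (distributive law).
per_pass = sum(p * div_diff for p in passes)

assert whole == per_pass
```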
In Nuke, we can set this up in our templates. I am just going to stick to Top to Bottom Templates for the example, as they’re a little easier to set up and understand.
It’s SUPER IMPORTANT to realize that we are only capturing the changes from the 1st setup, and applying them to the 2nd setup. There is no way to make the changes of the 2nd look back around and apply to the first, because you would create a paradoxical change loop: Changing the 1st, which changes the 2nd, which changes the 1st, which changes the 2nd, which changes the 1st…. you get the idea.
So that decision of the flow of your Template, and which setup you want to see the changes reflected in, is very important to make as you build your CG Template.
So, let’s say that we have our Material AOVs 1st, and we are applying the changes to the LightGroups. We’ll need to multiply each LightGroup pass with the Division Map.
And if we started with LightGroups, we’d need to multiply the 2nd setup Material AOVs with the division difference map.
base LightGroups
LightGroups with Material AOV Changes applied per pass
or if you were to use the LightGroups first, you could transfer your changes to each individual Material AOV:
base Material AOVs
Material AOVs with LightGroup Changes applied per pass
The result is an interactive user experience where we can see our changes trickle down throughout our template and influence all the downstream passes. This can really help visualize what is happening at a local level.
Rules and Caveats
Material AOVs passes must add up to equal Beauty
Light Groups passes must also add up to equal Beauty
Do not do color corrections that introduce negative values (saturation)
Treat the CG Template as a glorified Color Correction
On the 1st Rebuild side (The Captured Change side) avoid:
Transforms / Warps
Filters: Blur, Defocus, Median, Glow
Chromatic Aberration
Replacing / Merging a totally different image on top
Texture changes should happen at the albedo level
You want to try and consider the entire CG Template as one big color correction. The pixel is being tracked all the way through the setup in the change map, compared back to the Beauty, and applied to the second Rebuild. Things like Transforms or filters are changing the position of pixels, or blending pixels together, and will cause artifacting because the Change Map is not able to capture the changes properly. Also, some filters are a post effect and really should not be adjusted after use, such as a Glow.
Example of Glowing 1st rebuild and viewing result in 2nd rebuild:
glow problems
Transforms, or moving pixels around, will also not allow the setup to track the pixel the whole way through, and will lead to various artifacting, as shown below:
transform problems
You will want to apply your filters and transforms either after the CG Template, or possibly only on the 2nd Rebuild section. So basically, avoid the Division Change Map (which is unable to capture them) and only apply those operations afterwards.
Template Examples
I will be providing you examples of Side by Side, Top to Bottom, and Interactive Change Templates for each renderer: Blender, RedShift, Arnold, and Octane.
All Template Examples: Blender, RedShift, Arnold, Octane. Side by Side, Top to Bottom, Interactive
Template Ideas and Inspiration
There are just way too many variations for me to provide one for every situation. However, I can give some example ideas and inspirations that I have seen and worked with, which you could consider implementing into your CG Template if it fits your style of comping.
Managing Div-Map with Exposed Pipes
Using Stamps or Hidden inputs for Div-Map
Storing Div-Map in a Layer / Channel for later use
Grouping Sections for less clutter
Template Controller, pick which parts are in use:
Beauty
Material AOVs Only
LightGroups Only
Combined LG / AOV
Reversed Direction
Conclusion
This Division Difference Multiplication Technique used to solve the LightGroup / AOV Paradox is fairly unknown at the moment. There seemed to be a huge black hole of knowledge out there on this subject. I’d like to give a huge shout out to Ernest Dios for being one of the true masterminds behind this technique, and for first introducing me to it. Also a big thank you to Alexey Kuchinski for all of his mentorship.
My hope with this whole CG Compositing Series was to equip you with the knowledge of every piece of the CG Template. What all the passes are, Why they are important, How to use them, Where to put them and how to organize them to Rebuild the Beauty, and When to adjust them for specific notes.
And of course, the final piece of the puzzle. How to combine it all and use the LightGroups and Material AOVs together in an elegant way. To help you push your CG Renders to their absolute limits, without the need for a rerender.
I hope you got value out of this video, or out of any video in the CG Compositing Series.
If I could ask one small favor from you, it would be to help share this video, or this blog, to compositing or VFX friends and colleagues. Whether it’s in a group chat, work chat, discord, linkedin post, I believe this knowledge is too important to keep secret. I would love to see this amazing workflow become more commonplace in the world of Compositing.
Thank you so much for all of your support over the years. It’s been a long journey since the first CG Compositing Series Intro video, and we are finally at the end…for now. I hope it was worth the wait.
Until next time.
Downloads
Nuke scripts
1 Demo nk script, and 1 Template & Idea Proposal nk script, 2 total:
I’ve created a new Junkyard Render specifically for this Light Groups video, please download the Render and the Cryptomatte file here in order to relink it in the Demo nuke script:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the project folder
In this video we move away from the Material AOVs and cover an equally important Beauty Rebuild using Light Group renders. This is another set of passes you can render to adjust the lights in your render, that all add up to the Beauty Render.
A Light Group is a render pass of a light (or a set of lights) in the scene, that is rendered in isolation from the rest of the scene’s lighting.
All other lights are “off” and only the Light Group’s light is “on” and affecting the scene.
All the Light Groups should add together to produce the full lighting in the scene; they all sum up to rebuild the Beauty Render.
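As a toy sketch (the per-light contributions below are hypothetical single-pixel values, not from a real render), the additive rule looks like this:

```python
# Hypothetical per-light contributions for a single pixel.
light_groups = {"key": 0.5, "fill": 0.25, "rim": 0.125, "practical": 0.125}

# All Light Groups plus together to rebuild the Beauty.
beauty = sum(light_groups.values())
assert beauty == 1.0
```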
Importance of Light Groups
Creating good looking CG is not just about the materials of the objects, but also the Lights in the scene, that interact with those materials, and tell a story.
Different Light types can drive the aesthetic, style, realism, or story of your CG render.
Understanding lighting basics is important for being an effective CG compositor.
Types of Light Groups
Key – Primary Light Source
Fill – Lifts and softens Shadows
Rim – Enhances silhouette & Separation
Practical – Light Sources serving a purpose and illuminating the scene (they are part of the environment)
The Exposure Triangle refers to 3 settings on a camera that help balance the Exposure / Brightness of the Image. If you increase the brightness of 1 of the 3 sides by 1 stop (double the brightness), then you need to choose 1 of the other 2 sides to lower the brightness by 1 stop (half the brightness) in order to maintain the same exposure level of the photo.
Only Aperture and Shutter Speed refer to the amount of physical light reaching the sensor through the lens. ISO refers to the amplification (multiplication) of the analog signal before it is converted to digital.
Check out this AMAZING website that lets you play around with the settings and balance the image brightness in a very interactive way. I loved playing around with the sliders, it is such a cool idea.
The larger the lens opening, the more light gets through, the brighter the image.
Also the bigger opening results in a shallower Depth of Field, or smaller zone of focus. This results in larger Bokeh and separation of foreground and background.
Digital ISO is a lot like a volume knob on a radio. If the signal is weak (aka there is not much light making it to the sensor) then increasing the volume will make the sound louder (make the image brighter) but will also increase the static, or digital noise (sometimes referred to as grain).
For dealing with Exposure in Nuke, I would recommend using either the Exposure node, the Multiply node, or the Grade node’s Gain or Multiply knobs.
In the Exposure node, you can change the stops directly by changing the mode to stops. You can also just multiply by 2, 4, 8, or enter 1/2, 1/4, 1/8 in the Multiply slider of a Multiply or Grade node. With a normal Multiply, we can use an expression to enter our stop number: pow(2, x), where x is the stop number, the same as the Exposure node uses.
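The stop-to-multiplier conversion is simple enough to sketch:

```python
def stops_to_multiply(stops):
    """Convert exposure stops to a Multiply value, matching the pow(2, x)
    expression described above."""
    return 2.0 ** stops

assert stops_to_multiply(1) == 2.0    # +1 stop doubles the brightness
assert stops_to_multiply(-1) == 0.5   # -1 stop halves it
assert stops_to_multiply(3) == 8.0
```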
I tend to use either an Exposure node for Luminance and a Grade node’s Multiply knob for Color
Or I use a single Grade node, using Gain for Exposure changes, and Multiply for color changes
I also prefer to change my color using the Temperature and Magenta settings of the Color Panel, which allow intuitive corrections while also giving fine control.
This is also an important way to separate your Luminance correction from your color correction, by making sure the Intensity stays around 1 and Luminance is preserved while changing color.
Adjusting Light Groups with Exposure (Gain or Multiply) for Intensity / Luminance, and a Multiply for Color, are my preferred way to Color Grade my Light Groups
beauty
Light Group Tweaks
Saturation of Light Groups
Remember that Light Groups are like individual Beauty Renders with only 1 light at a time. So you cannot simply desaturate a light group if you want to desaturate the light color.
You would either have to separate the Lighting information from the material information, using a color pass. But even then you may encounter some issues and artifacting.
Or, you can simply shift the colors of the light group to a more neutral color
Destructive vs Non Destructive workflows
You can use Gamma corrections, but be mindful that they require an exact reversal of the order of operations in order to fully restore the original image. So they can be difficult to undo later if your corrections start to stack up.
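A quick numeric sketch (toy values) of why the order of operations matters when undoing stacked corrections:

```python
# A multiply followed by a gamma (toy values).
x = 0.25
y = (x * 2.0) ** 0.5

# Undoing in the exact reverse order restores the original:
restored = (y ** 2.0) / 2.0
assert abs(restored - x) < 1e-9

# Undoing in the wrong order does not:
wrong = (y / 2.0) ** 2.0
assert abs(wrong - x) > 0.1
```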
ColorCorrect nodes can be especially destructive because they are impossible to reverse, due to the fact that they pull a luminance key on their input to determine the shadows, midtones, and highlights.
This locks the input of the ColorCorrect, because if you make a change above, you are affecting the result of the ColorCorrect
It means that you either need to keep going, adding more nodes and changes on top, or perhaps start over.
Imagine each ColorCorrect is dependent on all of the previous ColorCorrects; this can cause a ripple, or chain-reaction effect, altering the results of any or all of the ColorCorrects if an earlier one is changed.
Of course, at the end of the day, use whatever you need to get the shot done! But be mindful that you might be tangling a knot that you cannot untie later.
My advice would be to try Exposure and Multiply changes for Luminance and Color first, see how far you can get, and save the fancy ColorCorrects as a last resort, when you need to go the extra mile to complete the shot.
In the Demo Nuke script, you will find AOV and Light Group Rebuilds for:
Blender (Junkyard Scene)
Arnold (Fruitbowl)
Octane (Fruitbowl)
Redshift (Fruitbowl)
You will also find sections demoing:
Exposure
A junkyard light group rebuild that I have tweaked with Exposure and Multiply as an example
Saturation demo dealing with saturation of Light Groups
A section breaking down Destructive and Non-Destructive workflows in Nuke.
Downloads
Junkyard
References
Below are some links to the various research I used to create this video:
First, big shout out again to the Exposure Triangle Simulator website:
In this video we aim to understand the problem with refraction (Transmission) and reflections (Indirect Specular) and explore potential solutions. The problem with the Indirect Specular (mirror reflections) and Transmission (Refraction) passes is that they reflect or refract the entire Beauty of the environment, locking that information into 1 pass. It often seems there is not much we can do as compositors to separate those passes further.
Here we have a nightmare scenario from an AOV rebuild point of view: a glass jar full of balloons, which is also reflected in a mirror surface. Everything in the mirror reflection shows up only in the Indirect Specular pass, and everything seen through the glass jar shows up only in the Transmission (Refraction) pass.
We notice as well that objects that end up in the Transmission (Refraction) pass are missing from the Diffuse Pass.
Mirror Reflections, for example ground plane reflections for our subjects, are also limited to the Indirect Specular pass:
What is Transparency?
Transparency is the ability to see through an object or surface to what’s behind it.
It’s as if the object or material is ignored or nonexistent; it does not involve light interacting with the material.
The light passing through is not distorted (Refraction), nor does it scatter or change color (which could be the case with Translucency or Transmission).
Transparency basically has only 1 setting: Amount – or “How much can I see through this?”
Transmission can sometimes cause the light to inherit a color tint as it passes through and interacts with the material. Think of colored liquids or tinted glass.
Refraction is the change in direction and speed of a light ray as it travels through or “Transmits” through different mediums, ie. from Air to Glass or Water or Plastic
The 2 key distinctions between the terms are:
Transmission refers only to light passing through an object
Refraction requires the light to change direction as it passes through
The render pass is doing both things, so some Render Engines decided to call the pass Transmission, because it’s referring to light passing through the material
Other renderers call the pass Refraction, referring to the Change of Direction, “bending” or distortion of the light
Both terms in this case are referring to the same phenomena, just focusing on different aspects of the light’s behaviour
Transmission might even be the more accurate label, because technically a material could have an Index of Refraction of 1.0, meaning no refraction/distortion occurs, but the light still transmits.
All Refractions require Transmission
Not all Transmissions require Refraction
Why is Light Redirected during Refraction?
Light travels through different mediums at different speeds, depending on the density and make up of the medium.
Examples of Mediums: Vacuum (space), Air, Glass, Plastic, Water, gases, etc.
The change of light speed while passing from 1 medium into the next, causes the light to change direction when entering the 2nd medium.
When the light goes from a fast medium, to a slower medium, and back into the fast medium on the other side, it refracts and turns again
This time, instead of one side of the light wavefront slowing down first, one side speeds up first
If the exit angle is the same as the entrance angle, the light wave returns to its original direction; it travels parallel to the original light direction, just offset
Complex shapes create complex caustics, and moving surfaces, like water, create dynamic and organic moving Caustic patterns.
What is Translucency?
Transmissive materials have a Roughness or Glossiness setting that works in the same way as it does on Specular Highlights
Increasing the Transmission Roughness causes the light rays traveling through to scatter / “diffuse” or blur together. Think of Frosted Glass or Plastics.
This effect of “Blurring” or Scattering the Transmitted light is called Translucency
When looking at curved mirrors, it is very obvious that the object we are looking at, is a redirected and distorted view of our surrounding environment
When looking at curved glass, or lenses, the light that we are seeing through the glass is a redirected and distorted view of our surrounding environment.
photo by betül balcı on pexels
photo by shukhrat-umarov on pexels
Concave Refractions
With Refractions, light passes through the material and, depending on the surface shape, changes direction as it refracts.
Concave shapes cause the refracted light to Diverge – spread apart
Looking at them all next to each other, we can see Reflections and Refractions are both re-directing the light rays from another part of the scene. The biggest difference is Reflect = Light Bounces off, Refract = Light passes through.
There is No Spoon
photo by chait goli on pexels
photo by otoniel alvarado on pexels
Diffuse – All Light Interaction with Material / Object
Specular – All Surface Reflections (Bounces)
Transmission – All Pass Through Refractions
Here is an Example Scene with 1 sided Glass on the left, and 2 sided Glass on the right:
We can see the Direct Transmission shows the Light Source through only the 1 sided glass, but not the 2 sided glass
Almost all information in the 2 sided glass is stored in the Indirect Transmission:
Almost all objects that contain glass in 3D are supposed to be modelled with a thickness, meaning 2 or more sides. So more often than not, your Direct Transmission Pass will be empty and all information will go to the Indirect Transmission. This is also why very often it is not even split up and is just rendered combined as Overall Transmission.
Recap #2
Transmission – Light passes through
Refraction – Light redirects.
The CG pass could be named either, but both names often refer to the same phenomenon.
Specular and Transmission are both similar in that they are capturing light redirecting and showing a virtual image of the distorted surroundings
Emission is the light source
Diffuse describes the object itself
Specular Events capture light bouncing off the object’s surface
Transmission Events capture light passing through an object.
These all get separated into their own categories.
Both Specular and Transmission have:
A Direct pass that shows the first reflection or first transmission of light
An Indirect pass showing all subsequent bounces or pass throughs
An Albedo Filter (mask)
Transmissive surfaces like glass are often modelled with 2 sides
Therefore the light usually passes through 2+ sides and ends up in the indirect pass, and the direct Transmission shows up empty
Often rendered as just an overall combined Transmission pass, for convenience.
Incorporating Transmission (Refraction) Into AOV Template
Since most of the Refraction ends up in the Indirect pass, there is no need to split out and adjust separate Direct and Indirect passes, like we do with the Diffuse or Spec. I recommend combining them and keeping the Transmission section slim to save space in the template. I also recommend the layering order: Diffuse, Transmission, Specular, Emission, Other. To me this was the clearest layering.
I updated the Material AOV Rebuild Templates in the FruitBowl Renders for Arnold, RedShift and Octane incorporating the new Transmission / Refraction Section.
See the Downloads Section at the bottom for links to the whole nuke scripts for learning, and template scripts updated per render engine: Arnold, Octane, Redshift.
Handling Planar Mirror Reflections
One approach to rendering Planar Reflections with AOVs is flipping the Camera along the Mirror Plane
Flipping the Camera along the normal of the Mirror Plane will produce a Virtual camera for you to render the Mirrored Virtual Image from the right perspective
If your Object is sitting on top of the 3D origin ground plane, this can be as easy as making an Axis Node, Scaling the Y to -1 and plugging your camera Axis Input into this Axis Node.
This will view your scene from the perspective of your Mirror. In the above image, you can see that after flipping the Camera in -Y, the Nuke-rendered result is aligned with the rendered Indirect Specular pass. We’ll need to do this in the render application on the Lighting side, or pass this camera back to the lighter, in order to render the reflection with full AOVs.
Here is the re-rendered Mirror Camera Perspective of the Armored Mech, with full AOVs, matching the original reflection angle:
What about non-ground plane mirrors?
For all oriented mirror planes, the same concept applies: you want to flip the world from the pivot point and orientation of that card, along its normal-facing angle. This is easier to do in 3D applications, but it can be done in Nuke with a little Matrix Inversion.
I’ve made a tool called MirrorDimension to make this Camera Mirroring super easy. Just stick this node between the Mirror Card in nuke (it must have the mirror’s translations and rotations) and the Camera node. The gizmo acts as an Axis Node and simply flips the world along the orientation of the Card input.
No Settings on the node, just the following instructions:
1.) Plug in the MirrorCard input to the Card or Axis node you would like to be the mirror.
– The scale of the Card does not matter, as long as the orientation (translation/rotation) is correct.
– The Card’s +Z axis is the front of the mirror; point that towards the subject / camera. This is the blue Z arrow in the 3D viewer.
2.) Duplicate your Camera, and plug in the “axis” input of this new Camera to the output of this node.
3.) Your new Camera will be Mirrored according to the plane / card / axis.
4.) Render using this New Camera Setup to get the mirrored CG output.
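For anyone curious about the math the mirroring boils down to, here is a small sketch (not the gizmo’s actual code) of reflecting a point across a plane, which is what the camera mirroring effectively does once the card’s transforms are known. The point, plane, and values are illustrative.

```python
# Sketch of mirroring across a plane: reflect a point x across the plane
# passing through point p with unit normal n (the card's +Z direction).
# x' = x - 2 * ((x - p) . n) * n

def reflect_point(x, p, n):
    """Reflect point x across the plane through p with unit normal n."""
    # Signed distance from x to the plane
    d = sum((xi - pi) * ni for xi, pi, ni in zip(x, p, n))
    # Move the point twice that distance back through the plane
    return tuple(xi - 2.0 * d * ni for xi, ni in zip(x, n))

# Ground-plane mirror (y = 0): equivalent to scaling an Axis node's Y by -1
cam = (1.0, 2.0, 3.0)
mirrored = reflect_point(cam, (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# mirrored == (1.0, -2.0, 3.0)
```

For an arbitrarily oriented mirror card, p and n come from the card’s translation and rotated +Z axis, which is why the card’s orientation matters but its scale does not.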
Before MirrorDimension Node – Original Camera Position:
After Mirror Dimension Node Applied –
You would either do this in your 3D scene and render the AOVs or pass this camera to a Lighter to render from this mirror perspective.
Faking Reflections in Comp
If you suddenly need reflections but have no renders, you can use some of the above techniques to fake your reflections.
If you have your Geometry of the object, try projecting the rgba onto the geometry, and rendering it in nuke from the mirror dimension:
If you have no Geometry but have a Position pass, try using a PositionToPoints node plugged into your render, with the Position input plugged into your shuffled-out Position pass (or selected in the dropdown). You can render your rgb 3D point cloud of the object with the mirror camera and fake some reflections. It won’t be perfect, but in a pinch it can save your ass and add more realism:
So the next question becomes: what can we do if it’s not a Planar Reflection? Or if there are multiple planar reflections, or the surface is curved? And what about Refractions (Transmission)?
Getting Help from Lighters
There is a serious limit to how much we can do in comp when encountering Indirect Specular or Refraction (Transmission) passes. Many times, if this is a big feature of our shot and requires a lot of comp tweaks, we’ll need some help from our Lighting Department.
Julius Ihle – Head of Lighting and LookDev at Trixter
We talk to Julius Ihle – Head of Lighting and LookDev at Trixter for potential Lighting Solutions to these problems.
Julius is super knowledgeable, and introduces us to Light Path Expressions and Open Shading Language, with which lighters can build additional AOVs and help us when the situation calls for it.
Julius is also an online educator and keeps a Lighting Blog discussing exactly these topics, check these tutorials out for more details:
Here is the drawing Julius used to explain how renderers handle Reflection and Refraction Events:
In a nutshell, the render engine keeps track of the light ray path and all the events it undertakes on its journey from the Camera back towards the Light.
Lighters can create new AOVs with custom expressions telling the render engine exactly what parts and what events they want to see in the outputted pass.
There are also Shaders that have been written to Reflect various AOVs, such as Utility passes and the Alpha channel, so that reflections can be more useful for us in comp. Julius has written his own shader to do just that, download it from GitHub:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder
Project Files for this Video:
Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:
I am linking to the gizmo on the Nuke Survival Toolkit github, where you can download the raw file or copy/paste the RAW source code from your browser into nuke:
Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly.
Slide show PDF
Here is a PDF version of my slideshow in case you would like to save for future research or review:
In this tutorial, we go further down the levels of complexity into the most complex category, which includes Albedo and RAW Lighting. These are the smallest components of AOVs, the building blocks, and they unveil how lights, textures, and materials come together to produce the beauty render.
What is Albedo?
An Albedo Map is the base color or texture map that defines either the diffuse color or specular tint of the surface.
Remember that in Physically Based Rendering (PBR), whether a material is Metallic or Dielectric (non-metallic) determines whether the albedo color is used as the Diffuse Color or the Specular Color.
The renderer knows what to use the albedo for based on a black and white metallic map.
The RAW Specular pass is what objects in the scene would look like if they had a 100% reflective chrome shader applied. It renders everything uniformly reflective.
The Specular Filter is like a mask or an albedo multiplier that limits the visibility of the RAW Specular reflective pass to certain areas. The thought process is: might as well render everything reflective, and then decide where and how much of it is needed.
Just like the albedo and the RAW Lighting, RAW Specular and Specular Filter are multiplied together to form the final Specular pass
Specular = RAW Specular * Specular Filter
What is RAW Reflection and Reflection Filter?
RAW Reflection and Reflection Filter are essentially the same thing as RAW Specular and Specular Filter. You might see either term depending on the renderer. Sometimes Specular refers to Direct Specular and Reflection refers to Indirect Specular.
The more important takeaway is that you want to pair the “RAW” pass with its “Filter” or “Albedo” pass. They get multiplied together to equal the final pass:
Reflection = RAW Reflection * Reflection Filter
RAW Direct Diffuse & RAW Indirect Diffuse
Just like the normal Diffuse pass, RAW Lighting passes can also be split into Direct and Indirect Lighting. So you can end up with the RAW Direct Lighting and the RAW Indirect Lighting. Both passes are using the same Diffuse Albedo, so it is only the lighting that is split, not the albedo.
Total RAW Diffuse = RAW Direct Diffuse + RAW Indirect Diffuse
RAW Direct Specular & RAW Indirect Specular
And just like the Diffuse RAW passes, we can also break up the RAW Specular passes into RAW Direct Specular and RAW Indirect Specular.
Again both Direct and Indirect Specular will use the same Specular Filter pass.
Total RAW Specular = RAW Direct Specular + RAW Indirect Specular
Diffuse Equation
Knowing the diffuse equation will help us understand how it is built and, more importantly, the math behind splitting the Diffuse pass into its individual components of Albedo and RAW Lighting. Let’s go over a basic equation and reinforce some math concepts:
x = Albedo
y = RAW Light

Diffuse = ( Albedo * RAW Light )
Diffuse = ( x * y )
In math, certain operations cancel each other out. Just like Subtraction cancels out Addition, Division cancels out Multiplication
( x + y ) - y = x
( x * y ) ÷ y = x
We can take the Diffuse pass and, by dividing by the component we do not want, get the component we do want.
That means if you have the Diffuse pass and one other component, Albedo or RAW Lighting, you can always generate the remaining missing pass.
x = Albedo
y = RAW Light

Diffuse = ( Albedo * RAW Light )
Diffuse = ( x * y )

( x * y ) ÷ y = x
( x * y ) ÷ x = y

Diffuse ÷ Albedo = RAW Light
Diffuse ÷ RAW Light = Albedo
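The identity is easy to verify on a single pixel. Here is a tiny Python sketch with made-up pixel values (this assumes non-zero values; the divide-by-zero problem is covered next):

```python
# One pixel of a Diffuse pass, built from its two components (illustrative values)
albedo = 0.5        # texture color at this pixel
raw_light = 1.6     # lighting arriving at this pixel

diffuse = albedo * raw_light            # the combined Diffuse pass: 0.8

# Given Diffuse and one component, division recovers the other:
recovered_raw = diffuse / albedo        # back to the RAW Light value
recovered_albedo = diffuse / raw_light  # back to the Albedo value
```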
Division Problems
You can divide 0 by any number and you get 0. But if you try to do the reverse, you run into a classic math problem: you cannot divide by 0. The result is undefined… not possible.
0 ÷ x = 0
x ÷ 0 = undefined
This can cause serious problems in nuke when dividing, and we need to be careful.
Using Expression node to test math in nuke
If we use an expression node we can enter the following equation:
0/r
0/g
0/b
0/a
The nuke Expression node has some predefined variables for accessing the channels, so it will carry out this math on a per-pixel basis for each channel.
r = red channel
g = green channel
b = blue channel
a = alpha channel
We can see that once we start dividing by 0-value pixels, we get issues. Nuke’s answer for an undefined result is nan pixels.
nan stands for “Not A Number”
inf stands for Infinity
Testing for nan or inf pixels
We can use another Expression node to write a little tcl expression that will show 1.0 (white) for any illegal-value pixels. If it’s a normal number, it will display as 0.0, or black. This easily and visibly tests whether we have “problem pixels” in our image, such as nan and inf.
isnan() tests for nan (not a number) pixels. You need to enter which channel you want to check inside of the parenthesis, for example isnan(g) and it will display 1.0 for nan values and 0.0 for normal values
isinf() tests for infinity value pixels. You need to enter which channel you want to check inside of the parenthesis, for example isinf(g) and it will display 1.0 for inf values and 0.0 for normal values
We can just add them together to get a full mapping of “illegal values” to warn us
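The same test logic can be sketched in Python with the standard math module (the Nuke Expression node uses its own tcl-style isnan()/isinf(); this just mirrors the idea):

```python
import math

def illegal(v):
    """Return 1.0 for nan or inf pixel values, 0.0 for normal numbers,
    mirroring isnan(ch) + isinf(ch) in a Nuke Expression node."""
    return 1.0 if (math.isnan(v) or math.isinf(v)) else 0.0

illegal(float('nan'))   # 1.0, flagged
illegal(float('inf'))   # 1.0, flagged
illegal(0.25)           # 0.0, normal pixel
```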
So dividing by 0 in nuke can give you illegal values. Luckily, the Merge (divide) operation in the Merge node avoids these issues. It has built-in protections so that 0/0 = 0, and any other number divided by 0 is bypassed, or skipped: it does no math at all and just shows you the A input value.
There is a limitation to the Merge node however. There is only 1 operation for divide, and that is A/B
We know that when we disable nodes in nuke, it defaults to the B input. But if we switch the inputs, we do not get the same result. Meaning we are locked in to our inputs based on whatever image we need to divide by the other image.
So there is no B/A operation, we’ll need to recreate it ourselves
MergeExpression Node
We can use a MergeExpression node, which is basically a combination of a Merge node and an Expression node; in fact, the properties look identical to an Expression node.
The MergeExpression has access to the same variables as the normal Expression node, namely the r, g, b, a variables representing the different channels:
r = red channel
g = green channel
b = blue channel
a = alpha channel
But the MergeExpression also has 2 inputs, and we can choose what input we are sourcing from in our equations with capital letters A and B
A = A input
B = B input
Because we need to specify which red channel we are grabbing from, A or B red channel, we need to be more specific. Therefore:
Ar = A input red channel
Bg = B input green channel
So we specify which input first and then the channel we want.
So now we can do a simple equation of B input divided by A input:
Br/Ar
Bg/Ag
Bb/Ab
Ba/Aa
Fixing the MergeExpression
Unfortunately, the MergeExpression is pure math and does not have the built-in protections the normal Merge node has when it comes to dividing. So if we end up dividing by 0 using the MergeExpression, we will end up with nan and inf pixel values. And that is very dangerous, because it will break the image: you cannot do further math with those values, they are corrupted.
But it’s ok, we can implement the fix ourselves, so that we can have safe values just like the Merge node
The solution is to enter a little tcl expression into the node
Ar == 0 ? Br : Br/Ar
Ag == 0 ? Bg : Bg/Ag
Ab == 0 ? Bb : Bb/Ab
Aa == 0 ? Ba : Ba/Aa
This code basically reads as follows:
First we check whether the A input has 0 values, since that is what we are dividing by, and dividing by 0 gives us a problem.
So the first part asks: does the A input pixel equal 0? If yes, just skip, bypass, and revert to the B input pixel, without doing any math. If the A input pixel is not 0, it proceeds to do the operation B/A and gives the result.
This will fix the issue as all the zero pixels will be skipped. This result is identical to the Merge node set to divide
Except now it is B/A and when we disable the node, it will revert to the B stream that we want.
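For clarity, here is the guarded ternary as a per-pixel Python sketch (same logic as the tcl expression above, with illustrative values):

```python
def merge_divide(b, a):
    """B/A with the Merge-node-style guard: if the divisor (A) pixel is 0,
    skip the math and pass the B pixel through, avoiding nan/inf."""
    return b if a == 0 else b / a

merge_divide(0.8, 0.5)  # normal division: 1.6
merge_divide(0.8, 0.0)  # zero divisor: B passes through untouched, 0.8
```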
You can just copy/paste the code below into your nuke to get the MergeDivide that I created:
Think of Multiply like combining, fusing, mixing, linking, joining, locking
Think of Divide like separating, splitting, unlinking, disjoining, unlocking
Start with the combined pass
Separate with division
Change individual component
Recombine with multiplication
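The four steps above can be sketched on a single RGB pixel. The values are illustrative, and the "change" step here is a simple albedo desaturation, the same idea as the example that follows:

```python
# One RGB pixel of the combined pass (illustrative values)
albedo = (0.9, 0.3, 0.2)      # warm red texture
raw_light = (1.2, 1.1, 1.0)   # slightly warm lighting

# Start with the combined pass
diffuse = tuple(a * l for a, l in zip(albedo, raw_light))

# Separate with division (guarding zero-albedo pixels, as discussed above)
raw = tuple(d if a == 0 else d / a for d, a in zip(diffuse, albedo))

# Change an individual component: desaturate the albedo only
gray = sum(albedo) / 3.0
new_albedo = (gray, gray, gray)

# Recombine with multiplication: the lighting tint is preserved
new_diffuse = tuple(a * l for a, l in zip(new_albedo, raw))
```

The gray object still reads warmer in the red channel than the blue, because only the texture was desaturated, not the lighting.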
How can we use Albedo and RAW Lighting as Compositors?
1.) The first reason to separate albedo and RAW lighting would be to make an adjustment to only the texture and not the RAW Lighting or vice versa.
If you desaturate the Diffuse pass, you risk desaturating the lighting and the texture at the same time. But if you want to desaturate just the object while keeping the tinting of the lighting, you need to separate them first.
Here is an example of the Blender Room where on one side we desaturate the entire Diffuse pass, and on the other we desaturate only the Albedo pass. You will notice that on the right side, the light is still warmer, maintaining the warmth of the sunlight. This is what a gray object would look like in that environment.
left side: desaturating entire diffuse pass right side: desaturating the albedo only
Here is the same example on the VRAY scene, where you can see the desaturation affecting the bounce lighting:
left side: desaturating entire diffuse pass right side: desaturating the albedo only
2.) There are many non-linear Color Corrections or operations that you might specifically want to do while these passes are separated, to get better or cleaner results.
Whether it is to remove light / shadow from a texture CC, or removing texture info so that you can adjust specific lighting. Operations such as:
keying
despilling / desaturating
gamma
ColorCorrect nodes
HueCorrects
HSV node – to pull color keys
3.) The next big reason would be to alter or change the texture in the scene and not need to go back to the CG department.
In this example we replace the picture on the wall with a checkerboard, but it still maintains the lighting of the scene. So you could add noise or blood textures, change billboard ads, etc, and they would still appear to live inside your shot.
left side: original painting right side: replacing the albedo with another image
Different ways to rebuild AOVs at complex level
Variation 01:
Add the Direct, Indirect, and SSS passes together first, generating your Diffuse pass. Then do a divide / multiply with the Albedo pass as a second step.
variation 01 rebuild structure
Variation 02
We could do the albedo divide/multiply on a per-pass basis. So basically, we have the RAW Direct and RAW Indirect split out first. We could make changes to the albedo and return to normal, and then add the Direct, Indirect, and SSS together as a second step.
variation 02 rebuild structure
Variation 03
Similar to variation 02, we do the albedo changes on a per-pass basis first. But instead of immediately reverting back to normal and then plussing the Direct, Indirect, and SSS together, we plus them at the RAW level. The final step is just to multiply the albedo back.
Basically, variation 02 was 3 divides, 3 multiplies, and 2 plusses,
and variation 03 is 3 divides, 2 plusses, and 1 multiply.
variation 03 rebuild structure
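Variations 02 and 03 land on the same result because multiplication distributes over addition. A quick per-pixel sketch with illustrative values:

```python
# One pixel, three RAW diffuse passes sharing the same albedo (illustrative values)
albedo = 0.5
raw_direct, raw_indirect, raw_sss = 1.0, 0.4, 0.1

# Variation 02: multiply each pass back, then add (3 multiplies, 2 plusses)
v02 = albedo * raw_direct + albedo * raw_indirect + albedo * raw_sss

# Variation 03: add at the RAW level, then one multiply (2 plusses, 1 multiply)
v03 = albedo * (raw_direct + raw_indirect + raw_sss)

# Same result either way, by the distributive property
```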
Realistic Proposal for CG AOV Rebuild
The above setups are more for learning, with labels and backdrops to help break down the workflow and structure.
Below is the setup that I gravitate towards when setting up CG Templates. I try my best to apply logical flow and convenience, maximizing organization and flexibility while staying clean and fast. I have space for the albedo / RAW Lighting change, but I keep it off by default and turn it on when needed.
We see all levels of complexity being implemented:
You can find realistic template nuke scripts of these setups for each renderer below in the Downloads section. I exported individual templates for Arnold, Redshift, Octane, and Blender.
I would recommend waiting for future videos, where I will keep expanding the template and making it more robust. But if you are eager, feel free to download it, give it a try, and modify it for your needs. More and better additions will come in future posts.
Downloads:
If you haven’t downloaded the FruitBowl Renders yet, you can do so now:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder
Project Files for this Video:
Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:
Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly.
Slide show PDF
Here is a PDF version of my slideshow in case you would like to save for future research or review:
In this tutorial, we move down the levels of complexity into the Intermediate category and explore breaking Diffuse and Specular apart further into Direct Lighting, Indirect Lighting, and SubSurface Scattering.
What is Direct Lighting?
Direct Lighting is when the Light Source directly illuminates a surface. This could be considered the “first bounce” or the first time the light ray is hitting a surface.
Indirect Lighting is all subsequent bounces of the Light. This is also known as “Bounce Lighting”. The light is often diffused throughout the scene and also picks up some of the surface colors.
Direct and Indirect passes are rendered / calculated separately and combined to equal the beauty render. Direct is only the “first bounce”, or whatever is directly in view of a light source. Indirect is all bounces after the first hit (excluding the first bounce).
In the real world, Direct and Indirect Lighting describe a harsh light source directly hitting a room or object and casting harsh shadows, versus indirect or “bounce lighting”, where the light is aimed at a wall, ceiling, or bounce card and diffused throughout the scene, creating a more ambiently lit environment.
Ray tracing is a render calculation used to find Direct Lighting, shadows, and specular highlights.
Instead of calculating from the Light Source outwards in every direction in the scene, it saves time by going from the Render Camera backwards, only needing to calculate the light rays hitting the camera, those necessary for creating the final image.
It starts from a pixel on the final render and follows the light path until it reflects off or through a surface/material. It then asks “Am I directly illuminated by a light source?” and if so follows the path back to the light source, and determines the distance, intensity, and color of light hitting the surface.
If the area is not hit by direct light, it renders as black. This calculation ends after the “first bounce”.
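The per-hit-point question can be sketched in a few lines. This is a conceptual sketch, not any renderer’s actual code, and the inverse-square falloff and values are illustrative:

```python
def direct_light(visible_to_light, distance, light_intensity):
    """Sketch of the per-hit direct-lighting question: if a shadow ray from
    the surface point reaches the light source, apply inverse-square
    falloff with distance; otherwise the point receives no direct light
    (it renders black in the Direct pass)."""
    if not visible_to_light:
        return 0.0
    return light_intensity / (distance * distance)

direct_light(True, 2.0, 8.0)   # lit surface point
direct_light(False, 2.0, 8.0)  # in shadow: black in the Direct pass
```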
Global Illumination or “GI” involves various techniques to calculate the indirect lighting that occurs when light bounces around in a scene.
This process helps to subtly illuminate shadowed areas and contributes to the overall color and intensity of the scene, especially around areas that are hit by direct lighting.
Many bounces are often allowed, depending on render time and settings. Each bounce inherits color from objects and materials and further distributes light into the scene.
The result is a more realistic and natural-looking shot, as it mimics the complex ways light interacts in the real world.
I mention this Raytracing video from Josh’s Channel that breaks down how raytracing works in the renderer with amazing visuals. The video is entertaining as well as informative, and I highly recommend watching the whole thing if you want to know about state-of-the-art raytracing techniques.
The section I clipped from Josh’s video is between 1:24 and 2:14
Real Time Raytracing / Global Illumination – RTX Graphics
Real Time Global Illumination is becoming the new normal in real-time renderers such as Unreal Engine and Unity. More powerful graphics cards, such as Nvidia’s RTX 3090 or 4090 series, are being built to handle these immense calculations, allowing for real-time bounce lighting and reflections instead of traditionally baked environment lighting. This adds significant realism to scenes and games, and shows just how important this process is to photorealism.
How can we use Direct & Indirect Passes in Compositing?
1.) Contrast / Color Correction
direct / indirect pass decontrast
Individual control of the mids/lows and highlights. Gives more flexibility over the color correction in order to increase or decrease contrast and better match CG to plate.
2.) Filters and FX
Adding glow filters to Direct Lighting pass to “punch” the lighting and adding some realistic camera lens fx. Using direct or indirect lighting passes to drive other FX and filters.
3.) Denoising CG
Indirect passes (and Sub Surface Scattering) are very expensive renders, and often arrive with some unwanted render noise and chattering. Instead of applying denoise techniques to the whole beauty render, applying denoise to only necessary passes can help preserve details and improve final quality of your renders in comp.
CG Denoising Techniques in Nuke
1.) Nuke’s Denoiser
Nuke Denoise Node
We can simply use Nuke’s built-in denoiser. It is the easiest to test, and it does a decent job after some settings adjustments. No plugin or external tool required.
2.) Neat Video Denoise Plugin
https://www.neatvideo.com/
Neat Video is the best denoiser on the market. It is fairly affordable, and chances are your studio already has a license. It can be a bit heavy, so I would recommend pre-rendering the results instead of leaving them live in your comp script.
3.) Motion Vector Denoise
This technique uses the Motion Vector utility pass to distort the previous and next frames’ pixels back into the position of the current frame. Usually you see a 3-frame or 5-frame average (e.g. for 5 frames: the current frame, 2 frames ahead, and 2 frames before).
It’s also common to use a TemporalMedian Node to help smooth out noise chattering over pixels that are not changing that much frame to frame.
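The median step is easy to picture per pixel. This sketch assumes the three frames have already been warped into alignment by the motion vectors, and the pixel values are illustrative:

```python
import statistics

def temporal_median(prev_px, curr_px, next_px):
    """Median of the same pixel across 3 motion-aligned frames: a noise
    spike on one frame gets rejected, while stable values pass through."""
    return statistics.median([prev_px, curr_px, next_px])

temporal_median(0.50, 0.93, 0.51)  # the noisy 0.93 spike is rejected
```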
It’s important to note that we should always try to minimise artifacting and quality loss by isolating degrain techniques to only the problematic render passes, and not every layer or the beauty overall. Typically most of the problematic CG noise is occurring on the Indirect and SubSurface Scattering Passes.
I do believe more tools could be made using these techniques and shared with the community. If you want to have a go at using this technique to come up with different tools that reduce grainy CG while minimizing artifacting, I am sure the Nuke community would be grateful!
Downloads:
If you haven’t downloaded the FruitBowl Renders yet, you can do so now:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder
Project Files for this Video:
Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:
Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly
Finally here is a PDF version of my slideshow in case you would like to save for future research or review:
Following the well received VFX Community Nuke webinar hosted by the Foundry a couple months ago, link here, Josh Parks, Tony Lyons, and Adrián Pueyo wanted to do more. So we decided to start recording more of our conversations.
We’re excited to introduce the VFX Nomads Podcast
Alongside us is Senior Compositor / Compositing Supervisor and good friend Gautama Murcho, who shares his wealth of knowledge, offering insights into his experiences in the VFX Industry.
Please Subscribe if you’d like to have new episodes on your radar. Feel free to post comments, feedback, or questions for us to talk about in the future. We hope you enjoy the first episode!
I recently had the pleasure of teaming up with Josh Parks and Adrian Pueyo in a Foundry Live Panel event on YouTube Live. We talk about advice for people starting in the industry, getting into teaching, how to keep learning, and the importance of networking and community.
Josh, Adrian, and I are friends and former colleagues. I couldn’t be more proud and excited to see them evolve in their careers and see their various contributions to the VFX Compositing Community over the years. It was an honor to talk alongside them in what felt like a typical chat we might have if we all met up in person over lunch.
Back in December we decided to create a space on LinkedIn to be a place for folks to share cool nuke and compositing posts. The LinkedIn news feed can be a little bit of a fire hose of information, and if you don’t save something, it can quickly disappear into the ether. If you’d like to be part of the nuke community there, for articles, tutorials, news, and questions, we’d be happy to have you.
I had an absolute blast speaking alongside Adrian and Josh, and in my opinion, it went by too fast! I hope you enjoy the talk and maybe get a little inspiration out of it. I really hope to chat with them again in the future.
If you’re interested in checking out Josh or Adrian’s websites and courses, here are some links:
In this Bonus video on Material AOVs, I cover Cross Polarization photography, a technique that allows us to separate the diffuse and specular components of everyday objects. I go into detail about the lighting concepts that allow this separation to occur, and how it’s used to gather reference and textures to recreate objects in 3D.
Electromagnetic Spectrum
Visible Light is a section of the Electromagnetic Spectrum
Light / Color is represented in 2D as a Sine Wave with a specific frequency
3D Light Wave Representation
The 2D representation looks a bit different in 3D space, since the light waves can be oriented in any and all directions along the wave’s forward axis.
A light beam with randomly oriented light waves is referred to as Unpolarized Light.
Linear Polarization of Light
Linear Polarization isolates one specific angle of the light wave, only allowing the portion of the light waves oriented in that direction through the filter.
Cross Polarization of Light
Cross Polarization uses 2 Polarizers that are perpendicular to each other, effectively eliminating the light wave passing through.
The first polarizer isolates the light wave to only one orientation
The second polarizer, if parallel to the first, continues to allow the polarized light through, but as it becomes more perpendicular, the light gets dimmer, and eventually blocked entirely
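The dimming between the two polarizers follows Malus’s law, I = I₀·cos²θ, a standard optics result the article doesn’t state explicitly. A small sketch:

```python
import math

def malus(i0, theta_deg):
    """Transmitted intensity through a second polarizer at angle theta
    to the first (Malus's law: I = I0 * cos^2(theta))."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

malus(1.0, 0)    # parallel polarizers: all the polarized light passes
malus(1.0, 60)   # partially crossed: a quarter gets through
malus(1.0, 90)   # perpendicular (cross polarized): blocked entirely
```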
Polarization Upon Reflection
When unpolarized light hits a reflective surface (with a refractive index different than the surrounding medium, such as glass, snow, or water) the specular reflection is polarized or partially polarized to the angle perpendicular to the plane of incidence. (along the surface)
How polarized the Reflection is depends on many factors: angle of incidence, material type, etc.
Brewster’s Angle
At a specific angle, the specular reflection is completely polarized to the angle perpendicular to the plane of incidence.
This angle is known as Brewster’s Angle.
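Brewster’s Angle can be computed from the refractive indices of the two media as tan(θ_B) = n₂/n₁. A small Python sketch (function name and example index values are illustrative):

```python
import math

def brewster_angle(n1, n2):
    """Angle of incidence (in degrees) at which the specular reflection
    off a dielectric is fully polarized: tan(theta_B) = n2 / n1."""
    return math.degrees(math.atan2(n2, n1))

# Air (n ~ 1.0) into water (n ~ 1.33) and glass (n ~ 1.5):
print(round(brewster_angle(1.0, 1.33), 1))  # ~53.1 degrees
print(round(brewster_angle(1.0, 1.5), 1))   # ~56.3 degrees
```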
Unpolarized Diffuse Component
Only the Specular Reflection is affected by Brewster’s Angle polarization
The Diffuse Component is Unpolarized, because it consists of newly emitted photons from excited atoms
This phenomenon only happens when the light is reflected off dielectric materials such as water or glass.
When reflection occurs on a metallic surface, neither a Brewster’s Angle nor refracted light exists
Polarized Specular Reflections
Placing a Linear Polarizer filter in front of the observer will Cross Polarize some Specular Reflections if angled correctly, blocking the polarized reflected light wave from passing through it
This is how Polarized Sunglasses are able to eliminate harsh glares and reflections from dielectric surfaces such as glass, water, snow, etc.
Cross Polarized Photography
If you polarize the light source, the Specular Reflection is also polarized (because it’s a mirror reflection of the light wave).
The Diffuse Component is unpolarized light because it consists of newly created light waves oriented randomly. Adding a second polarizer on the Camera means we can block the Specular Component entirely, depending on the angle of the Polarizers. When the 2 polarizers are parallel, we see Specular + Diffuse, and when they are perpendicular we see only Diffuse.
The Parallel Polarized image gives us the Specular and Partial Diffuse (only the Diffuse Component of that orientation)
The Cross Polarized image negates the Specular, and only shows the other half of the Diffuse Component
To isolate the Specular Component, take the Parallel Polarized image (Specular + Partial Diffuse) and subtract the Cross Polarized image (Partial Diffuse). The Diffuse Components cancel out, and all that is left is the Specular Component
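That subtraction can be sketched in NumPy; the single-pixel RGB values below are made up purely for illustration:

```python
import numpy as np

# Toy single-pixel RGB "images"; real values would come from the photos.
parallel = np.array([0.9, 0.7, 0.6])  # Specular + partial Diffuse
cross    = np.array([0.3, 0.2, 0.1])  # partial Diffuse only

# The shared diffuse term cancels out, leaving only the specular component.
specular = np.clip(parallel - cross, 0.0, None)  # -> [0.6, 0.5, 0.5]
```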
This Cross Polarization Photography lets CG Artists collect photogrammetry data of everyday objects and recreate them in 3D with accurate Diffuse and Specular Maps for Physically Based Rendering
What seems like purely theoretical Diffuse/Specular Render Pass separation in CG is actually a lighting phenomenon that can be separated into Diffuse and Specular Components in the real world
Notice that Metallic Materials have no real Diffuse Color to them; they show up as completely black in the Cross Polarized result. Metals are entirely surface-level Specular Reflections
Occasionally, the Diffuse Components of the Parallel Polarized and Cross Polarized Images are slightly different (brighter, or a shift in color, for example)
In this case, when we subtract the Cross Polarized result from the Parallel Polarized result, we are left with leftover color information or artifacts. The Specular Component can be desaturated to compensate for those color artifacts
Remember that in Dielectric Materials the Specular Component is the same color as the light source, but Metals can sometimes tint the Specular color depending on the type of Metal
Light Stage: Cross Polarization
The light stage used in films captures evenly lit, cross polarized textures of various facial expressions.
This helps separate Diffuse and Specular and aids in tracking features of the face
References:
Here are some great websites that go into more detail about polarizations:
In this post we are going to focus on the Material AOVs Category.
Levels of Complexity
There are different levels of complexity to rebuilding Material AOVs into the beauty, and it all depends on how much flexibility and control you want with the cost of complexity and speed.
Simple
Diffuse
Specular
Emission
Other – Refraction / True Reflection
Intermediate
Diffuse
Direct Diffuse
Indirect Diffuse
Sub Surface Scattering
Specular
Direct Specular
Indirect Specular
Reflection
Coat
Sheen
Emission
Other – Refraction / True Reflection
Complex
Diffuse
Direct Diffuse
Indirect Diffuse
Sub Surface Scattering
Raw Diffuse
Albedo / Color / Texture
Specular
Direct Specular
Indirect Specular
Reflection
Coat
Sheen
Raw Specular
Albedo / Filter / Texture
Emission
Other – Refraction / True Reflection
Diffuse, Specular, and Emission are the Foundational Categories, and the complexities are subdivisions of the Diffuse and Specular Categories
So let’s first focus on the Simple category of Diffuse, Specular, and Emission and really break those down and understand them fully. This will make the future subdivisions easier, familiarise us with terms and concepts, and help us have a grounded foundation of knowledge for what we are adjusting when using these passes.
The full presentation from the video can be downloaded here in pdf format, for those who want to keep or study it offline:
Emission is any object, material, or texture that is actively emitting light into the scene
This includes any Lights, super-heated metals, or Elemental FX like fire / sparks / lightning / magic, etc.
Neon Lights, Screens, Monitors are all examples of real life Emission objects
Diffuse vs Specular
Specular – Surface Level Reflections
Diffuse – Light passes through the surface and interacts with the material at a molecular level. Scattering and Absorption allow certain colors to re-exit and scatter into the scene
Let’s talk about Specular and Surface-level Reflections first
Specular
Law of Reflection
The angle of incidence is equal to the angle of reflection
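In vector form, this law is often written as R = I − 2(I·N)N, reflecting the incoming direction about the surface normal. A small NumPy sketch (the helper name is my own):

```python
import numpy as np

def reflect(incident, normal):
    """Mirror-reflect a direction vector about a surface normal:
    R = I - 2 (I . N) N, with the normal normalized to unit length."""
    n = normal / np.linalg.norm(normal)
    return incident - 2.0 * np.dot(incident, n) * n

# A ray coming down at 45 degrees onto a flat ground plane (normal = +Y):
incoming = np.array([1.0, -1.0, 0.0])
reflected = reflect(incoming, np.array([0.0, 1.0, 0.0]))
# The angle to the normal is preserved: the ray leaves at 45 degrees upward.
```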
Smooth Surface – Specular Reflections
Light Beam = a bundle of parallel light rays
Light Beam remains parallel on incidence and parallel on reflection
Planar Mirror and Virtual Image
An Image created by planar specular reflection that does not actually exist as a physical object is referred to as a Virtual Image.
The Virtual Image appears to be located “behind” the mirror
Virtual Image distance = Object to Mirror + Mirror to Observer.
Speculum is the Latin word for “mirror”, which is where “Specular” derives from
The people are witnessing a virtual image of themselves looking back, that is double the distance from them to the mirror. The light travels from them -> to the mirror, and then from the mirror -> back to their eye
Notice the reflected virtual image of the chess piece is in focus, even though the real piece (in the foreground) is out of focus. The camera lens is respecting the mirror’s virtual image distance, even though the mirror itself is out of focus.
Here you can see a ground plane mirror appearing to invert the tree in its virtual image
Rough Surface – Diffused Reflection
The uneven surface causes the Incident Rays to hit at different angles
The outgoing reflection rays scatter in different directions
Here you see some examples of different CG materials along the Roughness / Glossiness spectrum
Wet Surface Reflections
When a surface is wet, the water fills the gaps and flattens the surface, causing a more specular reflection
Microscopic Surface Details
In these slides and examples we are discussing surfaces at a microscopic level. You might think a piece of paper looks smooth, but under a microscope it has quite a bit of roughness to it, which is what makes it so evenly lit and diffuse.
Metallic vs Dielectric Surfaces
The diffuse and specular terms describe two distinct effects going on. Light interacts with materials differently depending on whether the material is a metal or a non-metal (Dielectric)
Dielectric – Absorbs and Scatters light
Metallic – Does not Absorb light. Only Reflects
Dielectric (Non-Metal)
Light penetrates the surface level and the molecules of the material absorb and scatter the light within
The light photons excite the atoms they hit below the surface. Some of the light is absorbed, and this energy is converted to heat. Then new light rays (photons) are emitted from the excited atoms. Those might excite nearby atoms or exit the surface as new photons. These new photons are the same color as our material.
The Base Color Texture (Albedo Map) – determines the color of the diffusely scattered photons from excited atoms. It’s the color that is scattered back out and not absorbed by the material
Metallic
Metals do not allow light to penetrate the surface and do not absorb it. They only reflect light at the surface
Metals can be thought of as positively charged ions suspended in a “sea of electrons” or “electron gas”. Attractions hold electrons near the ions, but not so tightly as to impede the electrons’ flow. This explains many of the properties of metals, like conductivity of heat and electricity
The incoming photon does not excite the atoms, but bounces directly off the electron gas
The Base Color (Albedo) is used to describe the color tint of the specular reflection
“Electron Gas” Model
Notice the Specular Reflections are tinted a certain color depending on the metal type:
On Dielectric Plastic balls, the material color changes, but notice the specular highlights are the same color, maintaining the color of the light or surrounding environment.
Comparison of a Metallic vs Dielectric Material in CG
Chrome Sphere and Diffuse Ball
Used as a reference to see what something 100% Smooth and Metal (Specular) and 100% Rough and Dielectric (Diffuse) looks like in the scene.
The diffuse component includes light that penetrates the surface and interacts with the material’s molecules. This happens in different ways in the real world
Transmission
Light passing through the material / surface
Can be thought of as “transparency”
Refraction
When light changes angle as it passes through different materials or mediums
Absorption
When certain wavelength colors of light get absorbed by the material
Scattering
When light is dispersed in many directions as it comes into contact with small particles or structures in the material
Simplified Diffuse Calculation
When the distance that light travels beneath the surface is insignificant and negligible, the calculation can be simplified by the renderer and just calculated at the surface point where the light hits. It uses the Base Color Texture (Albedo) as the Diffuse Color that will scatter.
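A hedged sketch of what such a simplified (Lambertian-style) surface-point calculation might look like; the function name and values are illustrative, not any particular renderer’s code:

```python
import numpy as np

def lambert_diffuse(albedo, normal, light_dir, light_color):
    """Simplified surface-point diffuse: albedo * lightColor * max(0, N.L).
    Assumes normal and light_dir are unit vectors pointing away from the surface."""
    n_dot_l = max(0.0, float(np.dot(normal, light_dir)))
    return np.asarray(albedo) * np.asarray(light_color) * n_dot_l

# A red-ish surface lit head-on by a white light simply returns the albedo:
out = lambert_diffuse([0.8, 0.2, 0.2],
                      np.array([0.0, 1.0, 0.0]),
                      np.array([0.0, 1.0, 0.0]),
                      [1.0, 1.0, 1.0])
```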
Sub Surface Scattering
When the distance the light travels beneath the surface of the material is significant, the interior scattering must be calculated. This is referred to as Sub Surface Scattering (SSS)
Physically Based Rendering Terminology
Albedo
Base Color Texture Map
On Dielectrics (non-metal) refers to color of material
On Metals, refers to the color tint of the specular reflection
Texture map is without highlights, shadows, or ambient occlusion
Metalness Map
Defines which areas are metallic or not (and will use the Albedo Color differently). Usually black or white
Roughness (Glossiness) Map
How blurry or how sharp the reflection will be
Real life objects often have a diffuse and a specular component
Diffuse describes the color of the billiard balls, but the specular highlights are all the same color (reflecting the color of the light above the table)
Iridescence
There are also Iridescent materials that change specular color depending on viewing angle.
Iridescence is a kind of structural coloration due to wave interference of light in microstructures or thin films.
Nuke – Simple Material AOV setup
We can break our fruit bowl render into the 3 simple components: Diffuse, Specular, and Emission. The layers look like this:
You can download the Nuke script shown in the Tutorial. I created the mini setups for the 3 different renderers, Arnold, RedShift, and Octane, dividing the Beauty render up into the 3 Diffuse, Specular, and Emission Components, and recombining them.
The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the Fruitbowl Renders, there are a couple of ways to combine them:
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder
This will help the Read nodes auto-reconnect to the SourceImages for you.
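The additive rebuild these scripts perform (a chain of Merge “plus” operations in Nuke) can be sketched in NumPy with stand-in arrays; the values below are placeholders, not the actual renders:

```python
import numpy as np

# Stand-in 2x2 RGB "renders"; in practice these come from the AOV EXRs.
diffuse  = np.full((2, 2, 3), 0.4)
specular = np.full((2, 2, 3), 0.2)
emission = np.full((2, 2, 3), 0.1)

# The simple rebuild is purely additive, matching a chain of Merge (plus) nodes:
beauty = diffuse + specular + emission
```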
Recap
Emission / Illumination materials emit light
Specular and Diffuse can be separated by Surface Level Reflections and below surface Material Interactions
Each individual light ray follows the Law of Reflection.
The smoother a surface is, the more mirror-like the specular reflection will be.
The roughness of a surface will cause the reflected rays to scatter, and the reflection to be blurred.
Metallic materials do not allow light to enter the surface. They only reflect light
Dielectric materials allow light to enter the surface. Light rays are refracted, absorbed, and scattered by the material’s molecules. Certain color wavelengths re-exit the surface in random directions, which is what we perceive as the material’s color
Albedo – Base Color Texture. On Dielectrics – color of material | On Metals – color tint of the specular reflection.
Sub Surface Scattering is when light below the surface travels a significant distance before re-exiting
Iridescent materials tint the color of the specular reflection depending on viewing angle.
References, Resources, Credits
Firstly, thanks to Pexels for providing such a good resource for stock reference images
I did a hell of a lot of research on this topic before creating the video. I really encourage you to dig a little further and explore the topics more using these great resources: