Saturday 29 September 2018

Implementing PBR in "EveryRay - Rendering Engine"

Sample scene with different PBR materials.

This time we will talk about something really interesting and useful - Physically Based Rendering. I am not going to explain everything about this topic, as it is huge and there is already a ton of information on the web. However, I'd like to share my experience with adding PBR to my engine and the struggles that I ran into during the process.

But what is PBR? If you only know that it makes things fancy and realistic in an application, I will briefly give you an explanation. By this time you should realize that one of the main goals in computer graphics is to simulate "proper" light behavior. Somehow we want our objects in the scene to interact with light sources like in the real world. We want to shade them, have reflections and do many other complicated things. And it so happened that J. Kajiya introduced the general rendering equation in 1986:
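In its usual form it reads:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

where L_o is the outgoing radiance, L_e the emitted radiance, f_r the BRDF, L_i the incoming radiance and n the surface normal.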


And since then people have been trying to mimic and solve it for all kinds of purposes. Of course, games became 3D much later, but they also needed lights. People have found partial solutions and approximations of that equation for real-time applications (e.g. Lambertian diffuse), but they were not 100% accurate. Just look at that nasty integral over the hemisphere above! How the heck are we supposed to compute it in real time? This task seems to be impossible...

But approximations... They are everywhere in this world... Developers and researchers did not stop at Lambert and other simplified shading models, which is why, in the early 2000s, the PBR shading model was introduced. In a way, it is another solution to that equation which takes some physical properties into consideration: conservation of energy, microfacet surface scattering, etc. So it is more "physical" and thus more accurate and computationally complicated. And fortunately, it was popularized for real-time applications by companies like Epic Games. If you are not familiar with the concepts of radiance, irradiance, flux or BRDF, you can read about them here. By the way, I have used the Cook-Torrance BRDF model, as it is very popular in rendering engines. (Read about it here)

That's how our BRDF radiance equation looks now. The D, F and G terms are explained in the link above.
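For reference, the reflectance equation with the Cook-Torrance BRDF is usually written like this:

L_o(p, \omega_o) = \int_{\Omega} \left( k_d \frac{c}{\pi} + \frac{D \, F \, G}{4 \, (\omega_o \cdot n)(\omega_i \cdot n)} \right) L_i(p, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

where c is the albedo, k_d is the diffuse fraction of the incoming energy, and D, F, G are the distribution, Fresnel and geometry terms.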

So as you can see, we have diffuse and specular components. We can separate this integral into two integrals and solve them separately. Of course, for direct lighting (from direct sources) it is not that big of a deal: we just compute a Lambert diffuse term for the left integral and the specular term with all the tricky but straightforward formulas: D for the GGX distribution, F for the Schlick-Fresnel approximation and G for Smith's geometry term. (So if things are already unclear, make sure to read the theory!) In general, this is just some math that can be calculated in our shader. However, the results are not satisfying :(
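To make that more concrete, here is a minimal HLSL sketch of those three terms and of how they can be combined for a single light. It is not a copy of my engine's shader - the variable names (albedo, roughness, metallic, F0 and so on) are just illustrative:

static const float PI = 3.14159265f;

// D: GGX / Trowbridge-Reitz normal distribution function
float DistributionGGX(float3 N, float3 H, float roughness)
{
    float a2 = roughness * roughness * roughness * roughness;
    float NdotH = max(dot(N, H), 0.0f);
    float denom = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * denom * denom);
}

// F: Schlick's approximation of the Fresnel term
float3 FresnelSchlick(float cosTheta, float3 F0)
{
    return F0 + (1.0f - F0) * pow(1.0f - cosTheta, 5.0f);
}

// G: Smith's geometry term with the Schlick-GGX remapping for direct lighting
float GeometrySchlickGGX(float NdotX, float roughness)
{
    float r = roughness + 1.0f;
    float k = (r * r) / 8.0f;
    return NdotX / (NdotX * (1.0f - k) + k);
}

float GeometrySmith(float3 N, float3 V, float3 L, float roughness)
{
    return GeometrySchlickGGX(max(dot(N, V), 0.0f), roughness) *
           GeometrySchlickGGX(max(dot(N, L), 0.0f), roughness);
}

// Per-light contribution: Lambert diffuse + Cook-Torrance specular
float3 DirectRadiance(float3 N, float3 V, float3 L, float3 lightColor,
                      float3 albedo, float roughness, float metallic, float3 F0)
{
    float3 H = normalize(V + L);
    float  D = DistributionGGX(N, H, roughness);
    float3 F = FresnelSchlick(max(dot(H, V), 0.0f), F0);
    float  G = GeometrySmith(N, V, L, roughness);

    float NdotL = max(dot(N, L), 0.0f);
    float NdotV = max(dot(N, V), 0.0f);
    float3 specular = (D * F * G) / max(4.0f * NdotV * NdotL, 0.001f);
    float3 kD = (1.0f - F) * (1.0f - metallic); // energy conservation

    return (kD * albedo / PI + specular) * lightColor * NdotL;
}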

Why? Well, we are calculating radiance for direct lighting only. But look at our integral again. It is taken over all directions (ω) in the hemisphere. Pretty unrealistic to compute, I agree... But it doesn't mean that we do not want to try! Let's dive into the more complicated part of the PBR implementation - IBL, or Image Based Lighting.

Probably you have already realized that we will be working with environment maps / cubemaps. They are a nice and relatively cheap way to calculate the diffuse part of environment lighting with a so-called irradiance map. Basically, it is a pre-convoluted cubemap: for every possible surface normal we average the environment samples over the hemisphere around that normal. Fortunately, you can generate an irradiance map with a third-party tool, such as CubeMapGen by AMD, or you can precompute it yourself in code.
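If you precompute it yourself, the convolution for a single output direction (the normal a cubemap texel represents) can be sketched roughly like this in HLSL - EnvironmentMap, LinearSampler and the step size are illustrative:

static const float PI = 3.14159265f;

TextureCube  EnvironmentMap;
SamplerState LinearSampler;

float3 ConvolveIrradiance(float3 N)
{
    // build a tangent basis around the normal
    float3 up    = abs(N.z) < 0.999f ? float3(0.0f, 0.0f, 1.0f) : float3(1.0f, 0.0f, 0.0f);
    float3 right = normalize(cross(up, N));
    up = cross(N, right);

    float3 irradiance  = 0.0f;
    float  sampleCount = 0.0f;
    const float delta  = 0.025f;

    for (float phi = 0.0f; phi < 2.0f * PI; phi += delta)
    {
        for (float theta = 0.0f; theta < 0.5f * PI; theta += delta)
        {
            // spherical -> tangent space -> world space
            float3 tangentSample = float3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
            float3 sampleDir = tangentSample.x * right + tangentSample.y * up + tangentSample.z * N;

            irradiance  += EnvironmentMap.SampleLevel(LinearSampler, sampleDir, 0).rgb * cos(theta) * sin(theta);
            sampleCount += 1.0f;
        }
    }
    return PI * irradiance / sampleCount;
}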

Moving on, we should deal with the specular part. It consists of 2 textures: a radiance map and an integration map. The second one is just a lookup texture for computing the BRDF, so you can download it from the Internet. However, the first one must consist of several pre-filtered mip-maps. Your shader will calculate the indirect specular with something like the split-sum approximation. That's why, for different roughness values on your material, you would use different mip-maps of your cubemap. The screenshot below should give you an intuition.

Image taken from https://polycount.com/discussion/139342/introducing-lys-open-beta
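At shading time, the two textures are combined roughly like this (a hedged sketch; RadianceMap, IntegrationMap and the mip count are placeholder names, not necessarily what my shader uses):

TextureCube  RadianceMap;    // pre-filtered per-mip for increasing roughness
Texture2D    IntegrationMap; // 2D BRDF lookup texture
SamplerState LinearSampler;

float3 IndirectSpecular(float3 N, float3 V, float3 F0, float roughness, float mipCount)
{
    float3 R = reflect(-V, N);
    float NdotV = saturate(dot(N, V));

    // pick a mip according to roughness
    float3 prefiltered = RadianceMap.SampleLevel(LinearSampler, R, roughness * (mipCount - 1.0f)).rgb;

    // scale/bias from the BRDF integration LUT
    float2 envBRDF = IntegrationMap.SampleLevel(LinearSampler, float2(NdotV, roughness), 0).rg;

    return prefiltered * (F0 * envBRDF.x + envBRDF.y);
}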

The mip-map generation process required some additional code for my engine so that I could load my radiance map into the PBR shader with all mips at once. For example, here is my IBLRadianceMap class:


And that is it! I will leave my final PBR shader/effect for you to explore. I understand that there may be many improvements, but for now, I am really satisfied with the results. In the end, I can load fancy 4K textures and get the realism out of them :)


Links:
1) https://learnopengl.com/PBR/Theory
2) https://blog.selfshadow.com/publications/s2013-shading-course/

TOREAD: "Physically Based Rendering: From Theory to Implementation" by G. Humphreys and M. Pharr

Sunday 19 August 2018

CgFX Shader Compilation


Another short blog post!

I have finally decided to publish my small CgFX shader compilation project on GitHub. It consists of different shaders (e.g. Fur, Deformable Surface Tessellation, Fast Subsurface Scattering, etc.) that you can check out. Actually, I have talked about some of them here. However, I also thought it would be a good idea to put the most interesting ones I'd written in Unity in one place. There are currently 8 switchable scenes, but maybe I will add more if I have time and interest. I hope you find some of them useful! Good luck!

Links:

Monday 16 July 2018

Have you heard of "ImGui"?

Hello, everyone! This short post won't be about graphics, however, it may be very useful to everyone who develops 2D/3D applications. I'd like to show you one cool and handy thing - ImGui. "What's that?" you may ask. Well, it is an "Immediate Mode Graphical User interface for C++ with minimal dependencies" :) Some of you are already familiar with it, but if you are not, I will quickly go through its main features!

ImGui in my "EveryRay - Rendering Engine"

Is your current UI text-based? Are you binding 75 key combinations to simple actions? Then maybe you should consider integrating a simple GUI framework into your environment! And ImGui is a perfect candidate for that! It is very compact and lightweight, it already has a lot of functionality and, most importantly, it is free and pretty easy to integrate if you know your codebase! You can read about it in detail on the GitHub page (here), but I will just mention some of the features it has out of the box: buttons, text editing, color picking, list selection, image loading, sliders, plot generation and many more! You can also customize your windows by dragging, resizing, changing colors and other properties. ImGui sends draw calls in an optimal way, so your performance will not drop radically! Moreover, you can call ImGui from any part of your code.


Just look at my previous post about Cascaded Shadow Mapping and you will see how boring my UI was there. And now look at the image above again - that is how my demo looks now! I hope I have convinced you:)

Links:
1) https://github.com/ocornut/imgui

Tuesday 10 July 2018

Implementing Cascaded Shadow Maps in "EveryRay - Rendering Engine"


For the last couple of weeks I've been working on a small rendering engine - "EveryRay - Rendering Engine". I have decided to start building it because I thought it could be useful both for practicing graphics programming and C++ skills on a more "serious" level. The engine currently supports DirectX 11 and has more or less all essential parts. I should say that I have used the book "Real-Time 3D Rendering with DirectX and HLSL: A Practical Guide to Graphics Programming" by Paul Varcholik as a reference for all fundamental aspects of making a rendering engine/framework. However, my main idea was to be able to implement various techniques and algorithms from different publications. (In a way, I do not want to build an advanced engine, but instead to have a framework where I can test things and try to put them into practice).

Thankfully, in the book there is also an introductory chapter about shadow mapping. After getting acquainted with the structure suggested by the author, I thought about implementing cascaded shadow mapping. Yes, the technique is not new; however, it is still very popular among developers. It has its own advantages and disadvantages, but in the end, I was satisfied with the results. My implementation is not perfect (I will explain why and where), as I was only focused on the key logic of the technique. Down below I am going to explain my steps in detail!

Firstly, I assume that you know how basic shadow mapping works. I've actually written a short post about shadows before, so you can check it out or google:) 
In general, we can have directional and point lights. And creating shadow maps for them requires 2 different types of projections: orthographic and perspective, respectively. So make sure to prepare your frustums for orthographic projection if you want to use cascaded shadow maps. 

Ok, but why do I need cascaded shadow maps at all, and why do I need an orthographic projection for them? Well, if you have worked with shadow maps before, you know that the biggest problem with them is aliasing. When we project our shadow from an object, we may notice jaggies around the edges of the shadow. And it is logical - we are trying to project depth information onto pixels! Even if we increase the resolution of our shadow map, we will still see perspective/orthographic aliasing. Moreover, the farther we move our light source, the more pixelated our shadow will be! And imagine if we want to create a shadow map from an "infinite" source, like a directional light... Look at the comparison below (the left one is a standard shadow map with a 2K texture and the right one is CSM, also with a 2K texture):


As for orthographic projection - if you have not guessed, it is common to have it for directional lights. That's why cascaded shadow mapping uses that type of projection. 

So people came up with an idea: what if we render several shadow maps according to our player's camera position? We want the most accurate shadows near the player, and we do not care about the quality of long-distance shadows. Let's create "cascades"!

3 camera frustums

We can bind our projectors (I will call them projection boxes) to our camera frustum. In addition, we are going to create several connected frustums (as in the image above) and bind a projector to each of them! So the first and smallest one will be the most accurate! Of course, we can change the distances between the frustum planes (the so-called "clipping planes") and thus render shadows for our specific needs. Sounds easy! We are just doing the same thing as with standard shadow mapping, but several times.

And what's the catch? Well, the most difficult part of this algorithm, in my opinion, is binding your projection boxes to the frustums. They should be able to rotate with the light source, but still fit tightly around the frustums' geometry (think of them as AABB volumes). Also, they should be able to move with your camera; however, they should not rotate with it. Sounds a bit complicated, but a couple of proper matrix multiplications and a bit of math will solve the problem. I will try to demonstrate the behavior from the top-down view below:



So as you can see, our projection boxes are always rotated in the direction of the directional light. Even when our camera (and so its frustum) rotates, the boxes do not! However, the boxes are still bound to their own frustum cascades. If you look at the sides of each projection box, you will see that they always lie on the frustum corners. That is why, on every update, we should recalculate the positions and properties, like width, length and height, of our projection boxes. Here is the code that I have used:

I should also mention that there are actually two known ways of binding projection boxes: fitting to the scene and fitting to the cascade. You've seen the second one above. Below is the fitting-to-the-scene method that I used in my demo. It is a bit easier to configure, but there is a drawback as well - the overdraw factor is higher compared with the other method, as we are wasting more resources. However, I was satisfied with the results anyway :)


Once you get the projection matrices, you can pass them into the shader. Also, do not forget to pass your clipping planes' distances. Without them, the shader would not know when to switch to another cascade! You can simply do it like this in the pixel shader:
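Roughly along these lines (a simplified sketch rather than a copy of my shader; the cascade count, texture names and bias values are illustrative):

Texture2D ShadowMap0;
Texture2D ShadowMap1;
Texture2D ShadowMap2;
SamplerComparisonState ShadowSampler;

float2 ShadowTexelSize;  // 1.0 / shadow map resolution
float  ShadowBias;
float2 CascadeDistances; // far distances of the first two cascades

float SampleShadowPCF(Texture2D shadowMap, float4 shadowCoord)
{
    float3 proj = shadowCoord.xyz / shadowCoord.w;
    float shadow = 0.0f;

    // simple 3x3 PCF kernel
    [unroll]
    for (int x = -1; x <= 1; x++)
    {
        [unroll]
        for (int y = -1; y <= 1; y++)
        {
            shadow += shadowMap.SampleCmpLevelZero(ShadowSampler,
                proj.xy + float2(x, y) * ShadowTexelSize, proj.z - ShadowBias);
        }
    }
    return shadow / 9.0f;
}

// inside the pixel shader: pick a cascade by comparing view-space depth to the clipping distances
float GetShadow(float viewDepth, float4 shadowCoord0, float4 shadowCoord1, float4 shadowCoord2)
{
    if (viewDepth < CascadeDistances.x)
        return SampleShadowPCF(ShadowMap0, shadowCoord0);
    else if (viewDepth < CascadeDistances.y)
        return SampleShadowPCF(ShadowMap1, shadowCoord1);
    else
        return SampleShadowPCF(ShadowMap2, shadowCoord2);
}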


Nothing special there: we are just comparing the clipping distances to the pixel depth, then sampling the proper shadow map and, finally, changing the diffuse value (pixel color) for our simple lighting. As you can see, I am using PCF filtering to smooth the shadow edges, but you can use more complex algorithms. I won't explain them here, because this post is not about shadow filtering :) Lastly, you can color your cascades, as it really helps when you are trying to debug your shadows.


And that is it, I guess! Now you have an understanding of the basic CSM algorithm. But is it perfect? No, there is always room for improvement. CSM itself was not created yesterday, so there are many things that can be added to it.

For instance, we may store our depth maps in a texture array if we are building for modern machines (I suppose we are not interested in DirectX 9 era devices anymore). In addition, I have seen some implementations where the calculations were done in the shader itself (calculation of projection boundaries, etc.). I believe it can improve performance if you are worried about your CPU time. Another problem that is not fixed in my setup, but actually not very difficult to solve, is shimmering. When we move and render our depth maps every frame, we may notice some stuttering of our shadows. It makes sense, because we are changing depth values all the time. That is why you should stabilize your shadow map texels somehow (for example, by snapping the projection to texel-sized increments). Also, you might want to smooth the cascade edges between shadows. Sometimes abrupt changes in shadow quality are very noticeable to the player. And, finally, you can use proper filtering techniques. PCF is quite old already, so there are several modern solutions that you should consider if you want to filter your shadows :)

Links:
1) https://docs.microsoft.com/en-us/windows/desktop/dxtecharts/cascaded-shadow-maps
2) https://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf
3) https://mynameismjp.wordpress.com/2013/09/10/shadow-maps/
4) https://mynameismjp.wordpress.com/2015/02/18/shadow-sample-update/
5) http://the-witness.net/news/2013/09/shadow-mapping-summary-part-1/

Other useful resources:
1) "Cascaded Shadow Maps" by Wolfgang F.Engel, from ShaderX5, Advanced Rendering Techniques

Saturday 26 May 2018

Fancy Shaders - Part 3: Deformable Surface with Tessellation

Snow deformation in "Assassin's Creed III" by Ubisoft.
Image is taken from lanoc.org

Hey! This is another post in the Fancy Shaders series, and I want to touch on another cool topic - surface deformation. It's not something used in every game these days, but it is getting more and more popular. So what's special about it?

Well, there may be certain situations when we want to change the surface of the ground and make it as realistic as possible. Just think about footsteps in snow or sand! Many games have used different tricks to achieve that effect, but it looked fake most of the time. However, since DirectX 11 introduced tessellation, developers have started using other methods to solve that problem. Today I'm going to briefly talk about a common and quite simple way to deform surfaces in real time using DX11 and Unity. Of course, I'm not going to talk specifically about tessellation, because it's a big topic, but do not worry! Unity can do a lot of things for you!

In general, with tessellation enabled in the pipeline, we can increase the number of vertices and polygons of a mesh by subdividing its parts. We need to do that if we want to move the triangles and reshape the mesh in order to fit the "collider" (a foot, for instance). Fortunately, it's easy to enable tessellation with Unity's Surface Shaders. For example, below is the implementation of distance tessellation from Unity's documentation:
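A trimmed-down version of that snippet looks roughly like this (property names and distances are placeholders, so check the manual for the full listing):

Shader "Custom/DistanceTessellation" {
    Properties {
        _Tess ("Tessellation", Range(1, 32)) = 4
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _DispTex ("Displacement Map", 2D) = "gray" {}
        _Displacement ("Displacement", Range(0, 1.0)) = 0.3
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert addshadow vertex:disp tessellate:tessDistance
        #pragma target 4.6
        #include "Tessellation.cginc"

        struct appdata {
            float4 vertex   : POSITION;
            float4 tangent  : TANGENT;
            float3 normal   : NORMAL;
            float2 texcoord : TEXCOORD0;
        };

        float _Tess;

        // more subdivision close to the camera, less far away
        float4 tessDistance(appdata v0, appdata v1, appdata v2) {
            float minDist = 10.0;
            float maxDist = 25.0;
            return UnityDistanceBasedTess(v0.vertex, v1.vertex, v2.vertex, minDist, maxDist, _Tess);
        }

        sampler2D _DispTex;
        float _Displacement;

        void disp(inout appdata v) {
            float d = tex2Dlod(_DispTex, float4(v.texcoord.xy, 0, 0)).r * _Displacement;
            v.vertex.xyz += v.normal * d; // we will subtract instead of add to get hollows
        }

        sampler2D _MainTex;
        struct Input { float2 uv_MainTex; };

        void surf(Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}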


Ok, we prepared our surface for deformation, but what's next? Have you ever tried to work with heightmaps? Actually, we can use the same method for making hollows! We get information from a heightmap texture and then subtract our normal * displacement from the vertex instead of adding it. Alright, that was not difficult, but how do we generate that texture in real time? We can do that with one extra shader and a script. In the shader we are going to simulate a brush, like in terrain editors:
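A minimal sketch of such a brush pass could look like this (_HitUV, _BrushRadius and _BrushStrength are illustrative names of my own; the pass renders into the deformation render texture):

// fragment shader that accumulates a circular "footprint" into the deformation map
sampler2D _MainTex;     // previous state of the deformation map
float2 _HitUV;          // UV of the raycast hit on the ground
float  _BrushRadius;    // footprint size in UV space
float  _BrushStrength;  // how deep one "stamp" goes

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

float4 frag(v2f i) : SV_Target
{
    float previous = tex2D(_MainTex, i.uv).r;

    // smooth radial falloff around the hit point, like a terrain-editor brush
    float dist  = distance(i.uv, _HitUV);
    float brush = (1.0 - smoothstep(0.0, _BrushRadius, dist)) * _BrushStrength;

    float deformed = saturate(previous + brush);
    return float4(deformed, deformed, deformed, 1.0);
}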


And then in a script, we do a simple raycast to the ground from our "object-collider":

It's not really optimized, as performance will drop if we do raycasts from a huge number of objects, but even like that it looks pretty neat in Unity!




So you can actually use that idea for simulating sand, mud or other similar surfaces... What I showed you is a basic setup, and there may be a lot of improvements. Personally, I can think of these games with something similar: "Batman: Arkham Origins" by WB Games Montréal, "Rise of the Tomb Raider" by Crystal Dynamics and Eidos Montréal, "Assassin's Creed III" by Ubisoft and one recent title - "God of War" (2018) by Santa Monica Studio. I highly recommend reading their papers if you want to improve your surface deformation, because they describe their own approaches and optimization guidelines for this feature.

Links:

Other useful resources:
1) "Deferred Snow Deformation in Rise of the Tomb Raider" by A.K. Michels and P.Sikachev, from GPU Pro 7

Monday 16 April 2018

Fancy Shaders - Part 2: Shell Rendering

Cg Fur Shader with different parameters in Unity.

It's been a while since my last post, but I have something interesting for you:) You will see another cool trick that is easy to do with shaders anywhere you want. We will talk about fur in real-time computer graphics!

Obviously, rendering realistic fur in real time is not a trivial task. Just think about it: somehow we need to create a huge number of fibers on the surface and also control them dynamically (for wind, gravity, external forces, etc.). Several solutions exist for that problem, so let's analyze them!

First of all, we can use a "brute-force" approach, which is creating primitives for every single fiber, such as lines or triangle strips, and then manipulating them. Our fur will then be super realistic, but very inefficient to compute. Even though we can use geometry or tessellation shaders, our performance is going to be really poor, even on modern machines with high specs. Actually, something similar is used in offline CGI, where real-time performance is not required.

The second approach is billboarding. Even if you are not familiar with this technique, you will get the idea pretty quickly. Game developers have been using this method for ages when grass or distant objects need to be rendered efficiently. Even these days, billboarding is the main approach for rendering grass (which is similar to fur) in most games. We just create 2D textured sprites in 3D world space and rotate them according to our camera position. Developers can achieve really decent results while rendering grass that way and, of course, save a lot of resources. But what about fur? Unfortunately, it is problematic to render fur with billboards. Fur can be spread over very rough surfaces, like the skin of an animal, and not over "planar" terrain like grass. That is why billboards may end up at incorrect angles and players can notice those artifacts. However, I'm sure that some teams can get pretty cool results with this technique if it is set up properly.

The last approach that I can think of is the "classical" way of rendering fur in real-time applications. The technique is quite old, but it can still be seen in modern games. It's called shell rendering, and it is a form of volumetric rendering.

"Conker: Live & Reloaded" (2005)

The idea behind shell rendering is also easy to understand and to implement. We basically make progressively bigger copies of our mesh, which is similar to extruding it along its normals. It is also not that difficult to control the length of our fur! For example, we can use a mask texture and store the information in it. What I did in the scene (top image) was just use a grayscale noise texture that functioned like a heightmap. I'm pretty satisfied with the result; however, you can also do that with a control mask texture and get the values from an alpha channel. Because such a mask is created from your pattern texture, your fur won't be random anymore. Instead, you will have distinct regions of fur or its absence. For instance, some animals' skin has certain patterns, so it will add realism to your application. Down below is the logic of our Cg shader:


Taken from XBDev.net
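To illustrate the same idea in a hedged way, here is a minimal shell-pass sketch of my own (not the XBDev listing; uniforms like _FurLength, _ShellOffset and _FurMask are illustrative, and the real shader also handles lighting, gravity and so on):

float4x4 WorldViewProjection;

float     _FurLength;   // total fur length
float     _ShellOffset; // current shell in [0, 1]; grows with every pass
float4    _FurColor;
sampler2D _FurMask;     // grayscale pattern / "heightmap" texture

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

v2f vert(float4 vertex : POSITION, float3 normal : NORMAL, float2 uv : TEXCOORD0)
{
    v2f o;
    float4 displaced = vertex;
    displaced.xyz += normal * _FurLength * _ShellOffset; // extrude the copy along the normal
    o.pos = mul(WorldViewProjection, displaced);
    o.uv  = uv;
    return o;
}

float4 frag(v2f i) : SV_Target
{
    // kill fragments whose fur "height" is below the current shell
    float mask = tex2D(_FurMask, i.uv).r;
    clip(mask - _ShellOffset);

    // fade the tips slightly for a softer look
    return float4(_FurColor.rgb, 1.0f - _ShellOffset);
}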

Without a doubt, this method is not perfect. As you may have guessed, we are dependent on the number of copies of our model. That means the more copies we instantiate, the more resources we spend and the higher quality we achieve. For example, the fur in the top image was created with 20 passes and still shows evidence of those shells. On the other hand, it requires far fewer computations compared to the "brute-force" approach. I'll repeat myself again: there are other industry solutions, such as NVIDIA HairWorks or AMD TressFX, but they do require resources, and not every machine can run them these days. Although these approaches are way more realistic and flexible, on mobile platforms we are still limited to shell rendering.

Finally, I want to show you how fur quality has changed over the last 10-15 years in video games. I have specifically chosen an example of a game that uses a lot of fur. It is called "Shadow of the Colossus" and was originally developed by Team Ico for PlayStation 2 in 2005. It definitely used shell rendering at that time. However, in 2018 Bluepoint Games released a remastered version of that beautiful game for PlayStation 4. They combined modern technologies with the power of the system and, of course, changed the look of the fur in the game. I'm not even sure what approach they used, but it was probably not shell rendering. My guess: they did some tricks with something similar to billboarding. What do you think?

"Shadow of the Colossus" (2018 - top, 2005 - bottom)

Links:
1) http://www.xbdev.net/directx3dx/specialX/Fur/index.php
2) http://developer.download.nvidia.com/SDK/10.5/direct3d/Source/Fur/doc/FurShellsAndFins.pdf
3) https://dl.acm.org/citation.cfm?id=617876
4) https://forum.unity.com/threads/fur-shader.4581/
5) http://hhoppe.com/fur.pdf

Other useful resources:
1) "Unity 5.x Shaders and Effects Cookbook" by A.Zucconi and K.Lammers
2) "Implementing Fur Using Deferred Shading" by D.Revie, from "GPU Pro 2"
3) "Animated Characters with Shell Fur for Mobile Devices" by A.Girdler and J.Jones, from "GPU Pro 6"

Tuesday 27 February 2018

Fancy Shaders - Part 1: Oil Interference

Final Cg Shader in Unity

I have decided to start a series of posts called "Fancy Shaders". From time to time I'll be posting some interesting tricks that you can do with shaders. Each post will have a bit of theory and explanation and, of course, source files. My main tool is Unity, as it will be easier for you to implement shaders there:)

The first post in "Fancy Shaders" is about interference. That's a pretty cool topic from wave physics which can be used in computer graphics as well! The effect is shown in the GIF above. Basically, it can also be used for puddles, some painting materials, etc. By the way, if you want to read more about diffraction & interference with Unity examples, I highly recommend checking out the posts by Alan Zucconi. Also, another great source was this awesome book:

"Graphics Shaders: Theory and Practice, Second Edition"
by Mike Bailey and Steve Cunningham
  
Firstly, let's start with some light theory. I will just remind you of some things from your high-school physics course :) Iridescence is a phenomenon in which an object's color appears to change when the view or illumination angle changes (yeah, like in soap bubbles). We also know that light is a wave. More specifically, light is a disturbance in the electromagnetic field. A photon is a particle, the quantum of that field. It carries the energy which determines the color of light. Since light is a wave, it has wave properties, such as wavelength. We can also represent the energy as a wavelength, so different energies correspond to different wavelengths and thus to different colors (while the speed of light stays constant). Take a look at this table:
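Roughly, the visible spectrum breaks down like this (approximate values):

Violet: ~380-450 nm
Blue: ~450-495 nm
Green: ~495-570 nm
Yellow: ~570-590 nm
Orange: ~590-620 nm
Red: ~620-750 nm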


Alright, now the question is how to convert this spectrum to a common RGB palette. There is no straightforward solution for that, but there are many functions which can approximate RGB values for wavelengths. I won't go into the details, as Alan Zucconi explains that topic very well (here), so I will just use his method. I find it really decent and optimized for our purposes, because we must keep our shaders as simple as possible: for instance, putting a lot of branch statements in them is not a good practice!

Ok, we converted the spectrum "somehow". Now, to keep things a bit clearer, we need to apply this approximation to our color in the shader. But we do not have anything for our function's input parameter - the wavelength. Let's then move on to interference. Again, to keep it simple, I won't dive deep into the physics. You can understand interference as the process of getting a new wave from two light waves that reflect from the surface layers (the oil film and the water below it, let's say). To get a clear understanding, take a look at this picture:


Also, for a good approximation, one incident ray is definitely not enough. We need to account for several interference orders (integer multipliers in the formula) and then loop through them. Another thing to mention is the refractive index, which determines how much the light angle deviates. If you look at the picture, you will see that light changes its angle when hitting the oil surface. Refractive indices differ between materials (for oil it is about 1.4, compared to 1.0 for air). Down below is the formula that can be used for calculating the wavelength using the things above.
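In its simplest form (ignoring the phase shifts at the film boundaries), the relation can be written as:

\lambda_m = \frac{2 \, n \, d \, \cos\theta_t}{m}, \quad m = 1, 2, 3, \dots

where n is the refractive index of the film, d is its thickness, \theta_t is the refraction angle inside the film and m is the interference order.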



The final thing is d - the height of our refractive surface (in our case, the oil film). How can we calculate it? Well, we can use a noise function for that! "Graphics Shaders: Theory and Practice" has an interesting approach: we multiply the height value (from the input, for instance) by an exponential of the "hump" height that we get from the noise texture. In fact, we assume that our oil surface has the shape of this "hump". That is why we can see all the colors that correspond to wavelengths of different values.

Now we can calculate d:
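Here is a rough Cg-style sketch of how it can be put together; _NoiseTex, the exponential shaping and the wavelengthToRGB helper are assumptions on my part (the helper stands in for one of Alan Zucconi's spectral approximations):

sampler2D _NoiseTex;
float _Height;          // base film "height" from the material inspector
float _RefractiveIndex; // ~1.4 for oil

// wavelengthToRGB() is a crude placeholder (wavelength in nanometres -> RGB);
// replace it with a proper spectral approximation, e.g. one from Alan Zucconi's posts
float3 wavelengthToRGB(float wavelength)
{
    float t = saturate((wavelength - 400.0) / 300.0);
    return float3(t, 1.0 - abs(2.0 * t - 1.0), 1.0 - t);
}

float3 interferenceColor(float2 uv, float cosThetaT)
{
    // thickness d shaped by the exponential "hump" from the noise texture
    float hump = tex2D(_NoiseTex, uv).r;
    float d = _Height * exp(hump); // tune the shaping to your liking

    float3 color = 0.0;
    for (int m = 1; m <= 3; m++) // a few interference orders are enough
    {
        float wavelength = 2.0 * _RefractiveIndex * d * cosThetaT / m;
        color += wavelengthToRGB(wavelength);
    }
    return saturate(color);
}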

And that's basically it! Now you know how we can achieve this cool interference effect with a couple of shader instructions. In addition, there is a lot of room for playing with the variables and their values. Many things are editable, so you can set everything up for your specific needs. You can use different noise textures for the pattern, too. And one more time I want to mention Alan Zucconi and his awesome blog. I recommend going there to clarify anything you did not understand or that I did not mention here, because the purpose of this post was to give a basic overview of this light phenomenon and its usage in shaders.

Once you have an understanding of the process, you can download my Cg shader here.

Links:
1. https://www.alanzucconi.com/2017/07/15/the-nature-of-light/
2. https://www.alanzucconi.com/2017/07/15/improving-the-rainbow/
3. https://www.alanzucconi.com/2017/07/15/improving-the-rainbow-2/
4. https://www.alanzucconi.com/2017/07/15/understanding-diffraction-grating/
5. https://www.alanzucconi.com/2017/07/15/the-mathematics-of-diffraction-grating/
6. https://bit.ly/2ougyTW