Sunday 29 October 2017

Experimenting with Shadow Mapping (SSM, VSM)

This time I'd like to show you how shadows are made :) In particular, I'm going to focus on a technique called Shadow Mapping. This method is quite old (it was introduced in 1978), but it is still very popular in modern real-time graphics.


Demo scene. FBX models by dhpoware

Basically, rendering a scene with Shadow Mapping takes two passes. First, we render the scene from the light's point of view in order to create a depth map that stores the depth values of the occluders. This depth texture holds the distance from the light to the closest surface points visible to the light's "camera". Second, we render the scene normally and project the depth map onto it. Now, for each rendered point, we can compare its distance to the light source against the stored depth to decide whether the point is in shadow: if its distance is greater than the value in the depth map, there is another surface between that point and the light, so the point is in shadow. It doesn't sound very complicated in theory, but in practice we will face several problems. Our shadow map is a texture, so it has limited size and precision, which is why our distance calculations can be inaccurate. Look at these ugly artifacts below:
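The depth comparison of the second pass can be sketched in HLSL like this (a sketch only; the resource names and the light-space input are assumptions about how the rest of the pipeline is set up):

```hlsl
// Assumed resources; names are placeholders for this sketch.
Texture2D    shadowMap     : register(t0);
SamplerState shadowSampler : register(s0);

// lightSpacePos = vertex position transformed by the light's view-projection matrix.
float ShadowFactor(float4 lightSpacePos)
{
    // Perspective divide, then remap xy from [-1, 1] clip space to [0, 1] texture space.
    float3 proj = lightSpacePos.xyz / lightSpacePos.w;
    float2 uv = proj.xy * float2(0.5f, -0.5f) + 0.5f;

    // Closest occluder depth stored in the shadow map.
    float occluderDepth = shadowMap.Sample(shadowSampler, uv).r;

    // 1.0 = lit, 0.0 = in shadow.
    return (proj.z <= occluderDepth) ? 1.0f : 0.0f;
}
```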


Shadow acne

This phenomenon is called shadow acne, and it is a precision problem. We can easily fix it by subtracting some value (called a bias) from the actual depth. The results will be much better, but keep in mind that too big a bias value causes another problem: peter-panning. (Actually, DirectX hardware has solutions for the bias problem, such as Slope-Scaled Depth Bias.)
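A minimal sketch of the fix, assuming `receiverDepth` is the current point's depth in light space and `occluderDepth` is the value sampled from the shadow map (the bias value here is just a placeholder and needs per-scene tuning):

```hlsl
// Subtract a small bias from the receiver depth before the comparison.
// 0.005 is a placeholder; too large a value causes peter-panning.
static const float BIAS = 0.005f;

float lit = (receiverDepth - BIAS <= occluderDepth) ? 1.0f : 0.0f;
```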


Acne is fixed, but now shadows are "detached". (Peter Panning effect)

OK, we have an idea of how shadows are made, but there is still one big problem. Remember that we are working with a projected texture? That means we are going to deal with nasty aliasing. Although we can increase the resolution of our shadow map, that is not the best solution for removing the jagged edges: even on modern GPUs they still won't be smooth.

So we have to think about filtering the shadow somehow. Luckily, there are a lot of different algorithms for smoothing shadows, like Percentage Closer Filtering, Exponential Shadow Maps, Variance Shadow Maps, Convolution Shadow Maps, and so on. This time I'm going to cover only two techniques: Percentage Closer Filtering and Variance Shadow Maps.

PCF is a fairly old algorithm that has been popular for many years. The idea behind it is the following: we sample an area of texels and calculate a shadow percentage. For example, for a 2x2 texel region we average 4 depth comparisons to smooth the shadow. (You can read more about PCF in GPU Gems, Chapter 11.) So in our HLSL shader we then have something like this:
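Something along these lines (a sketch; the resource names and texel size are assumptions):

```hlsl
// Assumed resources; names are placeholders for this sketch.
Texture2D    shadowMap     : register(t0);
SamplerState shadowSampler : register(s0);

// Size of one shadow-map texel in UV space (here for a 1024x1024 map).
static const float2 TEXEL_SIZE = float2(1.0f / 1024.0f, 1.0f / 1024.0f);

// 2x2 PCF: average four neighbouring depth comparisons.
float PCF2x2(float2 uv, float receiverDepth)
{
    float sum = 0.0f;
    for (int y = 0; y < 2; ++y)
    {
        for (int x = 0; x < 2; ++x)
        {
            float2 offset = float2(x, y) * TEXEL_SIZE;
            float occluderDepth = shadowMap.Sample(shadowSampler, uv + offset).r;
            sum += (receiverDepth <= occluderDepth) ? 1.0f : 0.0f;
        }
    }
    // Fraction of lit samples: 0 = fully shadowed, 1 = fully lit.
    return sum / 4.0f;
}
```

Note that Direct3D 10+ hardware can also do a bilinearly weighted 2x2 PCF in a single fetch via `SampleCmp` with a comparison sampler, which is usually cheaper than the manual loop above.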



You can change the region size and get better results; compare 2x2 and 5x5 PCF below. In addition, besides standard PCF you can use more sophisticated filtering methods, like Poisson-disk PCF, a Gaussian blur, or something else.
2x2 PCF
5x5 PCF
But what if we want to improve our shadow quality with a simpler approach and fewer samples? Then we should consider a different shadow map technique: Variance Shadow Maps. I won't get into the details (you can read the original publication), but here is an overview. The implementation is pretty straightforward: we work with the mean and variance of the depth distribution over a filter region and then estimate the probability of a point being in shadow. For this we use two channels of a 32-bit texture (red and green) to store depth and depth², whereas previously we used only one channel in our pixel shader. Let's define two moments M1 and M2, where E(x) is the expected value of the distribution, x is the depth and p(x) is the filter weight. From them we can get the mean and the variance:
M1 = E(x) = Σ x·p(x)
M2 = E(x²) = Σ x²·p(x)

μ = E(x) = M1
σ² = E(x²) − E(x)² = M2 − M1²
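The depth pass that produces these moments can be sketched like this (the semantic names and render-target format are assumptions; any two-channel float format such as R32G32_FLOAT works):

```hlsl
// Sketch of the VSM depth pass: instead of plain depth, the pixel shader
// writes the first two moments (depth and depth^2) into the red and green
// channels of a two-channel float render target.
float4 VSMDepthPS(float4 svPos : SV_Position,
                  float lightDepth : LIGHT_DEPTH) : SV_Target
{
    // (M1, M2, unused, unused)
    return float4(lightDepth, lightDepth * lightDepth, 0.0f, 0.0f);
}
```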
After that we can calculate an upper bound for the probability P(x ≥ t) that the point at depth t is lit, using the one-tailed Chebyshev's inequality suggested in the original publication:
P(x ≥ t) ≤ p_max(t) = σ² / (σ² + (t − μ)²)
Variance Shadow Mapping
In the pictures (by Matt's Webcorner) you can see the difference between Simple Shadow Mapping and Variance Shadow Mapping with filtering. Keep in mind that with VSM you should filter the shadow map itself, for example with a Gaussian blur, to achieve proper results.
Simple Shadow Mapping
This method avoids the bias-related problems, such as acne and peter-panning, but unfortunately it has its own disadvantage: light bleeding. It appears when there is a big depth range between occluders and receivers, which results in visible artifacts in the shadows. Light bleeding can be reduced by raising the shadow value with an exponent, which is a relatively heavy operation as it requires high precision. Anyway, in general VSM gives very good results despite its costs. In conclusion, here's a shader code snippet that can be used to implement the technique.
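A sketch of the VSM shadow test built on the formulas above (the resource names and the minimum-variance clamp value are assumptions; the clamp guards against numerical problems when the variance is near zero):

```hlsl
// Assumed resources; names are placeholders for this sketch.
Texture2D    shadowMap     : register(t0);
SamplerState shadowSampler : register(s0);

// Placeholder clamp to avoid division problems for tiny variances.
static const float MIN_VARIANCE = 0.00002f;

float ChebyshevUpperBound(float2 moments, float t)
{
    // Fully lit if the receiver is in front of the occluder distribution.
    if (t <= moments.x)
        return 1.0f;

    // Variance from the two moments: sigma^2 = M2 - M1^2.
    float variance = moments.y - moments.x * moments.x;
    variance = max(variance, MIN_VARIANCE);

    // p_max(t) = sigma^2 / (sigma^2 + (t - mu)^2)
    float d = t - moments.x;
    return variance / (variance + d * d);
}

float VSMShadowFactor(float2 uv, float receiverDepth)
{
    // The blurred shadow map stores (M1, M2) in its red and green channels.
    float2 moments = shadowMap.Sample(shadowSampler, uv).rg;
    return ChebyshevUpperBound(moments, receiverDepth);
}
```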




