Wednesday 15 November 2017

Spherical Harmonics in Graphics: A Brief Overview


Visual representation of Spherical Harmonics
 
Global Illumination is a popular and demanding topic in computer graphics. Real-time graphics developers in particular have been wrestling with it for over a decade, still optimizing and inventing new techniques for modelling indirect lighting. Today I want to pull the articles, blogs and papers together and review one very efficient method of storing light information - Spherical Harmonics, or more precisely, the way they are used to encode lighting data. A common situation is that light probes gather incoming light and SH are used to compress it (Unity, for example, works with the same approach).

Example of a scene with light probes

Spherical Harmonics are quite difficult to understand without a proper math background, so I will try to describe them as simply as possible here. First of all, you may ask why we need these harmonics at all. Let's take a look at the main rendering equation:
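In its usual hemispherical form it reads:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i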

Arrghh... You said "easy"!!!

Don't worry, it can be simplified for our real-time purposes to something like this:
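For a Lambertian surface with albedo \rho and with shadowing ignored, the integral reduces to roughly this:

    L(x) = \frac{\rho}{\pi} \int_{\Omega} L(x, \omega)\, \max(N \cdot \omega, 0)\, d\omega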

Lambert diffuse without shadows

So we need to solve this integral, which involves L(x,w) and max(N*w, 0). That is exactly where SH are going to help us: both functions from the integral above can be projected onto the SH basis. But how? Let's find out! The projection itself can be estimated with Monte Carlo integration. I'm not going to dive into the details (it's a topic of probability theory), but the overall idea is that once both functions are projected, the integral of their product reduces to a dot product of their SH coefficient vectors. It is exactly this property that makes SH so efficient!
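Written out (in the notation of the SH lighting paper linked below, with uniform sampling over the sphere), the projection of a function f onto the i-th basis function and the resulting integral look like this:

    c_i = \int_{S} f(\omega)\, y_i(\omega)\, d\omega \;\approx\; \frac{4\pi}{N} \sum_{j=1}^{N} f(\omega_j)\, y_i(\omega_j)

    \int_{S} f(\omega)\, g(\omega)\, d\omega \;\approx\; \sum_{i} f_i\, g_i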


Now let's quickly talk about Spherical Harmonics themselves to get a better understanding of the process. They come from solving Laplace's equation and form a set of orthogonal basis functions on the sphere. Basis simply means that these functions can be scaled and combined together to approximate an original function (yes, like Fourier series). Orthogonal means they have a useful feature: integrating the product of a basis function with itself gives a constant, while integrating the product of two different basis functions gives 0. The basis is built from Associated Legendre Polynomials (just read about them on the Internet). So in general SH then look like this (where l >= 0 is an integer band index and m runs over -l <= m <= l):
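For reference, the real-valued SH basis used in graphics is usually written as:

    y_l^m(\theta, \varphi) =
    \begin{cases}
      \sqrt{2}\, K_l^m \cos(m\varphi)\, P_l^m(\cos\theta) & m > 0 \\
      \sqrt{2}\, K_l^{|m|} \sin(|m|\varphi)\, P_l^{|m|}(\cos\theta) & m < 0 \\
      K_l^0\, P_l^0(\cos\theta) & m = 0
    \end{cases}
    \qquad
    K_l^m = \sqrt{\frac{2l + 1}{4\pi} \cdot \frac{(l - |m|)!}{(l + |m|)!}}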


In addition, I want to mention that in real-time graphics 2 or 3 bands (4 or 9 coefficients per color channel) are usually enough, as they already give us the desired results for diffuse lighting.
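To make this concrete, here is a minimal HLSL sketch (not the Morgan McGuire shader mentioned below) of evaluating a 3-band, 9-coefficient projection along a surface normal. The coefficient array name is my own, and I assume the clamped-cosine lobe has already been convolved into the stored coefficients:

    // Evaluates the first 9 real SH basis functions along n and sums the weighted coefficients.
    // shCoeffs[i] is assumed to hold the RGB radiance projected onto basis function i.
    float3 EvaluateSH9(float3 shCoeffs[9], float3 n)
    {
        // band 0 (constant)
        float3 result = shCoeffs[0] * 0.282095f;
        // band 1 (linear)
        result += shCoeffs[1] * 0.488603f * n.y;
        result += shCoeffs[2] * 0.488603f * n.z;
        result += shCoeffs[3] * 0.488603f * n.x;
        // band 2 (quadratic)
        result += shCoeffs[4] * 1.092548f * n.x * n.y;
        result += shCoeffs[5] * 1.092548f * n.y * n.z;
        result += shCoeffs[6] * 0.315392f * (3.0f * n.z * n.z - 1.0f);
        result += shCoeffs[7] * 1.092548f * n.x * n.z;
        result += shCoeffs[8] * 0.546274f * (n.x * n.x - n.y * n.y);
        return max(result, 0.0f);
    }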


That is it! Finally, we can compute our approximated integral using the equations above. There is actually a lot more to say about SH, for example rotational invariance, zonal harmonics, or the fact that this method is only suited to low-frequency signals. This time I've only tried to give an overview of Spherical Harmonics, so if you are interested in more, you should definitely read the original posts and publications down below. I will also attach a link to a series about another similar, but newer, approach - Spherical Gaussians, used in The Order: 1886 by Ready at Dawn. Now check out the shader by Morgan McGuire:


Links:
1. http://silviojemma.com/public/papers/lighting/spherical-harmonic-lighting.pdf
2. http://simonstechblog.blogspot.nl/2011/12/spherical-harmonic-lighting.html
3. http://www.ppsloan.org/publications/StupidSH35.pdf
4. http://www.geomerics.com/blogs/simplifying-spherical-harmonics-for-lighting/
5. https://mynameismjp.wordpress.com/2016/10/09/new-blog-series-lightmap-baking-and-spherical-gaussians/

Sunday 29 October 2017

Experimenting with Shadow Mapping (SSM, VSM)

This time I'd like to show you how shadows are made :) In particular, I'm going to focus on the technique called Shadow Mapping. This method is quite old (it was introduced by Lance Williams in 1978), but it is still very popular in modern real-time graphics.


Demo scene. FBX models by dhpoware

Basically, there are 2 steps to rendering a scene with Shadow Mapping. First, we render the scene from the point of view of the light in order to create a depth map and store the depth values of the occluders: for every texel, that texture holds the distance from the light to the closest surface visible from the light's "camera". Second, we render the scene normally and project the depth map onto it. For each shaded point we can then compare its distance to the light source with the value stored in the depth map to decide whether the point is in shadow or not: if its distance is greater than the stored one, there is another surface between the point and the light, so our point is in shadow. It sounds simple in theory, but in practice we will face several problems. Our shadow map is a texture, so it has limited resolution and precision, which makes the depth comparison inaccurate. Look at these ugly artifacts below:


Shadow acne

This phenomenon is called shadow acne and is a precision problem. We can easily fix it by subtracting a small value (called a bias) from the receiver's depth before the comparison. The results will be much better, but keep in mind that a bigger bias value might cause another problem - peter-panning. (DirectX hardware actually has built-in solutions for the bias problem, such as Slope Scaled Depth Bias.)
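Putting the two passes together, the lookup and the biased comparison in the pixel shader might look roughly like this (the sampler and variable names, as well as the bias value, are my own):

    sampler ShadowMapSampler : register(s0);

    // Returns 1.0 if the point is lit and 0.0 if it is in shadow.
    // lightSpacePos is the position transformed by the light's view-projection matrix.
    float ComputeShadow(float4 lightSpacePos)
    {
        // perspective divide and remap from [-1, 1] clip space to [0, 1] texture space
        float2 shadowUV = lightSpacePos.xy / lightSpacePos.w * float2(0.5f, -0.5f) + 0.5f;
        float currentDepth = lightSpacePos.z / lightSpacePos.w;

        float storedDepth = tex2D(ShadowMapSampler, shadowUV).r;

        float bias = 0.001f; // assumed value, tune per scene
        return (currentDepth - bias > storedDepth) ? 0.0f : 1.0f;
    }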


Acne is fixed, but now shadows are "detached". (Peter Panning effect)

Ok, we have an idea of how shadows are made, but there is still one big problem. Remember that we are working with a projected texture? That means we are going to deal with nasty aliasing. We can increase the resolution of our shadow map, but that alone is not the best solution for removing the jagged edges, as they still won't be smooth (and large maps are expensive even for modern GPUs).

So we have to think about filtering the shadow somehow. Luckily, there are a lot of different algorithms for smoothing shadows, like Percentage Closer Filtering, Exponential Shadow Maps, Variance Shadow Maps and Convolution Shadow Maps. This time I'm going to mention only two techniques: Percentage Closer Filtering and Variance Shadow Maps.

PCF is a quite old algorithm that has been popular for many years. The idea behind it is the following: we sample an area of texels around the shadow map coordinate and compute the percentage of samples that pass the depth test. For example, for a 2x2 texel region we average 4 comparison results to smooth the shadow edge. (You can read more about PCF in GPU Gems, Chapter 11.) So in our HLSL shader we end up with something like this:
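A minimal sketch of such a 2x2 PCF loop (the function name and the TexelSize uniform are my own; ShadowMapSampler is the same sampler as in the earlier sketch):

    float TexelSize; // assumed to be 1.0 / (shadow map resolution)

    // Averages four biased depth comparisons around shadowUV.
    float PCF2x2(float2 shadowUV, float currentDepth, float bias)
    {
        float shadow = 0.0f;
        for (int x = 0; x < 2; ++x)
        {
            for (int y = 0; y < 2; ++y)
            {
                float2 offset = float2(x, y) * TexelSize;
                float storedDepth = tex2D(ShadowMapSampler, shadowUV + offset).r;
                shadow += (currentDepth - bias > storedDepth) ? 0.0f : 1.0f;
            }
        }
        return shadow / 4.0f;
    }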



You can increase the region size and get softer results; compare the 2x2 and 5x5 PCF pictures below. In addition, besides standard PCF you can use more sophisticated sampling and filtering schemes, like Poisson-disk PCF or a Gaussian blur.
2x2 PCF
5x5 PCF
But what if we want to improve our shadow quality with an easier approach and fewer samples? Then we should consider a different shadow map technique, Variance Shadow Maps. I won't get into the details (you can read the original publication), but just give you an overview. The implementation is pretty straightforward: we work with the mean and variance of the depth distribution over a filter region and then estimate the probability of being in shadow. For this we use two channels of a 32-bit floating-point texture (red and green) to store depth and depth^2, where previously we used only one channel in our pixel shader. Let's define the two moments M1 and M2, where E(x) is the expected value of the distribution, x is the depth and p(x) is the filter weight. From them we can get the mean and the variance:
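Written out, following the original paper's definitions:

    M_1 = E(x) = \int x\, p(x)\, dx, \qquad M_2 = E(x^2) = \int x^2\, p(x)\, dx

    \mu = M_1, \qquad \sigma^2 = M_2 - M_1^2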
After that we can compute an approximate upper bound on the probability that a receiver at depth t is lit, using the one-tailed Chebyshev inequality suggested in the original publication:
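In the paper's notation, with t the receiver depth:

    P(x \ge t) \;\le\; p_{\max}(t) = \frac{\sigma^2}{\sigma^2 + (t - \mu)^2}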


Variance Shadow Mapping
In the pictures (by Matt's Webcorner) you can see the difference between Simple Shadow Mapping and Variance Shadow Mapping with filtering. Keep in mind that to get proper results with VSM you should filter the stored moments themselves, for example with a Gaussian blur, before doing the Chebyshev test.
Simple Shadow Mapping
This method avoids the bias-related problems, such as acne and peter-panning, but unfortunately has its own disadvantage - light bleeding. It happens when there is a large depth range between occluders and receivers, which results in visible artifacts on the shadows. Light bleeding can be reduced by cutting off the lower part of the Chebyshev bound or by warping the depth with an exponential (as in Exponential VSM), which is a relatively heavy option as it requires high precision. Anyway, in general VSM gives very good results despite its cost. In conclusion, here is a shader code snippet that can be used for implementing the technique.
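A minimal sketch of the core VSM test (the function and sampler names, as well as the variance clamp value, are my own; the moments are assumed to be sampled from the red and green channels):

    sampler VarianceShadowSampler : register(s1);

    // Chebyshev's inequality gives an upper bound on the fraction of light reaching the receiver.
    float ChebyshevUpperBound(float2 moments, float receiverDepth)
    {
        // fully lit if the receiver is in front of the stored mean depth
        float p = (receiverDepth <= moments.x) ? 1.0f : 0.0f;

        float variance = moments.y - moments.x * moments.x;
        variance = max(variance, 0.00002f); // assumed clamp to avoid numeric issues

        float d = receiverDepth - moments.x;
        float pMax = variance / (variance + d * d);

        return max(p, pMax);
    }

    // Usage inside the scene's pixel shader:
    //   float2 moments = tex2D(VarianceShadowSampler, shadowUV).rg; // depth and depth^2
    //   float lit = ChebyshevUpperBound(moments, receiverDepth);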






Thursday 19 October 2017

Dual-Paraboloid Environment Mapping in HLSL (XNA 4.0)

"Dual-Paraboloid Reflections" demo

Hey, guys! Today I want to talk a little about Dual-Paraboloid Reflections and show an implementation in High-Level Shader Language in the XNA 4.0 environment. The code and explanation are partially based on a 2008 post by Kyle Hayward, and I have decided to revive the topic because it is still relevant for game developers these days. That means you won't find anything new here if you are already familiar with this technique, but if not, it will be a great chance to discover it, or at least to look at some fancy pictures :)

Dual-Paraboloid Mapping is another way of doing environment mapping in your scene, besides cube or sphere maps. This method is cheaper than cube mapping, as we only have to update 2 textures instead of the 6 faces of a cube map.

Let me explain what is actually going on here. As you may have guessed, this method uses two paraboloids (front and rear), each covering one hemisphere of directions; a paraboloid has the handy property of reflecting the rays of an entire hemisphere into a single, constant direction.

An example chart and the function equation of an elliptic paraboloid (image courtesy of Imagination Technologies)
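The paraboloid typically used for this mapping (as in Heidrich and Seidel's original formulation) is:

    f(x, y) = \frac{1}{2} - \frac{1}{2}\left(x^2 + y^2\right), \qquad x^2 + y^2 \le 1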


Now we have to calculate the mapped coordinate. The normal vector at the point of intersection with the paraboloid is the (normalized) sum of the incident and reflected direction vectors, and dividing its x and y components by its z component gives us the scaled 2D paraboloid coordinate.
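In symbols (I is the incident direction, R the reflected one):

    \vec{N} = \frac{\vec{I} + \vec{R}}{\lVert \vec{I} + \vec{R} \rVert}, \qquad (u, v) = \left( \frac{N_x}{N_z}, \; \frac{N_y}{N_z} \right)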


The vertex shader should be pretty clear now: 
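A sketch of the map-generation vertex shader in that spirit (the matrix and near/far plane uniform names are my own assumptions):

    float4x4 WorldView;  // into the paraboloid "camera" space
    float NearPlane;
    float FarPlane;

    struct VSOutput
    {
        float4 Position : POSITION0;
        float  ZSign    : TEXCOORD0; // used later to discard vertices behind the paraboloid
    };

    VSOutput DualParaboloidVS(float4 position : POSITION0)
    {
        VSOutput output;

        float4 viewPos = mul(position, WorldView);
        float distanceToVertex = length(viewPos.xyz); // keep the real distance for the depth value

        viewPos.xyz /= distanceToVertex;   // unit direction towards the vertex
        output.ZSign = viewPos.z;          // which hemisphere are we in?

        viewPos.z += 1.0f;                 // add the paraboloid's constant direction (0, 0, 1)
        viewPos.x /= viewPos.z;            // divide by z: the paraboloid projection
        viewPos.y /= viewPos.z;

        output.Position = float4(viewPos.xy,
                                 (distanceToVertex - NearPlane) / (FarPlane - NearPlane),
                                 1.0f);
        return output;
    }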


In the pixel shaders we will first generate and later sample our textures. Remember that we only have to store two of them: front and back.
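And a sketch of how the reflective object's pixel shader might pick and sample one of the two maps, given a reflection vector R already expressed in the space the maps were rendered in (the sampler names are mine):

    sampler FrontEnvSampler : register(s0);
    sampler BackEnvSampler  : register(s1);

    float3 SampleDualParaboloid(float3 R)
    {
        // front paraboloid looks along +z, back along -z
        float2 frontUV = R.xy / (1.0f + R.z);
        frontUV = frontUV * float2(0.5f, -0.5f) + 0.5f; // to [0, 1] texture space

        float2 backUV = R.xy / (1.0f - R.z);
        backUV = backUV * float2(0.5f, -0.5f) + 0.5f;

        float3 front = tex2D(FrontEnvSampler, frontUV).rgb;
        float3 back  = tex2D(BackEnvSampler, backUV).rgb;

        return (R.z >= 0.0f) ? front : back; // pick the hemisphere R points into
    }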

So that's it! Nothing overcomplicated, because the method is quite similar to cube mapping. Unfortunately, the results will not be as good as with cube maps. Some of you may have noticed the artifacts on the spheres in the top picture. If not, here is another, more obvious, example:


In general, this method can be very useful in some situations during development, although the quality of the reflections will be lower compared to cube mapping. Great examples of dual-paraboloid mapping in production are "Grand Theft Auto IV" and "Grand Theft Auto V" by Rockstar Games. Both games have some beautiful visuals, including the reflections :)

"Grand Theft Auto IV" by Rockstar Games

"Grand Theft Auto V" by Rockstar Games

Monday 11 September 2017

Deferred and Forward Rendering in Unreal Engine 4

This is my first post here! I've decided to do a short research overview of two popular rendering techniques using a scene from the "Infinity Blade: Glass Lands" environment by Epic Games.

Courtesy of Epic Games

You won't find anything specifically new here, just a comparison between two rendering algorithms in a modern scene that is actually meant to be playable. What I mean is that in theory we all know the pros and cons of Deferred and Forward Rendering, but what about a real situation? So let's figure it out!

Everyone who is familiar with Deferred Rendering knows that it allows you to render many lights by gathering surface information into the G-Buffer in a single geometry pass and shading afterwards. Unfortunately, it has some well-known problems with complex BRDF models, anti-aliasing and transparency. (There are solutions, of course, but sometimes they aren't very efficient.)
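As an illustration (not UE4's actual shaders), the G-Buffer fill is essentially a pixel shader writing to multiple render targets. A minimal Shader Model 5 style HLSL sketch, with my own layout and names, could look like this:

    Texture2D AlbedoTexture : register(t0);
    SamplerState LinearSampler : register(s0);

    struct PSInput
    {
        float4 Position : SV_Position;
        float3 NormalWS : NORMAL;
        float2 UV       : TEXCOORD0;
    };

    struct GBuffer
    {
        float4 BaseColor       : SV_Target0; // rgb = albedo
        float4 NormalRoughness : SV_Target1; // xyz = packed world-space normal, w = roughness
    };

    GBuffer FillGBufferPS(PSInput input)
    {
        GBuffer output;
        output.BaseColor       = AlbedoTexture.Sample(LinearSampler, input.UV);
        output.NormalRoughness = float4(normalize(input.NormalWS) * 0.5f + 0.5f, 0.5f);
        return output;
    }

The lighting passes then read these buffers back and evaluate each light without re-rasterizing the scene geometry.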

Normal Buffer

Forward Rendering is another technique that is less popular these days, but it isn't obsolete at all. It has issues with rendering many lights because of the way the algorithm works (although there are solutions here too, such as Forward+), but at the same time it can easily handle MSAA.

By default, UE4 uses Deferred Rendering, but you can switch to Forward as well. Forward Rendering is popular for VR projects, although UE4's forward renderer isn't fully optimized yet. For instance, some screen-space techniques (SSR, SSAO) aren't supported by it at the time of writing this post.

So let's take a look at a non-VR environment and compare the performance with the built-in GPU profiling. I should also mention the specs: Unreal Engine 4.16 (Shader Model 5), an NVIDIA GeForce GTX 850M and an Intel Core i5-4200M.

First of all, I will check the basics - frame rate. Deferred Rendering gives us about 18 ms (~55 FPS), whereas Forward Rendering shows ~16 ms (~62 FPS). The difference is not gigantic, but it is noticeable. To be more specific:

Forward Rendering

Deferred Rendering

Of course, to dig into the details, let's check out the "Scene Rendering" breakdown for both:

Deferred Rendering

Forward Rendering
So, as we can see, there are differences of a few milliseconds (e.g. "Translucency drawing") between the two techniques. I have to admit that the scene I took for this comparison performs better with Forward Rendering, even though in other circumstances (different light complexity, etc.) the situation might be completely different!

All I wanted to say is that we should not consider either technique old or useless in modern video game development. We have to choose the algorithm that best suits our demands (anti-aliasing or something else), or even combine both of them! (This is a common practice these days.) Anyway, there are many related topics to learn and discover, such as Clustered Shading and Tiled Shading.
