Monday, 5 February 2018

Simple image processing with Xenko Engine

This will be a very short post, as I only want to introduce you to Xenko Engine. It's a modern, open-source, 2D/3D cross-platform engine by Silicon Studio that has plenty of cool features: a scene editor, VR support, physics, animation, PBR and many more. Xenko is very similar to Unity because it also uses C#, so working with it shouldn't be painful. In addition, it has its own Xenko Shader Language, which is based on HLSL. That's why I'd like to show you a couple of small shaders which I implemented while playing with the engine last night. They're just simple color transformations: inversion, thermal vision and posterization. Nothing special; however, I find them kind of useful for understanding how things work :)


Ok, so let's start with something difficult - color inversion... Of course, it's ridiculously simple! We just subtract each input color channel from 1.0 and that's it :)
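A minimal version of such a transform might look like this in Xenko's HLSL-based shader language (I'm sketching it around the ColorTransformShader base class from the post-effects pipeline; exact base-class names may vary between engine versions):

```hlsl
// Color inversion: subtract each channel from 1, keep alpha intact.
shader InvertColorShader : ColorTransformShader, Texturing
{
    override float4 Compute(float4 color)
    {
        return float4(1.0 - color.rgb, color.a);
    }
};
```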


Inversion effect

Then we move to the thermal vision effect. Keep in mind that it's still a "fake" simulation of heat vision; however, it might be useful in some cases. Again, super simple: make a gradient, calculate the pixel's luminance and change its color using linear interpolation. Here's the code:
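A sketch of the idea (the gradient colors and the Rec. 709 luminance weights here are just my choices, not the only option):

```hlsl
// "Fake" thermal vision: map the pixel's luminance onto a cold-to-hot gradient.
shader ThermalVisionShader : ColorTransformShader, Texturing
{
    override float4 Compute(float4 color)
    {
        // Rec. 709 luminance weights.
        float lum = dot(color.rgb, float3(0.2126, 0.7152, 0.0722));

        // Cold-to-hot gradient: blue -> yellow -> red.
        float3 cold = float3(0.0, 0.0, 1.0);
        float3 warm = float3(1.0, 1.0, 0.0);
        float3 hot  = float3(1.0, 0.0, 0.0);

        float3 mapped = lum < 0.5
            ? lerp(cold, warm, lum * 2.0)
            : lerp(warm, hot, (lum - 0.5) * 2.0);

        return float4(mapped, color.a);
    }
};
```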

Thermal Vision

Lastly, the posterization effect. It is the conversion of a continuous gradation of tone into several regions of fewer tones, with abrupt changes between them. The shader itself is quite simple too, although you may play with some settings: the gamma value and the number of colors.
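As a sketch, with the gamma value and the number of colors exposed as parameters (the default values are just my picks):

```hlsl
// Posterization: quantize each channel into a fixed number of tones.
shader PosterizeShader : ColorTransformShader, Texturing
{
    float Gamma = 0.6;      // gamma applied before quantization
    float NumColors = 8.0;  // number of tones per channel

    override float4 Compute(float4 color)
    {
        float3 c = pow(color.rgb, Gamma);
        c = floor(c * NumColors) / NumColors; // abrupt steps between tones
        c = pow(c, 1.0 / Gamma);
        return float4(c, color.a);
    }
};
```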

Posterization Effect

If you want to add your own custom transform shaders, some .cs scripts should also be included; you can read more in Xenko's documentation. Don't worry, I will put everything into an archive down below. Oh, and by the way, some of you may notice that I inherit from Texturing in my shaders but don't use any textures. That's true! I just wanted to show you that it is possible to work with your own custom textures and implement more complicated and fancy effects, such as blur, for example.
Link to the archive: download

Tuesday, 16 January 2018

Inside "Alien: Isolation" - Graphics


The "Alien" franchise is by far one of my favorite sci-fi movie series ever! That's why I was so thrilled when the video game "Alien: Isolation" came out in 2014. The guys from Creative Assembly have done an awesome job, because the game not only looks and plays well but also recreates the vibe of the 1979 cult classic "Alien".

Some time ago I found Adrian Courrèges's blog with his "graphics studies" of different games, and I have wanted to do something similar ever since. While discovering new stuff in computer graphics, I decided to open up the popular graphics debugger RenderDoc and analyze some frames of "Alien: Isolation". But I should mention that it was my first experience with profiling such a big game, so don't expect anything deep and meticulous! All I wanted to do was dissect some frames and see how things were rendered. I will also mention some techniques that Creative Assembly used in the game and that were later covered in an article by AMD.


First of all, the game uses Deferred Rendering, which allows putting numerous light sources into the scene. I touched on that topic a while ago, so you should have a basic understanding of the process. One cool thing the developers came up with is how they handle multiple materials. We all know that supporting different BRDFs can be problematic with Deferred Rendering. But "Alien: Isolation" has a nice solution: the stencil buffer marks objects with unique materials, and the lighting then runs multiple passes while culling those objects. So basically, if some materials are outside the player's FOV, they are simply not processed in the pipeline.


Let's see how the lighting is rendered in the same scene, step by step. If you have played the game, you know it's full of projectors, flashers and other light sources, despite being set in dark environments most of the time. Everything looked amazing because of indirect lighting, radiosity lightmaps and emissive surfaces (displays, buttons, LEDs, etc.). Sorry, I'm too lazy to implement a decent slider script, so look at the GIF I made and try to catch the details :)


In short, the process is the following: emissive surfaces -> data from the lightmap -> more data plus the sunlight (or maybe not the Sun, as we're in deep space!) -> volumetric fog -> and, after some post-processing, the final image.

Simply gorgeous! I really like it, and I hope you do as well :) After the scene itself has been rendered, the UI passes follow. I haven't noticed anything special in them: just some textures with the HUD and stuff like that. So that's it for this scene. But let's move to another one... Of course, I'm not going to repeat everything again, because I'm only interested in one classic and geeky thing from the franchise - the motion tracker!


Obviously, it's nothing but overlaid lines and elements mapped to 256x256 textures with emissive light, lol. But mind that it's interactive, with motion detection. That makes it a little more interesting in terms of game mechanics (not graphics, though), as it adds a moving dot to the display that tells you where the danger is. At that moment, there was simply no alien around...


The game also has a DLC, "Crew Expendable", that features the characters from the original film, including Ellen Ripley. Let's take a quick look at how her model is rendered. I should state that proper skin rendering was definitely used because, for example, a subsurface-scattering pass is there.


I already told you that the lighting in the game is incredible. The VFX are also stunning! Sparks, fire and smoke particles are physically simulated on the GPU using DirectCompute. Thousands of them can be rendered simultaneously with different properties. For example, in the scene below there are 1092 vertices in the pass that renders these fire particles (of course, the flames themselves are just textures).


I suppose that is everything I wanted to show, except the shadows. I haven't talked about the Contact Hardening Shadows used in the game, but you can find the original paper if you are interested. You can also read about the texture compression mentioned in the "High-Tech Fear - Alien: Isolation" article by AMD. Oh, and check out the game, of course, if you haven't played it yet :)

Links:
1. https://community.amd.com/community/gaming/blog/2015/05/12/high-tech-fear--alien-isolation

Wednesday, 15 November 2017

Spherical Harmonics in Graphics: A Brief Overview


Visual representation of Spherical Harmonics
 
Global Illumination is a very popular and demanding topic in computer graphics. In particular, real-time graphics developers have been struggling with it for over 10 years now, still optimizing and inventing new techniques for modelling indirect lighting. Today I want to bring all the articles, blogs and papers together and review one very efficient method of storing light information - Spherical Harmonics - or, more precisely, the method that encodes the light data with them. For example, there is a common situation when light probes should gather data, and SH help to compress it (Unity, for instance, works with the same approach).

Example of a scene with light probes

Spherical Harmonics are quite difficult to understand without a proper math background, which is why I will try to describe them as simply as possible here. First of all, you may ask: why the heck do we need these harmonics at all? Let's take a look at the rendering equation:

Arrghh... You said "easy"!!!

Don't worry, it can be simplified for our real-time purposes to something like this:

Lambert diffuse without shadows
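In symbols (writing ρ for the surface albedo), the simplified version is the Lambert diffuse integral over the hemisphere Ω:

```latex
L_o(x) = \frac{\rho}{\pi} \int_{\Omega} L(x, \omega) \, \max(N \cdot \omega, 0) \, d\omega
```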

So we need to solve this integral by finding L(x,ω) and max(N·ω, 0). That's exactly where SH are going to help us: they can be used to project both functions from the integral above. But how? Let's find out! The intensity function can be estimated with Monte Carlo integration. I'm not going to dive into the details (it's a topic from probability theory), but the overall idea is that we can approximate our integral by calculating a dot product of two sets of SH coefficients. It is this property that makes SH so efficient!


Now let's quickly talk about Spherical Harmonics themselves to get a better understanding of the process. They come from solving Laplace's equation and form a set of orthogonal basis functions in spherical coordinates. "Basis" simply means that these functions can be scaled and combined to approximate an original function (yeah, like Fourier series). Orthogonal polynomials have a unique feature: integrating the product of two identical ones gives a constant, while integrating the product of two different ones gives 0. There are several families of polynomials with this property, but we are going to work with the Associated Legendre Polynomials (just read about them on the Internet). In general, SH then look like this (where l is an integer band index with l >= 0, and m satisfies -l <= m <= l):


In addition, I want to mention that in real-time graphics 2 or 3 bands are enough, as they already give us the desired results.
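For reference, here is a sketch of how the first 3 bands (9 coefficients) are usually evaluated in HLSL. The constants are the standard real SH basis values for bands 0 to 2; the coefficient layout is just one common convention:

```hlsl
// Reconstruct lighting in direction n from 9 projected SH coefficients c[9].
float3 EvaluateSH9(float3 c[9], float3 n)
{
    float3 result = c[0] * 0.282095;                       // band l = 0

    result += c[1] * (0.488603 * n.y);                     // band l = 1
    result += c[2] * (0.488603 * n.z);
    result += c[3] * (0.488603 * n.x);

    result += c[4] * (1.092548 * n.x * n.y);               // band l = 2
    result += c[5] * (1.092548 * n.y * n.z);
    result += c[6] * (0.315392 * (3.0 * n.z * n.z - 1.0));
    result += c[7] * (1.092548 * n.x * n.z);
    result += c[8] * (0.546274 * (n.x * n.x - n.y * n.y));

    return result;
}
```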


That is it! Finally, we can compute our approximated integral using the equation above. Actually, there is a lot more to say about SH - for example, rotational invariance, zonal harmonics, or the fact that this method is only good for low-frequency lighting. This time I've only tried to give an overview of Spherical Harmonics! So if you are interested in more, you should definitely read the original posts and publications down below. I will also attach a link to a series about another similar, but newer, approach - Spherical Gaussians, used in The Order: 1886 by Ready at Dawn. Now check out the shader by Morgan McGuire:

                                                          

Links:
1. http://silviojemma.com/public/papers/lighting/spherical-harmonic-lighting.pdf
2. http://simonstechblog.blogspot.nl/2011/12/spherical-harmonic-lighting.html
3. http://www.ppsloan.org/publications/StupidSH35.pdf
4. http://www.geomerics.com/blogs/simplifying-spherical-harmonics-for-lighting/
5. https://mynameismjp.wordpress.com/2016/10/09/new-blog-series-lightmap-baking-and-spherical-gaussians/

Sunday, 29 October 2017

Experimenting with Shadow Mapping (SSM, VSM)

This time I'd like to show you how shadows are made :) In particular, I'm going to focus on a technique called Shadow Mapping. The method is quite old (it was introduced in 1978) but still very popular in modern real-time graphics.


Demo scene. FBX models by dhpoware

Basically, there are 2 steps to rendering a scene with Shadow Mapping. First, we render the scene from the light's point of view to create a depth map that stores the depth values of the occluders. This depth texture holds the distance from the light to the closest surfaces visible to the light's "camera". Second, we render the scene normally while projecting the depth map onto it. Now we can compare each fragment's distance to the light with the stored value to decide whether the fragment is in shadow. If its distance is greater than the distance from the depth map, there is another surface between the fragment and the light, so the fragment is in shadow. It doesn't sound very complicated in theory, but in practice we face several problems. Our shadow map is a texture, so it naturally has limited size and precision, which makes the distance comparison inaccurate. Look at these ugly artifacts below:


Shadow acne

This phenomenon is called shadow acne, and it is a precision problem. We can easily fix it by subtracting a small value (called bias) from the actual depth. The results will be much better, but keep in mind that a bigger bias might cause another problem: peter-panning. (Actually, DirectX hardware has solutions for the bias problem, such as Slope-Scaled Depth Bias.)
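To make both steps concrete, a minimal depth comparison with a bias in an HLSL pixel shader might look like this (texture and variable names are mine):

```hlsl
Texture2D ShadowMap;
SamplerState ShadowMapSampler;

// lightSpacePos is the fragment position multiplied by the light's
// view-projection matrix in the vertex shader.
float ShadowFactor(float4 lightSpacePos)
{
    // Perspective divide and remap from [-1, 1] to [0, 1] texture space.
    float2 uv = lightSpacePos.xy / lightSpacePos.w * float2(0.5, -0.5) + 0.5;
    float depth = lightSpacePos.z / lightSpacePos.w;

    // Closest occluder depth seen from the light.
    float occluder = ShadowMap.Sample(ShadowMapSampler, uv).r;

    // Subtract a small bias to avoid shadow acne.
    const float bias = 0.002;
    return (depth - bias > occluder) ? 0.0 : 1.0; // 0 = in shadow, 1 = lit
}
```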


Acne is fixed, but now shadows are "detached". (Peter Panning effect)

Ok, we've got an idea of how shadows are made, but there is still one big problem. Remember that we are working with a projected texture? That means we are going to deal with nasty aliasing. Although we can increase the resolution of our shadow map, this is not the best solution for removing the jagged edges, as they still won't be smooth (even on modern GPUs).

So we have to think about filtering the shadow somehow. Luckily, there are a lot of different algorithms for smoothing shadows: Percentage Closer Filtering, Exponential Shadow Maps, Variance Shadow Maps, Convolution Shadow Maps, etc. This time I'm going to cover only two techniques: Percentage Closer Filtering and Variance Shadow Maps.

PCF is a quite old algorithm that has been popular for many years. The idea behind it is the following: we sample an area of texels and calculate the percentage that is lit. For example, for a 2x2 texel region we average 4 comparison results to smooth the shadow. (You can read more about PCF in GPU Gems, Chapter 11.) In our HLSL shader we then have something like this:
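Since my original snippet was posted as an image, here is a hedged reconstruction of a basic 2x2 PCF lookup (names are mine):

```hlsl
Texture2D ShadowMap;
SamplerState ShadowMapSampler;

// 2x2 PCF: average 4 depth comparisons around the sample point.
float ShadowPCF(float2 uv, float depth, float bias, float2 texelSize)
{
    float shadow = 0.0;
    for (int x = 0; x < 2; ++x)
    {
        for (int y = 0; y < 2; ++y)
        {
            float2 offset = float2(x, y) * texelSize;
            float occluder = ShadowMap.Sample(ShadowMapSampler, uv + offset).r;
            shadow += (depth - bias > occluder) ? 0.0 : 1.0;
        }
    }
    return shadow / 4.0; // percentage of samples that are lit
}
```

For 5x5 PCF, simply widen the loops and divide by 25 instead.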



You can change the region size and get better results - compare 2x2 and 5x5 PCF below. In addition, besides standard PCF, you can use more sophisticated filtering methods, like Poisson-disk PCF, a Gaussian blur or something else.
2x2 PCF
5x5 PCF
But what if we want to improve our shadow quality with an easier approach and fewer samples? Then we should consider a different shadow map technique: Variance Shadow Maps. I won't get into the details (you can read the original publication) but will just give you an overview. The implementation is pretty straightforward. We work with the mean and variance of the depth distribution of a texel and then estimate the probability of being in shadow. For this we use two channels of a 32-bit texture (red and green) to store depth and depth², whereas previously we used only one channel in our pixel shader. Now let's define two moments, M1 and M2, where E(x) is the expected value of the distribution, x is the depth and p(x) is the filter weight. From them we get the mean and the variance:

M1 = E(x) = ∫ x·p(x) dx,   M2 = E(x²) = ∫ x²·p(x) dx
μ = E(x) = M1,   σ² = E(x²) - E(x)² = M2 - M1²
After that, we can calculate an approximate upper bound on the probability P(x ≥ t) using the one-tailed Chebyshev's inequality suggested in the original publication:

P(x ≥ t) ≤ p_max(t) = σ² / (σ² + (t - μ)²)
Variance Shadow Mapping
In the pictures (by Matt's Webcorner) you can see the difference between simple Shadow Mapping and Variance Shadow Mapping with filtering. Mind that you should use a separate filtering pass with VSM, such as a Gaussian blur over the moments texture, to achieve proper results.
Simple Shadow Mapping
This method avoids the bias-related problems, such as acne and peter-panning, but unfortunately has its own disadvantage: light bleeding. It happens when there is a big depth range between occluders and receivers, which results in apparent artifacts in the shadows. Light bleeding can be reduced by warping the shadow value with an exponential, which is a relatively heavy operation as it requires high precision. Anyway, in general VSM gives very good results despite its costs. In conclusion, here is a shader snippet that can be used to implement the technique.
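A hedged sketch of the VSM lookup (texture names are mine; the depth/depth² moments are assumed to be rendered and blurred in a previous pass):

```hlsl
Texture2D MomentsMap;   // red = E(x), green = E(x^2), pre-blurred
SamplerState MomentsSampler;

float ShadowVSM(float2 uv, float depth)
{
    float2 moments = MomentsMap.Sample(MomentsSampler, uv).rg;

    float mean     = moments.x;
    float variance = moments.y - mean * mean;
    variance = max(variance, 0.00001);      // guard against numeric issues

    // Chebyshev's inequality gives an upper bound on P(x >= depth).
    float d    = depth - mean;
    float pMax = variance / (variance + d * d);

    // Fully lit if the fragment is closer than the mean occluder depth.
    return (depth <= mean) ? 1.0 : pMax;
}
```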






Thursday, 19 October 2017

Dual-Paraboloid Environment Mapping in HLSL (XNA 4.0)

"Dual-Paraboloid Reflections" demo

Hey, guys! Today I want to talk a little about Dual-Paraboloid Reflections and show an implementation in High-Level Shader Language using the XNA 4.0 environment. The code and explanation are partially based on a 2008 post by Kyle Hayward, and I decided to revive this topic because it's still popular among game developers these days. That means you won't find anything new here if you are familiar with this technique; but if not, it will be a great chance to discover it, or at least look at some fancy pictures :)

Dual-Paraboloid Mapping is another way of doing environment mapping in your scene besides cube or sphere maps. This method is cheaper than cube mapping, as we have to update only 2 textures rather than the 6 of a cube map.

Let me explain what is actually going on with all this stuff. As you may have guessed, this method uses two paraboloids (front and rear) that reflect all incident rays into a constant direction.

   
An example chart of elliptic paraboloid
Sample function equation of elliptic paraboloid
Image courtesy of Imagination Technologies


Now we have to calculate the mapped coordinates. The normal vector at the point of intersection is the sum of the incident and reflected vectors. We then divide its x, y and z components by its z component to get the scaled result.


The vertex shader should be pretty clear now: 
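A sketch of that vertex shader, close to Kyle Hayward's original formulation (the uniform and semantic names are mine):

```hlsl
float4x4 WorldView;   // transform into the paraboloid "camera" space
float Direction;      // +1 for the front map, -1 for the back map
float NearPlane, FarPlane;

struct VSOutput
{
    float4 Position : POSITION0;
    float  ZValue   : TEXCOORD0;  // used to clip the wrong hemisphere
};

VSOutput VS(float4 position : POSITION0)
{
    VSOutput output;

    // Move the vertex into paraboloid space and flip z for the back map.
    float4 pos = mul(position, WorldView);
    pos.z *= Direction;

    // Normalize so the vertex lies on the unit sphere around the viewpoint.
    float L = length(pos.xyz);
    pos.xyz /= L;

    // Project onto the paraboloid and store normalized distance as depth.
    output.Position = float4(pos.x / (pos.z + 1.0),
                             pos.y / (pos.z + 1.0),
                             (L - NearPlane) / (FarPlane - NearPlane),
                             1.0);
    output.ZValue = pos.z;
    return output;
}
```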


In the pixel shader, we will generate and sample our textures. Remember that we only have to store the front and back maps.
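The lookup for a reflection vector R might be sketched like this (the exact signs depend on how the two maps were generated, so treat this as one possible convention):

```hlsl
sampler FrontSampler : register(s0);
sampler BackSampler  : register(s1);

float3 SampleDualParaboloid(float3 R)
{
    if (R.z >= 0.0)
    {
        // Front hemisphere: project and remap to [0, 1] texture space.
        float2 uv = R.xy / (R.z + 1.0) * float2(0.5, -0.5) + 0.5;
        return tex2D(FrontSampler, uv).rgb;
    }
    else
    {
        // Back hemisphere: same projection with flipped z.
        float2 uv = R.xy / (1.0 - R.z) * float2(0.5, -0.5) + 0.5;
        return tex2D(BackSampler, uv).rgb;
    }
}
```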

So that's it! Nothing overcomplicated, because the method is quite similar to cube mapping. Unfortunately, the results will not be as good as with cube maps. Some of you may have noticed the artifacts on the spheres in the top picture. If not, here is another, more evident example:


In general, this method can be very useful in some situations during development, although the quality of the reflections will be lower compared to cube mapping. Some great examples of dual-paraboloid mapping in action are "Grand Theft Auto IV" and "Grand Theft Auto V" by Rockstar Games. These amazing games have some beautiful visuals, including the reflections :)

"Grand Theft Auto IV" by Rockstar Games

"Grand Theft Auto V" by Rockstar Games

Monday, 11 September 2017

Deferred and Forward Rendering in Unreal Engine 4

This is my first post here! So I've decided to do a short research overview of two popular rendering techniques using a scene from the "Infinity Blade: Grass Lands" environment by Epic Games.

Courtesy of Epic Games

You won't find anything specifically new here, just a comparison between two rendering algorithms in a modern scene that is actually meant to be playable. What I mean is that in theory we all know the pros and cons of Deferred and Forward Rendering, but what about a real situation? Let's figure it out!

Everyone familiar with Deferred Rendering knows that it allows you to render multiple lights by gathering information into the G-Buffer in a single geometry pass. Unfortunately, it has some common problems with complex BRDF models, anti-aliasing and transparency. (There are solutions, of course, but sometimes they aren't very efficient.)

Normal Buffer

Forward Rendering is another technique that is less popular these days, but it isn't obsolete at all. It has issues with rendering multiple lights because of the way the algorithm works (although there are solutions here too, such as Forward+), but at the same time it can easily handle MSAA.

By default, UE4 uses Deferred Rendering, but you can switch to Forward as well. Forward Rendering is popular for VR projects, although in UE4 it isn't fully optimized yet. For instance, screen-space techniques (SSR, SSAO) aren't supported at the moment of writing this post.

So let's take a look at a non-VR environment and compare the performance with the built-in GPU profiler. As for the specs, I'm using Unreal Engine 4.16 (Shader Model 5) on an NVIDIA GeForce GTX 850M with an Intel Core i5-4200M.

First of all, I will check the basics: frame time. Deferred Rendering gives us about 18 ms (~55 FPS), whereas Forward Rendering shows ~16 ms (~62 FPS). The difference is not gigantic, but it is noticeable. To be more specific:

Forward Rendering

Deferred Rendering

Of course, to delve into the details, let's check out both "Scene Rendering" commands:

Deferred Rendering

Forward Rendering
So, as we can see, there are differences of a few milliseconds (e.g. "Translucency drawing") between the two techniques. I have to admit that the scene I took for this research performs better with Forward Rendering, even though in other circumstances (light complexity, etc.) the situation might be completely different!

All I wanted to say is that we should not consider either technique old or useless in modern video game development. We have to wisely choose the most suitable algorithm for our demands (AA or something else) or even combine both of them! (This is a common practice these days.) Anyway, there are many more topics to learn and discover, such as Clustered Shading, Tiled Shading, etc.
