Santiago Sanchez

Software developer

Ray tracing demo with WebGL — October 20, 2015


This is a simple ray tracing demo done entirely in a fragment shader over a fullscreen quad.

[Image: ray tracing demo]

The idea is that for each fragment, the shader shoots one ray from the eye, through the screen and into the scene; checks which objects it intersects, if any; and based on this calculates the color for the fragment. This is a good resource if you want to read more about ray tracing.

The materials are all based on procedural textures. I will assume you have some knowledge about Perlin noise and procedural textures, but if not you can check this famous presentation by Ken Perlin himself from 1999, where he explains the concept and presents some of its applications. I’ll just quote the presentation and say that noise is a controlled random primitive: a pseudo-random function where all the apparently random variations are the same size and roughly isotropic. By itself it doesn’t do much more than create a simple pattern; its power comes from combining noise at different frequencies and feeding it to other constructs to provide interesting variations. It can be used to generate textures, models, animations, etc.
The implementation of Perlin noise I used was developed by Stefan Gustavson.

Let’s dig into the code…

First, let’s see the functions that combine noise at different frequencies, which serve as the basis for the different procedural textures used. These are explained in the presentation I mentioned above.
The function ‘cnoise’ (not listed here) is the basic Perlin noise primitive.

// Fractal sum of noise. At each step we double the frequency and 
// halve the amplitude, repeating the noise patterns at different 
// scales.
float sumNoise(vec3 p)
{
    float f = 0.0;
    for (float i=0.0; i < 8.0; i++)
    {
        float w = pow(2.0, i);
        f += cnoise(p * w) / w;
    }
    return f;
}

// Same as above but taking the absolute value of the noise
// primitive, which creates discontinuities in the gradients.
// It is known as the turbulence function.
float sumAbsNoise(vec3 p)
{
    float f = 0.0;
    for (float i=0.0; i < 8.0; i++)
    {
        float w = pow(2.0, i);
        f += abs(cnoise(p * w)) / w;
    }
    return f;
}

// Uses the turbulence function from above to do a phase shift
// in a simple stripe pattern.
float sinSumAbsNoise(float x, vec3 p)
{
    return sin(x + sumAbsNoise(p));
}
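
As an illustration (not part of the demo’s shader), a marble-like material could be built on top of sinSumAbsNoise by using it to mix two colors along one axis; the colors and scale factors below are just assumptions:

// Hypothetical marble texture: stripes along x, perturbed by turbulence.
// The 0.5 + 0.5 * ... remap brings the sine output from [-1, 1] to [0, 1]
// so it can be used directly as a mix factor.
vec3 marbleColor(vec3 p)
{
    float t = 0.5 + 0.5 * sinSumAbsNoise(10.0 * p.x, 2.0 * p);
    return mix(vec3(0.25, 0.25, 0.35), vec3(0.9, 0.9, 0.85), t);
}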

Here are the data structures used in the shader. They are pretty self-explanatory:

struct Material
{
    vec3 diffuse;
    float shininess;
    float glossiness;
};

struct Sphere
{
    vec3 center;
    float radius;
    Material mat;
};

struct Plane
{
    vec3 normal;
    vec3 point;
    Material mat;
};

struct Ray
{
    vec3 origin;
    vec3 dir;
};

The shader’s main function:

void main()
{
    // Calculate the screen uv coords. 'resolution' is a vec2 
    // uniform with the screen resolution.
    vec2 screenUv = gl_FragCoord.xy / resolution.xy;

    // Create the ray
    vec2 aspectRatio = vec2(resolution.x / resolution.y, 1.0);
    Ray ray = Ray(rayOrigin, normalize(vec3((2.0 * screenUv - 1.0) *
        aspectRatio, 1.0)));

    // Intersect the ray with the scene and compute the color based
    // on the object that was hit.
    float dist;
    float objectId = intersectWithScene(ray, dist);
    vec3 color = getColor(objectId, ray, dist);

    gl_FragColor = vec4( color, 1.0 );
}

The function intersectWithScene just checks whether the ray hits the sphere, the plane, or nothing, and returns an object id based on this.
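
The intersection routines themselves aren’t listed in the post; for reference, a standard ray-sphere test along the lines of the sketch below is what a function like intersectSphere (used further down for the shadow ray) typically looks like, though the demo’s exact code may differ:

// Illustrative ray-sphere intersection (assumes ray.dir is normalized).
// Returns the distance along the ray to the closest hit in front of the
// origin, or -1.0 if the ray misses the sphere.
float intersectSphere(Sphere s, Ray ray)
{
    vec3 oc = ray.origin - s.center;
    float b = dot(oc, ray.dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0)
        return -1.0;
    float t = -b - sqrt(disc);
    return t > 0.0 ? t : -1.0;
}
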
What’s more interesting are the functions that return the color for each surface. This is the function for the sky:

vec3 getSkyColor(vec3 rayOrigin, vec3 skyPoint, vec3 sunPos)
{
    // Clouds layer
    vec2 skyUv = sphereTexCoords(skyPoint);
    vec3 clouds = 1.4 * vec3(sumNoise(0.002 * skyPoint + 
        vec3(0.02,0.02,0.03) * time)) * smoothstep(0.0,0.5,skyUv.y);

    vec3 skyDir = normalize(skyPoint - rayOrigin);
    vec3 sunDir = normalize(sunPos - rayOrigin);
    float sunDot = dot(skyDir, sunDir);

    // Sun and sun rays layers
    vec3 rays = vec3(0.7 * pow(max(0.0, sunDot), 50.0));
    vec3 sun = vec3(0.4 * pow(max(0.001, sunDot), 500.0));

    // Sky layer
    float curve = 1.0 - pow(1.0 - max(skyDir.y, 0.1), 10.0);
    vec3 sky = mix(HorizonColor, SkyColor, curve);

    float alpha = 1.0 - clouds.x;
    return clouds + alpha * (sky + sun + rays);
}

It has four layers: the clouds, the sky color, the sun and its rays. Let’s see each one in more detail.

The clouds are created with a fractal sum of 3D noise values. The input to the sumNoise function, which was listed above, is the point where the ray hit the sky (imagine a sky dome). This point is scaled down to ‘zoom in’ on the noise space, and then the values are animated in all 3 dimensions; otherwise we would get static clouds.
Then we multiply the result by a smoothstep that depends on the altitude of the point in the sky, to prevent the clouds from going below the horizon.

The sky layer is just a gradient between two colors, the horizon color and the sky color.

The sun and rays layers are computed in the following way. We take the dot product of the normalized directions from the ray’s origin to the sun position and to the sky point we are shading. As you may remember from your algebra lessons, this dot product is the cosine of the angle between the two vectors. We feed this value to the pow function and get a circular shape which is at full intensity at the sun position and falls off to zero as we move away from it. The falloff is controlled by the pow exponent: a small exponent for the rays gives a larger, softer glow, while a larger exponent for the sun produces a smaller, more concentrated disc.

Finally we combine the four layers by adding the clouds to the other three layers scaled by an alpha value. This alpha is taken to be 1.0 - clouds.x, and its purpose is to remove some black spots that would otherwise show up in the sky, which would look really odd.

Moving on, let’s see the function that outputs the plane color. Explained in the comments inside the code:

vec3 getPlaneColor(Ray ray, float dist)
{
    vec3 point = ray.origin + dist * ray.dir;
    vec3 vecToLight = g_sunPos - point;
    float distToLight = length(vecToLight);
    vec3 lightDir = normalize(vecToLight);

    // Shadow: shoot one shadow ray pointing to the light. If it
    // intersects the sphere, consider the point is in shadow.
    Ray rayToLight = Ray(point + lightDir * EPSILON, lightDir);
    float ts = intersectSphere(g_sphere, rayToLight);
    float shadow = 1.0;
    if (ts > 0.0) {
        shadow = 1.0 - ts / (0.5 + 0.5 * ts + 0.5 * ts * ts);
    }
    vec3 vecToSphere = g_sphere.center - point;
    float distToSphere = length(vecToSphere);
    shadow *= smoothstep(0.0, 4.5, distToSphere);

    // Reflections: shoot a ray in direction of the reflection of
    // the incoming ray with respect to the plane's normal. Then 
    // use the getColorSimple function which computes the reflected
    // color but without shooting additional reflection rays.
    vec3 reflectedColor = vec3(0.0);
    vec3 reflectDir = reflect(ray.dir, g_plane.normal);
    Ray reflectedRay = Ray(point + reflectDir * EPSILON, reflectDir);
    float reflectedObjectId = intersectWithScene(reflectedRay, dist);
    reflectedColor = getColorSimple(reflectedObjectId, 
        reflectedRay, dist);

    return shade(g_plane.mat, g_plane.normal, lightDir, ray.dir, 
        reflectDir, reflectedColor, shadow);
}

And here is the shade function, which applies the lighting model:

vec3 shade(Material mat, vec3 normal, vec3 lightDir, vec3 viewDir, 
    vec3 reflectDir, vec3 reflectedColor, float shadow)
{
    const vec3 specularColor = vec3(0.8);
    float NdotL = max(dot(normal, lightDir), 0.0);
    float RdotV = max(dot(viewDir, reflectDir), 0.0);
    vec3 spec = pow(RdotV, mat.shininess) * specularColor;
    return mat.diffuse * (NdotL * shadow + Ambient) + shadow * 
        mat.glossiness * (spec + RdotV * reflectedColor);
}

You can check out the full source code on my GitHub account.
Enjoy!

Distortion effect — October 11, 2015


This effect consists of distorting the rendered image, which can be used to model phenomena like heat distortion, e.g. around fire and explosions.

It is pretty simple to implement. We first render the particles to a render target, which we’ll use as a mask for the effect, since we don’t want to distort the whole image. Then we draw the full scene to another render target, and lastly we run a post-processing pass that applies the distortion and composes the final image presented on screen. Three.js’ EffectComposer API is used to do this.

[Image: distortion effect]

The fragment shader in the post-processing pass takes three textures as input: one with the scene rendered without the distortion effect (screenTex), one with the mask (maskTex), and a noise texture (noiseTex), which is used to offset the uv values.

[Image: mask texture]

Here is the code:


uniform float scale;

uniform sampler2D maskTex;
uniform sampler2D noiseTex;
uniform sampler2D screenTex;

varying vec2 vUv;
varying vec2 vNoiseUv;

void main()
{
    vec3 mask = texture2D(maskTex, vUv).rgb;
    float distortionStrength = min(1.0, dot(mask, mask));

    vec2 noise = texture2D(noiseTex, vNoiseUv).rg;
    noise = noise * scale * distortionStrength;

    vec3 distorted = texture2D(screenTex, vUv + noise).rgb;
    gl_FragColor = vec4(distorted, 1.0);
}

The distortion is achieved by offsetting the uv values with the noise values, which depend on the mask, so the final image is not distorted where the mask texture is black.
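
The varyings vUv and vNoiseUv come from the vertex shader of the pass, which isn’t listed in the post. A minimal sketch could look like the one below, where the noise uv is just the quad’s uv scrolled over time so the distortion animates; the noiseSpeed uniform and the scrolling scheme are assumptions:

uniform float time;
uniform vec2 noiseSpeed; // hypothetical uniform: how fast the noise uv scrolls

varying vec2 vUv;
varying vec2 vNoiseUv;

void main()
{
    // Pass the quad's uv through for sampling the scene and mask textures,
    // and derive a second, scrolling uv for the noise texture.
    vUv = uv;
    vNoiseUv = uv + time * noiseSpeed;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}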

You can check out the code here.

Enjoy!

Screen Space Global Illumination — July 17, 2011


Here are some of the first results of my implementation of SSAO plus one bounce of indirect light in the Crystal Space engine.

In the photo gallery you can see three pictures of the famous dragon model from the Stanford repository in the Sponza Atrium scene. The first one is rendered with ambient occlusion and one bounce of indirect light, the second one is without ambient occlusion, and the third one is with ambient occlusion only.

The Sponza scene with the Stanford models was exported to Crystal Space format by Ollie Brown as part of Summer of Code 2009. You can find it here.

Ambient Occlusion — July 16, 2011


Ambient Occlusion is a concept used in computer graphics to describe the shadowing of ambient light. It is important because it helps improve the perception of creases and contact areas, and it grounds objects, especially those that are not under direct illumination.

These images are of a car lit by ambient light. In the left image we see the floor under the car darkened by it. In the right one, the shadow under the car is removed, and the car appears to be floating, as if it does not belong there. (Images taken from John Hable’s Uncharted 2: HDR Lighting presentation at GDC 2010.)

Formally, ambient occlusion is defined as the integral of the visibility function over the hemisphere around the surface normal. It measures the amount of light being blocked by nearby objects. (Image taken from Separable Approximation of Ambient Occlusion presentation at Eurographics 2011).
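
Written out (a standard formulation, not taken from the slides), the occlusion at a point p with surface normal n is

A(p, n) = (1/π) ∫_Ω V(p, ω) max(0, n · ω) dω

where the integral runs over the hemisphere Ω around n and V(p, ω) is the visibility function, equal to 1 when a ray from p in direction ω is blocked by nearby geometry and 0 otherwise (some authors use the complementary accessibility convention instead).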

The most common approach to solving the integral is to cast rays over the hemisphere around each surface location or vertex, which is extremely expensive to do at interactive framerates. Therefore it is typically used as an offline method to precompute the occlusion factor for static scenes, both in games and in the film industry.

In order to achieve real-time framerates, several methods have been developed that approximate the effect in screen space, the first of them being Crytek’s Screen Space Ambient Occlusion method, originally developed for their game Crysis.

The main idea behind these algorithms is this: for every pixel on the screen, sample a set of pixels around it from the depth buffer, and compare the depth values to get an estimate of how occluded the pixel is. The sampling can also be done in 3D, but that requires the position of the object rendered at each pixel, which can either be read directly from a buffer or reconstructed from depth. In addition, surface normals can be used to avoid sampling below the surface and to attenuate the occlusion value depending on the angle between the normal at the point and the direction towards the occluder.

The result is a buffer of occlusion values, which is then combined with the lighting information of the scene to produce the final image. Typically the occlusion factor only affects ambient light, but you can also make it affect direct light to achieve different results.
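
To make the idea concrete, here is a rough GLSL sketch of such a screen-space pass; the sample kernel, uniform names, and the depth-comparison falloff are illustrative assumptions, not Crytek’s or Crystal Space’s actual implementation:

uniform sampler2D depthTex;   // assumed: linear view-space depth stored per pixel
uniform vec2 resolution;      // screen resolution in pixels
uniform float radius;         // sampling radius in pixels
uniform float depthBias;      // small bias to avoid self-occlusion

varying vec2 vUv;

const int NUM_SAMPLES = 8;

void main()
{
    // Hypothetical fixed kernel of offsets on the unit circle.
    vec2 offsets[NUM_SAMPLES];
    offsets[0] = vec2( 1.0,  0.0); offsets[1] = vec2(-1.0,  0.0);
    offsets[2] = vec2( 0.0,  1.0); offsets[3] = vec2( 0.0, -1.0);
    offsets[4] = vec2( 0.7,  0.7); offsets[5] = vec2(-0.7,  0.7);
    offsets[6] = vec2( 0.7, -0.7); offsets[7] = vec2(-0.7, -0.7);

    float centerDepth = texture2D(depthTex, vUv).r;
    float occlusion = 0.0;

    for (int i = 0; i < NUM_SAMPLES; i++)
    {
        vec2 sampleUv = vUv + offsets[i] * radius / resolution;
        float sampleDepth = texture2D(depthTex, sampleUv).r;

        // A neighbour closer to the camera than this pixel is a potential
        // occluder; attenuate its contribution with the depth difference so
        // distant geometry does not darken the pixel.
        float diff = centerDepth - sampleDepth - depthBias;
        if (diff > 0.0)
        {
            occlusion += 1.0 / (1.0 + diff * diff);
        }
    }

    // 1.0 = fully open, 0.0 = fully occluded.
    float ao = 1.0 - occlusion / float(NUM_SAMPLES);
    gl_FragColor = vec4(vec3(ao), 1.0);
}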

GSoC at Crystal Space — May 18, 2011


In early April I made a proposal to participate in the Google Summer of Code program with Crystal Space, and it was accepted!

GSoC is a program where students from all over the world (around 1000 every year since 2005) are chosen to work on different open source software projects over a three-month period.

Crystal Space is (quoting from its website):

A mature, full-featured software development kit (SDK) providing real-time 3D graphics for applications such as games and virtual reality.

For the next few months I’ll be working on extending the deferred shading manager with a technique called screen space directional occlusion. In the coming days I’ll be posting more details about the project.
