This is a simple ray tracing demo done entirely in a fragment shader over a fullscreen quad.

[Screenshot: ray_tracing demo]

The idea is that each fragment shader invocation shoots one ray from the eye, through the screen and into the scene; checks which objects the ray intersects, if any; and based on this calculates the color for the fragment. This is a good resource if you want to read more about ray tracing.

The materials are all based on procedural textures. I will assume you have some knowledge about Perlin noise and procedural textures, but if not you can check this famous presentation by Ken Perlin himself from 1999, where he explains the concept and presents some of its applications. I’ll just quote the presentation and say that noise is a controlled random primitive; a pseudo-random function where all the apparently random variations are the same size and roughly isotropic. By itself it doesn’t do much beyond creating a simple pattern; its power comes from combining noise at different frequencies and feeding it into other constructs to provide interesting variations. It can be used to generate textures, models, animations, etc.
The implementation of Perlin noise I used was developed by Stefan Gustavson.

Let’s dig into the code…

First let’s see the functions that combine noise at different frequencies, which serve as the basis for the different procedural textures used. These are explained in the presentation I mentioned above.
The function ‘cnoise’ (not listed here) is the basic Perlin noise primitive.

// Fractal sum of noise. At each step we double the frequency and 
// halve the amplitude, repeating the noise patterns at different 
// scales.
float sumNoise(vec3 p)
{
    float f = 0.0;
    for (float i=0.0; i < 8.0; i++)
    {
        float w = pow(2.0, i);
        f += cnoise(p * w) / w;
    }
    return f;
}

// Same as above but taking the absolute value of the noise
// primitive, which creates discontinuities in the gradients.
// It is known as turbulence function.
float sumAbsNoise(vec3 p)
{
    float f = 0.0;
    for (float i=0.0; i < 8.0; i++)
    {
        float w = pow(2.0, i);
        f += abs(cnoise(p * w)) / w;
    }
    return f;
}

// Uses the turbulence function from above to do a phase shift
// in a simple stripe pattern.
float sinSumAbsNoise(float x, vec3 p)
{
    return sin(x + sumAbsNoise(p));
}

Here are the data structures used in the shader. They are pretty self-explanatory:

struct Material
{
    vec3 diffuse;
    float shininess;
    float glossiness;
};

struct Sphere
{
    vec3 center;
    float radius;
    Material mat;
};

struct Plane
{
    vec3 normal;
    vec3 point;
    Material mat;
};

struct Ray
{
    vec3 origin;
    vec3 dir;
};

The shader’s main function:

void main()
{
    // Calculate the screen uv coords. 'resolution' is a vec2 
    // uniform with the screen resolution.
    vec2 screenUv = gl_FragCoord.xy / resolution.xy;

    // Create the ray
    vec2 aspectRatio = vec2(resolution.x / resolution.y, 1.0);
    Ray ray = Ray(rayOrigin, normalize(vec3((2.0 * screenUv - 1.0) *
        aspectRatio, 1.0)));

    // Intersect the ray with the scene and compute the color based
    // on the object that was hit.
    float dist;
    float objectId = intersectWithScene(ray, dist);
    vec3 color = getColor(objectId, ray, dist);

    gl_FragColor = vec4( color, 1.0 );
}

The function intersectWithScene just checks whether the ray hits the sphere, the plane, or nothing, and returns an object id based on this.
What’s more interesting are the functions that return the color for each surface. This is the function for the sky:

vec3 getSkyColor(vec3 rayOrigin, vec3 skyPoint, vec3 sunPos)
{
    // Clouds layer
    vec2 skyUv = sphereTexCoords(skyPoint);
    vec3 clouds = 1.4 * vec3(sumNoise(0.002 * skyPoint + 
        vec3(0.02,0.02,0.03) * time)) * smoothstep(0.0,0.5,skyUv.y);

    vec3 skyDir = normalize(skyPoint - rayOrigin);
    vec3 sunDir = normalize(sunPos - rayOrigin);
    float sunDot = dot(skyDir, sunDir);

    // Sun and sun rays layers
    vec3 rays = vec3(0.7 * pow(max(0.0, sunDot), 50.0));
    vec3 sun = vec3(0.4 * pow(max(0.001, sunDot), 500.0));

    // Sky layer
    float curve = 1.0 - pow(1.0 - max(skyDir.y, 0.1), 10.0);
    vec3 sky = mix(HorizonColor, SkyColor, curve);

    float alpha = 1.0 - clouds.x;
    return clouds + alpha * (sky + sun + rays);
}

It has four layers: the clouds, the sky color, the sun and its rays. Let’s see each one in more detail.

The clouds are created with a fractal sum of 3D noise values. The input to the sumNoise function, which was listed above, is the point where the ray hit the sky (imagine a sky dome). This point is scaled down to ‘zoom in’ on the noise space, and then the values are animated over time in all 3 dimensions; otherwise we would get static clouds.
Then we multiply the result by a smoothstep that depends on the altitude of the point in the sky, to prevent the clouds from going below the horizon.

The sky layer is just a gradient between two colors, the horizon color and the sky color.

The sun and rays layers are computed in the following way. We take the dot product of the normalized directions from the ray’s origin to the sun position and to the sky position we are in. As you may remember from your algebra lessons, this dot product is the cosine of the angle between these two vectors. We feed this value to the pow function and get a circular shape which is at full intensity at the sun position and falls off to zero as we move away from it. This falloff is controlled by the pow exponent: a small exponent for the rays produces a wide glow, while a large exponent for the sun renders a small, concentrated disc.

Finally we combine the four layers by adding the clouds to the other three layers scaled by an alpha value. This alpha is taken to be 1.0 - clouds.x, and its purpose is to remove some black spots from our sky, which would be really weird.

Moving on, let’s see the function that outputs the plane color. Explained in the comments inside the code:

vec3 getPlaneColor(Ray ray, float dist)
{
    vec3 point = ray.origin + dist * ray.dir;
    vec3 vecToLight = g_sunPos - point;
    float distToLight = length(vecToLight);
    vec3 lightDir = normalize(vecToLight);

    // Shadow: shoot one shadow ray pointing to the light. If it
    // intersects the sphere, consider the point is in shadow.
    Ray rayToLight = Ray(point + lightDir * EPSILON, lightDir);
    float ts = intersectSphere(g_sphere, rayToLight);
    float shadow = 1.0;
    if (ts > 0.0) {
        shadow = 1.0 - ts / (0.5 + 0.5 * ts + 0.5 * ts * ts);
    }
    vec3 vecToSphere = g_sphere.center - point;
    float distToSphere = length(vecToSphere);
    shadow *= smoothstep(0.0, 4.5, distToSphere);

    // Reflections: shoot a ray in direction of the reflection of
    // the incoming ray with respect to the plane's normal. Then 
    // use the getColorSimple function which computes the reflected
    // color but without shooting additional reflection rays.
    vec3 reflectedColor = vec3(0.0);
    vec3 reflectDir = reflect(ray.dir, g_plane.normal);
    Ray reflectedRay = Ray(point + reflectDir * EPSILON, reflectDir);
    float reflectedObjectId = intersectWithScene(reflectedRay, dist);
    reflectedColor = getColorSimple(reflectedObjectId, 
        reflectedRay, dist);

    return shade(g_plane.mat, g_plane.normal, lightDir, ray.dir, 
        reflectDir, reflectedColor, shadow);
}

And here is the shade function, which applies the lighting model:

vec3 shade(Material mat, vec3 normal, vec3 lightDir, vec3 viewDir, 
    vec3 reflectDir, vec3 reflectedColor, float shadow)
{
    const vec3 specularColor = vec3(0.8);
    float NdotL = max(dot(normal, lightDir), 0.0);
    float RdotV = max(dot(viewDir, reflectDir), 0.0);
    vec3 spec = pow(RdotV, mat.shininess) * specularColor;
    return mat.diffuse * (NdotL * shadow + Ambient) + shadow * 
        mat.glossiness * (spec + RdotV * reflectedColor);
}

You can check the full source code on my GitHub account.
Enjoy!
