Atmospheric scattering

This weekend I got around to implementing the atmospheric scattering approach described in "A Practical Analytic Model for Daylight". It is basically a simplified model of the scattering that happens when light travels through air: light from the object is out-scattered along the way, while sunlight and skylight are in-scattered. The amount of scattering is determined by the turbidity of the air; higher turbidity means more haze or cloudiness. The effect is shown in the following two renders, the first with low turbidity and the second with high turbidity:
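The core of the effect boils down to blending the object's color toward the in-scattered light based on transmittance. This is only a minimal sketch of that idea, not the full Preetham model; the per-channel extinction coefficients are made-up illustration values:

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Minimal aerial-perspective sketch: light from the object is
// out-scattered with distance, while sky/sun light is in-scattered
// along the same path. beta is a per-channel extinction coefficient;
// higher turbidity corresponds to a larger beta.
Color applyScattering(const Color& object, const Color& inscatter,
                      float betaR, float betaG, float betaB, float dist)
{
    float tR = std::exp(-betaR * dist);  // transmittance per channel
    float tG = std::exp(-betaG * dist);
    float tB = std::exp(-betaB * dist);
    return { object.r * tR + inscatter.r * (1.0f - tR),
             object.g * tG + inscatter.g * (1.0f - tG),
             object.b * tB + inscatter.b * (1.0f - tB) };
}
```

At distance zero the object color passes through untouched; at large distances the result converges to the in-scattered color, which is exactly the hazy look shown above.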

A Practical Analytic Model for Daylight


Beer's Law

I added Beer's law, which causes light traveling through water to be attenuated exponentially with the distance traveled. This means that increasing depth gives the water a darker appearance, as seen in the render below:
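In code the attenuation is a one-liner per channel; a rough sketch, where the absorption coefficients are hypothetical (real water absorbs red faster than blue, which is what gives depth its blue-green tint):

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Beer's law: light that travels a distance d through a medium is
// attenuated by exp(-sigma * d), where sigma is the per-channel
// absorption coefficient. Larger d (deeper water) -> darker result.
Color attenuate(const Color& light, const Color& sigma, float distance)
{
    return { light.r * std::exp(-sigma.r * distance),
             light.g * std::exp(-sigma.g * distance),
             light.b * std::exp(-sigma.b * distance) };
}
```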

I also switched to an implicit representation of the water surface, reducing the memory requirements. In addition, I compute the normal vector directly from the water surface equation instead of indirectly from the height map, meaning the water surface is now bump mapped, and the geometric resolution doesn't need to be that high to get high-frequency details in the light reflection and refraction.
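For a surface given as y = h(x, z), the normal comes straight from the gradient: n = normalize(-dh/dx, 1, -dh/dz). A small sketch with a single sine wave standing in for the actual sum-of-sines surface (the wave parameters are hypothetical):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Analytic normal of an implicit height surface y = h(x, z).
// Here h(x, z) = amplitude * sin(freq * x), a stand-in for the real
// water equation; the partial derivatives are evaluated directly,
// so no height-map lookups are needed.
Vec3 waterNormal(float x, float z)
{
    const float amplitude = 0.5f, freq = 2.0f;           // made-up values
    float dhdx = amplitude * freq * std::cos(freq * x);  // partial dh/dx
    float dhdz = 0.0f;                                   // wave runs along x
    Vec3 n = { -dhdx, 1.0f, -dhdz };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

Because the derivatives are exact, the shading picks up every ripple of the wave function even when the tessellated geometry is coarse.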



I made a simple height map generator which basically just sums a set of sine waves with different directions, amplitudes, and wavelengths, creating something that looks like water. The amplitude of the waves is also modulated by the water depth, by looking at the height of the terrain when computing the water height map, so that the waves get shallower closer to shore. Combining the water height map with Fresnel-based reflection and refraction produces the nice water effect shown in the image below:
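The generator itself is only a few lines. This is a sketch under assumed parameters (the wave structure, the depth-falloff constant, and the phase handling are all illustrative, not the exact values I use):

```cpp
#include <cmath>
#include <vector>

// A wave is a direction in the xz-plane plus amplitude, wavelength,
// and phase. All values here are made-up illustration parameters.
struct Wave { float dirX, dirZ, amplitude, wavelength, phase; };

// Sum-of-sines water height at (x, z), with the amplitude scaled down
// in shallow water so waves flatten out near the shore.
float waterHeight(float x, float z, float depth,
                  const std::vector<Wave>& waves, float time)
{
    // Waves shrink as the water gets shallow; clamped to [0, 1].
    float depthScale = std::min(depth / 2.0f, 1.0f);
    float h = 0.0f;
    for (const Wave& w : waves) {
        float k = 2.0f * 3.14159265f / w.wavelength;  // wave number
        float d = w.dirX * x + w.dirZ * z;            // coord along direction
        h += w.amplitude * std::sin(k * d + w.phase + time);
    }
    return h * depthScale;
}
```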

The height map for the water now takes up the same amount of memory as the terrain height map, so I might try to avoid storing it explicitly and instead compute the sum of sines on demand. This also requires some changes to the kd-tree I use for ray tracing a height map, storing its nodes implicitly as well. The upshot is that the highest wave frequency would no longer be limited by the resolution of the height map.

Reflections & Refractions in Raytracing


Depth of field and Perlin noise

This weekend I added depth of field to the ray tracer. Basically it involves perturbing each camera ray slightly based on a lens radius and a focal distance. It doesn't require a lot of code to add it to a working path tracer, and the result is pretty nice:
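The standard thin-lens trick is to jitter the ray origin on a lens disc and re-aim the ray at the point where the original ray crosses the focal plane. A rough sketch (the camera basis vectors are passed in, and the rejection-sampled disc is just one way to pick a lens point):

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, dir; };

// Thin-lens depth of field: points exactly at the focal distance stay
// sharp; everything else blurs in proportion to the lens radius.
// Assumes dir is normalized and right/up are the camera's axes.
Ray perturbForDof(const Ray& r, const Vec3& right, const Vec3& up,
                  float lensRadius, float focalDist)
{
    // Uniform random point on the unit disc via rejection sampling.
    float dx, dy;
    do {
        dx = 2.0f * std::rand() / RAND_MAX - 1.0f;
        dy = 2.0f * std::rand() / RAND_MAX - 1.0f;
    } while (dx * dx + dy * dy > 1.0f);
    dx *= lensRadius; dy *= lensRadius;

    // Point on the focal plane that the unperturbed ray would hit.
    Vec3 focal = { r.origin.x + r.dir.x * focalDist,
                   r.origin.y + r.dir.y * focalDist,
                   r.origin.z + r.dir.z * focalDist };
    // Shift the origin on the lens and aim at the focal point.
    Vec3 o = { r.origin.x + right.x * dx + up.x * dy,
               r.origin.y + right.y * dx + up.y * dy,
               r.origin.z + right.z * dx + up.z * dy };
    Vec3 d = { focal.x - o.x, focal.y - o.y, focal.z - o.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { o, { d.x / len, d.y / len, d.z / len } };
}
```

With lensRadius set to zero this degenerates to the original pinhole ray, which makes it easy to toggle the effect.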

I also added Perlin noise and created a procedural marble-like texture for the statues:
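The marble pattern is the classic recipe: a sine stripe along one axis, perturbed by noise "turbulence" (several octaves of noise summed at doubling frequencies). A sketch where a cheap hash-based function stands in for real Perlin noise so the snippet is self-contained; all constants are illustrative:

```cpp
#include <cmath>

// Stand-in noise: pseudo-random but deterministic values in [0, 1).
// A real implementation would use gradient (Perlin) noise here.
float fakeNoise(float x, float y, float z)
{
    float v = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return v - std::floor(v);
}

// Sum several octaves, each finer in frequency and weaker in amplitude.
float turbulence(float x, float y, float z, int octaves)
{
    float sum = 0.0f, freq = 1.0f, amp = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += amp * fakeNoise(x * freq, y * freq, z * freq);
        freq *= 2.0f; amp *= 0.5f;
    }
    return sum;
}

// Greyscale marble intensity in [0, 1]: a stripe pattern whose phase
// is distorted by the turbulence, giving the veined look.
float marble(float x, float y, float z)
{
    const float stripeFreq = 4.0f, strength = 2.0f;  // made-up constants
    float t = x * stripeFreq + strength * turbulence(x, y, z, 4);
    return 0.5f * (1.0f + std::sin(t));
}
```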


Just for fun

Just for fun I did a render where the only light sources in the scene are the Lucy statues. With ray tracing this is really simple, I just set the emissive term of the material used for the model to some value, and render the scene using path tracing. The result was kind of cool so I decided to share it:


Albedo correction

Someone commented that the lighting was a bit off in the previous render, and it turns out the albedo was too low on both the statue and terrain. I increased the terrain's albedo to be closer to the 0.25 of green grass, and set the statue's albedo to 0.5. This made the render look a lot more natural and removed the hard shadows:

Note: I lowered the height of the statues a bit to make the scene less crowded (there are still 36 million statues). The increased amount of indirect lighting caused more noise, so for this render I used 2000 samples per pixel giving a render time of 9 hours.


And... a quadrillion

I did some optimizations to the model representation to avoid wasting space, and managed to squeeze the memory requirements for the Lucy model down to 535 MB. I then increased the number of instances to 36 million, reaching an impressive quadrillion (1e15) polygons. The result (with global illumination):

In this render I've also perturbed the sun ray randomly to give smoother shadows, and added a decal texture to the terrain for more detail (thanks to Mirchiss for complaining about the boring ground). I also added PNG export to the raytracer (earlier I just used KSnapshot), and made a movie zooming in on a single statue from afar: lucy_terrain.avi. The movie is made from 400 frames which are exported separately as PNGs, and then converted to an avi by mencoder. Each frame took approximately one minute to render with 50 samples per pixel.
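The sun-ray perturbation amounts to jittering each shadow ray's direction a little before tracing it; averaged over many samples this approximates the sun's angular size and produces penumbras. A sketch, where the jitter radius is a made-up value loosely matching the sun's roughly half-degree angular diameter:

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Perturb the sun direction per shadow sample. right/up are any two
// vectors orthogonal to sunDir; the offsets are drawn from a small
// square for simplicity (a disc would be slightly more faithful).
Vec3 jitterSunDir(const Vec3& sunDir, const Vec3& right, const Vec3& up)
{
    const float radius = 0.004f;  // ~tan(0.25 deg), hypothetical value
    float dx = (2.0f * std::rand() / RAND_MAX - 1.0f) * radius;
    float dy = (2.0f * std::rand() / RAND_MAX - 1.0f) * radius;
    Vec3 d = { sunDir.x + right.x * dx + up.x * dy,
               sunDir.y + right.y * dx + up.y * dy,
               sunDir.z + right.z * dx + up.z * dy };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```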


18 trillion triangles

Just got 2 more gigs of RAM (giving 4g, though only 3.3g are visible due to using a 32-bit OS at the moment), so I had to come up with a nice render to take advantage of it. The scene below contains 640000 instances of the Lucy statue at 28 million triangles, giving roughly 18 trillion triangles (that is ~1.796e13 triangles). Due to the nice O(log N) properties of the BIH however, it only took about 10 minutes to render without any global illumination. Not all triangles are visible in the current viewport though. The entire scene consumes around 2.7g, 960m for the terrain and the rest for the Lucy statue.

Below is another render of the same scene, this time with global illumination through path tracing. This render took 8 hours, though the difference from the render above isn't that big. This is due to the sharp direct illumination from the sun, exaggerated further by treating the sun as a point light.


Natural looking sky

I've now added skylight and sunlight to the ray tracer based on the approach in "A Practical Analytic Model for Daylight". I currently treat the sun as a point light source, which is not physically correct. Without bidirectional path tracing or metropolis light transport, though, having such a small and intense light source would lead to a large amount of noise in the rendering, so it'll have to do for now.

The image below shows an example scene with skylight and sunlight. It's a more detailed version of the terrain used before. I added support for 16-bit greyscale PNGs recently, meaning I could finally load the 16385 x 16385 Puget Sound height map. It's not terribly slow compared to the 4097 x 4097 version, though due to the memory requirements I have to cut off the kd-tree before it reaches the maximum depth, leaving each leaf node with 32 triangles. The code currently intersects the ray against all the triangles in the leaf node, which is a bit inefficient, so I should probably optimize it at some point.

At this size the height map consumes 512 MB and the kd-tree consumes 384 MB of memory. I'm still using the 4096 x 4096 texture, since the highest resolution would consume 1 GB of memory.

A Practical Analytic Model for Daylight


Terrain ray tracing

The image below shows the result from my kd-tree based heightmap traversal for terrain rendering. I alternately split the terrain in two in either the x- or z-directions. Each node in the tree stores the minimum and maximum height of the terrain in that node. This gives a very tight fit to the terrain, so when an intersection check against a terrain triangle is done the hit probability is rather high, and only a few triangles per ray need intersecting.
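The min/max heights make node rejection cheap: a node can be skipped entirely when the ray segment passing through it lies completely above the node's maximum or below its minimum height. A sketch of that test (names and node layout are illustrative, assuming y-up and a ray parameterized as origin + t * dir):

```cpp
#include <algorithm>

// Each kd-tree node over the height map stores the min and max terrain
// height inside its region, giving a tight fit to the terrain.
struct HeightNode { float minH, maxH; /* children, split axis, ... */ };

// Does the ray's height over [tEnter, tExit] overlap the node's
// height range? If not, the node (and its subtree) can be skipped.
bool segmentOverlapsNode(float oy, float dy, float tEnter, float tExit,
                         const HeightNode& node)
{
    float y0 = oy + dy * tEnter;  // ray height entering the node
    float y1 = oy + dy * tExit;   // ray height leaving the node
    float lo = std::min(y0, y1), hi = std::max(y0, y1);
    return hi >= node.minH && lo <= node.maxH;
}
```

Because the bounds hug the terrain tightly, a ray that does descend into a leaf is very likely to actually hit one of its triangles, which is the property described above.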

The heightmap and texture represent the Puget Sound area, and can be found at the Large Geometric Models Archive.

Terrain Guided Multi-Level Instancing of Highly Complex Plant Populations
Large Geometric Models Archive


Coherent grid traversal

Last week I implemented the grid traversal approach from "Ray Tracing Animated Scenes using Coherent Grid Traversal". For me it was a good crash course in coherent ray tracing using ray packets, and I got some experience in implementing SIMD packet traversal and triangle intersection. I got a pretty nice speedup as well, as the table below shows. I haven't implemented coherent ray tracing for the BIH yet, but that shouldn't be too hard.

Model          Triangles   Mono grid   Mono BIH   Coherent grid
Asian Dragon   7.2M        0.18        1.03       0.35

The table shows the frame rates achieved with the different approaches at 800x800. All except the hand model are from The Stanford 3D Scanning Repository (note no animations this time). In the table, "mono" refers to tracing one ray at a time, whereas "coherent" means tracing 4x4 rays at once in a coherent fashion. Note also that only the coherent version is optimized with SIMD using SSE intrinsics. Using a coherent grid traversal versus a single ray grid traversal gives me a 2x-3x speedup currently. An interesting thing to note from the table is that the BIH traversal is much less affected by the model complexity than the grid traversal. In fact it seems that the BIH tree quality is of higher importance than the model complexity.
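The SIMD part boils down to kernels that process one component of four rays per SSE register, with the rays laid out in structure-of-arrays form. As an example of the kind of kernel a 4x4 packet tracer is built from, here is a sketch of a 4-wide ray/slab interval test (the struct layout and names are mine, not the paper's):

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// One axis of four rays at once: origins, precomputed inverse
// directions, and the rays' current [tmin, tmax] intervals.
struct RayPacket4 {
    __m128 origin, invDir;
    __m128 tmin, tmax;
};

// Clip the four intervals against the slab [lo, hi] on this axis and
// return a 4-bit mask of rays whose intervals are still non-empty.
int slabTest4(RayPacket4& p, float lo, float hi)
{
    __m128 t0 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(lo), p.origin), p.invDir);
    __m128 t1 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(hi), p.origin), p.invDir);
    __m128 tNear = _mm_min_ps(t0, t1);   // min/max handle negative dirs
    __m128 tFar  = _mm_max_ps(t0, t1);
    p.tmin = _mm_max_ps(p.tmin, tNear);
    p.tmax = _mm_min_ps(p.tmax, tFar);
    return _mm_movemask_ps(_mm_cmple_ps(p.tmin, p.tmax));
}
```

Four rays for the price of roughly one scalar test is where most of the 2x-3x speedup comes from, minus the overhead of keeping the packet coherent.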

Ray Tracing Animated Scenes using Coherent Grid Traversal
The Stanford 3D Scanning Repository


Geometry instancing

Just implemented geometry instancing, so I wanted to share some renders with you. This scene contains one million instances of the hand model, which in itself contains 17135 polygons. Each instance is simply defined by a transformation matrix specifying its position and orientation. Performance doesn't suffer significantly from instancing either, as I use a top-level BIH for the instances.
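The trick that keeps instancing cheap is that the geometry is never duplicated: a ray is transformed into the model's local space by the instance's inverse matrix, and the model's own BIH is traversed unchanged. A sketch of that transform, assuming a 3x4 affine matrix (the struct layout is illustrative):

```cpp
struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, dir; };

// An instance stores (among other things) the inverse of its
// placement matrix, used to pull world-space rays into model space.
struct Instance {
    float inv[3][4];
};

Ray toObjectSpace(const Instance& inst, const Ray& r)
{
    const float (*m)[4] = inst.inv;
    // Full affine transform for the origin (a point: translation applies).
    Vec3 o = {
        m[0][0]*r.origin.x + m[0][1]*r.origin.y + m[0][2]*r.origin.z + m[0][3],
        m[1][0]*r.origin.x + m[1][1]*r.origin.y + m[1][2]*r.origin.z + m[1][3],
        m[2][0]*r.origin.x + m[2][1]*r.origin.y + m[2][2]*r.origin.z + m[2][3] };
    // Rotation/scale only for the direction (a vector: no translation).
    Vec3 d = {
        m[0][0]*r.dir.x + m[0][1]*r.dir.y + m[0][2]*r.dir.z,
        m[1][0]*r.dir.x + m[1][1]*r.dir.y + m[1][2]*r.dir.z,
        m[2][0]*r.dir.x + m[2][1]*r.dir.y + m[2][2]*r.dir.z };
    return { o, d };
}
```

The top-level BIH over the instances' world-space bounding boxes decides which instances a ray can touch at all, so only those few get their rays transformed and traversed.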


Using BIHs for dynamic geometry

Since the Utah 3D Animation Repository has some nice keyframed animations I had to try to do some animations with my ray tracer. My first approach was to simply create a new BIH each frame, representing the interpolation between two keyframes. Already I could ray trace some nice animated models in real-time, although at rather low resolutions. Still, this shows how efficient the in-place BIH construction is. I wasn't quite satisfied yet though, so I implemented lazy construction of the BIH (what Wächter and Keller refer to as construction on demand). If I'd been coding the ray tracer in Haskell instead of C++ I guess I'd have gotten lazy construction for free :) Anyway, lazy construction gave a nice little speed-up of up to 20% in some cases.

I then got an idea that I decided to try out. I noticed that for some models the difference between successive keyframes is rather small. I therefore implemented an approach where I precompute one BIH for each pair of successive keyframes in the animation. For the part of the BIH construction that needs the bounding box of each triangle, I simply use the union of that triangle's bounding boxes in the two keyframes. At traversal time, whenever a ray reaches a leaf node containing the id of a triangle pair, I interpolate between the two triangles before intersecting. A single BIH can thus be used to intersect against all possible interpolations (in the range [0,1]) between two keyframes. This gives a big boost to the frame rate in the cases where the BIH construction is the bottleneck.
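The two ingredients sketched in code (types and names are illustrative): the union bound used at build time, and the interpolation done at intersection time. Since every interpolated triangle lies inside the union of its two keyframe boxes, the precomputed tree stays conservative for any t in [0,1]:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 lo, hi; };
struct Triangle { Vec3 v0, v1, v2; };

// Build-time bound for a triangle pair: the union of its boxes in the
// two keyframes, valid for every interpolation between them.
Aabb unionBound(const Aabb& a, const Aabb& b)
{
    return { { std::min(a.lo.x, b.lo.x), std::min(a.lo.y, b.lo.y),
               std::min(a.lo.z, b.lo.z) },
             { std::max(a.hi.x, b.hi.x), std::max(a.hi.y, b.hi.y),
               std::max(a.hi.z, b.hi.z) } };
}

Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Traversal-time: produce the triangle for the current frame from the
// keyframe pair, then intersect the ray against it as usual.
Triangle interpolate(const Triangle& k0, const Triangle& k1, float t)
{
    return { lerp(k0.v0, k1.v0, t), lerp(k0.v1, k1.v1, t),
             lerp(k0.v2, k1.v2, t) };
}
```

The cost is a per-leaf interpolation and somewhat looser bounds; the payoff is that the tree build drops out of the per-frame loop entirely.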

The table below shows frame rates achieved with the different approaches. "Eager" is the approach where the full BIH is recomputed for each frame, "Lazy" is the approach where only the parts of the BIH that are needed are recomputed for each frame, and "Precomputed" is the precomputed BIH approach.

             200x200                      800x800
Model        Eager   Lazy   Precomputed   Eager   Lazy   Precomputed
Wood doll    35.2    35.7   44.2          3.32    3.49   3.00

The table demonstrates the small but consistent performance increase that lazy construction gives over eager construction. The interesting part, however, is how the precomputed BIH approach compares to recomputing the BIH each frame. Note that it is consistently slower at 800x800: at that resolution ray traversal and intersection become the bottleneck, so the tighter fit of the recomputed BIHs lets them outperform the precomputed ones. For animations with a lot of change between successive keyframes, precomputed BIHs perform very poorly, as the results from the "Fairy" animation show (it packs a lot of movement into just 21 keyframes). The "Ben" animation, however, shows that when there is little change between successive keyframes _and_ BIH construction is a bottleneck (for example, when few rays hit the model), the precomputed BIH approach really shines. A possibility might be to use precomputed BIHs when viewing a model from a distance, and swap to recomputed BIHs once we're up close.

One thing I didn't take into account is that more time can be spent improving the quality of the precomputed BIHs, since they are only built once. In my current implementation I use the same construction heuristics for both precomputed and recomputed BIHs.

The picture above shows what a single frame of the "Ben" animation looks like in 200x200. That's all for this post, and I hope someone will find the precomputed BIH approach interesting :)


Ray tracing again

I recently started working on a new ray tracer. I've started from scratch redesigning everything to create a core that's both efficient and flexible enough to allow me to experiment with various algorithms and techniques. I created this blog to act as a journal on my progress, as well as a scratch-pad for random thoughts and ideas. This is also my first attempt at blogging, so if it shines through bear with me :)

As it stands, the ray tracer supports the obj and ply model formats. I've already done some ray tracing and simple path tracing with it, so here are some obligatory pictures:

The acceleration structures I have implemented currently are a uniform grid and the bounding interval hierarchy. I was really amazed by how simple the latter was to implement, while still being both efficient and flexible. The in-place quick sort-like construction algorithm with the global heuristic is especially elegant. I'm looking forward to implementing a kd-tree to see how the BIH compares with it at ray traversal.
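To give a flavor of that quicksort-like construction, here is one partition step in rough form. It splits at the spatial middle of the longest axis rather than using the paper's global heuristic, and the data layout is made up for the sketch, but the in-place partition and the two clip planes per node are the essence of the BIH:

```cpp
#include <vector>
#include <algorithm>

struct Aabb { float lo[3], hi[3]; };

// A BIH node stores a split axis and two clip planes: the right edge
// of the left child's contents and the left edge of the right child's.
struct BihSplit { int axis; float leftClip, rightClip; size_t mid; };

BihSplit partitionObjects(std::vector<Aabb>& objs, size_t begin, size_t end,
                          const Aabb& nodeBounds)
{
    // Split the longest axis of the node bounds at its middle.
    int axis = 0;
    for (int a = 1; a < 3; ++a)
        if (nodeBounds.hi[a] - nodeBounds.lo[a] >
            nodeBounds.hi[axis] - nodeBounds.lo[axis]) axis = a;
    float split = 0.5f * (nodeBounds.lo[axis] + nodeBounds.hi[axis]);

    // In-place partition by object center, quicksort style.
    auto it = std::partition(objs.begin() + begin, objs.begin() + end,
        [&](const Aabb& b) {
            return 0.5f * (b.lo[axis] + b.hi[axis]) < split;
        });
    size_t mid = it - objs.begin();

    // Clip planes: tight bounds of each side along the split axis.
    float leftClip = nodeBounds.lo[axis], rightClip = nodeBounds.hi[axis];
    for (size_t i = begin; i < mid; ++i)
        leftClip = std::max(leftClip, objs[i].hi[axis]);
    for (size_t i = mid; i < end; ++i)
        rightClip = std::min(rightClip, objs[i].lo[axis]);
    return { axis, leftClip, rightClip, mid };
}
```

Recursing on the two halves yields the full tree; because the partition is in place and no per-object lists are built, construction is both fast and memory-friendly.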

As I mentioned, I have implemented a simple, bare-bones unbiased path tracer already (very simple once you have the ray-model intersection code). However, it's quite noisy and requires an intimidating number of samples to get a decent result. Eventually I'm planning to implement bidirectional path tracing and metropolis light transport to try to mitigate that. But first I want to optimize the acceleration structures so that they can act as a core for both stochastic and real-time ray tracing. This will probably involve implementing and SIMD-izing both single-ray and coherent traversal routines.

Other stuff I want to experiment with: terrain, volumetric ray tracing, sub-surface scattering, etc... I'll see where things lead and how long until interest fades :)

Instant Ray Tracing: The Bounding Interval Hierarchy
The Utah 3D Animation Repository
The Stanford 3D Scanning Repository