Paul Porfiroff

Faculty of Science and Technology

Department of Applied Mathematics and Computer Science

Specialty: Software Engineering

Master's thesis topic: Enhancing performance of stereo-image ray tracing synthesis using parallel graphics processors

Scientific adviser: PhD, Sergey Zori

Abstract

At the time of publication of this abstract, work on the master's thesis is still in progress. The final version will be available in December 2014. The full text of the thesis and materials on the topic of the research can be obtained from the author or his supervisor.

1. Introduction

Nowadays the most popular rendering algorithms are those based on rasterization. These algorithms map naturally onto the SIMD architecture that underlies GPUs and are therefore widely used in interactive graphics. Their main disadvantage is that the pictures they synthesize lack realism and, above all, physical accuracy.

Ray tracing algorithms physically model the propagation of light in virtual environments. This modeling process is extremely resource-intensive, which is the main reason why ray tracing is mostly used for offline (static) visualization. However, with the continuous growth of hardware performance, modern ray tracers are able to reach interactive frame rates.

2. Relevance of the research topic

Although many visualization systems today can ray trace virtual scenes interactively or even in real time, they often impose restrictions on those scenes. For example, the restrictions may concern support for scene animation, the prevalence of certain material types (the remaining materials are processed more slowly or not supported by the algorithm at all), the representation of the scene geometry, the number of objects in the scene and their complexity, or support for interior and exterior scenes.

3. Aim and objectives of the research

Aim: to study and analyze existing ray tracing algorithms and ways to improve their efficiency for GPU implementation.

Research objectives:

  • To analyze existing ray tracing algorithms and their characteristics;
  • To analyze existing ray tracing acceleration techniques;
  • To analyze different ways of implementing ray tracing algorithms and acceleration techniques on SIMD architectures;
  • To modify a ray tracing algorithm and implement it on a CUDA-enabled GPU

Object of the research: ray tracing algorithms.

Subject of the research: improving the efficiency of ray tracing algorithms and their CUDA-based GPU implementation.

4. Theoretical foundations of the ray tracing algorithms

First of all, it should be noted that there is an ambiguity in the notions of forward and backward ray tracing in the papers studied. In this paper, backward ray tracing means tracing rays from the sensor (camera, eye) towards the light sources.

Let the image plane Ms be a part of the rendered scene. The intensity of channel λc of the j-th pixel of the synthesized image, associated with a region Mj on the image plane, is defined by the following equation (referred to below as equation (1)) [4]:

I_{j}\left( \lambda_{c} \right) = \int_{\left( x, \omega_{i}, \lambda \right) \in M_{j} \times S^{2} \times \Lambda_{c}} W_{e}^{\left( j \right)}\left( x \leftarrow \omega_{i}, \lambda \right) L_{i}\left( x \leftarrow \omega_{i}, \lambda \right) \left| \cos\left( \theta_{i} \right) \right| dA\left( x \right) d\sigma\left( \omega_{i} \right) d\lambda

The integration domain on the right-hand side of equation (1) is the Cartesian product of the sets denoted as follows:

  • Mj — the set of points on the image plane associated with the j-th pixel of the image;
  • S2 — the set of points on the surface of the unit sphere (each point determines a direction of incident light);
  • Λc — the set of wavelengths associated with channel λc of the image; it is a subset of the set Λ of wavelengths of the visible spectrum of electromagnetic radiation, Λc ⊂ Λ

The integrand uses the following notation:

  • Li(x ← ωi, λ) — the radiance of light of wavelength λ incident at the point x∈Mj of the j-th pixel from the direction ωi (i.e. travelling along -ωi);
  • We(j)(x ← ωi, λ) — the sensor responsivity function of the j-th pixel, which weights Li(x ← ωi, λ);
  • θi — the angle between the normal to the image plane at the point x and the vector ωi;
  • A(x) — the area measure defined on Mj;
  • σ(ωi) — the area measure defined on S2 (which is a solid angle measure, because the sphere S2 has unit radius);
  • dλ — the differential length (wavelength) measure defined on Λc
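
For reference, a standard Monte Carlo estimate of equation (1), with N samples (xk, ωk, λk) drawn from the integration domain Mj × S2 × Λc with joint probability density p, has the form:

I_{j}\left( \lambda_{c} \right) \approx \frac{1}{N} \sum_{k=1}^{N} \frac{W_{e}^{\left( j \right)}\left( x_{k} \leftarrow \omega_{k}, \lambda_{k} \right) L_{i}\left( x_{k} \leftarrow \omega_{k}, \lambda_{k} \right) \left| \cos\left( \theta_{k} \right) \right|}{p\left( x_{k}, \omega_{k}, \lambda_{k} \right)}

Evaluating this estimator requires the value of the incident radiance Li for every sample, which is where the rendering equation discussed next comes in.
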
The unknown function in equation (1) is Li(x ← ωi, λ). Finding the value of this function requires solving the equation first proposed in [3], the rendering equation (the light transport equation), referred to below as equation (2), which in the general case is written as follows:

L_{o}\left( x \rightarrow \omega_{o}, \lambda \right) = L_{e}\left( x \rightarrow \omega_{o}, \lambda \right) + \int_{\left( \omega_{i}, \lambda \right) \in S^{2} \times \Lambda_{c}} f_{s}\left( x, \omega_{i} \rightarrow \omega_{o}, \lambda \right) L_{o}\left( r_{M}\left( x, \omega_{i} \right) \rightarrow -\omega_{i}, \lambda \right) \left| \cos\left( \theta_{i} \right) \right| d\sigma\left( \omega_{i} \right) d\lambda

The following notation is used in equation (2):

  • Lo(x → ωo, λ) — the radiance of light of wavelength λ leaving the point x∈M in the direction ωo, where M denotes the set of all points of all surfaces of the scene;
  • Le(x → ωo, λ) — the radiance of light of wavelength λ emitted from the point x in the direction ωo;
  • fs(x, ωi → ωo, λ) — the BSDF (Bidirectional Scattering Distribution Function), which describes the fraction of the illumination of wavelength λ arriving at the point x from the direction ωi that is scattered in the direction ωo;
  • rM(x, ωi) — the ray casting function that returns the first point of intersection of the ray emanating from the point x in the direction ωi with the surfaces of the rendered scene M

Equation (2) is an integral equation of the second kind, because the unknown function Lo(x → ωo, λ) occurs both under the integral sign and outside the integral operator. Equation (2) also assumes the following assertion to be true: in the absence of obstacles that photons can interact with, the radiance along the path from one surface to another does not change, that is Li(x ← ωi, λ) = Lo(rM(x, ωi) → -ωi, λ). Thus, equation (2) assumes that photons can interact only with the surfaces of the scene, but not with the space inside or around them; in other words, the surfaces of the scene are surrounded by vacuum.
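
Because the unknown function appears under the integral operator, equation (2) can be expanded by repeatedly substituting its right-hand side into itself. Written in operator form, with T denoting the integral operator of equation (2) (the operator notation is introduced here only for illustration), this gives the standard Neumann series:

L_{o} = L_{e} + TL_{o} = L_{e} + TL_{e} + T^{2}L_{e} + \ldots = \sum_{k=0}^{\infty} T^{k}L_{e}

Each term T^{k}L_{e} accounts for light that has been scattered k times before leaving a surface towards the sensor, which is exactly the family of light paths that ray tracing algorithms sample.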

Ray tracing algorithms provide methods for solving equation (2). In [1] it is noted that while a variety of global illumination effects can be simulated using equation (2), some simpler effects, such as direct lighting without shadows, cannot be synthesized. If the path between a point on a scene surface and the light source is blocked by opaque surfaces, this point lies in shadow. On the other hand, if the occluding surfaces are totally transparent, the point is not obscured, but the transparent surfaces themselves are not perceived by the sensor (because a fully transparent surface cannot reflect or refract light). To increase the flexibility of model (2), the authors propose using not scalar but vector-valued radiometric functions consisting of two scalar functions that describe the visible and the invisible illumination.

5. Ray tracing algorithms

Equation (2) can be solved analytically when the rendered scene consists entirely of light sources (in this case the integral operator on the right-hand side of equation (2) vanishes completely) or when only direct illumination is rendered. In the latter case the light-emitting surfaces are known beforehand, so Lo(rM(x, ωi) → -ωi, λ) = Le(rM(x, ωi) → -ωi, λ) whenever the point rM(x, ωi) lies on the surface of a light source; for the other surfaces it can be assumed that Lo(rM(x, ωi) → -ωi, λ) = 0, or the set S2 in the integration domain can be replaced with the set of directions towards points on the light source surfaces. In the general case, finding a particular solution of the equation requires numerical methods. Various ray tracing algorithms are used to find such a particular solution of equation (2).
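
Substituting these assumptions into equation (2) makes the direct illumination special case explicit: since Le vanishes on non-emitting surfaces, replacing the unknown Lo under the integral with the known Le captures exactly the direct contribution:

L_{o}\left( x \rightarrow \omega_{o}, \lambda \right) = L_{e}\left( x \rightarrow \omega_{o}, \lambda \right) + \int_{\left( \omega_{i}, \lambda \right) \in S^{2} \times \Lambda_{c}} f_{s}\left( x, \omega_{i} \rightarrow \omega_{o}, \lambda \right) L_{e}\left( r_{M}\left( x, \omega_{i} \right) \rightarrow -\omega_{i}, \lambda \right) \left| \cos\left( \theta_{i} \right) \right| d\sigma\left( \omega_{i} \right) d\lambda

The right-hand side no longer contains the unknown function, so it can be evaluated directly.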

The backward ray tracing algorithm proposed by Turner Whitted in [5] is considered a classic. To approximate the intensity of the j-th pixel, this pixel is projected onto the image plane, and a ray originating at the viewer's position is traced through the center of the pixel. Each interaction of the ray with a scene surface requires evaluating the Phong local illumination model and mixing the result with the reflected and refracted components. Thus, at each point x of intersection of a ray with a surface, the following global illumination model, referred to below as equation (3), is computed:

I\left( x \right) = k_{a}I_{a} + k_{d}\sum_{i=1}^{N} I_{l_{i}} \cos\left( \theta_{i} \right) v\left( x, \omega_{i} \right) + k_{r}I_{r} + k_{t}I_{t}

The notation used in equation (3) is described below:

  • I — light intensity;
  • k — light intensity scaling factor;
  • N — the number of light sources;
  • v(x, ωi) — the visibility function (returns 1 if the ray with origin x and direction ωi towards the i-th light source is not blocked by scene surfaces, and 0 otherwise)

The following notation is for indices in equation (3):

  • a — fake indirect (ambient) illumination at a point;
  • d — direct diffuse illumination at a point;
  • l — light source;
  • r — specular reflection;
  • t — specular transmission

Reflected and refracted components are calculated by casting two rays in the reflected and refracted directions and evaluating equation (3) at each subsequent point of intersection. To render realistic reflections and refractions, the corresponding coefficients must be chosen according to physical laws (the Fresnel equations or their approximations). A sketch of this recursive procedure is given below.
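
The following is a minimal illustrative C++ sketch of Whitted-style backward ray tracing for a toy scene of spheres with a single point light. The scene, material constants and helper names (Vec3, Sphere, trace) are chosen only for this illustration and do not describe the thesis implementation; the refraction term of equation (3) is omitted for brevity.

// Minimal sketch of classic (Whitted-style) backward ray tracing.
// Scene, constants and names are illustrative assumptions only.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(const Vec3& v) { double l = std::sqrt(dot(v, v)); return v * (1.0 / l); }

struct Sphere { Vec3 center; double radius; double kd, kr; };  // diffuse and reflection coefficients

// Returns the ray parameter t of the nearest intersection, or a negative value if there is none.
// The direction d is assumed to be normalized.
double intersect(const Sphere& s, const Vec3& o, const Vec3& d) {
    Vec3 oc = o - s.center;
    double b = dot(oc, d), c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return -1.0;
    double t = -b - std::sqrt(disc);
    return (t > 1e-4) ? t : -1.0;
}

std::vector<Sphere> scene = { {{0, 0, -5}, 1.0, 0.7, 0.3}, {{2, 0, -6}, 1.0, 0.9, 0.0} };
Vec3 lightPos = {5, 5, 0};     // single point light source
double lightIntensity = 1.0;
double ambient = 0.05;         // the k_a * I_a term of equation (3)

// v(x, w): shadow ray test, 1 if the light source is visible from x and 0 otherwise.
double visibility(const Vec3& x, const Vec3& toLight, double distToLight) {
    for (const Sphere& s : scene) {
        double t = intersect(s, x, toLight);
        if (t > 0.0 && t < distToLight) return 0.0;
    }
    return 1.0;
}

// Recursive evaluation of model (3): ambient term + shadowed diffuse term
// + specularly reflected component (the refracted component is omitted here).
double trace(const Vec3& o, const Vec3& d, int depth) {
    if (depth <= 0) return 0.0;
    const Sphere* hit = nullptr;
    double tNear = 1e30;
    for (const Sphere& s : scene) {
        double t = intersect(s, o, d);
        if (t > 0.0 && t < tNear) { tNear = t; hit = &s; }
    }
    if (!hit) return 0.0;  // ray left the scene
    Vec3 x = o + d * tNear;
    Vec3 n = normalize(x - hit->center);
    Vec3 toLight = lightPos - x;
    double distToLight = std::sqrt(dot(toLight, toLight));
    toLight = normalize(toLight);
    double cosTheta = std::fmax(0.0, dot(n, toLight));
    double direct = hit->kd * lightIntensity * cosTheta * visibility(x, toLight, distToLight);
    Vec3 reflDir = d - n * (2.0 * dot(d, n));  // mirror reflection direction
    double reflected = hit->kr * trace(x, normalize(reflDir), depth - 1);
    return ambient + direct + reflected;
}

int main() {
    // Trace one primary ray through a hypothetical pixel center.
    double I = trace({0, 0, 0}, normalize({0.1, 0.0, -1.0}), 3);
    std::printf("pixel intensity: %f\n", I);
    return 0;
}

In the full algorithm a second recursive call with the refracted direction is added, and the coefficients kr and kt are derived from the Fresnel equations, as noted above.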

The animation below demonstrates the process of solving the light transport problem using Whitted's algorithm (fig. 1). Depending on the local illumination model used, objects are denoted D (diffuse) or S (specular), and the indices r and t have the same meaning as in equation (3). L stands for light sources. Primary rays are colored green, shadow rays (shadow feelers) black, reflected rays blue, and transmitted rays magenta. The figure also marks the projections of the intersection points of the shadow rays with the scene surfaces (in three-dimensional space).

Figure 1. Light propagation model implemented by a classic backward ray tracing algorithm
(animation: 10 frames; frame rate: 1 frame/s; loop count: 7; size: 25.6 KB)

Note that secondary diffuse reflections are not modeled by the classic ray tracing algorithm (the same applies to diffuse refractions).

The path tracing algorithm was proposed in [3]. It relies on the idea that the most significant contribution to the approximation (estimate) of radiance is made by primary and shadow rays, that is, by rays lying at the beginning or at the end of a path between the sensor and the light source.

Since the classic ray tracing algorithm and its extended stochastic version, distributed (distribution) ray tracing [2], build binary (classic ray tracing) or N-ary (distribution ray tracing) ray trees consisting mainly of intermediate rays, computational resources are wasted on computing illumination that usually has less impact on the final result than the direct illumination. Path tracing instead extends a single path at each surface interaction.
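
To make this argument concrete: for a single primary ray and recursion depth d, a classic ray tracer that spawns one reflected and one refracted ray at every intersection traces up to

1 + 2 + 2^{2} + \ldots + 2^{d} = 2^{d+1} - 1

rays (plus shadow rays at every vertex), most of which are intermediate rays. A path tracer instead continues a single chain of length d and casts one shadow ray at each of its vertices, i.e. on the order of 2d rays per sample, so a larger share of the work goes to the primary and shadow rays that carry most of the estimate.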

6. Conclusions

This abstract provides an overview of the basic ray tracing algorithms and their theoretical foundations. The ray tracing algorithm proposed in [5] is considered a classic; it models only specular reflections and specular refractions, while other indirect illumination effects are modeled only loosely. To extend the variety of effects modeled by the classic algorithm, distributed (distribution) ray tracing was proposed in [2]. Path tracing, proposed in [3], improves rendering efficiency by reducing the branching factor present in the previous algorithms.

Further research objectives:

  • To develop a ray tracing algorithm optimized for GPU implementation and capable of real-time rendering of dynamic scenes;
  • To implement the optimized ray tracing algorithm on CUDA-enabled GPU;
  • To evaluate the efficacy of the implementation

7. References

  1. Banks, D.C. and Beason, K.M. Retro-rendering with vector-valued light: Producing local illumination from the transport equation. Proceedings of SPIE, the International Society for Optical Engineering, Society of Photo-Optical Instrumentation Engineers (2006), 60600C.1-60600C.7.
  2. Cook, R.L., Porter, T., and Carpenter, L. Distributed ray tracing. ACM SIGGRAPH Computer Graphics, ACM (1984), 137-145.
  3. Kajiya, J.T. The rendering equation. Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '86), ACM (1986), 143-150.
  4. Veach, E. Robust Monte Carlo methods for light transport simulation. PhD thesis, Stanford University, 1997.
  5. Whitted, T. An improved illumination model for shaded display. Communications of the ACM 23, 6 (1980), 343-349.