Ray tracing is a method, based on rather simple maths, to mechanically produce a realistic-looking 3D image on a computer. It works by shooting rays from the eye through each pixel of the viewport into the depths of the scene, and keeping track of the collisions of these rays until they eventually reach an end of life or hit a light source. Tracing the trip back from the end to the beginning, the pixel from which the ray was originally shot gets its color from an algorithm that uses the data of these collisions.

We’ve talked a bit about rays as one of the tools in the computer graphics toolbox. There, rays were used mainly as a collision-detection method: “to see where a bullet would hit the wall”.

Rays also have a completely different kind of role in ray tracing.

Ray tracing is built on the idea that every pixel in the image can be different. Each pixel is evaluated individually, by tracing backwards the path of the light that eventually reaches our eyes. This more complex technique yields a photorealistic depiction of the scenery.

Game graphics done in traditional, non-ray-traced 3D look more like a comic book or an artistic depiction, while ray-traced graphics bring a certain noisy realism; even a slightly trashier look sometimes.

What our very own eyes see in the real world is incoming light beams. Often the beams come not directly from light sources, but from reflected light. The original light probably comes from the Sun, or from artificial light bulbs. Reflection happens both at the surfaces of objects and, especially, at the very edge of an object, where light behaves in a way that resembles bending. This behavior is very intuitive for us humans, but for graphics rendering, all of it has to be programmed into the algorithm of a ray tracer.

Color theory says that materials are the color they are because these materials absorb the other wavelengths. They leave intact the wavelength corresponding to their own color; this bounces off and continues towards us, the perceiving eye.

In 3D ray tracing, we want to know the color of each pixel of a rectangular viewing area. It’s a simple and powerful idea. To get a realistic-looking computer graphics scene, it suffices to just “see where all the light beams came from”.

How to raytrace?

- Shoot a ray from your eye into the depth of the scene (increasing the distance), giving a point {x,y,z} along the ray
- Find the first intercept of the ray with an object face OF
- Use the normal of OF to calculate the ray’s new direction vector
- Whenever the ray has a collision, store the color of the encountered surface
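The steps above can be sketched in a few dozen lines of Python. This is a minimal illustration, not a full renderer: it assumes a single hypothetical sphere, and names like `Sphere`, `intersect`, and `trace` are made up for the example.

```python
import math

class Sphere:
    def __init__(self, center, radius, color):
        self.center, self.radius, self.color = center, radius, color

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def scale(v, s): return (v[0] * s, v[1] * s, v[2] * s)

def intersect(origin, direction, sphere):
    """Distance along the ray to the first hit with the sphere, or None."""
    oc = sub(origin, sphere.center)
    b = 2 * dot(oc, direction)
    c = dot(oc, oc) - sphere.radius ** 2
    disc = b * b - 4 * c          # direction assumed unit-length, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(origin, direction, spheres, bounces=5):
    """Follow one ray, bouncing off surface normals, collecting colors."""
    colors = []
    for _ in range(bounces):
        hits = [(t, s) for s in spheres
                if (t := intersect(origin, direction, s)) is not None]
        if not hits:
            break                                    # ray reached end-of-life
        t, sphere = min(hits, key=lambda h: h[0])    # first object face OF
        point = tuple(origin[i] + direction[i] * t for i in range(3))
        normal = scale(sub(point, sphere.center), 1 / sphere.radius)
        # Use the normal of OF to get the new direction: d' = d - 2(d.n)n
        direction = sub(direction, scale(normal, 2 * dot(direction, normal)))
        origin = point
        colors.append(sphere.color)  # store the encountered surface color
    return colors   # collision data, to be combined into the pixel color

# Shoot one ray from the eye straight into the depth (+z):
scene = [Sphere((0.0, 0.0, 5.0), 1.0, "red")]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene))  # → ['red']
```

A real tracer would run this for every pixel and combine the collected collision colors into one final pixel color; here a ray that misses everything simply returns an empty list.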

So for the common 1366 x 768 resolution, over a million rays would need to be shot. The number of rays is simply the X component of your image resolution multiplied by the Y component (= width * height).

Let’s say, further, that we will track each ray for 5 bounces. The algebra of the ray operations comes to around 9 arithmetic operations per bounce (maths involving +, –, * and /).

If these nine ops in turn take at most 4 machine cycles each, we get a neat equation that gives a good estimate of the frame rate. A 3 GHz single core would be able to produce screen updates at a frame rate of

3,000,000,000 / (1366 * 768 * 5 * 9 * 4) ≈ 15.89 fps
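The estimate is easy to check in Python, using the same assumed numbers (9 ops per bounce, 4 cycles per op, a 3 GHz single core):

```python
# Estimated frame rate for a brute-force ray tracer on a single core.
width, height = 1366, 768       # viewport resolution
bounces = 5                     # reflections tracked per ray
ops_per_bounce = 9              # +, -, *, / operations per bounce
cycles_per_op = 4               # worst-case machine cycles per operation
clock_hz = 3_000_000_000        # 3 GHz single core

cycles_per_frame = width * height * bounces * ops_per_bounce * cycles_per_op
fps = clock_hz / cycles_per_frame
print(round(fps, 2))  # → 15.89
```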

Modern games are acceptably smooth when they achieve at least 50 fps. So ray tracing, as can be seen from the mere 15.89 fps, is heavy: it needs more brute power from the chips, or cleverer algorithms.

Each multiplication factor (the resolution dimensions, the number of ray bounces, the number of operations in the algorithm, and the number of cycles per instruction) contributes linearly to the cost of a frame. Halving the number of bounces (leading to a less realistic final result) would double the frame rate to about 31.8 fps. There are trade-offs.
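As a quick sanity check on that linearity, a small helper (the `estimate_fps` name is just for illustration) shows that halving the bounce count doubles the frame rate:

```python
def estimate_fps(width, height, bounces, ops_per_bounce=9,
                 cycles_per_op=4, clock_hz=3_000_000_000):
    """fps is inversely proportional to each cost factor."""
    cycles = width * height * bounces * ops_per_bounce * cycles_per_op
    return clock_hz / cycles

full   = estimate_fps(1366, 768, bounces=5)
halved = estimate_fps(1366, 768, bounces=2.5)    # 50% of the bounces
print(round(full, 1), round(halved, 1))  # → 15.9 31.8
```

The same doubling would come from halving any other factor, e.g. rendering at half resolution.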

Khan Academy has a really interesting chapter on ray tracing.
