
Path tracing

A simple scene rendered using path tracing. Distinctive features of this image are the softness of the shadows and the smoothness of the lighting.

Path tracing is a computer graphics rendering technique that seeks to simulate the physical behavior of light as faithfully as possible. It is a generalization of traditional ray tracing, in which rays are traced from a virtual camera through the scene; a ray "bounces" off objects until it is completely absorbed or scattered. The quality of images produced by path tracing is usually better than that of images produced by other rendering methods, but path tracing requires far more computational resources.

Path tracing is the simplest, most physically accurate, and slowest rendering method. It naturally reproduces many optical effects that are difficult or impossible to achieve with other rendering techniques: soft shadows, depth of field, motion blur, caustics, ambient occlusion, and indirect lighting. Implementing these effects with path tracing is much simpler than with other techniques.

Because of its accuracy and its freedom from approximations and assumptions (it is unbiased), path tracing is used to generate reference images against which the rendering quality of other algorithms is evaluated. To obtain high-quality images with path tracing, a very large number of rays must be traced; otherwise graphic artifacts in the form of noise will appear.

Content

  • 1 History
  • 2 Description
  • 3 Bidirectional path tracing
  • 4 Performance
  • 5 Scatter distribution functions
  • 6 Notes
  • 7 External links

History

The rendering equation and its application in computer graphics were introduced by James Kajiya in 1986 [1] . This presentation contained the first description of a path tracing algorithm. Later that year, Lafortune proposed many refinements of the algorithm, including bidirectional path tracing [2] .

Metropolis light transport , a method that perturbs previously found paths in order to improve performance in complex scenes, was introduced in 1997 by Eric Veach and Leonidas Guibas [3] .

Some time later, GPUs matured to the point where it became attractive to move path tracing computations onto them. In 2002, Tim Purcell was the first to present a global illumination algorithm running on a GPU [4] . In 2009, Vladimir Koylazov demonstrated the first commercial implementation of a path tracing algorithm running on a GPU [5] . This was aided by the maturing of GPGPU-oriented programming tools such as CUDA and OpenCL .

As of 2015, there were plans to bring path tracing to DirectX 12 and the Vulkan API, with work on the former already underway.

Description

In the real world, countless small portions of light are emitted by light sources and travel in straight lines as rays through a medium and from object to object, changing color and intensity along the way. This "journey" continues until the rays are absorbed by objects, including objects such as a human eye or a camera. Path tracing simulates this process, except that the rays are traced in reverse, from the virtual camera (observer) toward the light sources. The reason is that of all the rays leaving a light source, only a tiny fraction ever reaches the lens of the virtual camera, so computing the overwhelming majority of the rays would not affect the image the camera receives.

This behavior is described mathematically by the rendering equation , which path tracing algorithms attempt to solve.
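In its standard form for a surface point, the rendering equation expresses the outgoing radiance as the sum of emitted and reflected radiance:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
    + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
      (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here L_o is the radiance leaving point x in direction ω_o; L_e is the emitted radiance; f_r is the bidirectional reflectance distribution function (BRDF); L_i is the radiance arriving from direction ω_i; n is the surface normal; and the integral runs over the hemisphere Ω above the surface.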

Path tracing is not simply ray tracing with an unlimited number of ray bounces (i.e., unbounded recursion depth). In traditional ray tracing, lighting is computed at the moment a ray directly hits a diffuse surface. In path tracing, a new ray is instead generated randomly within the hemisphere around the surface normal and traced further until it intersects a light source, which may never happen. In path tracing, a ray's path can cross many diffuse surfaces before it reaches a light source.

Pseudocode implementing path tracing might look like this:

  Color TracePath(Ray r, int depth) {
    if (depth >= MaxDepth)
      return Black;  // Bounced enough times.

    r.FindNearestObject();
    if (r.hitSomething == false)
      return Black;  // Nothing was hit.

    Material m = r.thingHit->material;
    Color emittance = m.emittance;

    // Pick a random direction from here and keep going.
    Ray newRay;
    newRay.origin = r.pointWhereObjWasHit;
    newRay.direction = RandomUnitVectorInHemisphereOf(r.normalWhereObjWasHit);

    // Probability of choosing newRay: uniform over the hemisphere.
    const float p = 1 / (2 * PI);

    float cos_theta = DotProduct(newRay.direction, r.normalWhereObjWasHit);

    // BRDF of an ideal diffuse surface.
    Color BRDF = m.reflectance / PI;

    // Recursively gather the incoming light.
    Color incoming = TracePath(newRay, depth + 1);

    // Monte Carlo estimate of the rendering equation.
    return emittance + (BRDF * incoming * cos_theta / p);
  }

In the example above, if every surface of a closed space emitted and reflected (0.5, 0.5, 0.5), then every pixel in the image would be white.
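As an illustrative sketch (not part of any particular renderer), the following Python snippet implements a uniform hemisphere sampler, mirroring the hypothetical RandomUnitVectorInHemisphereOf helper in the pseudocode, and checks the Monte Carlo weight used in the return statement: with pdf p = 1/(2π), averaging cos_theta / p over many samples estimates the integral of the cosine over the hemisphere, which equals π.

```python
import math
import random

def random_unit_vector_in_hemisphere(normal):
    """Uniformly sample a direction on the unit hemisphere around `normal`.

    Sample a point on the full unit sphere (Gaussian trick), then flip
    it into the hemisphere of `normal` if it landed on the wrong side.
    """
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in v))
        if length > 1e-9:
            break
    d = [c / length for c in v]
    if sum(a * b for a, b in zip(d, normal)) < 0.0:  # wrong hemisphere
        d = [-c for c in d]
    return d

# Sanity check of the Monte Carlo weight: the average of cos_theta / p
# over uniform hemisphere samples estimates the hemisphere integral of
# the cosine, which is exactly pi.
random.seed(0)
normal = [0.0, 0.0, 1.0]
p = 1.0 / (2.0 * math.pi)
n_samples = 200_000
total = 0.0
for _ in range(n_samples):
    d = random_unit_vector_in_hemisphere(normal)
    cos_theta = sum(a * b for a, b in zip(d, normal))
    total += cos_theta / p
estimate = total / n_samples
print(abs(estimate - math.pi) < 0.05)  # prints True
```

The Gaussian trick works because an isotropic 3D Gaussian, normalized to unit length, is uniformly distributed on the sphere.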

Bidirectional path tracing

You can sample the integral for a point using two independent methods:

  • Shooting: rays are shot from the light sources and paths are built through the scene. A path is terminated after a random number of bounces, and its light is then projected onto the corresponding pixel of the output image. Millions of paths are created during rendering, and the contributions of the paths that reach the image are accumulated.
  • Gathering: rays are gathered starting from a point on a surface. A ray is shot through a pixel of the image and bounces around the scene until it meets a light source, whose light is then propagated back toward the pixel. Creating one such path is called "sampling". A single surface point typically receives about 800 samples (up to 3,000). The final image is computed by averaging the samples, not simply summing them.
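The averaging step of the gathering method can be sketched in Python; render_pixel and sample_radiance are hypothetical stand-ins for a real per-pixel sampling loop, not the API of any actual renderer:

```python
import random

def render_pixel(sample_radiance, n_samples=800):
    """Gathering: average many path samples for one pixel.

    `sample_radiance` stands in for tracing one full path through the
    pixel and returning its radiance contribution (hypothetical hook).
    """
    total = 0.0
    for _ in range(n_samples):
        total += sample_radiance()
    return total / n_samples  # average the samples, not just the sum

random.seed(3)
# Stand-in for a traced path: noisy radiance around a true value of 0.6.
value = render_pixel(lambda: random.uniform(0.2, 1.0))
print(abs(value - 0.6) < 0.05)  # prints True
```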

Bidirectional path tracing combines shooting and gathering in a single algorithm, which yields faster image convergence (less noise in less time). The two kinds of paths are traced independently, and the head of a shooting path is then connected to the tail of a gathering path. The attenuation of light at every bounce is taken into account and stored in the image pixels. At first glance the technique seems paradoxically slow, since two paths are computed for every sample; in practice, however, the extra convergence speed more than compensates for the cost of casting the additional rays.

To accelerate image convergence, bidirectional algorithms trace paths in both directions. In the forward direction, rays are traced from a light source until they become too weak to be visible or until they strike the lens of the virtual camera. In the reverse, i.e. standard, direction, rays are traced from the virtual camera until they hit a light source or until their number of bounces exceeds a given limit. This approach usually produces an image that converges much faster than tracing in one direction alone.

Veach and Guibas describe bidirectional path tracing more precisely [3] :

These methods generate one subpath starting at a light source and another starting at the lens, then they consider all the paths obtained by joining every prefix of one subpath to every suffix of the other. This leads to a family of different importance sampling techniques for paths, which are then combined to minimize variance.

Performance

A path tracer repeatedly samples the pixels of an image. The image only begins to look recognizable after several samples per pixel, on the order of up to 100 samples per pixel. Around 5,000 samples per pixel are typically needed for ordinary images to bring the noise down to an acceptable level, and pathological cases require many more. Rendering can take hours or days, depending on scene complexity and on hardware and software performance. Modern GPU implementations promise 1 to 10 million samples per second, making it possible to generate a relatively noise-free image of acceptable quality in seconds or minutes. Noise is a particular problem for animation , where it typically produces an undesirable film-grain effect.
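The noise behavior described above follows the standard Monte Carlo convergence property: the error of an N-sample estimate falls as 1/sqrt(N), so quadrupling the sample count roughly halves the noise. A small self-contained Python check, illustrative only and not tied to any renderer:

```python
import math
import random

def estimate_error(n_samples, n_trials=400, seed=1):
    """Empirical std-dev of the mean of `n_samples` uniform samples."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_trials):
        total = sum(rng.random() for _ in range(n_samples))
        means.append(total / n_samples)
    mu = sum(means) / n_trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / n_trials)

err_100 = estimate_error(100)
err_400 = estimate_error(400)
# Quadrupling the samples should roughly halve the error.
print(1.5 < err_100 / err_400 < 2.5)  # prints True
```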

The Metropolis light transport family of methods slightly perturbs previously traced successful paths and generates the samples most important to the image first. This can reduce image noise and the number of samples required.

It is rather difficult to compare renderer performance fairly. One approach counts samples per second; another counts the number of paths that can be traced and added to the image per second. The results vary greatly from scene to scene and depend on the "path depth", that is, on how many times a ray is allowed to bounce off objects before it is terminated. They also depend heavily on the hardware used. Finally, one renderer may produce many low-quality samples, while another may converge to the final image faster using fewer, higher-quality samples.

Scatter distribution functions

 
Visualization of bidirectional scattering distribution functions

The reflective properties of surfaces (the amount of reflected light, its direction, and its color) are modeled with a bidirectional reflectance distribution function (BRDF). The equivalent for transmitted light (light that passes through an object) is the bidirectional scattering distribution function (BSDF). A path tracer can take full advantage of complex, carefully modeled or measured distribution functions that define the appearance ("material", "texture", and "shading" in computer graphics terms) of an object.
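As an illustrative sketch, the Lambertian BRDF used in the pseudocode above, reflectance / π, conserves energy: integrating BRDF × cos θ over the hemisphere returns exactly the surface albedo. The Python check below verifies this by Monte Carlo, using the fact that for uniform hemisphere sampling cos θ is uniformly distributed on [0, 1]:

```python
import math
import random

def lambertian_brdf(albedo):
    """BRDF of an ideal diffuse surface: constant albedo / pi."""
    return albedo / math.pi

# Energy conservation: integrating brdf * cos_theta over the hemisphere
# must give back the albedo (the total fraction of light reflected).
# Uniform hemisphere sampling has pdf p = 1 / (2*pi).
random.seed(2)
albedo = 0.75
p = 1.0 / (2.0 * math.pi)
n = 200_000
total = 0.0
for _ in range(n):
    # For a uniformly sampled hemisphere direction, cos_theta ~ U[0, 1].
    cos_theta = random.random()
    total += lambertian_brdf(albedo) * cos_theta / p
reflected = total / n
print(abs(reflected - albedo) < 0.01)  # prints True
```

Dividing by π in the BRDF is exactly what keeps the total reflected energy equal to the albedo rather than exceeding it.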

Notes

  1. Kajiya, J. T. The Rendering Equation. Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, ACM, 1986.
  2. Lafortune, E. Mathematical Models and Monte Carlo Algorithms for Physically Based Rendering (PhD thesis), 1996.
  3. Veach, E., and Guibas, L. J. Metropolis Light Transport. In SIGGRAPH '97 (August 1997), pp. 65–76.
  4. Purcell, T. J.; Buck, I.; Mark, W.; Hanrahan, P. Ray Tracing on Programmable Graphics Hardware. Proc. SIGGRAPH 2002, 703–712. See also Purcell, T. Ray Tracing on a Stream Processor (PhD thesis), 2004.
  5. V-Ray demo; other examples include Octane Render, Arion, and LuxRender.

External links

  • This "Introduction to Global Illumination" has some good example images demonstrating the image noise, caustics, and indirect lighting properties of images rendered with path tracing methods. It also discusses possible performance improvements in some detail.
  • SmallPt is an educational path tracer by Kevin Beason. It uses 99 lines of C++ (including the scene description). The page has a good set of examples of the noise this technique produces.
  • Pat Hanrahan. Monte Carlo Path Tracing. Stanford University (May 21, 2002). Retrieved July 18, 2010. Archived May 3, 2012.
  • Progressive Path Tracing with V-Ray. spot3d.com. Retrieved July 18, 2010. Archived May 3, 2012.
  • Weas. Application of Progressive Path Tracing in the V-Ray Module. 3DMir.ru. Retrieved July 18, 2010.
  • Article by Tomas Toegel (Russian translation by cyberdime). Yaf-Ray Guide :: Path Tracing (PT). blender3d.org.ua (September 15, 2007). Retrieved July 18, 2010. Archived May 3, 2012.
Source - https://ru.wikipedia.org/w/index.php?title=Tracking_path&oldid=102534639

