Research Output
Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering
  Area lighting computation is a key component of synthesizing photorealistic rendered images: it simulates plausible soft shadows by considering the geometric relationships between area lights and three-dimensional scenes, in some cases even accounting for physically based material properties [2]. In rendering, computing visibility at a surface point can be performed naturally by ray casting, e.g., by tracing shadow rays. However, this process typically requires a large number of shadow rays to accurately simulate penumbra regions; otherwise, the rendered shadows are often corrupted by high variance, i.e., noise. This makes fully ray-traced approaches impractical under real-time constraints [1]. To tackle this problem while achieving high-quality shadows interactively, we propose a filtering-based visibility computation framework. Our method employs ray casting to accurately compute the visibility value of each sample, but uses only small sample counts to reduce sampling time. The resulting visibility image, generated from a small number of samples, is corrupted by high-frequency noise, especially in penumbra areas. Instead of increasing the sample count, we apply our new filter to reduce the noise and enable high-quality rendering with soft shadows. In this short paper we present an efficient hybrid area-light computation framework that uses ray casting with a small number of shadow samples and obtains smooth visibility maps through an adaptive post-processing filter. Finally, the result of direct lighting computed with the rasterization pipeline is combined with the filtered ray-traced visibility to create the final images. Next, we provide an overview of our rendering framework, devised to accelerate visibility computations, and describe a new post-processing filter for producing high-quality soft shadows.
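The final compositing step described above, combining rasterized direct lighting with a filtered visibility buffer, can be sketched as follows. This is a minimal illustration, not the paper's implementation; buffer shapes and the function name are illustrative assumptions.

```python
import numpy as np

def composite(direct_light, filtered_visibility):
    """Modulate a rasterized direct-lighting buffer (H, W, 3) by a
    filtered scalar visibility buffer (H, W) to form the final image."""
    return direct_light * filtered_visibility[..., None]

# Toy example: one fully lit pixel next to a half-shadowed one.
direct = np.array([[[1.0, 0.8, 0.6],
                    [1.0, 0.8, 0.6]]])
vis = np.array([[1.0, 0.5]])
final = composite(direct, vis)  # second pixel is darkened to half
```

In practice both buffers live on the GPU and this modulation is a per-pixel multiply in the shading pass.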
In addition, we demonstrate that our method generates high-quality rendering results interactively, guided by our area lighting computation, which runs in real time.

Hybrid Rendering Framework

We have implemented interactive area-lighting effects using a hybrid GPU rendering framework. Our proposed rendering system is built on top of an OpenGL deferred-shading rasterization pipeline, encompassing a visibility pass that leverages a GPU ray-tracing engine (NVIDIA OptiX) to gather visibility samples. Since the number of samples required to obtain a noise-free visibility map easily exceeds the computation time available in interactive scenarios, we propose to use a lower number of visibility samples and filter this approximate result to obtain nearly noise-free visibility.

Figure 1: Our GPU hybrid rendering framework overview. A classic GPU deferred-shading rasterization pipeline (OpenGL) cooperates with a GPU ray-tracer program (NVIDIA OptiX), in charge of gathering in real time a small number of visibility samples. Our visibility filter, implemented in CUDA, reduces the noise in the visibility buffer and makes it possible to generate high-quality rendered results.

Figure 2: Filtering results for soft shadows. Panels: our filtered visibility, input visibility (PSNR 30.10 dB, RMSE 0.03122), reference visibility (PSNR 46.32 dB, RMSE 0.00482 for ours), and the hybrid render using our visibility. Our filtering output significantly reduces the Monte Carlo noise using only pixel positions as a denoising feature buffer.

Visibility Filtering Algorithm

Our approach employs real-time filtering to reduce high-frequency noise in the visibility buffer. The goal of this filter is to approximate the ground-truth visibility function, which could only be computed exactly with an infinite number of samples.
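The visibility pass above amounts to a low-sample Monte Carlo estimate: for each shaded point, a few shadow rays are cast toward sample points on the area light and the binary hit results are averaged. A minimal sketch, with a stand-in `occluded` predicate in place of an actual GPU ray-traced query (all names here are illustrative):

```python
def estimate_visibility(shade_point, light_samples, occluded):
    """Monte Carlo visibility: average binary shadow-ray results over
    a few sample points on the area light. `occluded(p, q)` stands in
    for a ray-traced occlusion test between p and light sample q."""
    visible = sum(0.0 if occluded(shade_point, q) else 1.0
                  for q in light_samples)
    return visible / len(light_samples)

# Toy 2D setup: a blocker occludes every light sample with x < 0,
# so a 4-sample area light is half visible from the shade point.
samples = [(-0.75, 1.0), (-0.25, 1.0), (0.25, 1.0), (0.75, 1.0)]
blocked = lambda p, q: q[0] < 0.0
v = estimate_visibility((0.0, 0.0), samples, blocked)  # -> 0.5
```

With only 1 to 4 samples per pixel this estimate is noisy in penumbra regions, which is exactly what the subsequent filter addresses.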
In our real-time rendering scenario, the allocated samples per pixel are typically very few (e.g., 1 to 4), and thus the generated visibility functions are often very noisy, especially when the shadows are produced by large area lights. We adapt a recent filtering method that adaptively controls the polynomial order to optimally fit our visibility function to the unknown ground-truth visibility [4]. That method demonstrated high-quality filtering results for images rendered by Monte Carlo ray tracing. Its main idea was to approximate the unknown image locally with adaptively chosen polynomial functions. In particular, its underlying functions were formed using geometric features such as normals, textures, and depths, as well as pixel positions. However, geometric features are not good indicators for our target functions, i.e., visibility functions, since the correlation between visibility and geometric buffers is low. Hence, we choose to use pixel positions as the indicator for our approximations. This choice allows our filter to compute filtered visibility in real time. As a result, our computational overhead (e.g., 15 ms) is much lower than that of previously developed methods for general rendered images (e.g., [3] is typically one order of magnitude slower than ours). Our conclusion is that high-quality visibility on hybrid rendering systems is achievable thanks to the successful combination of accurate ray-traced shadows, filtered at low sampling rates using a real-time version of adaptive image-based denoising methods.

[1] Koa Ming Di and Henry Johan. Efficient screenspace rendering for area lights. In
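The local polynomial fit over pixel positions can be sketched as weighted least-squares regression in a window around each pixel. The actual method adaptively selects the polynomial order per pixel [4] and runs in CUDA; this is a fixed-order 1D illustration under those simplifying assumptions, with an illustrative function name.

```python
import numpy as np

def poly_filter_1d(noisy, radius=3, order=1):
    """Denoise a 1D visibility signal: at each pixel, fit a low-order
    polynomial in pixel position over a Gaussian-weighted local window
    (weighted least squares), then evaluate it at the window center."""
    n = len(noisy)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        x = np.arange(lo, hi) - i          # pixel positions, centered at i
        w = np.exp(-(x / radius) ** 2)     # Gaussian spatial weights
        A = np.vander(x, order + 1)        # polynomial design matrix
        coeffs, *_ = np.linalg.lstsq(A * w[:, None], noisy[lo:hi] * w,
                                     rcond=None)
        out[i] = coeffs[-1]                # polynomial value at x = 0
    return out
```

Using only pixel positions as the regression feature, as in the paper, keeps the per-pixel fit small enough for real-time use; a zeroth-order fit reduces to weighted averaging, while higher orders preserve penumbra gradients better.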

  • Date:

    31 December 2016

  • Publication Status:

    Published

  • Library of Congress:

    QA76 Computer software

  • Dewey Decimal Classification:

    006.6 Computer graphics

  • Funders:

    The Walt Disney Company Ltd

Citation

Iglesias-Guitian, J. A., Moon, B., & Mitchell, K. (2016). Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering. In Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016)

Keywords

Area lighting computation, photorealistic rendered images,
