Real-time Cloud Rendering on the GPU
Summary
In the quest for realistic rendering of outdoor scenes, clouds form a crucial component. The sky is a prominent feature of many such scenes, and the air is seldom devoid of clouds. Their great variety in shape, size and color poses a difficult problem in the pursuit of convincing visualization. Additionally, the complexity of light behavior in participating media such as clouds, as well as their sheer size, often causes photo-realistic approaches to have exceedingly high computational costs.
This thesis presents a method for rendering realistic, animated cumuliform clouds through a phenomenological approach. The method exploits the parallel processing power of the GPU to compute the cloud appearance per pixel, achieving real-time, full-screen visualization even on low-end GPUs.
The pixel color is determined by three components. First, the opacity is computed from a cloud mesh consisting of a set of ellipsoids, upon which a fractal noise texture is superimposed to generate detailed cloud features. Second, the single-scattering component is calculated analytically using the same triangle mesh and a discretized Mie phase function. Finally, the multiple-scattering component is obtained from a phenomenological model using trigonometric and logarithmic formulas whose coefficients depend on the view and sun angles and on the cloud thickness at the current pixel. The parameters of this model are based on the work of Bouthors et al., who studied light transport in plane-parallel slabs. We apply their model in a more robust manner to find the best-fitting plane-parallel slab, providing a better and faster alternative to the original approach.
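As an illustration, the following is a minimal sketch of how the three components might be combined per pixel. The `Ellipsoid` hull, `fractalNoise` function, and scattering inputs are simplified hypothetical stand-ins for this example, not the implementation from the thesis.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ellipsoid { Vec3 center, radii; };

// Toy stand-in for the fractal noise texture sampled in world space.
float fractalNoise(Vec3 p) {
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int octave = 0; octave < 4; ++octave) {
        sum += amp * (0.5f + 0.5f * std::sin(p.x * freq) * std::cos(p.z * freq));
        amp *= 0.5f; freq *= 2.0f;
    }
    return sum;
}

// Component 1: opacity from the ellipsoid hull, perturbed by noise
// to carve out detailed cloud features.
float cloudOpacity(Vec3 p, const std::vector<Ellipsoid>& hull) {
    float density = 0.0f;
    for (const Ellipsoid& e : hull) {
        float dx = (p.x - e.center.x) / e.radii.x;
        float dy = (p.y - e.center.y) / e.radii.y;
        float dz = (p.z - e.center.z) / e.radii.z;
        density += std::max(0.0f, 1.0f - (dx*dx + dy*dy + dz*dz));
    }
    return std::clamp(density * fractalNoise(p), 0.0f, 1.0f);
}

// Components 2 and 3 (single and multiple scattering) are taken as given
// here; the pixel color blends their sum over the background sky.
Vec3 shadePixel(Vec3 p, const std::vector<Ellipsoid>& hull,
                float singleScatter, float multiScatter, Vec3 sky) {
    float alpha = cloudOpacity(p, hull);
    float radiance = singleScatter + multiScatter;
    return { alpha * radiance + (1.0f - alpha) * sky.x,
             alpha * radiance + (1.0f - alpha) * sky.y,
             alpha * radiance + (1.0f - alpha) * sky.z };
}
```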
Because the noise texture is sampled using the world-space coordinates of each pixel, the clouds can be animated at no extra cost by translating them across the scene, which creates the appearance of turbulence. Additionally, cloud formation and dissipation are simulated by modifying the water droplet density.
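A minimal sketch of this animation scheme, under the same simplifying assumptions as above (a toy hull and noise function standing in for the actual mesh and texture; `wind` and `dropletDensity` are hypothetical parameters):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Toy stand-in for the world-space fractal noise texture.
float fractalNoise(Vec3 p) {
    return 0.5f + 0.5f * std::sin(p.x * 3.1f) * std::cos(p.z * 2.7f);
}

// The hull is translated by the wind, but the noise lookup keeps using the
// pixel's world-space position: as the cloud drifts through the fixed noise
// field, its fine detail changes, which reads as turbulence at no extra cost.
float animatedDensity(Vec3 worldPos, float time, Vec3 wind,
                      float dropletDensity) {
    // Undo the wind translation to evaluate the static hull shape
    // (a unit sphere here, standing in for the ellipsoid set).
    Vec3 local = { worldPos.x - wind.x * time,
                   worldPos.y - wind.y * time,
                   worldPos.z - wind.z * time };
    float hull = std::max(0.0f, 1.0f - (local.x * local.x +
                                        local.y * local.y +
                                        local.z * local.z));
    // Noise stays anchored in world space, so detail evolves as the hull
    // moves; ramping dropletDensity up or down over time simulates
    // cloud formation and dissipation.
    return std::clamp(hull * fractalNoise(worldPos) * dropletDensity,
                      0.0f, 1.0f);
}
```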