# Rendering translucent media on the iPhone

14 March 2013 - 21:19

I have implemented a simple idea for rendering real-time single scattering in participating media with relatively small effort and cost. This can be used to simulate e.g. the effect of light bleeding through objects. The idea combines the depth information obtained from a shadow map with a depth-peeling rendering. Consider an object/medium as depicted below.

The problem we want to address is how much light is going in the direction of the eye/camera. From the rendering equation for participating media we limit the lighting model to the following contributions to the radiance arriving at the eye,

L = L_A + L_B + L_C.

Here L_A is the ordinary scattering of light at the surface point A, L_B is the light which arrives at the backside surface point B of the object and is attenuated through the medium due to scattering, and finally L_C is the radiance arriving through the medium from the light source along the line of sight inside the object (e.g. at point C) and scattered in the direction of the eye.

When rendering the scene we have access to the information at point A in the fragment shader and can implement e.g. a Blinn–Phong shading for the surface contribution, or any other model that suits our needs. In order to obtain the attenuated backside contribution we will need, besides a rendering of the scene without the object, the thickness of the object along the line of sight, since the attenuation through the medium is essentially just exponential damping. To obtain this thickness we must have access to the next-to-nearest depth value, which can be obtained by using a depth-peeling scheme. Think of it as a depth buffer storing the depth of the surface behind the nearest one. Attaching this next-to-nearest depth buffer to the fragment shader, we have access to the thickness of the object along the line of sight, that is, the distance |AB|. What this term achieves is essentially an alpha blending with a varying alpha value across the surface.

Thanks to Humus for providing the skybox texture.

Now, in order to compute the in-scattered contribution we will need to know the distance |Cc| from any point C along the line of sight inside the object to the boundary of the object (point c) in the direction of the light source. All this information is stored in the shadow map; in particular, we can look up the depth distances from the light source to the object in the direction of the points A and B, that is, the distances from the light source to a and b. Having the positions of those two points in the shadow map, one can integrate/sample along the linear path between them, with access to all distances from the light source to the object in the direction of any point on the line of sight.

This integration can be approximated in many ways, but the simplest is to assume that one can linearly interpolate the values across the line inside the object, i.e. simply use the depth values at the two end points A and B. In this case one can perform the integration analytically, assuming that the scattering coefficients are constant throughout the object and that the phase function is simple, e.g. uniform scattering, that is, constant and equal to 1/(4π). Also, one can to a good approximation neglect that the rays are refracted at the boundaries of the object.

And of course we need a rendering of the Cornell box with some not-so-realistic shadows for the translucent object, but hey, they come for free, since we have already generated the shadow map.

Tags: Real-Time Rendering, iPhone