14 March 2013 - 21:19
Rendering translucent media on the iPhone

I have implemented a simple idea on how one can render real-time single scattering in participating media with relatively small effort and cost. This can be used to simulate, e.g., the effect of light bleeding through objects. The idea combines the depth information obtained from a shadow map with a depth-peeling rendering. Consider an object/medium as depicted below.

The problem we want to address is how much light is going in the direction of the eye/camera. Starting from the rendering equation for participating media, we limit the lighting model to the following contributions to the radiance arriving at the eye,

L = L_{\text{frontside}} + L_{\text{backside}} + L_{\text{single scat.}}

Here L_{\text{frontside}} is the ordinary scattering of light at the surface point A, L_{\text{backside}} is the light which arrives at the backside surface point B of the object and is attenuated through the medium due to scattering, and finally L_{\text{single scat.}} is the radiance arriving through the medium from the light source along the line of sight inside the object (e.g. at point C) and scattered in the direction of the eye.

When rendering the scene we have access to the information at point A in the fragment shader and can implement, e.g., Blinn–Phong shading for L_{\text{frontside}}, or any other model that suits our needs. In order to obtain L_{\text{backside}} we need, besides a rendering of the scene without the object, the thickness of the object along the line of sight, since the attenuation through the medium is essentially just an exponential damping. To obtain this thickness we must have access to the next-nearest depth value, which can be obtained with a depth-peeling scheme: think of it as a depth buffer storing the depth of the surface behind the nearest one. Attaching this next-nearest depth buffer to the fragment shader gives us access to the thickness of the object along the line of sight, that is, the distance |AB|. What this term achieves is essentially just an alpha blending with a varying alpha value across the surface.
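In plain C (rather than shader code), the backside term boils down to something like the sketch below; the function and parameter names are only illustrative and sigma_t stands for an assumed constant extinction coefficient, none of which are taken from the actual implementation.

#include <math.h>

/* Sketch of the backside term: the radiance that would arrive from the scene
 * behind the object (point B) is damped exponentially over the thickness |AB|
 * of the medium along the view ray. The two depths come from the ordinary
 * depth buffer and the depth-peeled (next-nearest) buffer.                   */
float backside_radiance(float scene_behind,  /* radiance at B, scene rendered without the object */
                        float depth_near,    /* depth of A (nearest surface)                      */
                        float depth_far,     /* depth of B (next-nearest surface, depth peel)     */
                        float sigma_t)       /* assumed constant extinction coefficient           */
{
    float thickness = depth_far - depth_near;        /* |AB| along the line of sight */
    return scene_behind * expf(-sigma_t * thickness);
}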

Thanks to Humus for providing the skybox texture

Now, in order to compute the L_{\text{single scat.}} contribution we need to know the distance |Cc| from any point C along the line of sight inside the object to the boundary of the object (point c) in the direction of the light source. All this information is stored in the shadow map; in particular, we can look up the depth distances from the light source to the object in the direction of the points A and B, that is, the distances from the light source to a and b. Having the positions of those two points in the shadow map, one can integrate/sample along the linear path between them while having access to all distances from the light source to the object in the direction of any point on the line of sight. This integration can be approximated in many ways, but the simplest is to assume that one can linearly interpolate the values across the line inside the object, i.e. simply use the depth values at the two end points A and B. In this case one can perform the integration analytically, assuming that the scattering coefficients are constant throughout the object and that the phase function is simple, e.g. uniform scattering, that is, constant and equal to 1/(4\pi). Also, one can to a good approximation neglect that the "rays" are refracted at the boundaries of the object.
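Concretely, with constant scattering and extinction coefficients \sigma_s and \sigma_t, a uniform phase function 1/(4\pi), a light source intensity I_\ell, and the light-path distance interpolated linearly between |Aa| at A and |Bb| at B, the single-scattering term along the view segment of length D=|AB| takes a simple closed form. The symbols here are my own shorthand, not the names used in the shader:

L_{\text{single scat.}} \approx \frac{\sigma_s I_\ell}{4\pi}\int_0^{D} e^{-\sigma_t\,(t + d(t))}\,dt , \qquad d(t) = |Aa| + \frac{t}{D}\,(|Bb|-|Aa|) ,

\int_0^{D} e^{-\sigma_t\,(t + d(t))}\,dt = \frac{e^{-\sigma_t |Aa|}\left(1 - e^{-\sigma_t\,(D + |Bb| - |Aa|)}\right)}{\sigma_t\left(1 + (|Bb|-|Aa|)/D\right)} .

Here t measures the distance from A along the line of sight, so the t in the exponent accounts for the attenuation from the scattering point C back to A, while d(t) accounts for the attenuation of the light on its way from c to C.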

And of course we need a rendering of the Cornell box with some not-so-realistic shadows for the translucent object, but hey, they come for free since we have generated the shadow map.

No Comments | Tags: Real-Time Rendering, iPhone

6 August 2012 - 15:45
Steel Beams

I am proud to announce that the application,

Steel Beams,

is now available on the App Store for $2.

Application Description:

Steel Beams can perform a static analysis on simply supported beams with I-shaped profiles. The cross-section properties for rolled IPE, HEA, HEB and HEM profiles can be read from a database, or a custom welded I-shaped profile can be specified. The section classification is determined based on the chosen steel grade. For cross-section class 4, the effective section properties are computed. The static analysis yields bending moment, shear force, deflection and rotation diagrams.

For more information see the dedicated page in the menu bar.

No Comments | Tags: Physics, iPhone

31 July 2012 - 17:33
Triple Buffer Rendering on the iPhone 4S

Recently, I began playing around with the OpenGL ES 2.0 capabilities of the iPhone 4S. In particular, my first question was how to implement a solid render loop. It turns out that an ordinary render loop can be implemented using CADisplayLink (or, if you like, drawInRect if one is using GLKit, which the Xcode OpenGL sample project is based on). Performance-wise, CADisplayLink should be a good choice compared to the NSTimer object that one had to use before this functionality was provided. However, the thing about CADisplayLink is that it will only get called a fixed number of times per second and you cannot be sure at what time it is called. The timing part is not that bad, but the fixed rate can potentially make the GPU idle or result in frame skipping if your rendering is too slow compared to the selected refresh rate. Also, the fastest rate at which the loop can be called is determined by the iPhone's native refresh rate, that is, 60 fps. So how can one make a render loop more like an ordinary one?

One way is by utilizing a standard producer/consumer scheme. This scheme comes in handy when one can run an additional (producer) thread at a different rate than the main (consumer) thread. The producer/consumer scheme that is suitable for a render loop is known as triple buffering. That the iPhone 4S has a dual-core CPU just adds another good reason to take this approach.

On the iPhone, the easiest way to implement this scheme is to create two OpenGL contexts with a sharegroup (see this link). This way the main thread (consumer) has access to the resources created by the render thread (producer). On the iPhone a buffer is rendered to by attaching it to a framebuffer object (FBO); actually, all rendering is done through FBOs. There are three different kinds of color buffers one can render to: a drawable renderbuffer, an off-screen renderbuffer, or a texture. My initial thought was to make the producer render to an off-screen renderbuffer and then use the glBlitFramebuffer functionality to copy it to the consumer, but since the blit functionality is not supported by OpenGL ES 2.0, I had to go with render-to-texture. The catch of this approach is that the main (consumer) thread then has to render a full-screen quad to display the frame in order to implement the blit functionality.
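The render-to-texture setup itself is just the standard OpenGL ES 2.0 recipe; a minimal sketch is shown below (one texture/FBO pair per buffer in the scheme). The function name is illustrative, and the depth attachment and error handling are left out.

#include <OpenGLES/ES2/gl.h>

/* Create a texture and an FBO that renders into it. The render thread binds
 * the returned FBO to draw; the main thread samples the texture when drawing
 * its full-screen quad. Depth attachment and error handling are omitted.    */
GLuint create_render_texture(GLsizei width, GLsizei height, GLuint *out_texture)
{
    GLuint texture, fbo;

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle the error properly in real code */
    }

    *out_texture = texture;
    return fbo;
}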

The implementation of the triple-buffering scheme is outlined below. The two threads are only concerned with three pointers: consume, dirty, and snap. Behind the scenes one will have allocated three storage buffers, in this case three texture buffers. The execution can be thought of as follows: in the render thread, the dirty buffer is always rendered to. When it has been updated, the render thread swaps dirty with snap and sets a notification flag. It then goes on to update the dirty buffer again. In the main thread, the notification flag is checked. If a new buffer is available it swaps snap with the consume buffer; otherwise it does nothing. It is essential to note that the buffer currently held by snap is never written to or read from; it only serves as the hand-off slot between the two threads.
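A minimal sketch of the pointer juggling is given below. It uses a mutex for clarity rather than anything lock-free, and the names (Frame, producer_publish, consumer_acquire) are made up for the sketch; they are not the names used in my code.

#include <pthread.h>
#include <stdbool.h>

typedef struct { unsigned texture; } Frame;   /* stand-in for per-buffer state */

static Frame buffers[3];
static Frame *dirty   = &buffers[0];   /* render thread always draws into this    */
static Frame *snap    = &buffers[1];   /* hand-off slot, never drawn to or sampled */
static Frame *consume = &buffers[2];   /* main thread displays this                */
static bool   fresh   = false;         /* true when snap holds a newer frame       */
static pthread_mutex_t swap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Render (producer) thread: call after a frame has been rendered into dirty. */
void producer_publish(void)
{
    pthread_mutex_lock(&swap_lock);
    Frame *tmp = snap; snap = dirty; dirty = tmp;   /* swap dirty <-> snap */
    fresh = true;
    pthread_mutex_unlock(&swap_lock);
}

/* Main (consumer) thread: returns the most recent frame available for display. */
Frame *consumer_acquire(void)
{
    pthread_mutex_lock(&swap_lock);
    if (fresh) {
        Frame *tmp = consume; consume = snap; snap = tmp;   /* swap snap <-> consume */
        fresh = false;
    }
    pthread_mutex_unlock(&swap_lock);
    return consume;   /* unchanged if no new frame has arrived */
}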

One concern is whether glFlush is bad for performance. Well, it is; however, it is required in order to synchronize resources between the two OpenGL contexts (see here). It is not all bad though, since it only blocks until all OpenGL commands have been submitted to the hardware; it does not wait until the GPU has finished executing them. This means that before glFlush is called we should perform some CPU work, so the pipeline has time to submit the commands by itself, ultimately avoiding the thread getting stuck inside glFlush.
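In other words, the producer's frame is ordered roughly as sketched below; the helper functions are hypothetical placeholders for the real work.

#include <OpenGLES/ES2/gl.h>

extern void issue_draw_calls(void);   /* hypothetical: records the GL commands for this frame   */
extern void do_cpu_work(void);        /* hypothetical: simulation, input, next-frame preparation */
extern void producer_publish(void);   /* the dirty <-> snap swap from the sketch above           */

/* One iteration of the producer loop: the CPU work between the draw calls and
 * glFlush gives the driver time to submit the commands, keeping the block short. */
void producer_frame(void)
{
    issue_draw_calls();
    do_cpu_work();
    glFlush();            /* required so the other context in the sharegroup sees the result */
    producer_publish();
}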

So does one gain anything by using a triple-buffer scheme (multiple threads) versus the single-threaded approach? Well, I have to say that my demo app runs very neatly, but it does not really provide much of a test case. My render thread runs at full speed, producing a lot of frames (in the thousands) that actually never get displayed, and the CPU spends most of its time in the glFlush command because no CPU work is done. That said, now I don't have to worry about fixed-rate render loops; I can treat my render loop as an ordinary one.

Actually, I also found out that triple buffering was used for the iPhone versions of the Doom and Wolfenstein games (see this link). However, in newer versions of the source code, it seems like they have used a single-threaded approach.

Links

Doom iPhone code review (by Fabien Sanglard)

OpenGL ES Programming Guide for iOS

No Comments | Tags: Real-Time Rendering, iPhone

21 February 2011 - 22:00
Site Location

We have decided that the blacksmith-studios site has reached the end of its life. The blog will therefore continue at this location, so please update your bookmarks.

No Comments | Tags: Slug

14 October 2010 - 21:58
Calculation of the Pair-Bubble Diagram

Not so long ago I handed in my second MSc dissertation, on the subject of determining (in a theoretical setup) the vibrational spectrum of graphene. The purpose of determining the vibrational spectrum is twofold: it gives experimentalists results from which they can determine what structures they are actually looking at, and it provides the first step in calculations of the scattering between electrons and phonons. Understanding these scattering processes is important, since they are the limiting factor for the electron mobility in graphene and hence relevant for the use of graphene in future electronics. In this regard I studied the random phase approximation (RPA), which is used in condensed matter physics to calculate the self-energy of an interacting electron gas through a perturbation expansion in bubble diagrams. Here I performed the calculation of the super important pair-bubble diagram, including some details which I thought I would share.
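For reference, the pair-bubble (the free-electron polarization, also known as the Lindhard function) has the standard textbook form shown below; the notation here is generic (and sign conventions vary), so it does not necessarily match the write-up.

\Pi_0(\vec{q}, iq_n) = \frac{2}{V}\sum_{\vec{k}} \frac{n_F(\xi_{\vec{k}}) - n_F(\xi_{\vec{k}+\vec{q}})}{iq_n + \xi_{\vec{k}} - \xi_{\vec{k}+\vec{q}}} ,

where n_F is the Fermi function, \xi_{\vec{k}} is the single-particle energy measured from the chemical potential, and the factor of 2 accounts for spin.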

Here is the compiled latex file: Polarization Free Electron

No Comments | Tags: Physics

18 July 2009 - 11:33
Dissertation

I haven't produced any posts in a while (well, at least not published any); I guess that's just an inevitable consequence of personal blogs! However, the explanation, as it usually is, is that I have been extremely busy with my theoretical studies, and actually still am. I'm writing my dissertation over the summer.

To be fair, the graphics side of my work has been suspended since last October! Crazy indeed, I guess physics won this interval of time. Anyway, I hope to produce something shiny in the autumn. For now, I can say that I'm pretty excited about what is going on with Evolve.

No Comments | Tags: Physics

23 November 2008 - 19:19
SO(3) Rotation on Matrix Exponential Form

The derivation of the matrix for a general rotation in three-dimensional Euclidean space is often encountered in strange forms. This is sometimes a bit puzzling until you have found your own way around it. In this post I will not derive the expression, but show how the useful matrix exponential of the SO(3) generators in fact produces a rotation about an arbitrary axis through some angle. [Note: SO(3) is the group of 3x3 orthogonal matrices with unit determinant.]

Let the rotation matrix about an arbitrary rotation axis \vec{n} through an angle \theta be denoted R(\theta, \vec{n}) . The matrix satisfies R(-\theta, \vec{n})=R(\theta, -\vec{n}) , and we can therefore pick \theta\in[0,\pi] and let the unit vector \vec{n} be arbitrary. This way any double covering is avoided.

The infinitesimal generators of the SO(3) group can be written as:

J^1=\left[\begin{array}{ccc}0&0&0\\0&0&-1\\0&1&0\end{array}\right] ; J^2=\left[\begin{array}{ccc}0&0&1\\0&0&0\\-1&0&0\end{array}\right] ; J^3=\left[\begin{array}{ccc}0&-1&0\\1&0&0\\0&0&0\end{array}\right] ,

which can for example be obtained by considering the generators of SO(2) (which are far easier to obtain if you are starting from scratch) and generalizing them to three dimensions. Note that Hermitian generators T^i can be formed by writing J^i=-iT^i , i.e. T^i=iJ^i . The generators satisfy the Lie algebra \;[J^a,J^b]=\varepsilon_{abc}J^c , where \varepsilon_{abc} is the Levi-Civita symbol.

Consider the matrix \exp(\theta n^iJ^i) , where the repeated index i is to be summed over. Now, since n^iJ^i is antisymmetric (and \vec{n} is a unit vector) we can write n^iJ^i=CJ^3C^{-1} for some orthogonal matrix C ; concretely, C is the rotation taking the z-axis into \vec{n} . This follows from the general result of linear algebra that any antisymmetric matrix X=-X^t can be brought to a canonical form by an orthogonal transformation (c.f. http://en.wikipedia.org/wiki/Antisymmetric_matrix). Hence,

\exp(\theta n^iJ^i)=\exp(\theta CJ^3C^{-1})=C\exp(\theta J^3)C^{-1}

The neat thing is that the exponential matrix containing J^3 is easy to evaluate. In fact, we probably already know the answer from the derivation of the generators in the first place.

\exp(\theta J^3)=\left[\begin{array}{ccc}\cos\theta&-\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&1\end{array}\right]

This matrix is the well-known matrix which produces rotations about the z-axis. The matrix can be expressed in terms of the generator by realizing that (J^3)^3=-J^3 , so that all powers of J^3 reduce to J^3 and (J^3)^2 (here I denotes the identity matrix):

\exp(\theta J^3)=I+(1-\cos\theta)(J^3)^2 + \sin\theta\, J^3

Putting this result into the relation:

\exp(\theta n^iJ^i)=\exp(\theta\, CJ^3C^{-1})
=I+(1-\cos\theta)(CJ^3C^{-1})^2 + \sin\theta\, CJ^3C^{-1}
=I+(1-\cos\theta)(n^iJ^i)^2 + \sin\theta\, n^iJ^i ,

which is the expression for a general rotation in three-dimensional Euclidean space! Oh, that was easy to show, if you knew all the right algebra! Cheers.
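For reference, using the explicit generators above one can check that (n^iJ^i)\vec{v}=\vec{n}\times\vec{v} , so the result can also be written in the familiar Rodrigues form, acting on a vector \vec{v} and in components:

R(\theta,\vec{n})\,\vec{v} = \cos\theta\,\vec{v} + \sin\theta\,(\vec{n}\times\vec{v}) + (1-\cos\theta)(\vec{n}\cdot\vec{v})\,\vec{n} ,

R_{ij} = \cos\theta\,\delta_{ij} + (1-\cos\theta)\,n_i n_j - \sin\theta\,\varepsilon_{ijk}n_k .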

No Comments | Tags: Mathematics

16 November 2008 - 18:37
Result from Group Theory

In group theory, Lagrange's theorem is one of the most important results about finite groups.

Theorem 2.6 (Lagrange's theorem) The order of a subgroup H of a finite group G is a divisor of the order of G, i.e. |H| divides |G|.

One evident, but funny, implication of the theorem is the answer to the following question: List all of the subgroups of any group whose order is a prime number.

Solution: According to Lagrange’s theorem, the order of a subgroup H of a group G must be a divisor of |G|. Since the divisors of a prime number are only the number itself and unity, the subgroups of a group of prime order must be either the unit element alone, H = {e}, or the group G itself, H = G, both of which are improper subgroups. Therefore, a group of prime order has no proper subgroups.

No Comments | Tags: Mathematics

16 August 2008 - 12:30
Caustics from a Sphere

I decided that it was probably time for the release of a graphical demo! Since I did not code up anything in relation to the participating media research, I thought that this time I would do the research the other way around! This time I have done some research related to light scattering from large spheres. When considering large spheres it is, vaguely speaking, possible to use geometrical optics to describe the physics of the scattering pattern. Geometrical optics is also known as ray tracing in computer graphics.

Although the use of geometrical optics, instead of the full solution to Maxwell's equations (known as Mie theory in this context), does not provide the exact classical solution to the scattering pattern, it does provide an almost correct one. The scattering pattern will contain the forward scattering known as caustics, the primary rainbow, the secondary rainbow, and so on.

Having computed the scattering pattern from a sphere and stored it in a simple 1D table (say, a floating-point texture), it is pretty straightforward to imagine its use in a real-time application, since the scattering of light in any direction from the sphere is now known from a single lookup. Actually, a 1D table only suffices for an infinitely distant light source. In order to handle a nearer point light source correctly, it is necessary to compute a two-dimensional table, because the finite distance to the sphere excludes some possibilities for the incoming angles, depending on the radius of the sphere. Also, to get the entire range of rainbow colors one would have to compute the scattering for different wavelengths.
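As a sketch of how simple the lookup is for the infinitely distant light source, assuming the table is tabulated uniformly in the scattering angle (the names below are only illustrative, not those of the demo):

#include <math.h>

/* Look up the precomputed scattered intensity. The 1D table is assumed to be
 * tabulated uniformly over the scattering angle in [0, pi], where 0 means
 * forward scattering and pi means scattering straight back toward the light. */
float lookup_scattered_intensity(const float *table, int table_size,
                                 const float to_light[3],  /* unit vector toward the light  */
                                 const float to_eye[3])    /* unit vector toward the camera */
{
    /* cosine of the angle between the propagation direction of the incoming
     * light (-to_light) and the outgoing direction toward the eye            */
    float c = -(to_light[0]*to_eye[0] + to_light[1]*to_eye[1] + to_light[2]*to_eye[2]);
    if (c >  1.0f) c =  1.0f;
    if (c < -1.0f) c = -1.0f;

    float theta = acosf(c);
    int   index = (int)(theta / 3.14159265f * (float)(table_size - 1) + 0.5f);
    return table[index];
}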

[Images: The colors of the rainbow; Cornell box with point light; Cornell box with infinitely distant source]

Demo Info

It is possible to use either the 1D or the 2D approach in the implementation. Make sure you try both, because there is a pretty rainbow hiding! In the demo the sphere has the same properties as a large water drop, and the scattering was only computed for three different wavelengths: red, green, and blue. No indirect lighting was taken care of (hey, it should be able to run in real time). This means that when the light source (which is modeled as a point light source) is close to the sphere, there will be a large, ugly, unrealistic shadow on the other side of the sphere. In addition, only light coming directly from the light source will be scattered through the sphere. In principle one could add secondary bounces.

Please be patient with the loading time; the scattering tables are stored as simple ASCII data. The sphere itself is rendered by a combination of the reflection of light and the first bounce of light coming from the environment. The reflection is computed using the scattering tables once again, while the light from the environment is estimated by rendering a cube map and looking up in the direction of the doubly refracted ray through the sphere.

Note that the reference is really enjoyable reading for folks with the right sense of humor and comes highly recommended.

Download

Sphere Caustics Demo (2.4MB, Windows only)

References

[1] H. C. van de Hulst, Light Scattering by Small Particles.

No Comments | Tags: Real-Time Rendering

25 July 2008 - 9:49
Analytic Approaches to Single Scattering in Participating Medias

I have finally completed my write-up concerning analytic approaches to rendering participating media. Enjoy!

Abstract

An analytical approach toward the visualization of participating media is presented. The volume rendering equation is considered under the assumption that the in-scattering integral can be sampled as a Dirac delta function in the direction of the illuminating light source. This assumption leads to a completely general result for isotropic homogeneous single-scattering media, of which the airlight integral is shown to be a special case. In addition, the limit of a light source positioned far away is considered, which is particularly interesting in the case of ocean rendering.

The properties of the result are investigated and an approximate model for point light sources is proposed. The presented model is shown to approximate well for thin media with a close light source and is suitable for real-time rendering. It is capable of simulating homogeneous translucent media with a low albedo, like marble, wax, skin, and smoke.
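For orientation, the integral referred to in the abstract is, schematically and in generic notation (not necessarily that of the write-up), the single-scattering contribution along a view ray of length D through a homogeneous medium lit by a point source of intensity I_0:

L = \int_0^{D} \sigma_s\, p(\vec{\omega},\vec{\omega}_\ell(s))\, \frac{I_0\, e^{-\sigma_t d(s)}}{d(s)^2}\, e^{-\sigma_t s}\, ds ,

where d(s) is the distance from the light source to the point at distance s along the ray, p is the phase function, and \sigma_s, \sigma_t are the scattering and extinction coefficients. The classic airlight integral is of this form for an isotropic phase function.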

Download
Analytic Approaches to Single Scattering in Participating Medias (0.3MB)

References
[1] Green, S. 2004. Real-time approximations to subsurface scattering. GPU Gems, Chapter 16. http://http.developer.nvidia.com/GPUGems/gpugems_ch16.html

[2] Jensen, H. W. 2001. Realistic Image Synthesis Using Photon Mapping. A. K. Peters, Ltd., Natick, MA, USA.

[3] Sun, B., Ramamoorthi, R., Narasimhan, S. G., and Nayar, S. K. 2005. A practical analytic single scattering model for real time rendering. ACM Trans. Graph. 24, 3, 1040–1049.

[4] Wenzel, C. 2007. Real-time atmospheric effects in games revisited. GDC 2007. http://ati.amd.com/developer/gdc/2007/d3dtutorial_crytek.pdf

No Comments | Tags: Real-Time Rendering, Realistic Image Synthesis