Radiosity

Jason Wither

Graphics final project

Radiosity is a global illumination technique used to render three-dimensional scenes. Global illumination differs from direct illumination (the approach used in most standard rendering systems) in that, instead of treating objects and lights as separate things, every surface in a globally illuminated scene also acts as a light. Examples of direct illumination techniques include ray tracing and polygon rendering with z-buffering. These techniques are quite different from one another and have different advantages and disadvantages, which will be outlined below, along with the main approaches to radiosity: standard matrix radiosity, progressive radiosity, and wavelet radiosity, as well as some less developed techniques such as discontinuity meshing and importance-driven radiosity. View dependence will also be discussed, along with techniques for building a robust, view-independent radiosity renderer, with special regard to incorporating it into a raytracer. Finally, a possible new technique will be described for better handling reflections in radiosity scenes, something that has been difficult in the past.

The traditional techniques that use direct illumination are quite varied, having in common only the fact that they take light solely from explicitly described light sources, and perhaps from some ambient value assigned to a material. These models can be very fast, as is the case with polygon rendering techniques that use scan-line conversion and z-buffering. They can also be quite slow, as is the case with raytracing, which produces much sharper results and handles things like curved surfaces much better. One problem all of these techniques share to some extent is that an object is either lit by a given light source or it is not: there is no way to get the diffuse lighting seen in the real world. Things are either in shadow or not, and shadows are completely black unless another light source reaches them. One attempt to remedy this is the addition of ambient light, a value assigned to a material that specifies some non-black color for the material when it is not directly lit. A related problem is the hardness of shadows. There is no way to have a gradient of brightness across a wall, as might be seen when a light at one end of a room does not shine directly on the wall. One attempt to fix this is fuzzy shadows, which can be implemented in a raytracer by calculating the light at each point several times, moving the light source by a small amount between calculations. This makes the shadow edges slightly less harsh, but does not remove the overall problem of shadows being either on or off.
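As a concrete illustration, the fuzzy-shadow idea can be sketched in a few lines of Python. This is only a sketch under simplifying assumptions: a single sphere as the occluder, a box-shaped jitter around the light position, and made-up function names; a real raytracer would test the shadow ray against the whole scene.

```python
import random

def occluded(point, light_pos, sphere_center, sphere_radius):
    """True if the segment from point to light_pos intersects the sphere."""
    # Segment: p(t) = point + t * d, with t in [0, 1]
    d = [l - p for p, l in zip(point, light_pos)]
    f = [p - c for p, c in zip(point, sphere_center)]
    a = sum(x * x for x in d)
    b = 2 * sum(x * y for x, y in zip(f, d))
    c = sum(x * x for x in f) - sphere_radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t1 = (-b - disc ** 0.5) / (2 * a)
    t2 = (-b + disc ** 0.5) / (2 * a)
    return (0 < t1 < 1) or (0 < t2 < 1)

def soft_shadow(point, light_pos, light_radius, sphere_center,
                sphere_radius, samples=64):
    """Fraction of jittered light positions visible from point.

    1.0 means fully lit, 0.0 means full shadow; values in between give
    the fuzzy penumbra the text describes.
    """
    visible = 0
    for _ in range(samples):
        jittered = [c + random.uniform(-light_radius, light_radius)
                    for c in light_pos]
        if not occluded(point, jittered, sphere_center, sphere_radius):
            visible += 1
    return visible / samples
```

Averaging many slightly moved light positions is what softens the shadow edge; a point deep behind the occluder still fails every sample, so the on/off character of shadows remains.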

Global illumination was developed to deal with this problem of light being either on or off. Because all surfaces are also treated as lights, the lighting effects seen in the real world are handled much better: it is possible to get brightness gradients as well as fuzzy shadows. Ambient light no longer has to be added by hand because it is calculated by the renderer, and instead of being flat it actually reflects the amount of indirect light at each point in the room. The downside is that global illumination is much slower. The radiosity technique, for example, has to calculate the amount of light for each patch (where a patch is just a small part of a surface) with respect to all the other patches it can see in the hemisphere in front of it. There are speed-up techniques that make this less painful, but it is still an intensive process and very difficult to do with any amount of detail in real time. There are other problems with radiosity as well. Most implementations handle point lights poorly, because light is supposed to come from a surface rather than a point, so lights have to have some area to work well in a radiosity renderer. Another problem with global illumination, and radiosity in particular, is that it handles reflections very poorly. Each surface is treated somewhat like a reflective surface, in that some light is incident on it and some fraction of that light is retransmitted. The difference is that in true reflection the angle of incidence equals the angle of reflection (with opposite sign), whereas the emitted light in radiosity goes out equally in all directions. This makes it hard to model a mirror-like surface, because the reflected light should leave only in the direction mirroring the one it came in from. A possible solution to this will be discussed later in the paper.

The first technique for radiosity is called matrix radiosity, if it is called by any special name, and is radiosity in its simplest form. First, each surface is divided up into patches, much like a texture map: each patch knows its location on the surface, and the surface is defined in world coordinates. Each patch then uses a hemi-cube to look at all surrounding patches and gather light from them. This hemi-cube is one of the most important parts of the calculation. It stands in for the hemisphere surrounding the patch in the direction of the patch's normal, and is used instead of an actual hemisphere for ease of implementation. Each face of the hemi-cube is constructed by taking a picture of all the patches in that direction, just as a normal image would be constructed using a viewing plane. The light from each face of the hemi-cube is then added up to give the total light incident on the patch in question. Each cell of the hemi-cube must also be assigned a weight, so that patches seen at an angle are not given unfair weight in the total. The need for this can be seen by imagining the hemi-cube as a stretched-out hemisphere: the parts that are stretched more should clearly have a smaller weight than those that are not stretched at all. In addition, the weights must fall off toward the edges of the hemi-cube (or hemisphere), so that light shining directly on the patch (opposite the patch normal) is given more weight than light striking it at a grazing angle. The total light incident on the patch is then summed over the hemi-cube and multiplied by a factor defined in the surface's material, which specifies how much of the incident light is re-released as excident light.
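These cell weights are often called delta form factors. One useful property follows directly from the geometry: summed over every cell of the hemi-cube they should total 1, since the form factor from a patch to its entire hemisphere is 1. The following sketch checks this numerically; the unit hemi-cube layout (top face at height 1 spanning [-1,1]x[-1,1], four sides of height 1) and the resolution are illustrative assumptions, not taken from any particular implementation.

```python
import math

def hemicube_weights(res=50):
    """Sum the delta form factors over all cells of a unit hemi-cube.

    Top-face cells use the weight 1 / (pi * (x^2 + y^2 + 1)^2); side-face
    cells use z / (pi * (y^2 + z^2 + 1)^2), which falls off toward the
    horizon (z -> 0). The total should be very close to 1.
    """
    da = (2.0 / res) ** 2          # area of one square cell
    total = 0.0
    # Top face: x, y in [-1, 1], directly opposite the patch normal.
    for i in range(res):
        for j in range(res):
            x = -1 + (i + 0.5) * 2.0 / res
            y = -1 + (j + 0.5) * 2.0 / res
            total += da / (math.pi * (x * x + y * y + 1) ** 2)
    # Four identical side faces: y in [-1, 1], z in [0, 1].
    for i in range(res):
        for j in range(res // 2):
            y = -1 + (i + 0.5) * 2.0 / res
            z = (j + 0.5) * 2.0 / res
            total += 4 * z * da / (math.pi * (y * y + z * z + 1) ** 2)
    return total
```

The falloff toward the edges that the text describes is exactly the z factor on the side faces and the growing denominator away from the center of the top face.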

This process is repeated for each patch in the scene, always gathering from the values of the previous pass (so both the old patches and the newly updated ones must be kept), and this constitutes one iteration of the whole algorithm. Iterations are repeated until the scene looks approximately the same from one iteration to the next.
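Once the hemi-cube results are reduced to a table of per-pair form factors, one gather iteration is a straightforward double loop. The sketch below assumes the form factors are precomputed and that patch geometry and visibility are handled elsewhere; the names and the toy convergence loop are illustrative.

```python
def gather_iteration(emission, reflectance, form_factors, radiosity):
    """One matrix-radiosity step: every patch gathers from all others.

    radiosity    -- per-patch values from the previous iteration (kept intact)
    form_factors -- form_factors[i][j] is the fraction of patch i's hemisphere
                    covered by patch j (what the hemi-cube computes)
    """
    n = len(radiosity)
    updated = []
    for i in range(n):
        gathered = sum(form_factors[i][j] * radiosity[j] for j in range(n))
        updated.append(emission[i] + reflectance[i] * gathered)
    return updated             # old list untouched: two copies of the scene

def solve(emission, reflectance, form_factors, iterations=50):
    """Iterate until the scene stops changing (fixed count for simplicity)."""
    b = list(emission)         # start from the light sources alone
    for _ in range(iterations):
        b = gather_iteration(emission, reflectance, form_factors, b)
    return b
```

Keeping `radiosity` untouched while building `updated` is the two-copies-of-the-scene requirement mentioned below.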

This technique, while the simplest to implement and to visualize, is extremely processor- and memory-intensive, because each patch must be recalculated many times and two copies of the entire scene must be kept: one from the last iteration and the one currently being built. Both costs can be reduced somewhat with tricks, but it is still going to be fairly slow.

The next technique is called progressive radiosity. It is fairly similar to matrix radiosity, except that it proceeds in the opposite direction: instead of a patch collecting all the incoming light from the patches around it, light is shot out from the current patch to all the patches it can see. Each iteration is therefore much shorter, consisting of just looking out from one patch, but there are more iterations (at least one for each patch in the scene). What makes this more effective than matrix radiosity is that an ordered list of the patches is kept, from brightest to darkest, and the brightest patch that has not yet shot is always the next to go. This works because patches already brighter than the current patch will be affected only minimally, if at all, by the darker patch that is shooting. Effectively, only darker patches are affected at each step, so fewer total passes through all the patches need to be made. For this to work there has to be some way to limit the amount of light being moved around. For instance, if the light source has a brightness of 0.9, it might seem that every patch it casts light onto would also end up with a brightness of 0.9, which is obviously incorrect. To solve this the form factor is used: a number that says what fraction of the total brightness of 0.9 is being cast onto one particular patch. For example, if a patch cast light onto 100 other patches and all of them received exactly the same amount, each would gain an illumination of only 0.009 after that initial shot. When one of those patches took its turn (with a brightness of 0.009), it would shoot its light onto 100 patches, each receiving 0.00009, so the original patch might have its total brightness increased to about 0.90009, which is not a significant increase. Most scenes have far more than 100 patches, making this even less of a problem. The form factor is also used in calculating the excident light from each patch in matrix radiosity, but since outgoing light was not discussed much there, it was not mentioned.
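A minimal sketch of the shooting loop, under the simplifying assumptions that all patches have equal area (so the same symmetric form-factor table can be used for shooting) and that visibility is folded into that table:

```python
def progressive_radiosity(emission, reflectance, form_factors, steps=100):
    """Progressive (shooting) radiosity, brightest unshot patch first.

    form_factors[i][j] is the fraction of patch i's energy landing on
    patch j; with equal-area patches this table is symmetric.
    """
    n = len(emission)
    radiosity = list(emission)
    unshot = list(emission)      # light received but not yet shot onward
    for _ in range(steps):
        i = max(range(n), key=lambda k: unshot[k])   # brightest unshot patch
        if unshot[i] < 1e-9:
            break                # nothing left to distribute: converged
        for j in range(n):
            received = reflectance[j] * form_factors[i][j] * unshot[i]
            radiosity[j] += received
            unshot[j] += received
        unshot[i] = 0.0          # this patch's energy has all been shot
    return radiosity
```

Each shot moves only a patch's as-yet-unshot energy, so the amounts shrink geometrically, which is why sorting brightest-first lets the scene converge in few passes.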

The last main technique is wavelet radiosity. It is often implemented much like progressive radiosity, but what differentiates it from the others is not how the light is cast but how patches are handled. Previously, patches were always a static size and simply an inherent property of the surface. Wavelet radiosity instead varies patch size according to distance from the current light source: surfaces closer to the light are broken into much smaller patches than those far away, giving them greater resolution while still keeping some of the speed that comes from using larger patches. This is naturally hard to implement, because of the problem of deciding how large the patches on a given surface should be. It can also lead to discontinuities where patches meet, which can cause artifacts in the final image. Overall, though, when implemented well, wavelet radiosity works quite well, and it removes the assumption that patch size is significantly smaller than the distance between facing patches.
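One way to sketch this kind of adaptive subdivision is a recursive split driven by the ratio of patch size to distance from the light. The square 2-D patches, the stopping rule, and the parameter names below are all illustrative assumptions, not the actual wavelet construction:

```python
def subdivide(patch, light_pos, min_size=0.25, ratio=1.0):
    """Split a square patch ((x, y), size) until it is small relative to
    its distance from the light: the closer the light, the finer the mesh."""
    (x, y), size = patch
    cx, cy = x + size / 2, y + size / 2
    dist = ((cx - light_pos[0]) ** 2 + (cy - light_pos[1]) ** 2) ** 0.5
    if size <= min_size or size <= ratio * dist:
        return [patch]           # small enough, or far enough from the light
    half = size / 2
    children = [((x, y), half), ((x + half, y), half),
                ((x, y + half), half), ((x + half, y + half), half)]
    return [p for c in children
            for p in subdivide(c, light_pos, min_size, ratio)]
```

A unit patch with the light at one corner ends up with fine patches near that corner and coarse ones far away, while the children always tile the parent exactly, avoiding gaps.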

There is also a technique that combines aspects of progressive and wavelet radiosity, called progressive radiosity with substructuring. Rather than figuring out patch size on the fly, it uses just a few fixed patch sizes: smaller patches around areas of high detail (like the edges of shadows), and larger patches where detail is less important. This speeds up the running time without losing much information, and without having to worry about discontinuities on surfaces from patches not matching up.

There are of course other techniques that address specific problems. For instance, discontinuity meshing does a better job of keeping shadows distinct, something radiosity has trouble with. Say, for example, that a scene contains a window with bars across it. With normal radiosity techniques those bars would barely show up in the shadows, because of the way the light is blurred. Discontinuity meshing fixes this by paying special attention to thin objects in front of a light source, so that their shadows remain distinct.

Another variation restricts radiosity to view-dependent rendering. Most radiosity techniques render a whole scene, making them view independent: once the radiosity pass is done and the light in the scene has reached its equilibrium, it does not matter where the viewer looks, because every surface is ready to be rendered. In importance-driven radiosity, however, the scene is only developed to be viewed from one point of view, so parts of the scene that will not affect that view can be handled with much coarser patches or left out completely.

Because of this view independence, the results of radiosity can also be used with other rendering techniques. One example is to combine raytracing and radiosity, so that the weaknesses of each model are balanced by the strengths of the other. A very easy way to do this is to run the radiosity engine on a scene and then feed the results to a raytracer, using the resulting values as the ambient values in the raytracer. The raytracer can then add highlights, reflection, and transmission on top of the diffuse radiosity-generated scene. Done correctly, this yields very smooth shadows and lighting in a raytraced scene that also has sharp reflections. It has the added advantage of producing very nice images without significantly slowing down the raytracer, because the radiosity pre-processor is applied once to the scene, after which any number of raytraced images can be generated from the radiosity-improved scene.
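The per-pixel combination step itself is simple. A hedged sketch of one way to do it, where the split into a precomputed radiosity value plus a raytraced specular term, the per-channel addition, and the clamp are all illustrative choices:

```python
def shade(patch_radiosity, specular_color, reflectivity):
    """Final pixel color for the combined renderer.

    patch_radiosity -- RGB from the radiosity pass, used as the
                       ambient/diffuse term
    specular_color  -- RGB from the raytracer's reflection/highlight ray
    reflectivity    -- how strongly the specular term is mixed in
    """
    return tuple(min(1.0, r + reflectivity * s)
                 for r, s in zip(patch_radiosity, specular_color))
```

Because the radiosity term is precomputed per patch, the raytracer pays only for its usual reflection and transmission rays.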

This incorporation of a radiosity engine into a raytracer is currently being done by Lightscape, a company that makes a commercially available radiosity engine. It is also what I hope to do next semester, along with Steve Diverdi, for research, except with the following addition. One of the major drawbacks of radiosity, as mentioned, is its inability to handle transparency and reflections well. We hope to develop a technique that handles these better without relying on the raytracer to add them on top of the radiosity-extended scene. Our approach is as follows: we hope to assign each patch an additional value, its reflectance, on which the direction of the excident light will depend. If this value is zero, the radiosity works exactly as before. If it is greater than zero, incoming rays will reflect out more or less only in the direction they would take if the material were purely reflective. The difficulty is determining where most of the light incident on a patch comes from, so this must also be stored for each patch. The open question is whether there needs to be a one-to-one correspondence between incident rays and excident rays. Reflection would be much easier with a one-to-one correspondence, because that is how it is done in direct lighting techniques like raytracing. A possibly better approach might be some combination of the two: store a list of all incoming rays, so the corresponding reflected excident rays can be drawn with respect to them, and also keep the normal diffuse excident light, then combine the two according to the reflectance of the patch. For transmission the same must be done for the back of the surface: light arriving at the front must also pass through the surface, and vice versa.
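The blend we have in mind might look something like the following sketch. Everything here is speculative, since the technique has not been implemented: the discrete set of outgoing directions, the use of the single closest direction as a stand-in for a true mirror delta, and all the names are assumptions made for illustration.

```python
def reflect(d, n):
    """Mirror direction of incoming vector d about unit normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def excident(incident_energy, incoming_dir, normal, out_dirs, reflectance):
    """Split a patch's outgoing light between the ordinary diffuse lobe
    (equal in every direction) and the mirror direction.

    reflectance = 0 gives plain radiosity; reflectance = 1 gives a mirror.
    The total outgoing energy always equals the incident energy.
    """
    mirror = reflect(incoming_dir, normal)
    # The outgoing direction most aligned with the mirrored ray stands in
    # for a true delta distribution in the mirror direction.
    best = max(out_dirs, key=lambda v: sum(a * b for a, b in zip(v, mirror)))
    diffuse_share = (1.0 - reflectance) * incident_energy / len(out_dirs)
    return [diffuse_share + (reflectance * incident_energy if d is best else 0.0)
            for d in out_dirs]
```

Note that energy is conserved by construction: the diffuse lobe carries (1 - k) of the incident energy and the mirror direction carries k, which is the combination rule described above.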

Of course, there may be some easier way to do this, and figuring that out will be a large part of next semester's research. There may also be problems with some of the other techniques described here; given my lack of experience implementing them, I may simply have overlooked or misunderstood something.

Overall, this is definitely a topic worthy of further investigation, not only the model that tries to take reflection and transmission into account but also the standard model itself. The results seen in simple scenes rendered first with a direct lighting technique and then with a global lighting technique make it very clear that this is worth pursuing because of the much better handling of diffuse light. The different radiosity techniques are also very interesting, as the evolution between them can be seen very clearly: the model grows more advanced, but also more complicated and harder to implement. I look forward to further investigating radiosity next semester while building my own engine to work with the raytracer built this semester.

Bibliography

Elias, Hugo. Radiosity. http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm

Evaluating the Form Factors. http://www.cee.hw.ac.uk/~ian/hyper00/radiosity/formfact.html

Light Transport Simulation. http://www.graphics.cornell.edu/research/globillum/transport.html

Lightscape White Paper. http://www3.autodesk.com/adsk/files/773377_Lightscape3.2__Technology_WP.pdf

Willmott, Andrew J. and Paul S. Heckbert. An Empirical Comparison of Radiosity Algorithms. Carnegie Mellon University, 1997. http://www-2.cs.cmu.edu/~radiosity/emprad-tr.pdf

All materials copyright 2002 Harvey Mudd College