The frustum, or viewing volume, is typically a truncated pyramid.
For this assignment we are not truncating the pyramid. The sides
of the frustum are defined by the #camera directive in the ray file.
#camera
px py pz
dx dy dz
ux uy uz
ha
The near and far planes are at distances 0 and infinity, respectively,
from the camera position (px,py,pz) along the "toward"
vector (dx,dy,dz). For computing the directions
of the cast rays, any plane orthogonal to (dx,dy,dz) that is
in front of the camera will do, so place the view plane at a distance
of 1 from the camera. If the ray
R: p + a*d
intersects a surface at a >= 0, then the intersection point
is in the frustum.
2. Why does GetColor return a Point3D?
Colors should be computed as a triple of doubles in the range [0,1]
in order to avoid rounding errors.
See the next two questions for more details.
Point3D should really be called ThreeDoubles. It is used throughout
the code to represent colors, points, and
vectors. For example, you can take the dot product of two Point3Ds.
This is a bit confusing but can also be
useful. For example, the "difference" between two points is a
vector, and that operation is easy precisely because
points and vectors are both represented as Point3Ds. It can also, of course,
lead to trouble. You will save yourself a lot
of grief if you take the time to understand the mathematics
you are implementing before writing code. If you
get unexpected results, check whether you have used a point as a vector
or vice versa.
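A tiny sketch of the point/vector conflation, using an illustrative Triple struct in place of the code base's Point3D:

```cpp
#include <cassert>

// Illustrative triple playing the role of Point3D / "ThreeDoubles".
struct Triple {
    double p[3];
    Triple operator-(const Triple& o) const {
        return {{p[0] - o.p[0], p[1] - o.p[1], p[2] - o.p[2]}};
    }
    double dot(const Triple& o) const {
        return p[0] * o.p[0] + p[1] * o.p[1] + p[2] * o.p[2];
    }
};

// The difference of two points is a vector: here, the (unnormalized)
// direction from the camera position to a surface point.  Both inputs
// are points, yet the result must be treated as a vector.
Triple toward(const Triple& camera, const Triple& surface) {
    return surface - camera;
}
```

Nothing in the type system stops you from, say, normalizing a point or dotting a color with a vector, which is exactly the kind of mistake to check for.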
4. How do I convert a Point3D to a Pixel?
There are clean ways and not-so-clean ways. Here is a not-so-clean
way. Make p[3] in Point3D public, and assume theColor is a Point3D
returned by GetColor:
Pixel pixel;
pixel.r = (unsigned char) (clamp(theColor.p[0])*255.0);
pixel.g = (unsigned char) (clamp(theColor.p[1])*255.0);
pixel.b = (unsigned char) (clamp(theColor.p[2])*255.0);
ImageSetPixel(img, i, j, &pixel);
Be sure to clamp theColor values to [0,1] before scaling.
A cleaner way to do this is to add a member function to Point3D that
takes an integer i and returns the value of p[i]. Then you can leave
p[] private; i.e.
Flt Point3D::getCoord(int i)
{
    if (i >= 0 && i <= 2)
        return p[i];
    return 0.0;
}
Replace theColor.p[i] in the first piece of code by theColor.getCoord(i).
This requires a little more mucking around with the code base.
Even better, you can write a member function to cast a Point3D to a
Pixel.
If so, you need to define Pixel in geometry.h. More mucking.
Take your pick.
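The "even better" option can be sketched like this. It is only a sketch: the Pixel layout follows the snippet above, and clamp01 is a stand-in for whatever clamp you already use; in the real code base Pixel would have to be visible from geometry.h.

```cpp
#include <cassert>

// Pixel's r/g/b layout follows the earlier snippet (an assumption here).
struct Pixel { unsigned char r, g, b; };

// Stand-in for the clamp to [0,1] the code base already needs.
static double clamp01(double x) {
    return x < 0.0 ? 0.0 : (x > 1.0 ? 1.0 : x);
}

struct Point3D {
    double p[3];

    // Convert a color triple in [0,1] to an 8-bit-per-channel Pixel,
    // clamping each component before scaling.
    Pixel toPixel() const {
        Pixel px;
        px.r = (unsigned char)(clamp01(p[0]) * 255.0);
        px.g = (unsigned char)(clamp01(p[1]) * 255.0);
        px.b = (unsigned char)(clamp01(p[2]) * 255.0);
        return px;
    }
};
```

With this, the conversion at each pixel collapses to a single call such as theColor.toPixel(), and the clamping can no longer be forgotten at any call site.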
5. Why am I getting "holey" pictures?
Did you implement shadows and suddenly get "holey" pictures like the following (based on sphere.ray)?
To compute InShadow you should cast a ray from the point of intersection to each light. If that ray intersects some object, then the point of intersection is shadowed with respect to the corresponding light. Note that the ray originates on some object, namely the one containing the point of intersection, and thus intersects that object at alpha = 0. In Flt Group::Intersect you should therefore test for alpha > 0, rather than alpha >= 0. Even so, due to round-off error, you might still get a small positive alpha at the intersection point itself.
What to do?
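One common remedy, sketched here as an assumption rather than as the assignment's prescribed fix, is to ignore shadow-ray hits closer than a small epsilon, so that round-off hits on the originating surface never count as blockers:

```cpp
#include <cassert>

// Hits closer than this are treated as self-intersections caused by
// round-off and are ignored (the epsilon value is a typical choice,
// not one dictated by the assignment).
const double SHADOW_EPSILON = 1e-6;

// True if an intersection at parameter alpha along the shadow ray
// actually blocks a light at distance distToLight.
bool blocksLight(double alpha, double distToLight) {
    // Only hits strictly between the surface (plus epsilon) and the
    // light cast a shadow; hits beyond the light are irrelevant.
    return alpha > SHADOW_EPSILON && alpha < distToLight;
}
```

An equivalent alternative is to offset the shadow ray's origin slightly along the surface normal before intersecting; either way, the "holes" disappear because the surface stops shadowing itself.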
Each vertex is assigned a texture coordinate in the #vertex directive of the ray file. The point a*v1 + b*v2 on the triangle takes the texture value at a*u1 + b*u2, where v1 and v2 are the vectors from vertex 0 to vertices 1 and 2 of the triangle, respectively, and where u1 and u2 are the vectors from the texture coordinate of vertex 0 to the texture coordinates of vertices 1 and 2, respectively. The texture value at a*u1 + b*u2 should be found by bilinear interpolation. The rgb values of the texture at a*u1 + b*u2 should be used in lieu of the ambient and diffuse material properties. The specular properties in the #material directive should be used without modification. This allows you to add highlights to texture-mapped triangles.
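The two steps above, mapping (a, b) into texture space and then interpolating, can be sketched as follows. The UV struct and the tex(i, j) texel accessor are placeholders for whatever the code base provides, and the sketch ignores edge clamping at the texture border:

```cpp
#include <cassert>
#include <cmath>

// Placeholder for a texture-space coordinate.
struct UV { double u, v; };

// Texture coordinate of the triangle point a*v1 + b*v2: start at
// vertex 0's texture coordinate t0 and move along the texture-space
// edge vectors u1 and u2.
UV texCoord(UV t0, UV u1, UV u2, double a, double b) {
    return { t0.u + a * u1.u + b * u2.u,
             t0.v + a * u1.v + b * u2.v };
}

// Bilinear interpolation of one channel at continuous coordinate (x, y);
// tex(i, j) is a hypothetical accessor returning the texel value.
template <class Tex>
double bilinear(const Tex& tex, double x, double y) {
    int i = (int)std::floor(x), j = (int)std::floor(y);
    double fx = x - i, fy = y - j;   // fractional offsets within the cell
    return (1 - fx) * (1 - fy) * tex(i,     j)
         +      fx  * (1 - fy) * tex(i + 1, j)
         + (1 - fx) *      fy  * tex(i,     j + 1)
         +      fx  *      fy  * tex(i + 1, j + 1);
}
```

You would run bilinear once per channel and substitute the result for the ambient and diffuse material colors, leaving the specular terms from #material untouched.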