Friday, December 21, 2007

Structure of RIB

Most RIB files begin with the declaration of some options and attributes that are specific to certain renderers. We will look into these a little later, but for now let's go over some calls that every RIB file must have, either by explicit declaration or by falling back on the renderer's default value.

Image Declaration

Before anything can be created or rendered, the renderer needs to know several things. Where will it send the output image (the pixels)? What name will the image have? What file format will it use? What resolution will it be? All of these questions are answered in a block that usually reads like this:
Display "./myImage.tif" "file" "rgba"
Format 640 480 1
These calls tell the renderer to generate an image named myImage.tif in the same folder the RIB is being read from (the ./ prefix), to save the image in the default file format of the current renderer ("file"), and to store the red, green, blue and alpha channels in that image ("rgba"). For most renderers the default image format used when "file" is passed is TIFF (make sure to check, though). To send the rendered pixels to a screen window instead, replace the "file" parameter with "framebuffer". These calls are usually followed (or preceded, the order isn't really important) by the quality settings of the scene:
ShadingRate 1
PixelSamples 3 3
PixelFilter "catmull-rom" 3 3
ShadingRate specifies how small each micropolygon should be. A shading rate of 1 means that each micropolygon will be about the size of a pixel. Since RenderMan renderers usually shade the corners of a micropolygon, this means you are getting an average of 4 samples per pixel. If the ShadingRate is set to 0.5, the renderer will make micropolygons that are a quarter the size of a pixel, so each pixel will be shaded 9 times, which results in more detailed textures. A shading rate of 2 will generate micropolygons with a size of two pixels (less detail). Smaller shading rates result in higher rendering times; as you would assume, everything that adds more detail or quality to your image will result in higher rendering times.

PixelSamples is somewhat similar to ShadingRate, but it affects the anti-aliasing of the edges of your objects. It determines how many times the final pixel will be sampled. Higher values will give you smoother edges. Higher values are also necessary when rendering with depth of field or with motion blur. PixelFilter lets you select what kind of filtering you want applied to your final image.
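As a rough sketch of how these settings interact, here is a header tuned for a hypothetical motion-blurred render. The file name and the exact values (0.5, 6 6, the gaussian filter widths) are illustrative guesses, not recommendations from any particular renderer:
Display "./myBlurryImage.tif" "file" "rgba"
Format 640 480 1
ShadingRate 0.5 #Quarter-pixel micropolygons, more texture detail
PixelSamples 6 6 #Extra pixel samples to smooth out the motion blur
PixelFilter "gaussian" 2 2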

Camera Declaration

After the image settings are declared, a camera needs to be created. This is usually done by calling:
Projection "perspective" "fov" 45
This call tells your renderer what kind of camera to use; it can be "perspective" or "orthographic". The number after the "fov" parameter indicates the field of view (FOV), 45 degrees in this case.

A special note must be added here. Remember that RenderMan has a hierarchical graphics state and it keeps track of the current coordinate system. This means that when you apply a translation or a rotation, it is the whole coordinate system that is affected. When you call the Projection command you have just created a camera, and the camera has its own local coordinate system. Since everything in our digital world is created AFTER the camera, we can say that the world is positioned in relation to the camera, or is a child of the camera. This is a concept that is a little hard to grasp at first, since in most 3D apps we tend to think of the camera as just one more object in our world. For this reason you will usually see a bunch of transformation calls between the Projection call and WorldBegin. These move the camera (and its coordinate system) to where it needs to be.
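For example, a camera pushed back from the scene and tilted down slightly might be set up like the following sketch; the translate and rotate values are made up for illustration:
Projection "perspective" "fov" 45
Translate 0 0 5 #Push the world 5 units in front of the camera
Rotate -15 1 0 0 #Tilt the view 15 degrees around the x axis
WorldBegin
#...scene goes here...
WorldEnd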

World Declaration

After the camera we can finally begin to create our world. Calling WorldBegin creates a new coordinate system to which all of our objects will be added. The first things to be created are usually the lights. This is done because RenderMan consumes things as they are declared. You can't declare a model and then a light that is supposed to affect it, because the object will have already been passed to the renderer, so the light can't affect it. Lights are declared with the following call:
LightSource "lightShaderName" lighthandle parameters
Here is an example of how a point light is usually called

LightSource "pointlight" 1 "float intensity" 10 "color lightcolor" [1 0.5 1]
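If the scene needs more than one light, each gets its own handle, and all of them go before the geometry so every object that follows can be lit by them. A sketch with an additional distant light (the parameter names follow the standard distantlight shader, but check your renderer's version) might read:
LightSource "pointlight" 1 "float intensity" 10 "color lightcolor" [1 0.5 1]
LightSource "distantlight" 2 "float intensity" 0.8 "point from" [0 5 0] "point to" [0 0 0]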

After the lights are declared the objects can finally be created. Objects are usually created inside what are known as attribute blocks. You can change anything in the graphics state inside an attribute block without affecting the rest of the scene, because the graphics state is "popped" back to its previous state when you exit the block. Let's look at a simple example of how the whole graphics state and attribute blocks work:

WorldBegin #Begin declaring the world
Surface "plastic" #Declare the current surface shader
Color [1 0 1] #Define the current color - all objects from now on
#will receive this shader and color
AttributeBegin #Now we start a new attribute block,
#we can override the previous shader and color
Attribute "identifier" "name" ["nurbsSphere1"] #Name the sphere
ConcatTransform [2.1367 0 0 0 0 2.1367 0 0 0 0 2.1367 0 0 0 0 1] #Apply a transform to the sphere's coordinate system
Surface "rudycplasmaball" "float additive" 0.75 #Change the current shader
Color [0.5 0.7 0.8] #Change the current color
Sphere 1 -1 1 360 #Declare a sphere
AttributeEnd #The end of the attribute block, we go back to the
#original Surface and Color
Sphere 1 -1 1 360 #This sphere will use plastic and the color [1 0 1]
WorldEnd #End of our world
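Attribute blocks can also be nested, and each AttributeEnd only pops back to the state saved by its matching AttributeBegin. A minimal sketch of that push/pop behavior (the colors are arbitrary):
AttributeBegin
Color [1 0 0] #Current color is now red
AttributeBegin
Color [0 0 1] #Current color is now blue
Sphere 1 -1 1 360 #A blue sphere
AttributeEnd #Pop: current color is red again
Sphere 1 -1 1 360 #A red sphere
AttributeEnd #Pop: back to whatever the color was before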

What is a RIB

The RI API
The original way that animation packages connected with RenderMan was through API calls. API stands for Application Programming Interface. The RenderMan API is referred to as the RI API, where RI stands for RenderMan Interface. This method of connecting to the renderer is very powerful, since you can use a plethora of programming tricks to pass information to the renderer. However, once the image is generated the data in the API calls is discarded, and if you want to re-render the frame you must reanalyze the scene and make the same API calls with the new changes. This might not seem like a big deal; after all, you do it all the time with your 3D application. But wouldn't you agree that things would go a lot faster if, while lighting a scene, you were able to export the geometry only once and then on every re-render export only the light information? Well, that is one of the many advantages of using RIB.
The RenderMan Interface Bytestream (or RIB)
It is true that going through the API might be faster and more compact (disk-space-wise), but most production houses use RIB more than the API. So what is RIB, you might ask? Well, it is a file format that contains direct calls to the RI API; in other words, it is an easier, more readable way to pass commands to the API. For example, an RI API program to generate a simple constant-color polygon might look like this:
#include <ri.h>
RtPoint Square[4] = {{.5,.5,.5},{.5,-.5,.5},{-.5,-.5,.5},{-.5,.5,.5}};
int main()
{
RiBegin(RI_NULL); /* start talking to the renderer */
RiWorldBegin();
RiSurface("constant", RI_NULL); /* current surface shader */
RiPolygon(4, RI_P, (RtPointer)Square, RI_NULL); /* four vertices */
RiWorldEnd();
RiEnd();
return 0;
}
and this is what its RIB counterpart would look like:
WorldBegin
Surface "constant"
Polygon "P" [.5 .5 .5 .5 -.5 .5 -.5 -.5 .5 -.5 .5 .5]
WorldEnd
As you can see, RIB is a lot more straightforward and easy to read. The previous chunk of commands omits a lot of the calls that are usually used to set up a RIB file properly; the reason an image is still generated successfully is that renderers usually insert the necessary RIB calls with default values in order to render the image.
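To make those defaults explicit, here is a sketch of the same polygon with the setup calls from earlier in this post spelled out. The file name and values are placeholders, and your renderer's actual defaults may differ:
Display "./constantSquare.tif" "file" "rgba"
Format 640 480 1
ShadingRate 1
PixelSamples 3 3
Projection "perspective" "fov" 45
Translate 0 0 5 #Push the world in front of the camera
WorldBegin
Surface "constant"
Polygon "P" [.5 .5 .5 .5 -.5 .5 -.5 -.5 .5 -.5 .5 .5]
WorldEnd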