Robotic Pandas

On abstractions for building next generation user interfaces

Designing a GPU-oriented geometry abstraction – Part Two.

Posted by mcdirmid on 23/11/2009

My last post described the problem of crafting an appropriate geometry abstraction for Bling. Bling previously solved the code problem for vertex and pixel shading, but lacked decent geometry input abstractions as well as an abstraction that supported geometry shading. The last post proposed giving geometry its own abstraction, but I was a bit hesitant about including properties within the geometry, which leads to a problem of either flexibility or static safety. I think it’s time to back up a bit and describe the abstractions that are relevant to a traditional programmable rendering pipeline:

  • Pixels as children of geometric primitives. The position of a pixel is fixed and a pixel shader only specifies each pixel’s color and optional depth. It often makes sense to have other properties beyond color and depth at the pixel level for better shading computations; e.g., a normal that is computed on a per-pixel basis via a parametric equation or via normal mapping. Other properties can be accessed from previous vertex and geometry shading phases, where interpolation determines each pixel’s value of a property from its values at the enclosing primitive’s vertices. Quality improves when properties are computed on a per-pixel basis rather than interpolated; e.g., consider Phong vs. Gouraud shading.
  • Vertices as used to define primitives. A vertex shader specifies the position of each vertex and computes per-vertex properties that are used in later geometry and pixel shading.
  • Primitives that are used to form complete topologies. A primitive has a point, line, or triangle topology, with triangles being the primitives that most commonly reach the rendering stage. A primitive is defined by multiple vertices and encloses multiple pixels. Geometry shading works as a primitive translator, where one primitive can be translated into zero or more primitives of possibly a different topology. Primitives created during geometry shading are defined by specifying their vertices.
  • A topology as a collection of primitives. It is initially defined by a set of vertices, a topology for those vertices, and optionally either an index buffer to allow arbitrary vertex sharing between primitives or an implicit adjacency relationship with explicit breaks.
  • A geometry as a topology + vertex layout properties. Geometry shading updates both vertex topology and properties, hence it modifies the entire geometry. Technically, geometry does not include pixel color, which is more of a skin around the geometry, but we can throw pixel color into a geometry abstraction for completeness, although a term like object might be more appropriate to describe geometry + skin.

At the end of the day, only three properties need to be explicitly specified during shading to accommodate rendering: a geometry as a topology of vertices to form primitives, a position for each vertex, and a color for each pixel in each primitive.
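To make these three requirements concrete, here is a minimal sketch, written in C# since Bling is a C#-embedded library, of the shape a render call ultimately consumes; the type and member names are hypothetical and are not Bling’s actual API.

    using System;
    using System.Collections.Generic;

    // Placeholder vector type for the sketch.
    public struct Vec4 { public double X, Y, Z, W; }

    // How vertices group into primitives.
    public enum Topology { PointList, LineList, TriangleList }

    // The three things that must be specified for rendering: a topology of
    // vertices that forms primitives, a position formula for each vertex,
    // and a color formula for each pixel of each primitive.
    public interface IRenderable<TVertex>
    {
        Topology Topology { get; }
        IReadOnlyList<TVertex> Vertices { get; }
        Func<TVertex, Vec4> Position { get; }  // per-vertex: clip-space position
        Func<TVertex, Vec4> Color { get; }     // per-pixel: color, over inputs interpolated from the vertices
    }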

What we are looking for in a geometry abstraction is to allow a single value g that can by itself undergo rendering (e.g., render(g)), where the pixel, vertex, and geometry shaders can all be inferred from the composition of g. As a value, g explicitly encodes everything needed for shader generation and is formed through pure functional programming techniques that include composition and transformation. To meet the rendering requirement, g not only carries around a formula that defines vertex topology, but also the formula for each vertex’s position and the formula for each pixel’s color. These formulas can in turn depend on secondary properties that are subject to transformations applied to the geometry, or on state defined outside of the geometry; e.g., a slider’s thumb position.
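A sketch of how such a value g might look (again with hypothetical names rather than Bling’s actual API): the geometry bundles its topology with the position and color formulas, and pure transformations return new geometries with rewritten formulas.

    using System;
    using System.Collections.Generic;

    public struct Vec4 { public double X, Y, Z, W; }
    public enum Topology { PointList, LineList, TriangleList }

    // A geometry value that carries everything needed for shader generation:
    // the vertex topology plus formulas for per-vertex position and per-pixel color.
    public sealed class Geometry<TVertex>
    {
        public Topology Topology { get; init; }
        public IReadOnlyList<TVertex> Vertices { get; init; }
        public Func<TVertex, Vec4> Position { get; init; }  // a formula, not a precomputed buffer
        public Func<TVertex, Vec4> Color { get; init; }     // a formula over interpolated inputs

        // Pure transformation: a new geometry with a rewritten position formula.
        public Geometry<TVertex> MapPosition(Func<Vec4, Vec4> f) => new Geometry<TVertex>
        {
            Topology = Topology, Vertices = Vertices,
            Position = v => f(Position(v)), Color = Color
        };

        // Pure transformation over the color formula (e.g., tinting or lighting).
        public Geometry<TVertex> MapColor(Func<Vec4, Vec4> f) => new Geometry<TVertex>
        {
            Topology = Topology, Vertices = Vertices,
            Position = Position, Color = v => f(Color(v))
        };
    }

    public static class Renderer
    {
        // In Bling the formulas would be inspected to generate the shaders at
        // application startup; this stub only fixes the shape of the entry point.
        public static void Render<TVertex>(Geometry<TVertex> g) { /* generate and run shaders */ }
    }

A render call then looks like Renderer.Render(g), with no separately authored vertex or pixel shader.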

As described in the last post, we can define an abstraction for geometry that supports composition, duplication, and transformation. Ideally, rendering could then involve forming a geometry in a clean functional way through multiple compositions and transformations, and passing the resulting geometry into a render command. Transformations include not only modifying the layout of the geometry by rotating, scaling, and translating it, but also applying color and lighting to the geometry and the pixels contained within it, or whatever else is required for a complete rendering specification. Lighting could even be applied to the constituent parts of a geometry before they are composed. The various properties needed to make this possible include things like diffuse, specular, glass, and refractive materials, as well as additional non-geometry constituents such as directional, ambient, and spot lights. Essentially, the geometry would then become a mini-scene graph.
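One way such a mini-scene graph could be expressed is sketched below, again with illustrative names: leaves are concrete vertex topologies, composition joins sub-geometries into a single renderable value, and transformations such as layout or lighting attach to any sub-geometry before composition.

    using System;
    using System.Collections.Generic;

    public struct Vec3 { public double X, Y, Z; }

    // A geometry composed like a mini scene graph before a single render call.
    public abstract class GeometryNode
    {
        // Leaf: a concrete vertex topology.
        public sealed class Leaf : GeometryNode
        {
            public IReadOnlyList<Vec3> Vertices { get; init; }
        }

        // Composition: two sub-geometries rendered together as one geometry.
        public sealed class Compose : GeometryNode
        {
            public GeometryNode Left { get; init; }
            public GeometryNode Right { get; init; }
        }

        // Transformation: a layout or lighting function attached to a sub-geometry.
        public sealed class Transform : GeometryNode
        {
            public GeometryNode Child { get; init; }
            public Func<Vec3, Vec3> Layout { get; init; }  // e.g., rotate, scale, translate
        }

        public GeometryNode With(Func<Vec3, Vec3> layout) =>
            new Transform { Child = this, Layout = layout };

        public static GeometryNode operator +(GeometryNode a, GeometryNode b) =>
            new Compose { Left = a, Right = b };
    }

A slider could then be assembled as something like thumb.With(translateByThumbPosition) + track, with the whole value passed to a single render call.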

Scene graphs are common in retained graphics APIs such as WPF 3D and Java 3D. Basically, a scene graph is a graph of the elements that affect scene rendering, which then becomes the basis for what is shown on the screen. Since we are only interested in what can be efficiently rendered in a shader pipeline, we have to keep the graph nodes mostly homogeneous: duplicating and transforming a geometry in the graph is fine, but composing completely different geometries with different lighting schemes is probably not going to work in the context of one rendering call (instead, render the geometries in separate rendering calls).

The primary difference from my previous geometry abstraction, then, is that this new geometry embeds properties and transformations at every level of composition. A level in a geometric composition can omit or duplicate properties from a higher level, and a transformation (e.g., lighting or layout) should read a property from the lowest level at which it exists in the geometry the transformation is attached to; e.g., a normal property at the pixel level is preferred over a normal property at the vertex level because it is more accurate for lighting computations. The problem with this approach is that we can lose static typing: what if a property does not exist at any level? Right now I’m willing to live with dynamic checking since it will occur relatively early in Bling, when code is generated at application startup.
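A minimal sketch of that lookup rule, assuming a simple per-level property table (the names here are illustrative): a transformation resolves a property from the lowest level at which it is defined, and a property that is defined nowhere is only caught dynamically, when shader code is generated at startup.

    using System;
    using System.Collections.Generic;

    // Which level of the geometry a property is attached to.
    public enum Level { Primitive, Vertex, Pixel }

    public sealed class PropertyEnvironment
    {
        private readonly Dictionary<Level, Dictionary<string, object>> levels =
            new Dictionary<Level, Dictionary<string, object>>
            {
                [Level.Primitive] = new Dictionary<string, object>(),
                [Level.Vertex] = new Dictionary<string, object>(),
                [Level.Pixel] = new Dictionary<string, object>(),
            };

        public void Define(Level level, string name, object formula) =>
            levels[level][name] = formula;

        // Prefer the per-pixel definition, then per-vertex, then per-primitive.
        public object Resolve(string name)
        {
            foreach (var level in new[] { Level.Pixel, Level.Vertex, Level.Primitive })
                if (levels[level].TryGetValue(name, out var formula))
                    return formula;

            // No static guarantee: the failure surfaces when shader code is
            // generated at application startup, not at render time.
            throw new InvalidOperationException(
                "Property '" + name + "' is not defined at any level of the geometry.");
        }
    }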

That’s it for now. Geometry shading will fall out of transformations that cannot be resolved statically. I’ll talk about this in my next post.
