Robotic Pandas

On abstractions for building next generation user interfaces

Archive for November, 2009

Designing a GPU-oriented geometry abstraction – Part Three

Posted by mcdirmid on 24/11/2009

In this post, I want to propose a geometry abstraction along with an example of how it works. But first, I’ll clarify the goals of the abstraction:

  • Should support functional composition and transformation. This basically means that it will support operations like a + b, where a and b are geometry values and the result is a geometry value, or f(a), where f is a pure function from geometry values to geometry values.
  • A geometry value can be rendered by one call without any other companion values.
  • Very limited magic and few hard-coded properties. Only the properties relevant for rendering (vertex position, pixel color, and topology) are semi-magical, in that a render call will look for them. All other properties and all calculations, such as those that specify layout or apply lighting, are non-magical and are expressed explicitly by manipulating and transforming geometry values.

The above goals ensure that the abstraction is easy to use and that geometry-building code can be modularized using standard in-language constructs such as procedures, objects, or functions (including higher-order functions). Now, let’s introduce the abstraction via an example using a neutral syntax:
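As a rough sketch of what these goals imply, here is a minimal composable geometry value in Python. The names (`Geometry`, `map`, `translate`) are illustrative assumptions, not Bling’s actual API:

```python
# A sketch of a geometry value satisfying the three goals: composition
# via `+`, transformation via pure functions, and renderability on its
# own. Topology is one of the few "semi-magical" properties; everything
# else lives in ordinary per-vertex records.
from dataclasses import dataclass


@dataclass(frozen=True)
class Geometry:
    vertices: tuple              # per-vertex property records (dicts)
    topology: str = "triangles"  # semi-magical: a render call reads this

    def __add__(self, other):
        # composition: concatenate two geometries with the same topology
        assert self.topology == other.topology
        return Geometry(self.vertices + other.vertices, self.topology)

    def map(self, f):
        # pure per-vertex transformation, e.g. layout or lighting
        return Geometry(tuple(f(v) for v in self.vertices), self.topology)


def translate(dx, dy):
    # an ordinary geometry-to-geometry function -- no special status
    def shift(v):
        x, y = v["position"]
        return {**v, "position": (x + dx, y + dy)}
    return lambda g: g.map(shift)


tri = Geometry((
    {"position": (0, 0)},
    {"position": (1, 0)},
    {"position": (0, 1)},
))
# Both a + b and f(a) yield plain geometry values:
scene = tri + translate(2, 0)(tri)
```

Note how nothing here is magical except `topology`: `translate` is just a function, so geometry-building code composes with ordinary language constructs.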



Posted in Uncategorized | Leave a Comment »

Designing a GPU-oriented geometry abstraction – Part Two

Posted by mcdirmid on 23/11/2009

My last post described the problem of crafting an appropriate geometry abstraction for Bling. Bling previously solved the code problem for vertex and pixel shading, but lacked decent geometry input abstractions, as well as an abstraction that supported geometry shading. The last post proposed giving geometry its own abstraction, but I was a bit hesitant about including properties within the geometry, which forces a trade-off between flexibility and static safety. I think it’s time to back up a bit and describe the abstractions that are relevant to a traditional programmable rendering pipeline:
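The flexibility-versus-static-safety tension around per-vertex properties can be sketched concretely. These two styles are illustrative only, not Bling’s actual design:

```python
# Two ways to attach per-vertex properties to a geometry, illustrating
# the trade-off: dynamic property bags are flexible but fail late, while
# fixed records are checkable up front but rigid.
from dataclasses import dataclass
from typing import Any


# Option A: flexible -- any property on any vertex, checked at run time.
def make_vertex(**props: Any) -> dict:
    return dict(props)


v1 = make_vertex(position=(0, 0, 0), normal=(0, 0, 1))
# A shader reading v1["color"] fails only at run time, with a KeyError.


# Option B: statically safe -- the property set is fixed in the type.
@dataclass(frozen=True)
class LitVertex:
    position: tuple
    normal: tuple


v2 = LitVertex(position=(0, 0, 0), normal=(0, 0, 1))
# Reading v2.color is an error a type checker can catch before running,
# but adding a new property (say, texture coordinates) means declaring
# a whole new vertex type.
```

Option A risks run-time failures when a shader expects a property the geometry lacks; option B catches that statically but makes every new property a new type.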

Posted in Uncategorized | Leave a Comment »

Designing a GPU-oriented geometry abstraction – Part One

Posted by mcdirmid on 20/11/2009

One of the inputs to rendering via programmable shading on a modern graphics card is a collection of vertices, each associated with per-vertex properties used in shader computations. When programming the GPU, this collection of vertices is commonly abstracted as a vertex buffer, which is essentially just a bag of bytes. The collection of vertices describes a primitive point, line, or triangle topology, along with an optional index buffer (for triangles) that describes vertex sharing. Again, the abstractions for describing these topologies are very weak and essentially amount to flags and arrays, although we can informally refer to the vertex buffer plus its primitive topology description as a geometry. Each vertex first individually undergoes processing in a vertex shader; then (optionally) each primitive (point, line, or triangle) is processed in a geometry shader, where a new geometry can be specified with fewer or more vertices, or even a completely different topology. After vertex and geometry shading, the position of each vertex must be specified. Finally, each pixel of the rasterized primitive is processed by a pixel shader to determine its color. This is a simplification: the GPU supports other features such as instancing, where we reuse the vertex buffer multiple times in a single rendering, and geometry shader stream-out, where we save the result of vertex/geometry shading for later reuse.


Posted in Uncategorized | 2 Comments »