Robotic Pandas

On abstractions for building next generation user interfaces

Archive for the ‘Uncategorized’ Category

Designing a GPU-oriented geometry abstraction – Part Three

Posted by mcdirmid on 24/11/2009

In this post, I want to propose a geometry abstraction along with an example of how it works. But first I’ll clarify the goals of the abstraction:

  • It should support functional composition and transformation. This basically means that it will support operations like a + b, where a and b are geometry values and the result is a geometry value, or f(a), where f is a pure function from geometry value to geometry value (see the sketch after this list).
  • A geometry value can be rendered with a single call, without any companion values.
  • Very limited magic and hard-coded properties. Only the properties relevant for rendering (vertex position, pixel color, and topology) are semi-magical, in that a render call will look for them. All other properties and all calculations, such as those that specify layout or apply lighting, are non-magical and are expressed explicitly by manipulating/transforming geometry values.
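
To make the first and third goals concrete, here is a toy, CPU-only sketch of what "geometry as a composable value" could mean. Everything in it (the Geometry record, Map, Translate) is invented for illustration and is not Bling's actual abstraction:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

// Toy model of the goals above (invented names, not Bling's API): a geometry
// value is just an immutable vertex list, values compose with '+', and pure
// functions map geometry values to geometry values.
record Geometry(IReadOnlyList<Vector3> Positions)
{
    // Goal 1: functional composition -- a + b is again a geometry value.
    public static Geometry operator +(Geometry a, Geometry b) =>
        new Geometry(a.Positions.Concat(b.Positions).ToList());

    // Goal 3: layout and lighting are explicit transformations, not magic.
    public Geometry Map(Func<Vector3, Vector3> f) =>
        new Geometry(Positions.Select(f).ToList());
}

static class GeometryGoalsSketch
{
    static Geometry Translate(Geometry g, Vector3 offset) => g.Map(p => p + offset);

    static void Main()
    {
        var a = new Geometry(new[] { new Vector3(0, 0, 0), new Vector3(1, 0, 0) });
        var b = new Geometry(new[] { new Vector3(0, 1, 0) });

        // Composition and pure transformation both yield plain geometry values,
        // which a renderer could then consume with a single call (goal 2).
        Geometry scene = Translate(a + b, new Vector3(0, 0, 1));
        Console.WriteLine(scene.Positions.Count);   // 3
    }
}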

The above goals ensure that the abstraction is easy to use and that we can easily modularize geometry-building code using standard in-language constructs such as procedures, objects, or functions (including higher-order functions). Now, let’s introduce the abstraction via an example using a neutral syntax:

Read the rest of this entry »



Designing a GPU-oriented geometry abstraction – Part Two.

Posted by mcdirmid on 23/11/2009

My last post described the problem of crafting an appropriate geometry abstraction for Bling. Bling previously solved the code problem for vertex and pixel shading, but lacked decent geometry input abstractions as well as an abstraction that supported geometry shading. The last post proposed giving geometry its own abstraction, but I was a bit hesitant about including properties within the geometry, which leads to a problem with either flexibility or static safety. I think it’s time to back up a bit and describe the abstractions that are relevant to a traditional programmable rendering pipeline:


Designing a GPU-oriented geometry abstraction – Part One.

Posted by mcdirmid on 20/11/2009

One of the inputs to rendering via programmable shading on a modern graphics card is a collection of vertices associated with some per-vertex properties used in shader computations. When programming the GPU, this collection of vertices is commonly abstracted as a vertex buffer, which is essentially just a bag of bytes. The collection of vertices describes a primitive point, line, or triangle topology, along with an optional index buffer (for triangles) that describes vertex sharing. Again, the abstractions for describing these topologies are very weak and essentially amount to flags and arrays, although we can informally refer to the vertex buffer plus its primitive topology description as a geometry.

Each vertex first individually undergoes processing in a vertex shader, and then (optionally) per-primitive (point, line, or triangle) processing in a geometry shader, where a new geometry can be specified with fewer or more vertices or even a completely different topology. After vertex and geometry shading, the position of each vertex must be specified. Finally, each pixel of the rasterized primitive is processed by a pixel shader to determine its color. This is a simplification; the GPU supports other features such as instancing, where we reuse the vertex buffer multiple times in a single rendering, and geometry shader stream out, where we save the result of vertex/geometry shading for later re-use.
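
As a rough illustration of how weak these abstractions are, the data the GPU sees amounts to little more than an array of structs, an array of indices, and a topology flag. This is a hypothetical sketch of my own, not an actual Direct3D or Bling type:

using System.Numerics;

// Hypothetical sketch of the raw inputs: a vertex is just a struct of
// per-vertex properties, and a "geometry" is informally a vertex buffer plus
// a topology flag (and, for triangle topologies, an optional index buffer).
struct Vertex
{
    public Vector3 Position;   // consumed after vertex/geometry shading
    public Vector3 Normal;     // arbitrary per-vertex property
    public Vector4 Color;      // arbitrary per-vertex property
}

enum Topology { PointList, LineList, TriangleList }

class RawGeometry
{
    public Vertex[] VertexBuffer;                       // essentially a bag of bytes
    public int[] IndexBuffer = { 0, 1, 2, 2, 1, 3 };    // triangles sharing vertices 1 and 2
    public Topology Topology = Topology.TriangleList;   // a flag, nothing more
}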

Read the rest of this entry »


Swizzling in C# via Bling

Posted by mcdirmid on 30/10/2009

Swizzling is a GPU-supported operation that lets you cheaply reorganize and repeat the elements of a vector during an operation. Given its efficiency, it is very important to support swizzling in a language that targets GPUs. HLSL has special syntax where any scalar or vector can be swizzled; e.g., p.xyz takes a vector4 and returns its first three elements as a vector3, p.zyx reverses those elements, and p.xxy repeats the first element twice and then pairs the result with the second element. Through C# extension methods, we can define some common swizzle combinations (e.g., xyz and zyx), but with repeated elements, and considering vectors of size 4 (the maximum GPUs support), we'd have to define far too many extension methods to be complete. Instead, I've defined a few overloaded generic extension methods where the swizzle indices are given as type parameters; e.g., consider a swizzled quaternion transformation in C#:

// Swizzled rotation of the point p (argB) by the quaternion q = (X, Y, Z, W).
Double3Bl m = -q.XYZ().Square;               // -(x², y², z²)
m = m.Sw<Y,X,X>() + m.Sw<Z,Z,Y>();           // combines into the diagonal part of the rotation

Double3Bl n = q.W * q.XYZ();                 // w·(x, y, z)
Double3Bl r = q.Sw<X,X,Y>() * q.Sw<Y,Z,Z>(); // (xy, xz, yz)
Double3Bl t = r.Sw<X,Z,Y>() - n.Sw<Z,X,Y>(); // off-diagonal terms
Double3Bl u = n.Sw<Y,Z,X>() + r.Sw<Y,X,Z>(); // off-diagonal terms
Double3Bl p = argB;
Double3Bl v = m * p.XYZ() + t * p.YZX() + u * p.ZXY();
return 2d * v + p.XYZ();                     // rotated point = p + 2v

Sometimes we use predefined extension methods (e.g., XYZ()), and sometimes we use the Sw method, which is slightly more verbose but still reasonably concise. For type safety, we define the X, Y, Z, and W classes so that X <: Y <: Z <: W. Then, if the target is a 4-vector, the type parameter bound for each swizzle parameter is W, while if it's a 2-vector, the bound is Y, disallowing Z and W. The X, Y, Z, and W classes supplied as type arguments can then be instantiated to determine the index each one represents.
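
Here is a rough sketch of how such a scheme could be encoded in C#. It is simplified and uses invented stand-in types (Vec2/Vec3/Vec4 instead of Bling's Double*Bl types); the exact signatures are assumptions, not Bling's actual definitions:

// Hypothetical sketch, not Bling's code: X <: Y <: Z <: W, and each class
// knows the component index it stands for.
public class W { public virtual int Index => 3; }
public class Z : W { public override int Index => 2; }
public class Y : Z { public override int Index => 1; }
public class X : Y { public override int Index => 0; }

public readonly struct Vec2 { public readonly double[] E; public Vec2(params double[] e) => E = e; }
public readonly struct Vec3 { public readonly double[] E; public Vec3(params double[] e) => E = e; }
public readonly struct Vec4 { public readonly double[] E; public Vec4(params double[] e) => E = e; }

public static class SwizzleSketch
{
    // On a 4-vector any of X, Y, Z, W is legal, so the bound is W.
    public static Vec3 Sw<A, B, C>(this Vec4 v)
        where A : W, new() where B : W, new() where C : W, new()
        => new Vec3(v.E[new A().Index], v.E[new B().Index], v.E[new C().Index]);

    // On a 2-vector only X and Y are legal, so the bound is Y;
    // v.Sw<Z, X>() on a Vec2 is rejected at compile time.
    public static Vec2 Sw<A, B>(this Vec2 v)
        where A : Y, new() where B : Y, new()
        => new Vec2(v.E[new A().Index], v.E[new B().Index]);
}

The new() constraint is what lets the index classes be instantiated at runtime to recover the component index they stand for.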


Announcing Bling 3.1!

Posted by mcdirmid on 27/10/2009

I’d like to announce a new version of Bling (http://bling.codeplex.com/) with maturing experimental support for retained and programmable 3D graphics. On the one hand, we have immediate-mode programmable graphics in Direct3D, which is flexible and fast but more difficult to program. On the other hand, we have retained-mode graphics in WPF 3D, which is easier to program but less flexible and not as fast as straight Direct3D. Bling is a side project that experiments with something in between the two: the ease of programming that a retained graphics model provides, combined with the flexibility and performance that come with the ability to express custom pixel and vertex shaders. Bling is built on top of Direct3D via the Windows API Code Pack, the DLR, and WPF.

A longer description and example for those who are interested: Read the rest of this entry »


A web browser suitable for Harry Potter in WPF!

Posted by mcdirmid on 27/07/2009

Daily Prophet, eat your heart out! Here is a prototype web browser we threw together in Bling:

[Screenshot: the prototype web browser]

Read the rest of this entry »


My first DirectX 10 Shader in Bling

Posted by mcdirmid on 13/07/2009

Here is the code:

// Turn the built-in sphere geometry into a vertex buffer.
var Buffer = Geometries.Sphere.ToBuffer(100);
// Bind each vertex's color: lerp between an index-derived color and red, driven by the slider.
Buffer.ForAll = (i, v) => v.Color().Bind = slider.Value.Lerp(i.SelectColor(), Colors.Red);
// Shade each vertex by scaling its color by the Z component of its normal.
var EF = Effect.Shade(vertex => vertex.Color() * (vertex.Normal().Z));

And here is the result:

[Screenshot: the shaded sphere]

Yeah, it doesn't look like much yet, but it's a good start. I got a lot of help from reading Conal Elliott's Vertigo paper, so it looks like this has been done before, just in Haskell rather than C#. In particular, we can get a lot of mileage out of parametric surfaces (automatically computing positions and normals using automatic differentiation).
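
As a rough sketch of the parametric-surface idea, here is my own CPU-side illustration; it is not Bling's API, and it uses finite differences as a stand-in for the automatic differentiation mentioned above:

using System;
using System.Numerics;

// CPU-side sketch (not Bling's API): derive positions and normals of a
// parametric surface from its partial derivatives. Finite differences stand
// in here for the automatic differentiation Bling/Vertigo use.
static class ParametricSurfaceSketch
{
    // A unit sphere parameterised by (u, v) in [0,1] x [0,1].
    static Vector3 Sphere(float u, float v)
    {
        double theta = u * 2 * Math.PI, phi = v * Math.PI;
        return new Vector3(
            (float)(Math.Cos(theta) * Math.Sin(phi)),
            (float)Math.Cos(phi),
            (float)(Math.Sin(theta) * Math.Sin(phi)));
    }

    // Sample a position and a normal: normal = normalize(du x dv).
    static (Vector3 pos, Vector3 normal) Sample(Func<float, float, Vector3> f, float u, float v)
    {
        const float h = 1e-3f;
        Vector3 p = f(u, v);
        Vector3 du = (f(u + h, v) - p) / h;   // tangent along u
        Vector3 dv = (f(u, v + h) - p) / h;   // tangent along v
        return (p, Vector3.Normalize(Vector3.Cross(du, dv)));
    }

    static void Main()
    {
        var (pos, normal) = Sample(Sphere, 0.25f, 0.5f);
        Console.WriteLine($"position {pos}, normal {normal}");
    }
}

At (u, v) = (0.25, 0.5) this lands on the sphere's equator and the computed normal points radially outward, the same kind of per-vertex normal that the Effect.Shade expression above reads through vertex.Normal().Z.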


Bling WPF hits V1

Posted by mcdirmid on 08/02/2009

I’d like to announce a new and improved version of Bling WPF. In this version, we have redone the wrappers around WPF databinding and pixel shading for better usability, and a lot of documentation and examples have been added to the distribution and the CodePlex page. Finally, we’ve also added some experimental support for UI physics, with an example! A release for Visual Studio 2008/.NET 3.5 SP1 is available at http://www.codeplex.com/bling. For anyone unfamiliar with Bling, here are the primary features:

  • WPF databinding without IValueConverters in C#! For example, “button.CenterPosition.X = slider.Value * MainCanvas.Width” is valid C# code in Bling that sets up a databinding relationship on the button’s LeftProperty so that the button moves with the slider.
  • WPF pixel shaders in C# without HLSL code or boilerplate! A pixel shader is simply a texture-to-pixel function; e.g., “canvas.CustomEffect = (input, uv) => slider.Value.Lerp(input[uv], ColorBl.FromScRgb(new PointBl(1,1,1) - input[uv].ScRGB, input[uv].A));” is a one-line pixel shader that inverts all the colors in the canvas, interpolated with respect to the slider’s current value. No need to write HLSL code, no need to write a custom effect class: writing a pixel shader is boiled down to its core function.
  • Bling defines many WPF convenience properties; e.g., Size is defined as (Width, Height), Right is defined as Left + Width, and CenterPosition is defined as LeftTop + Size / 2. Convenience properties behave just like properties that are backed directly by dependency properties; i.e., they can undergo databinding, be used in pixel shaders, and so on (see the sketch after this list).
  • Bling code is completely compatible with conventional WPF code. Bling wrappers are stateless so you can use Bling functionality anywhere in your program regardless of architecture.
  • UI physics! Did you ever wonder what would happen if property bindings were solved by a physics engine rather than a databinding engine? Well, OK, probably not :), but the result is cool and could possibly be the future of UI toolkits. I’ll write more about this later.
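
As a rough sketch of the convenience-property idea, here is a plain C# illustration of my own. It ignores the databinding aspect and is not Bling's implementation:

// Conceptual sketch only (not Bling's code): convenience properties are just
// expressions over the underlying values, so they stay in sync with them.
class BoundsSketch
{
    public double Left, Top, Width, Height;                 // backing values

    public double Right => Left + Width;                    // Right = Left + Width
    public double Bottom => Top + Height;
    public (double W, double H) Size => (Width, Height);    // Size = (Width, Height)
    public (double X, double Y) CenterPosition =>           // CenterPosition = LeftTop + Size / 2
        (Left + Width / 2, Top + Height / 2);
}

In Bling, the point is that such derived properties remain bindable, so an expression like button.CenterPosition.X can sit on either side of a databinding relationship.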


New Bling WPF release with metaballs!

Posted by mcdirmid on 08/01/2009

No, not meatballs. I’ve done a lot of work on Bling this month, starting with a paper on the technique used to build Bling. I’ve also overhauled how pixel shader effects are expressed so that even less boilerplate is required than before. In the new release, when you want to add a signal parameter to a shader, you can simply call “Sh” on the signal and it will automatically be added to the list of the shader’s parameters.

As an example, consider this code:

Bling.Shaders.Shaders.MakeDirect((txt, input, uv) => {
  FloatSh value = 0f;
  Point3DSh rgb = Point3DSh.New(0, 0, 0);
  PointSg xyscaled = canvas.Size() /
    (canvas.Size().X + canvas.Size().Y);

  uv = uv * xyscaled.Sh(txt);
  for (int i = 0; i < points.Length; i++) {
    var p = ((points[i] - canvas.LeftTop()) / canvas.Size());
    p = p * xyscaled;
    var v = (uv - p.Sh(txt));
    v = v * v;
    var at = 1f / (v.X + v.Y);
    value += at;
    rgb += (colors[i % colors.Length].Sh().RGB * at);
  }
  var area = canvas.Width() * canvas.Height();
  var at0 = (area / 400).Sh(txt);

  return ((value > at0)).Condition(
    ColorSh.New(rgb / value, 1),
    Colors.White.Sh());
});

This code mixes signal code and shader code to create a nice metaball effect. The xyscaled variable is a point signal that scales the X and Y coordinates according to the dimensions of the container. It is computed outside of the pixel shader, but is multiplied with the pixel coordinate (uv) by converting it to a shader parameter (xyscaled.Sh(txt)). Each point used to create the metaball effect (which is formed by 8 thumbs) is scaled according to the canvas and then re-scaled using xyscaled (so the ball generated is a circle). All of these computations happen through databinding rather than in the pixel shader, saving precious GPU instructions and avoiding repeating, on every pixel, operations whose results don't change. After these computations are performed outside of the shader, the point is brought into the shader (p.Sh(txt)) so it can be combined with the pixel coordinate. Likewise, the area of the canvas is computed outside of the GPU and brought into the shader using (area / 400).Sh(txt), where it is then used as the threshold for the metaball computation.
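
For reference, the field function the shader evaluates per pixel boils down to the following (a CPU-side sketch of my own, not Bling code):

using System.Numerics;

// CPU-side sketch of the per-pixel metaball test (hypothetical helper, not
// Bling code): sum the inverse squared distances to each control point and
// compare against a threshold to decide whether the pixel is "inside".
static class MetaballSketch
{
    public static bool Inside(Vector2 uv, Vector2[] points, float threshold)
    {
        float field = 0f;
        foreach (var p in points)
        {
            Vector2 d = uv - p;
            field += 1f / (d.X * d.X + d.Y * d.Y);   // contribution falls off with distance²
        }
        return field > threshold;                    // the shader colors inside/outside differently
    }
}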

Check out the result (which is animated when you run it):

[Screenshot: the animated metaball effect]

The metaball example is the main example in the new source code/distribution, which you can get from Bling’s CodePlex page.


Shading Blobs with Bling WPF!

Posted by mcdirmid on 08/12/2008

I updated Bling WPF to version 0.6; get it at the usual place (http://www.codeplex.com/bling). Mostly, I changed the DSL to get rid of more boilerplate code. Now you can create pixel shader effects with multiple inputs and parameters in only a few lines of C# code (sorry, no XAML yet). Here is an example of a blob shader:

var effect = new EightArgLiftedShader<Point>();
effect.ShaderFunction0 = (input, uv, points) => {
  FloatSh d = 0;
  for (int i = 0; i < texture.SegmentCount; i++)
    d += uv.Distance(points[i].LftSh());
  d = 1 - (d / texture.SegmentCount);
  d = d * 2;
  var color = input[uv];
  return ColorSh.New(color.RGB * d, color.A);
};
for (int i = 0; i < texture.SegmentCount; i++)
  effect[i].Bind = polygons[j].RelativePoint(thumbs[i].CenterPosition());
polygons[j].Effect = effect;

An EightArgLiftedShader takes eight arguments of the same type (in this case Point). The parameters are packaged up as an array of ShaderValue<Point> objects (points), from which we compute the average distance to the coordinate being processed (uv). That distance is then inverted and doubled to produce a value to multiply the current color by. Outside of the shader, each point parameter is bound to the relative center point of one of the thumbs that form the skin of the polygon being shaded (basically, take the AABB of the polygon and compute where the thumb sits inside it as a percentage; see the sketch at the end of this post). The result of shading three blobs:

[Screenshot: three shaded blobs]

A bit more 3D than a gradient brush!
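
As promised above, here is a rough sketch of the "relative point" idea (a hypothetical helper of my own, not Bling's RelativePoint):

using System.Numerics;

// Hypothetical sketch, not Bling's RelativePoint: express a point as a 0..1
// fraction of the polygon's axis-aligned bounding box, so the shader sees
// its parameters in the same normalized space as the pixel coordinate uv.
static class RelativePointSketch
{
    public static Vector2 RelativeTo(Vector2 point, Vector2 aabbMin, Vector2 aabbMax)
    {
        return new Vector2(
            (point.X - aabbMin.X) / (aabbMax.X - aabbMin.X),
            (point.Y - aabbMin.Y) / (aabbMax.Y - aabbMin.Y));
    }
}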
