Minko 2.0 New Feature: Wireframe Rendering

One of the new things coming with the next version of Minko (Minko 1.0 is available here) is a new extension called “minko-effects”. This extension will feature new rendering effects, shaders and shader parts. One of them is the wireframe effect. Wireframe rendering is very useful to see what the geometry looks like, and it is widely used to debug 3D applications.

Rendering real-time 3D as wireframe is usually not a big deal: OpenGL and DirectX make it easy to switch from normal to wireframe rendering. But that only works with the fixed function pipeline, not with shaders. As every shader program defines how rendering should be performed, it is quite logical that wireframe should be handled there too. But doing wireframe in a shader is really not easy, mainly because it is a per-edge algorithm and we can only work per-vertex or per-fragment.

In this article, I will present the main challenges of implementing a single-pass wireframe method in a shader, and how Minko makes it possible to address them.


We based our work on the “Shader-Based Wireframe Drawing” article and the minimole implementation. Of course, just like any shader in Minko, it is written in ActionScript instead of AGAL/PB3D. The single-pass rendering method is very well explained in the article:

The intensity, Ip, of a fragment centered at p is computed based on the window space distances d1, d2, and d3 to the three edges of the triangle

In the single pass method, the polygon edges are drawn as an integral part of drawing the filled polygons. For each fragment of a rasterized polygon, we need to compute the line intensity, I(d), which is a function of the window space distance, d, to the boundary of the polygon which we assume is convex. The fragment color is computed as the linear combination I(d) Cl + (1−I(d)) Cf where Cl and Cf are the line and face colors, respectively.

Source: Shader-Based Wireframe Drawing
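The blend above can be sketched in a few lines. This is an illustrative Python sketch, not Minko shader code; the `2^(-2d²)` falloff is the smooth intensity function used in the paper, and the color blend is the linear combination quoted above.

```python
# Per-fragment wireframe shading, following the paper:
#   color = I(d) * Cl + (1 - I(d)) * Cf
# where d is the window-space distance to the nearest triangle edge,
# Cl is the line color and Cf is the face color.

def line_intensity(d):
    """Smooth intensity falloff I(d) = 2^(-2*d^2) from the paper."""
    return 2.0 ** (-2.0 * d * d)

def shade_fragment(d, line_color, face_color):
    """Linear blend between the line color Cl and the face color Cf."""
    i = line_intensity(d)
    return tuple(i * cl + (1.0 - i) * cf
                 for cl, cf in zip(line_color, face_color))

# On an edge (d = 0) the fragment takes the line color exactly;
# a few pixels away the intensity has already fallen off toward Cf.
print(shade_fragment(0.0, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
```

A fragment sitting exactly on an edge gets the full line color, and the exponential falloff gives the anti-aliased look described in the paper.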

The main problem here is that, for each fragment, we have to compute the screen-space distance to the edges of the triangle being drawn. But we have no direct way to compute that distance in the shader, because we only have per-vertex or per-fragment data. To solve this, we need to be able to do 2 things:

  • Store extra data in each vertex: the distance from that vertex to the opposite edge of its triangle.
  • Read that data in the fragment shader, where the rasterizer interpolates it across the triangle.

With Minko, the definition of a vertex – or vertex format – is not fixed. One can easily create new VertexFormat objects and specify existing or custom vertex components. Here is a simple example of how to:

  • Create a custom vertex component.
  • Define a custom vertex format that uses that component.
  • Create and fill a vertex stream with vertices that fit that very vertex format.
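To make the idea concrete, here is a hypothetical sketch of what "interleaving a custom component into a vertex stream" amounts to. This is not Minko's `VertexFormat` API (the function name and layout here are made up for illustration): each vertex packs a standard xyz position plus one extra float, the distance to the opposite edge.

```python
# Hypothetical sketch of a custom vertex format: interleave (x, y, z)
# positions with one extra per-vertex float (the edge distance).
# The real Minko API uses VertexFormat / vertex component objects.

def build_vertex_stream(positions, edge_distances):
    """Interleave positions with a custom 'edge distance' component."""
    assert len(positions) == len(edge_distances)
    stream = []
    for (x, y, z), d in zip(positions, edge_distances):
        stream.extend([x, y, z, d])  # layout per vertex: xyz + 1 custom float
    return stream
```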

To learn more about custom vertex formats, you can read the “Work with vertex attributes in the fragment shader” tutorial on the Hub. It explains how to create a custom vertex format and use it in the shader.

Then, you just have to decorate any mesh with the WireframeMeshModifier, which will compute the extra data for each vertex at runtime. This data is stored in a new vertex stream that uses our custom vertex format.
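The extra data in question is, for each vertex of a triangle, the distance to the opposite edge, i.e. the triangle's altitude from that vertex. This is a sketch of that computation (in 2D for clarity), not Minko's actual WireframeMeshModifier implementation:

```python
import math

# For each vertex of a triangle, compute the distance to the opposite
# edge (the triangle's altitude from that vertex). This is the kind of
# per-vertex data a modifier like WireframeMeshModifier has to produce.

def altitude(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    # |cross product| = twice the triangle area; divide by the base length.
    twice_area = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    base = math.hypot(bx - ax, by - ay)
    return twice_area / base

def triangle_edge_distances(a, b, c):
    """Per-vertex distance to the opposite edge for triangle (a, b, c)."""
    return [altitude(a, b, c), altitude(b, c, a), altitude(c, a, b)]
```

In the fragment shader, the rasterizer then interpolates these per-vertex distances, which is exactly what gives each fragment its distance to the nearest edge.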

But another issue arises: a single vertex can be shared by multiple triangles, so the distance from a vertex to the opposite edge depends on the triangle we are actually drawing. The only solution is to duplicate every vertex to make sure each one is used by a single triangle. Duplicating vertices has two consequences:

  • Our mesh will take more space in graphics memory: the size of the vertex buffer may eventually exceed the memory limit imposed by Adobe.
  • The GPU vertex cache will not be able to work properly because every vertex is unique: rendering will be slower.
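The duplication itself is a simple "de-indexing" pass. Here is a minimal sketch of the idea (illustrative Python, not Minko code): every index in the triangle list is expanded into its own vertex, so each triangle owns three unique vertices that can carry their own per-triangle edge distances.

```python
# De-index a mesh so no vertex is shared between triangles.
# Each triangle ends up with three unique vertices, at the cost of a
# larger vertex buffer and a useless vertex cache.

def deindex(vertices, indices):
    """Expand an indexed mesh into unique per-triangle vertices."""
    new_vertices = [vertices[i] for i in indices]
    new_indices = list(range(len(indices)))
    return new_vertices, new_indices

# Two triangles sharing an edge: 4 vertices become 6 after duplication.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [0, 1, 2, 0, 2, 3]
verts, idx = deindex(quad, tris)
```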

Sadly, there isn’t much we can do about that. One optimization could be to average the distances from the vertex to the opposite edges of all the triangles it belongs to. But it would only work properly when those distances are approximately the same.

Last – but not least – we will have z-buffer issues. Just like any other transparent rendering, wireframe will have a very hard time working properly if we don’t handle the z-buffer correctly. Indeed, transparent pixels will still be written to the depth buffer and occlude other parts of the rendered mesh (other meshes will render just fine, as Minko re-orders every draw call to make alpha blending work). We have two options:

  1. Use CompareMode.ALWAYS for the depth test: it works great when rendering only wireframe but it doesn’t play nice with other rendering effects.
  2. Use the “kill” instruction that is now available in Minko 2.0 to eliminate transparent pixels before they reach the z-buffer.
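Option 2 can be sketched in pseudo-shader form. AGAL's "kill" discards the fragment when its scalar operand is negative (much like GLSL's `discard`), so a killed fragment writes neither color nor depth. The threshold value below is illustrative, not Minko's:

```python
# Pseudo-shader sketch of the "kill" approach: fragments whose wire
# intensity falls below a threshold are discarded, so they never reach
# the z-buffer and cannot occlude the rest of the mesh.

KILL_THRESHOLD = 0.05  # illustrative value

def fragment_pass(d):
    """Return the wire intensity, or None when the fragment is killed."""
    intensity = 2.0 ** (-2.0 * d * d)      # same falloff as the wire shading
    if intensity - KILL_THRESHOLD < 0.0:   # 'kill' fires on a negative operand
        return None                        # fragment discarded: no color, no depth
    return intensity
```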


Here is a little demo application:

And here is the code for this very application:

In this code you can see another new feature in Minko 2.0: the TeapotMesh primitive. It will be available directly in Minko’s core framework and it is based on the original Utah Teapot data.

The Shader

Here is the simplest version of the shader, written in ActionScript. It doesn’t handle animation, and the wire/surface colors are fixed. The shader shipped with Minko 2.0 will be much more configurable.

Below is the equivalent AGAL with the allocation tables, both produced by Minko’s JIT ActionScript shader compiler. If you want to see the AGAL assembly code (and the allocation tables) generated by Minko’s JIT compiler, you can read the “How to get ActionScript shaders compilation logs” tutorial on the Hub.

What’s next?

I’ll soon post about two other effects: atmospheric light scattering and parallax mapping. Those effects are very interesting because they respectively use multi-pass/render to texture and ray tracing in the fragment shader. Here are some screenshots:

As always, you can post on Aerys Answers if you have questions or suggestions.