3D Matrices Update Optimization

4×4 matrices are the heart of any 3D engine as far as math is concerned. And in any engine, how those matrices are computed and made available through the API are two critical points regarding both performance and ease of development. Minko was quite generous regarding the second point, making it easy and simple to access and watch local-to-world (and world-to-local) matrices on any scene node. Yet, the update strategy for those matrices was naïve, to say the least.


There is a new 3D transforms API available in the dev branch that provides a 25000% boost on scene nodes’ matrix updates in the best cases, making it possible to display 25x more animated objects. You can read more about the changes on Answers.


New Minko Feature: ByteArray Streams

I’ve just pushed my work for the past few weeks on GitHub, and it’s a major update. But in the best case, most of you should not have to change a single line of code. The two major changes are the activation of frustum culling – which now works perfectly well – and the use of ByteArray objects to store vertex/index stream data.

Why are we using ByteArray instead of Vector?

As you might know, Number is the equivalent of the “double” data type and, as such, Number values are stored on 64 bits. Since 32 bits is all a GPU can handle regarding vertex data, this is a big waste of RAM. Using ByteArray makes it possible to store floats as actual 32-bit floats and avoid any memory waste. The same goes for indices, stored as uint values when they are actually 16-bit shorts.
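The difference can be sketched with the plain flash.utils.ByteArray API (this is an illustration, not Minko-specific code):

```actionscript
import flash.utils.ByteArray;
import flash.utils.Endian;

// Storing one vertex position (x, y, z) as 32-bit floats: 12 bytes total,
// where a Vector.<Number> would use 24 bytes for the same three values.
var vertices : ByteArray = new ByteArray();
vertices.endian = Endian.LITTLE_ENDIAN;
vertices.writeFloat(0.0); // x
vertices.writeFloat(1.0); // y
vertices.writeFloat(0.0); // z

// Indices can be stored as 16-bit shorts: 2 bytes each instead of
// the 4 bytes of a uint.
var indices : ByteArray = new ByteArray();
indices.endian = Endian.LITTLE_ENDIAN;
indices.writeShort(0);
indices.writeShort(1);
indices.writeShort(2);
```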

Another important optimization is the GPU upload. Using Number or uint requires the Flash player to re-interpret every value before upload: each 64-bit Number has to be turned into a 32-bit float, each 32-bit uint has to be turned into a 16-bit short. This process is slow by itself, but it also prevents the Flash player from simply memcopying the buffers into the GPU data. Thus, using ByteArray should really speed up the upload of stream data to the GPU and make it as fast as possible. This difference should be even bigger on low-end and mobile devices.

Finally, it also makes it a lot faster to load external assets because it is now possible to memcopy chunks of binary files directly into vertex/index streams. It should also prove very useful for a few exclusive – and quite honestly truly incredible – features we will add in the next few months.

What does it change for you?

If you’ve never played around with the vertex/index streams’ raw data, it should not change a single thing in your code. For example, iterators such as VertexIterator and TriangleIterator will keep working just the way they did. A good example of this is the TerrainExample, which runs just fine without a single change.

If you are relying on VertexStream.lock() or IndexStream.lock(), you will find that those methods now return a ByteArray instead of a Vector, and you should update your code accordingly. If you want to see a good example of ByteArray manipulation for streams, you can read the code of the Geometry.fillNormalsData() and Geometry.fillTangentsData() methods.
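As a rough sketch of the new pattern (the vertexStream variable and the exact lock()/unlock() signatures are assumptions):

```actionscript
// lock() now hands back the raw bytes instead of a Vector.<Number>.
var vertexData : ByteArray = vertexStream.lock();

// Iterate over (x, y, z) positions: 3 floats = 12 bytes per vertex.
vertexData.position = 0;
while (vertexData.bytesAvailable >= 12)
{
    var x : Number = vertexData.readFloat();
    var y : Number = vertexData.readFloat();
    var z : Number = vertexData.readFloat();
    // ... process the (x, y, z) position ...
}

vertexStream.unlock();
```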

What’s next?

This and some recent additions should make it much easier to keep stream data in RAM without wasting too much memory, and to restore it on device context loss. It’s not implemented yet, but it’s a good first step down this complicated memory management path.

Another possible feature would be to store stream data in compressed ByteArray objects. As LZMA compression is now available, it could save a lot of memory. The only price to pay would be having to uncompress the data before being able to read/write it.
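The idea could look like this with the plain ByteArray API (whether Minko would expose it this way is an assumption):

```actionscript
import flash.utils.ByteArray;
import flash.utils.CompressionAlgorithm;

var streamData : ByteArray = new ByteArray();
// ... fill streamData with vertex data ...

// Keep the data LZMA-compressed while it sits in RAM...
streamData.compress(CompressionAlgorithm.LZMA);

// ...and pay the decompression price only when it has to be read/written.
streamData.uncompress(CompressionAlgorithm.LZMA);
```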

Tutorial: Add pixel-perfect 3D mouse interactivity

In this tutorial, we’re going to see how you can add pixel-perfect 3D mouse interactivity. I’ve already introduced a technique called “ray casting” in another article, but it works only with very basic static shapes, and testing very complex shapes can be very painful performance-wise. It’s even more expensive when you want it to be very precise.

In this article, we will see a technique called “pixel picking”. This technique uses hardware acceleration to provide pixel-perfect mouse interactivity. It works very well for both static and animated models. The concept is very simple: we render the scene with one color per mesh. Then, we just have to read the pixel under the mouse cursor to know what mesh is “interactive”. Of course, things are more complicated in real life: this kind of stunt is pretty hard to pull off properly in a general-purpose rendering pipeline.

But Minko provides everything required out of the box! Even better, the minko-picking extension features a simple controller – the PickingController – that provides all the mouse signals we might need! This tutorial will explain how to set up the PickingController and listen for the mouse signals.

Pixel picking test application (sources)

Create and set up the PickingController

The first step is to instantiate a new PickingController:
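The original snippet boils down to a one-liner; a minimal sketch (the variable name and the picking rate value are just examples):

```actionscript
// Create a controller that runs its picking pass up to 30 times per second.
var pickingController : PickingController = new PickingController(30);
```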

The constructor takes only one argument: the “picking rate” of the controller. This value determines how many times per second the controller will try to execute the picking pass and the relevant mouse signals. The lower the picking rate, the better the performance. A picking rate of 30 should be more than enough for 99% of applications. You can also set that value at any time using the PickingController.pickingRate property:
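For example (the pickingController variable name is an assumption):

```actionscript
// Lower the picking rate at runtime, e.g. to half of a 30 FPS frame rate.
pickingController.pickingRate = 15;
```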

Setting the picking rate to half of the frame rate will work just fine for most applications and should be completely painless performance-wise. By default, the picking rate is set to 15.

Set the mouse events source

The job of the PickingController is to listen for mouse events on one (or more) specific dispatcher(s) and re-dispatch them as mouse signals. The difference between the original events and the signals executed by the PickingController is that the signals are aware of the 3D scene. To set up the dispatcher to listen to, you just have to call PickingController.bindDefaultInputs() and provide the IDispatcher object to listen to:
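Typically, the stage is the natural source of mouse events; a minimal sketch (variable names are assumptions):

```actionscript
// Listen for the mouse events dispatched by the stage.
pickingController.bindDefaultInputs(stage);
```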

Set up the PickingController on the 3D scene

In most cases, you don’t want the whole 3D scene to be mouse-interactive: sometimes it’s just a Mesh or a Group. The PickingController can be added to any Mesh/Group, so it’s easy to target precisely what is interactive and what is not. The basic use case is to add mouse interactivity to a single Mesh:
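A sketch of this use case (the myMesh variable is an assumption; addController() is how controllers are attached to scene nodes in Minko 2):

```actionscript
// Only this mesh will trigger the controller's mouse signals.
myMesh.addController(pickingController);
```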

But you also might want to listen for the mouse signals triggered by a whole sub-scene instead of a single mesh. For example, some skinned 3D assets have multiple meshes animated by a single skeleton. To do this, we can add the PickingController on a Group:
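A sketch (the group variable name is an assumption):

```actionscript
// Every Mesh descendant of this group becomes mouse-interactive.
myCharacterGroup.addController(pickingController);
```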

In the code snippet above, the PickingController will execute mouse signals for all the Mesh descendants of the target group. You don’t have to worry about the descendants of the groups targeted by a PickingController: it will listen for the Group.descendantsAdded and Group.descendantsRemoved signals to start/stop tracking any descendant Mesh added to this part of the scene.

Thus, if your whole 3D scene is interactive, you can add the PickingController directly on the Scene node:

Listen for the mouse signals

To catch 3D mouse events, you just have to add callback(s) to any of the PickingController.mouse* signals. The available signals are:

  • mouseClick, mouseDown, mouseUp: executed when the left button is clicked, pressed or released
  • mouseRightClick, mouseRightDown, mouseRightUp: executed when the right button is clicked, pressed or released
  • mouseMiddleClick, mouseMiddleDown, mouseMiddleUp: executed when the middle button is clicked, pressed or released
  • mouseDoubleClick: executed when the user double-clicks
  • mouseMove: executed when the mouse moves
  • mouseWheel: executed when the mouse wheel turns
  • mouseRollOver, mouseRollOut: executed when the mouse rolls over/out of a mesh

The following code sample will catch the left and the right click signals:
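A sketch of what it could look like (the exact callback signature is an assumption based on the mesh : Mesh argument discussed below):

```actionscript
pickingController.mouseClick.add(function(controller : PickingController,
                                          mesh       : Mesh) : void
{
    // mesh is null when no interactive 3D object is under the cursor.
    trace("left click on:", mesh ? mesh.name : "nothing");
});

pickingController.mouseRightClick.add(function(controller : PickingController,
                                               mesh       : Mesh) : void
{
    trace("right click on:", mesh ? mesh.name : "nothing");
});
```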

It would be too difficult to use the PickingController if the mouse signals were triggered only when an actual 3D object is under the cursor. For example, it would be pretty hard to select/unselect objects without listening to some actual 2D mouse events, and the code would quickly become very complicated, mixing both 2D mouse events and 3D mouse signals.

Therefore, the mouse signals are triggered whenever the corresponding mouse event is dispatched (and when the picking rate allows it, of course). As a direct consequence, the mesh : Mesh argument is null when there is no actual interactive 3D object under the mouse cursor.


You can find the complete source code of the picking example demo in the minko-examples repository on GitHub. If you have questions/suggestions regarding this tutorial, you can ask them in the comments or on Aerys Answers, the official support forum for Minko.

Minko Weekly Roundup #2

What happened on the Minko planet in the past few days? It’s time for a quick review!


Venus de Milo

This demo loads a 400k-polygon statue 3D model and displays it with normal mapping. It was built in a few minutes using Minko Studio. Thanks to the MK file format and its lossless compression, this statue is only 6MB (textures included), against 50MB for the original model.

Mercedes E-500

You can watch the making of this demo on Youtube. It shows a very old version of Minko Studio, but you can see how easy it was to do this kind of thing already!


Pimped GitHub repository

We’ve put a lot of effort into writing a better README.md. This new version provides a lot of useful links to demos, tutorials and the plugins repositories. The default branch of the repository is now 2.0b to make sure people use it instead of the old deprecated version available on master.

Make sure you check the “Getting started with Minko” tutorial to get the sources from Minko’s GitHub repository!

HDR Bloom

We’ve finally ported the HDR bloom post-processing effect used in BlackSun into Minko 2! This implementation is much cleaner and also faster. It uses the new multi-pass linear Gaussian blur implementation.

You can find the source code of this application in minko-examples.


CloneOptions

The CloneOptions are a very important addition: they let you control the way a scene tree is cloned. Cloning a scene tree is indeed often more complicated than simply duplicating each node: those nodes have data providers and/or controllers attached to them. What should we do with all of those? The CloneOptions give you all the control you need to specify which controllers should be cloned, which should be left aside, etc.

More importantly, it solves a very old issue making it impossible to clone skins/skeletons without losing animations. You can now have two meshes sharing the same skin, or clone this skin to have a completely independent instance.

Software skinning

Hardware skinning is constrained by the Stage3D API. Indeed, it is limited by the number of constants each vertex shader can handle (128 in the case of Stage3D). Dual quaternion skinning was already able to handle up to 51 bones with 8 influences per vertex, but in some cases it’s not enough…

To handle the use cases where hardware skinning is not possible, we’ve added software skinning. It will perform the vertex transformation on the CPU instead of the GPU. It’s slower but it can virtually handle an unlimited amount of bones/influences. So you should now be able to load any skinned 3D model!

This method is also very cool because we’re one step closer to generating vertex morphing from skinning at runtime. It means we are now capable of buffering the skinning data – with an unlimited number of bones/influences – and creating keyframed data that will have virtually no cost on either the CPU or the GPU! And all of this could happen transparently at runtime, giving you much better performance after a few seconds of playback.

Normals/Tangent Space Update

The Geometry.computeNormals() and Geometry.computeTangentSpace() methods have been entirely refactored to be re-entrant. The direct consequence is the ability to recompute the normals/tangents at any time! It’s very cool because it makes it easier to work with normals/tangents when you update vertex positions procedurally.

Those methods are also more intelligent and avoid creating a new VertexStream when possible. They also accept a list of triangle IDs to specify which triangles have to be updated. It’s very useful when you’ve edited only a fraction of the vertices and want to update just that part of the geometry.



  • The Geometry.changed signal is now triggered when one of the geometry’s vertex streams changes.
  • “doubleSided” QuadGeometry will now have proper normals and catch light properly.
  • Fixed Vector4::scale() not scaling the input Vector4.
  • VertexStream.lock() and IndexStream.lock() no longer assume the data hasn’t changed (because they actually don’t have a clue…).
  • VertexStream.lock() and IndexStream.lock() now take an optional hasChanged : Boolean argument to specify whether the locked data has actually changed, and avoid dispatching the “changed” signal when it’s not relevant.


Next week I’ll introduce all the amazing changes we’ve made in Minko Studio.

Tutorial: your first mobile 3D application with Minko

As you already know I’m sure, you can build Android and iOS applications with the Flash platform. And Stage3D is also available on those devices! As a matter of fact, Stage3D was especially designed to work on mobile devices. And so was Minko! We put a lot of effort into building a robust and fast engine that will work on most mobile devices. This tutorial will start where the “Your first Minko application” tutorial stopped and explain what needs to be done to get it working on mobile.

Create your mobile project

The first thing to do is – of course – to create a mobile project. With Flash Builder it is very simple: you just have to go into File > New > ActionScript Mobile Project. If you need a little reminder of how to bootstrap your project/development environment, you can read the “Getting started with Minko” tutorial. The only difference compared to creating a desktop/web application is to uncheck “BlackBerry Tablet OS” in the Mobile Settings panel: Stage3D is not yet available on BlackBerry devices. There is an issue open on the BlackBerry tracker if you want to vote for it!

Configure the application

Now that our project has been created, we just have to make sure it can use the Stage3D API. This implies two little changes in the app.xml file (this file is named after your main class; most of the time it’s Main-app.xml):

  1. renderMode has to be set to “direct”
  2. depthAndStencil has to be set to “true”

Here is a basic example of a properly setup app.xml file for AIR 3.2:
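A minimal sketch of such a descriptor (the id, filename and content values are placeholders):

```xml
<application xmlns="http://ns.adobe.com/air/application/3.2">
    <id>com.example.MyMinkoApp</id>
    <versionNumber>1.0.0</versionNumber>
    <filename>Main</filename>
    <initialWindow>
        <content>Main.swf</content>
        <!-- Both settings below are required by Stage3D: -->
        <renderMode>direct</renderMode>
        <depthAndStencil>true</depthAndStencil>
    </initialWindow>
</application>
```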

Bootstrap the Main class

That’s the beauty of the Flash platform, Stage3D and Minko: the project bootstrap aside, the code of the application is exactly the same whether you are working on a desktop, web or mobile application! Therefore, you can bootstrap your Main class by following the “Your first Minko application” tutorial!

Basically, you just have to copy/paste the MinkoApplication sample class…

… and make your Main class extend it:
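The subclassing itself is tiny; a sketch (the package is left empty here):

```actionscript
package
{
    // All the bootstrap logic lives in the MinkoApplication base class.
    public class Main extends MinkoApplication
    {
    }
}
```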

Run your mobile application for the first time

If you use Flash Builder, it will display the Debug Configurations panel when you try to run/debug your mobile application for the first time. This panel does not have anything special regarding Stage3D or Minko, but it’s still a good thing to cover the basics! There are two important fields on the panel:

  1. The “Target platform” field will specify what device you want to target for this debug session.
  2. The “Launch method” field will specify whether you want to run the application in the desktop device emulator or directly on the device. Of course, the “On device” method is better if you want a preview of the actual performance.

Display your first 3D object

Now that our project is set up and we can launch it on the device or in the emulator, we will display our first 3D object. You just have to follow the “Display your first 3D object” tutorial for your mobile project. Here is what you’ll get if you choose to run it on the desktop, emulating the iPhone 4 device:

You can also directly download the sources for this project!

If you have questions/suggestions regarding this tutorial, please post in the comments or on Aerys Answers, Minko’s official support forum.

Tutorial: Display your first 3D object with Minko

Now that we’ve seen how to bootstrap an empty Minko application, it’s time to learn how to display a simple 3D primitive.

Step 1: The Camera

In order to display anything 3D, we will need a camera. In Minko, cameras are represented by the Camera scene node class. The following code snippet creates a Camera object and adds it to the scene:
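A minimal sketch (the Camera constructor arguments, if any, are omitted; the scene variable is an assumption):

```actionscript
// Create the camera and add it to the scene.
var camera : Camera = new Camera();

scene.addChild(camera);
```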

By default, the camera is at (0, 0, 0) and looks toward the Z axis. We must remember this when we add our 3D object to the scene: we must make sure it’s far enough along the Z axis to be visible!

Step 2: The Cube

A Mesh is a 3D object that can be rendered on the screen. It is some kind of 3D equivalent of the Shape class used by Flash for 2D vector graphics. As such, it is made of two main components:

  1. a Geometry object containing the triangles that will be rendered on the screen
  2. a Material object defining how that very geometry should be rendered

Creating a Mesh involves passing those two objects to the Mesh constructor:
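A sketch of this step (the exact way a CubeGeometry instance is obtained is an assumption):

```actionscript
var material : BasicMaterial = new BasicMaterial();

// Solid blue, in RGBA format.
material.diffuseColor = 0x0000ffff;

// The Mesh combines the geometry (what to render) and
// the material (how to render it).
var cube : Mesh = new Mesh(new CubeGeometry(), material);
```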

There are many primitives available as pre-defined geometry classes in Minko: cube, sphere, cylinder, quad, torus… Those classes are in the aerys.minko.render.geometry.primitive package. You can easily swap the CubeGeometry for a SphereGeometry to create a sphere instead of a cube, for example.

The BasicMaterial is the material provided by default with Minko’s core framework. It’s a simple material that can render using a solid color or a texture. Here, we use it with a simple color. To do this, we simply set the BasicMaterial.diffuseColor property to the color we want to use, in RGBA format.

Remember: the camera is at (0, 0, 0) and – by default – so is our cube. Therefore, we have to slightly translate our cube along the Z axis to make sure it’s in the camera’s field of view:
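A sketch (the transform method name is an assumption):

```actionscript
// Push the cube 5 units away from the camera along the Z axis.
cube.transform.appendTranslation(0, 0, 5);
```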

We will introduce 3D transformations in details in the next tutorial.


To keep it simple, our main class will extend the MinkoApplication class detailed at the end of the previous tutorial. We will simply override its initializeScene() method to create our cube and our camera and add both of them to the scene:
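Putting it together, the override could look like this (method and member names are assumptions based on the previous tutorial):

```actionscript
override protected function initializeScene() : void
{
    super.initializeScene();

    var camera : Camera = new Camera();
    var cube   : Mesh   = new Mesh(new CubeGeometry(), new BasicMaterial());

    // Move the cube into the camera's field of view.
    cube.transform.appendTranslation(0, 0, 5);

    scene.addChild(camera);
    scene.addChild(cube);
}
```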

And here is what you should get:

If you have questions or suggestions, you can post in the comments or on Aerys Answers!

Minko ShaderLab Beta

Try it now!

Click on the link below to run the ShaderLab beta directly in your browser:

Online ShaderLab Beta Web Application

But please remember…

Many people want to try the ShaderLab – our graphics shader programming environment. Yet, it seems we simply can’t find the time to start an actual private beta. That’s because we are focusing on Minko Studio for the public beta release.

I really think this early (buggy) release is still of some interest: you’ll be able to test-drive the UI, train yourself in shader programming in an easy and fun way but – most of all – you can provide feedback. But let’s be honest: this is an old release and it has many flaws.

  • It works with Minko 1, which implies it uses the old shader compiler that sometimes produces sub-optimal shader code.
  • You can save and load shader source files, but you cannot publish them to test them in your live application.
  • Parts of the UI might behave weirdly.
  • You can’t share your creations “à la YouTube”.

There are other minor bugs. We know all of this, and we’re fixing it as we integrate the ShaderLab with Minko Studio. Those updates will likely be backported into the ShaderLab web app. But do not expect minor fixes every week: it will be a complete update. Thus, feel free to give us feedback, but don’t be too upset if it takes time to get fixed.


To help you getting started, here are a few samples you can load in the ShaderLab to start with:

French Flags.mks

Circular And Directional Waves v2.mks

Cel Shading.mks


I’m pretty confident this app can still be useful as it is today – especially to learn shader programming – and I hope you will like this first release despite all its flaws. If you have questions or suggestions regarding the ShaderLab, please feel free to post them on Answers.