Houdini Engine

The Houdini Engine is a compact API that extracts Houdini’s core technologies into a powerful procedural engine for film and game studios to integrate into proprietary applications. Experimental plug-ins for popular applications such as Autodesk® Maya® and the Unity® game engine are under development at SideFX Labs.

https://vimeo.com/70073569

Whoa, really awesome news. I’ve dabbled in this and have some friends who work with Houdini, really excited to see what can come of using this in games.

This is really great news: procedurally generate roads, buildings, and dungeons, and create destruction in real time.

Awesome!

Is this going to be available to indies as well?

Or just big corporations that can afford it?

Be even more awesome if the results didn’t need to be baked out…

They don’t.

The video happens to show a self-contained dynamics simulation asset, which is a bit misleading. If you weren’t running the simulation using Houdini’s dynamics and all you wanted were the broken pieces to hand off to Unity’s rigid body system, you wouldn’t have to bake anything at all. The asset remains live in the editor, always.

What about something like procedurally animated geometry done in SOPs, not using dynamics?

Anything animated in SOPs is the same as having deforming geometry; even though you might have a number of groups or what appear to be discrete objects, they’re not. They’re all part of the same object. Prim groups in SOPs are like submeshes. Things in SOPs don’t have 3D transforms like objects in a hierarchy, but the geometry object that contains the sopnet does.
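
To make the submesh analogy concrete on the Unity side, here’s a minimal sketch using only standard Unity API (no plugin code): one Mesh, two submeshes standing in for two prim groups, and a single transform on the owning GameObject.

```csharp
using UnityEngine;

// Plain Unity illustration of the analogy above: prim groups map roughly to
// submeshes. There is one Mesh and one transform on the owning GameObject;
// the "groups" are just triangle index ranges, not independently
// transformable objects.
public class SubmeshExample : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();
        mesh.vertices = new[]
        {
            new Vector3(0, 0, 0), new Vector3(1, 0, 0), new Vector3(0, 1, 0),
            new Vector3(2, 0, 0), new Vector3(3, 0, 0), new Vector3(2, 1, 0)
        };
        mesh.subMeshCount = 2;
        mesh.SetTriangles(new[] { 0, 2, 1 }, 0); // "prim group" A
        mesh.SetTriangles(new[] { 3, 5, 4 }, 1); // "prim group" B
        mesh.RecalculateNormals();

        gameObject.AddComponent<MeshFilter>().sharedMesh = mesh;
        gameObject.AddComponent<MeshRenderer>().sharedMaterials = new[]
        {
            new Material(Shader.Find("Diffuse")), // one material slot per submesh
            new Material(Shader.Find("Diffuse"))
        };
    }
}
```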

The way I’d set it up, to satisfy the requirements of your question, is to have a Houdini asset doing something like deforming rocks every frame with a random number, which would give me a different rock each frame in Houdini. In Unity, I’d have a .cs script with a for loop that sets the frame number parameter on the asset to whatever the int value is. Within that loop, I’d create a new GameObject and grab the mesh from the result of my asset, and at the end of the loop I’d have a boatload of different rocks.
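
That script would look roughly like the sketch below. To be clear, the IHoudiniAsset interface and its method names are placeholders made up for illustration, not the plugin’s actual API; only the Unity side (GameObject, MeshFilter, MeshRenderer) is standard.

```csharp
using UnityEngine;

// Hypothetical stand-in for the plugin's asset wrapper; the real class and
// method names would come from the Houdini Engine plugin itself.
public interface IHoudiniAsset
{
    void SetParameterInt(string name, int value);
    Mesh CookAndGetMesh();
}

public class RockGenerator : MonoBehaviour
{
    public int rockCount = 20;
    IHoudiniAsset rockAsset; // would be handed to us by the plugin

    void Start()
    {
        if (rockAsset == null) return; // nothing to cook without the plugin

        for (int i = 0; i < rockCount; i++)
        {
            rockAsset.SetParameterInt("frame", i);      // new rock per frame value
            Mesh rockMesh = rockAsset.CookAndGetMesh(); // grab the cooked result

            var go = new GameObject("Rock_" + i);
            go.AddComponent<MeshFilter>().sharedMesh = rockMesh;
            go.AddComponent<MeshRenderer>().sharedMaterial =
                new Material(Shader.Find("Diffuse"));
            go.transform.position = new Vector3(i * 2f, 0f, 0f); // spread them out
        }
    }
}
```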

At runtime, it’s a different story. The assets don’t cook at runtime. Well, technically they do, but only in play mode within the editor, and as it stands you’d be crazy to give that much runtime budget to what is currently an editor/level-load-time solution. Not that the libraries are porky, but the assets you build can be. We’ve tried to make sure the results of the asset are optimized while retaining all of Houdini’s functionality.

I see your location is London, but if there’s any way you can make it to SIGGRAPH, stop by the SESI booth and we can chat.

cheers,
-John.

So I was technically right: you have to bake out the results to use them? We are long-time Houdini (since PRISM) and Unity users here, but yes, based in London, so probably tricky to get to SIGGRAPH. Really hoping that over time this can grow into something fully embedded, with procedural functionality available right in the webplayer.

If by ‘bake out’ you mean ‘generate in the editor’, then yes. Not bake to disk or any other intermediary format. Runtime cooking of HDAs is technically possible (I’ve done it), but only in the editor’s play mode at the moment.

Yes, sorry, by bake in this context I mean the result is fixed at build time. It’s handy that you don’t need to write out a file; even that will be very useful for us. But being a long-time Houdini user, I know how totally awesome it would be to use the power of Houdini properly inside the actual game engine.

This is absolutely huge, IMO. As a developer and not an artist by trade, the thought of not having to create assets, and then spend all that time placing and updating them when I could be coding, is huge.

The ability to prototype ideas quickly and with more depth just increased exponentially. And here’s the key: you can show off a demo or idea to a producer without having to say things like “in the real game, there will be more trees here…” because you didn’t have time to place them :wink:

So, question: will this be incorporated into the Unity core, or will it be an asset obtainable for Unity? I’m kind of confused as to how it integrates and what this might do to Unity’s price point.

So far, all that’s been said is that it’s a plug-in. You do still need a Houdini install (such as the free Apprentice edition); the plugin connects from within Unity to the Houdini host environment. It’s not embedded into Unity the way, say, Substances are (ProceduralTextures).

Not quite.

Unity is the host environment. The plugin accesses core Houdini DLLs to make the asset sing and dance; no session or process of Houdini is necessary. We’re currently working on an Engine-only install that essentially wraps those libraries up into their own thing, the idea being that with one Houdini license you can author assets for many Engine users. The full Houdini install will of course contain the Engine libraries, so you don’t have to install both.

Ahh, OK, that’s a step up from my impressions so far - thanks for putting me straight. That does raise more questions, of course… you can’t really predict what’s in an HDA, so I would imagine you’d have to support almost every aspect of Houdini (all contexts, even Mantra output?).

So could we create HDAs that cook textures (e.g. projection mapping) as well as geometry? Obviously it would be extremely cool if we could build truly dynamic multi-res content generators… nobody really enjoys texture baking.

http://www.youtube.com/watch?v=5k9QrHVhBuk - this piqued my curiosity. Any more details?

A more complete video with everything explained will be posted at the end of the month. You can also see the demo live in Vancouver at Unite 2013:
http://bit.ly/14OvX1l

Robert Magee
Product Marketing Manager
Side Effects Software

So great to see my 2 favourite bits of software joining together like this! It’s like seeing your 2 best mates getting married :slight_smile:

Sorry I’m a little slow getting back to this thread, SIGGRAPH and Unite have been consuming my life of late.

The short answer, so far, is Yes You Can. You could create a HDA that renders out texture maps with Mantra. Certain limits apply, your mileage may vary, yada yada. Mantra does not currently support raytraced UV rendering, but Micropolygon Physically Based Rendering looks good, once you get the hang of it. Mantra is capable of putting out various passes like Normal, Specular, Bump, Diffuse, and even any custom attribute you can think of, like vector flow maps derived from interaction with a Houdini dynamics simulation. It can also save those as deep raster passes in a single .exr file. Not only that, if you decided you wanted to do your lightmapping somewhere besides Beast, Mantra could probably handle that as well. Multiple passes for multiple lights are just a fact of life for Mantra renders.

The coolest thing about Houdini’s rendering workflow is that it’s really hard to tell where geometry leaves off and shader operations begin. The underlying VEX operations can be used in both contexts, and shaders can render based on an unlimited number of geometry properties of most known types. This technology was driven by feature film VFX, where shader parameters driven by geometry properties are necessary for basic survival. Houdini removes most of the limits on arbitrary properties and shader interaction.

As for predicting HDAs, you don’t really have to predict much. All HDAs have a finite number of ways to get data in and out, so anything that both the host application and Houdini support, you can work with. All HDAs have a parameter interface, some inputs, and some outputs. Outputs can be geometry, images, or channel data (in the case of CHOPs, which is like digital audio/DSP but for animation and geometry). To interact with the asset, you need only access the parameter interface, and the results are manifested where they’re supposed to go. In the case of the Unity plugin, resulting geometry shows up in the scene view. Images are a work in progress and currently go to files on disk instead of an image buffer. We’re currently working on animation data as well.
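
If it helps, that contract can be sketched as an interface. Again, every name here is illustrative - this is just the shape of “parameters in, geometry/images/channel data out,” not the plugin’s actual API.

```csharp
using UnityEngine;

// Illustrative sketch of the HDA contract described above, not real plugin API:
// a parameter interface, some inputs, and a finite set of output types.
public interface IHoudiniDigitalAsset
{
    // Parameter interface: the only thing a host has to touch to drive the asset.
    void SetParm(string name, float value);
    void SetParm(string name, int value);
    void SetParm(string name, string value);

    // Inputs: geometry fed in from the host scene.
    void SetInputMesh(int inputIndex, Mesh mesh);

    // Outputs, manifested where they're supposed to go:
    Mesh[] GetOutputMeshes();        // geometry -> Unity scene view
    string[] GetOutputImagePaths();  // images   -> files on disk (for now)
    float[] GetOutputChannel(string channelName); // CHOP-style channel data
}
```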

But, like with all things, the devil is in the details. I can’t say with 100% confidence what will work in a production scenario and what won’t. The plugin is still embryonic, and to make it ready to breathe the same filthy air that we do, we need a lot of feedback and some solid foundations for product design.

Awesome, thanks for taking the time out to confirm all this. Really is starting to sound like you’ve managed to bottle the magic.

I was thinking more of the limitations of Unity - it’s still a 32-bit client and does have a few hard limits, e.g. vertex count per object. Still, kind of a moot point, I guess - there’ll be a learning curve between OTL creators and consumers, and best practices will develop naturally.

Can’t wait :wink: