Amplify - Virtual Texturing for Unity Pro, RELEASED! Starting at $99

Get Amplify Pro for only $350 USD

** Only for Unity Pro **
(requires full-screen post-processing effects)

Try Amplify Now

About Amplify

Amplify is a Virtual Texturing extension for Unity Pro. It allows scene/level designers to use a huge number of textures without worrying much about texture memory limits or streaming. So how does it work?

To use this plug-in and take advantage of virtual texturing, you don’t have to change the way you assemble your scenes. Amplify adapts to your workflow, and not the other way around. There is absolutely no need to do UVs any differently. While you edit your scene in the usual way, the system incrementally builds the virtual texture, so you can instantly preview the results right there in the editor.

https://vimeo.com/26037134

Official Tech Demo (HD)

This product opens up the possibility - right now - of using 3d painting tools like Deep Paint 3D, 3D-Coat, Mudbox, Mari, and others, to create immersive, unique environments that would not be feasible without this kind of technology.

In this demo, texture density ranges from 3 to 5 pixels per cm, sometimes a bit more on props, which would be considered normal for a regular game. However, in this case we rarely repeat the same texture, and the detail is mostly unique.

Pro Features

  • Virtual textures up to 512K x 512K.
  • Priority support

Standard Features

  • Virtual textures up to 256K x 256K.
  • Seamless integration with Unity Editor.
  • Real-time WYSIWYG editing.
  • Per-material diffuse+coverage, normal and glossiness textures.
  • Per-material textures larger than 4K x 4K.
  • Texture repeat / tiling.
  • Trilinear filtering.
  • Variable bit rate texture compression.
  • Support for dynamic surfaces.
  • Automated incremental builds and deployment.
  • High performance.
  • Low memory footprint.
  • Standalone Windows and Mac support.
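The texture sizes above can be put in perspective with some back-of-the-envelope math. The sketch below assumes a hypothetical 128 x 128 pixel page size (Amplify’s actual page size is not documented here) and uncompressed 32-bit texels:

```python
# Back-of-the-envelope sizing for a virtual texture, assuming an
# illustrative 128 x 128 pixel page size (Amplify's real page size
# is not stated in this thread) and 4 bytes per texel.
PAGE = 128

def vt_stats(side_px, bytes_per_texel=4):
    pages_per_side = side_px // PAGE          # pages along one axis at mip 0
    total_pages = pages_per_side ** 2         # pages in the full mip-0 level
    raw_bytes = side_px * side_px * bytes_per_texel  # raw mip-0 size
    return pages_per_side, total_pages, raw_bytes

# Standard tier: 256K x 256K virtual texture.
pages_side, pages, raw = vt_stats(256 * 1024)
print(pages_side)      # 2048 pages per side
print(pages)           # 4194304 pages at mip 0
print(raw // 2**30)    # 256 GiB of raw mip-0 texels -- why streaming is mandatory
```

Even before compression, the full mip-0 level is orders of magnitude larger than any GPU’s memory, which is exactly why only the visible pages are ever kept resident.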

We’ve spent a lot of time and effort trying to make the transition to virtual texturing as simple and transparent as possible. Below you can see a video explaining how the VT workflow fits within Unity, and another demonstrating the kind of scale we can achieve using this technology:

https://vimeo.com/17533900

Workflow Overview (HD)

A high resolution screenshot of a virtual textured Sponza scene.

Where To Buy

You may purchase securely at our website.

Contacts

Feel free to contact us if you need more information, either using our feedback page or directly using contact information available at our company page.

Looks great, though I don’t suppose there would be the possibility of a Unity standard version?

Unfortunately it’s not possible, for a purely technical reason. Amplify relies on RenderTexture (a Unity Pro feature) to do its job properly. It would be possible, however, if direct texture uploads were fast enough.

Will it work with unity ios pro or will you support it in the future?

Right now, there are two reasons holding us back from mobile:

  1. Financial resources
  2. An educated guess that the hardware won’t be fast enough for a couple more iterations.

What is the difference between this product and the substance texture from unity 3.4?

Since the purpose of procedural techniques is extremely high compression by constraining content creation to a compound of formulas and variables, it requires a special/custom editor. You can’t really paint these textures the same way you paint on Photoshop.

The purpose of VT is aimed at artistic freedom, at the cost of storage. Using VT, you can import your hand painted texture data from applications like Photoshop. Applications like Deep Paint 3D, 3D-Coat, Mudbox, Mari, and others, actually let you paint your objects in 3D. Some even split your scene into multiple reasonable sized textures (like 4K x 4K) and let you paint across materials.

What Amplify does is basically remove the common restrictions you have on your scenes. You can go far beyond what would be possible without splitting your scene into multiple other scenes and having loading screen transitions or some tricky streaming solutions, because your textures don’t fit the available hardware resources. Instead of being restricted to a few dozen 4K x 4k textures per-scene, you can now go up to hundreds or even thousands.

Additionally, the internal texture management done by VT is nearly optimal: almost no GPU memory is wasted. This means you get a lot more available memory for non-virtualized texture types, e.g. lightmaps.
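A rough way to see the memory win is to compare conventional residency (every scene texture fully resident) against a fixed VT page cache. The numbers below are illustrative assumptions, not Amplify internals:

```python
# Compare conventional texture residency vs a fixed-size VT page cache.
# All sizes are illustrative assumptions, not Amplify's actual internals.
def conventional_mb(num_textures, side=4096, bytes_per_texel=4):
    # Every unique texture is fully resident in GPU memory.
    return num_textures * side * side * bytes_per_texel / 2**20

def vt_cache_mb(cache_pages=4096, page=128, bytes_per_texel=4):
    # VT keeps only a fixed pool of resident pages, independent of how
    # many source textures the scene references.
    return cache_pages * page * page * bytes_per_texel / 2**20

print(conventional_mb(500))  # 32000.0 MB for 500 unique, uncompressed 4K textures
print(vt_cache_mb())         # 256.0 MB, constant regardless of scene size
```

The key property is that the cache cost is fixed: adding more unique textures to the scene grows storage on disk, not GPU memory.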

Can you make a comparison video of a scene with 4K x 4k textures running in a Unity scene “without” this product vs. with this product? I’m curious what the “actual” performance benefits are from the standpoint of framerate.

Congrats on the release! This is awesome technology, glad to see it find its way into Unity.

Framerate performance isn’t the primary benefit of virtual textures - it might have a side benefit of reducing draw calls, but the major benefit is that your artists no longer have to worry about memory budgets or about how many or how large their textures are. It also allows much higher visual fidelity, as texture memory is spent entirely on things currently in view rather than on everything in the scene. With this tech you could put a unique 4K x 4K texture on every object in your scene while staying under the run-time memory budget.

The main point to take away from the video he posted is the quality and resolution of every texture you see. That demo is 10GB worth of textures, way more than you’d ever be able to fit in memory normally. Maybe updating the video with comparison shots of this tech vs the same scene with textures compressed down to match the amount of memory used in run-time would better illustrate the point - it’s a massive difference.

So I see you hacked into John Carmack’s computer and stole his MegaTexture source code from the id Tech 5 engine!

I AM KIDDING I AM KIDDING!! :smile:

Definitely very interested. Looking forward to the mobile version!

Redbeer, what Bael said is exactly right. I’m not even sure the demo will run, to be honest. However, we are trying nonetheless.

VT is not supposed to improve performance; on some older cards it’s even quite the opposite. Naturally, this flexibility comes at a cost: a small shader performance cost (tiny on modern GPUs), a low-resolution tile analysis pass and, for the time being, a few milliseconds of CPU time as well. Hopefully someday UT will step in and fix texture uploads for everyone, and that will drastically improve VT CPU performance all around.
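The tile analysis pass mentioned above typically works by rendering page IDs into a low-resolution feedback buffer and collecting the unique IDs on the CPU. A minimal conceptual sketch of that collection step (this is a common VT technique, not Amplify’s actual code):

```python
# Conceptual sketch of a VT feedback/analysis step: the scene is first
# rendered at low resolution, each pixel encoding (page_x, page_y, mip);
# the CPU then deduplicates those IDs into the set of pages to stream in.
# This illustrates the general technique, not Amplify's implementation.
def analyze_feedback(buffer):
    """buffer: iterable of (page_x, page_y, mip) tuples, one per low-res
    pixel, or None for pixels not covered by a virtual-textured surface."""
    requested = set()
    for texel in buffer:
        if texel is not None:
            requested.add(texel)
    return requested

feedback = [(0, 0, 0), (0, 0, 0), (1, 0, 0), None, (0, 0, 1)]
print(sorted(analyze_feedback(feedback)))  # [(0, 0, 0), (0, 0, 1), (1, 0, 0)]
```

Because the buffer is low resolution, both the render pass and the CPU scan stay cheap, at the cost of occasionally missing a page for a frame.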

I am da Bawss, probably wouldn’t have helped much unless Carmack coded MegaTexture in Unity C# scripts. hehe

Diogo, I have a question. With Amplify, you were saying there is no need to worry much about texture memory limits or streaming. Does that mean these huge textures are continuously streamed into memory on demand? Say I were to create a huge open sandbox-type game (like “Oblivion” or the GTA series) - would the player only have to wait for the initial texture caching, with the rest streaming in seamlessly?

I am da Bawss, exactly. You only need to wait for the lowest res pages to stream (and just the visible ones), which should take only a few milliseconds. From that point on, it continuously streams new pages, on demand and in the background, depending on your camera view.
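Streaming “on demand and in the background” is commonly built on a fixed-capacity page cache with least-recently-used eviction. A toy sketch of that pattern (an assumption about the general approach, not Amplify’s code):

```python
from collections import OrderedDict

# Toy fixed-capacity page cache with LRU eviction, sketching the
# "stream new pages on demand" behavior described above. This is a
# generic illustration, not Amplify's actual streaming code.
class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> pixel data (stubbed)

    def request(self, page_id):
        if page_id in self.pages:             # hit: mark as recently used
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        if len(self.pages) >= self.capacity:  # full: evict least recent
            self.pages.popitem(last=False)
        data = f"pixels:{page_id}"            # stand-in for a disk read
        self.pages[page_id] = data
        return data

cache = PageCache(capacity=2)
cache.request("a"); cache.request("b"); cache.request("a")
cache.request("c")        # evicts "b", the least recently used page
print(list(cache.pages))  # ['a', 'c']
```

As the camera moves, newly visible pages push out pages that haven’t been touched recently, so resident memory stays bounded no matter how large the world is.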

How does lightmapping integrate with Amplify?

Amplify supports scenes that use lightmaps, but they cannot be virtualized due to Unity performance restrictions on texture uploads. By that I mean that they are stored outside the virtual texture, as regular textures.

We could very easily virtualize them, but it would degrade performance too much. At the moment we only support virtualization for diffuse+alpha/gloss and normal maps.

I don’t understand, so why can’t lightmaps be virtualized? That’s one thing I was thinking of using this for. What exactly is the reason?
I would imagine lightmaps, being among the most varied (unique) and most compressible textures, would benefit the most from this technology.

Actually, lightmaps are the only textures you can’t reasonably compress, because they are 128-bit (hence the EXR format, not PNG). Running compression on them would automatically kill the HDR data, since the compression only handles up to ARGB32.

The only reason is performance. For some reason, Unity uses way too much CPU time when uploading textures to GPU local memory. Uploading a couple of diffuse+alpha/gloss and normal pages takes around 7 ms. That’s already way too much.
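To put the 7 ms figure in perspective, some simple frame-budget arithmetic using the numbers quoted above:

```python
# Frame-budget arithmetic for the upload cost quoted above:
# ~7 ms of CPU time to upload one small batch of VT pages.
frame_ms_60fps = 1000 / 60  # ~16.7 ms total per frame at 60 fps
upload_ms = 7.0             # quoted cost for a couple of pages

print(round(frame_ms_60fps, 1))                 # 16.7
print(round(upload_ms / frame_ms_60fps * 100))  # 42 (% of the frame spent on uploads)
```

Losing over 40% of a 60 fps frame to uploads alone leaves little room for rendering, which is why adding lightmaps to the virtualized set would degrade performance too much.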

Supporting this might help:
http://feedback.unity3d.com/forums/15792-unity/suggestions/399927-graphics-texture2d-setpixels-for-compressed-textu?ref=title

However, since it’s over a year old I don’t hold much hope anymore.

Unhappily, I doubt it’s going to happen either, mostly because it wouldn’t be cross-platform (it would be a nightmare on mobile with decompress - apply - recompress).

Why not push for exposing the DX context + DirectDraw surface handle?
That should give a significantly higher gain :slight_smile:

I honestly don’t think that would be a valid reason. As I’ve argued before, in the beta thread, there is no decompression - apply - recompression necessary in this process. The whole purpose of this is to avoid any type of conversion at all on the CPU, which is what is currently happening (e.g. vec4 to 8bit RGBA). They would simply expose the internal format, in advance, and we’d be responsible for directly providing the data in the right format.

IMHO, valid reasons: resources to make it happen; hiding internal format complexity.