My current project is a 2D RPG; visually it's implemented like the original Prince of Persia. It's very resource intensive (2,000 frames at 256x128 in VRAM, with roughly 350 frames dynamically loaded/unloaded).
Will streaming work well for the dynamic loading/unloading?
There are two parts to the streaming equation for you to consider:
Core Unity data file streaming
This is obtained by publishing a streaming web player and the contents of your data file are streamed by level. This allows your first levels to load and play when their assets are local while the file continues streaming down in the background.
Streaming assets on demand
This is obtained using either the WWW class or AssetBundles (Pro-only). In this case you stream down various external assets on demand, as they're needed, using your own code. Depending on what you're doing you may need to use Destroy() to clear some objects when you're done with them.
From what I can tell you'd be looking at the latter, not the former, meaning you'd want to use the WWW class to pull down images on the fly; otherwise users will have to wait for all images on a given level to download before playback of that level can even begin. But in that case you'd have to make sure you have enough images "ahead" of playback to serve as your buffer, manually control the timing of each image's display, skipping frames when needed, etc. You'd then also need to unload them by calling Destroy() on textures when done (assuming they're never re-used).
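For example, something along these lines (a rough sketch only; the URL pattern, frame naming, buffer size and frame count are invented for illustration):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class FrameStreamer : MonoBehaviour
{
    public string baseUrl = "http://example.com/frames/"; // hypothetical location
    public int totalFrames = 350;
    public int bufferAhead = 10;          // how many frames to keep downloaded in advance
    public Renderer target;               // whatever displays the current frame

    private Queue<Texture2D> buffer = new Queue<Texture2D>();
    private int nextToDownload = 0;

    IEnumerator Start()
    {
        while (nextToDownload < totalFrames)
        {
            // Keep the buffer topped up with frames ahead of playback.
            if (buffer.Count < bufferAhead)
            {
                WWW www = new WWW(baseUrl + "frame" + nextToDownload + ".png");
                yield return www;
                buffer.Enqueue(www.texture);
                nextToDownload++;
            }
            else
            {
                yield return null;
            }
        }
    }

    // Call this from your animation code each time a new frame should be shown.
    void ShowNextFrame()
    {
        if (buffer.Count == 0) return;    // not downloaded yet: skip/hold the frame

        Texture2D old = target.material.mainTexture as Texture2D;
        target.material.mainTexture = buffer.Dequeue();
        if (old != null) Destroy(old);    // frames aren't re-used, so free the old one
    }
}
```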
In standalone mode (no web app), I need to load/unload animation frames, e.g. if the player changes a weapon, armor, etc. So I guess AssetBundles (Pro) cover that.
You don’t have to do that; maybe you can just load all your graphics and let the OS take care of it…at least with OpenGL, VRAM is virtualized. Asset bundles are intended for online games, as is streaming. At worst, you’d instantiate and destroy textures if you have to take care of it manually.
I'm not aware that you can simply ignore how much texture data you try to push onto the graphics card's VRAM. Even if it's virtualized, there are always visual lags when the card loads/unloads the needed textures.
Why would there be any hardware requirements for graphics card VRAM capacity?
Just to be sure we understand each other: all character frames = 700+ MB. Even the loading time alone rules that out in this case.
Too much swapping in and out of VRAM would reduce performance, in the same way that a CPU below minimum requirements would reduce performance, but it would still technically work. I remember Doom 3 "required" a 512MB card for running everything with uncompressed textures, but it was later discovered that a 256MB card actually ran pretty much fine that way.
256 x 128 x 4 bytes per pixel x 2000 frames = 250MB, uncompressed. If you use DXT5 compression then it’s 62.5MB, although for 2D sprites it’s probably better uncompressed. You could save all the frames as individual .png files and load them in when necessary (via the WWW class and “file://” instead of “http://”), then Destroy() them when done, although that sounds to me like it would be laggier than letting the OS take care of it. You can use Resources.Load for the textures, which would be faster, though you’d have to load the whole game into RAM first (but 250MB isn’t that much these days). I’ve never tried to push texture usage that far so I’m not really sure what the best solution is.
whole game installed on user's hardware = 700 MB
data loaded into RAM = 250 MB
data needed to load/unload FREQUENTLY = 50 MB
I’m not interested in making this a web version, just standalone.
Resources.Load is what I need. My question is: when I call Resources.Load, does it freeze the game until the data is loaded, or does it load in the background while the main game loop continues?
The game loop can’t continue until the data from the Resources.Load function is returned. You’d have to load quite a few textures at once to get a noticeable hiccup though.
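If you do need to load a big batch, one way to hide the cost is to spread the Resources.Load calls across several frames in a coroutine instead of doing them all at once (a sketch; the paths and counts are placeholders):

```csharp
using System.Collections;
using UnityEngine;

public class PreloadFrames : MonoBehaviour
{
    public Texture2D[] frames;
    public int framesPerUpdate = 5;   // tune: how many loads you tolerate per frame

    IEnumerator Start()
    {
        frames = new Texture2D[350];
        for (int i = 0; i < frames.Length; i++)
        {
            frames[i] = Resources.Load("Animations/hero_" + i, typeof(Texture2D)) as Texture2D;
            if (i % framesPerUpdate == 0)
                yield return null;    // give the main loop a frame to run
        }
    }
}
```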
Quick question about this, since it's on topic: if I have a prefab that is referenced and re-used across scenes, I would expect the prefab is streamed only once and then re-instantiated. Is that correct?
I’d like to load a lot more than textures. This video highlights the kind of thing I’m interested in learning about doing:
The first game I know of that used this kind of system was Legacy of Kain: Soul Reaver (PS1 and Dreamcast). In Metroid Prime, you get some pretty nice, large environments, but they're all encapsulated into "rooms" that are separated by doorways. The content for the adjoining rooms is loaded while you're in the current room. If you're in the current room long enough, there is no delay, but if you shoot the door to go into the next room and you've flown through the previous room quickly, you have to wait for the door to open. So, what can we do with Unity to achieve this? (I'm only interested in iPhone games at the moment, so resources are limited.)
Notes: the old rooms have to be cleared from memory. That’s probably easy enough with Destroy(). But does it matter if you tell a giant chunk of stuff to be destroyed at once? Can that cause a hiccup?
What I’m looking for is akin to downloading new resources from the web, in that it is a process that should occur over time, but instead, the data would be loaded from the local disk, into RAM.
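Something like this might be a starting point: a minimal sketch of the room idea using only Resources.Load, Instantiate and Destroy. The room names and trigger wiring are made up, and Resources.Load itself is still synchronous, so the real question is when you call it, not how.

```csharp
using System.Collections;
using UnityEngine;

public class RoomStreamer : MonoBehaviour
{
    public string nextRoomName = "Rooms/Room02";   // placeholder path under Resources
    private GameObject nextRoomPrefab;
    private GameObject currentRoom;

    IEnumerator Start()
    {
        // Begin pulling the adjoining room into RAM while the player is
        // still busy in the current room.
        yield return null;
        nextRoomPrefab = Resources.Load(nextRoomName) as GameObject;
    }

    // Called when the player opens the door (e.g. from a trigger).
    public void EnterNextRoom()
    {
        GameObject nextRoom = (GameObject)Instantiate(nextRoomPrefab);

        // Clear the old room. Destroy() on the root takes the whole hierarchy
        // with it; UnloadUnusedAssets then releases the textures/meshes that
        // nothing references any more.
        if (currentRoom != null)
        {
            Destroy(currentRoom);
            Resources.UnloadUnusedAssets();
        }
        currentRoom = nextRoom;
    }
}
```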
Did you find a solution to this problem? I have something similar, and the load times are way too long when I use Resources.Load to load my textures as animation frames. Right now I load everything at start, but I was wondering if there is a more efficient loading method for large numbers of big textures.
Thanks for getting back to me. Yes, I was looking at the asynchronous methods of the asset bundles, and that may very well be what we need. As a temporary solution I've found that if you need compressed or 16-bit textures, Resources.Load works great: I keep references to the file locations, load the texture for each frame of the animation, and unload them with Resources.UnloadUnusedAssets() after the animation has stopped playing to clean up the memory. It works extremely fast with large 16-bit textures, with very little lag.
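Roughly like this (a sketch of what I described; the frame paths, count and frame rate are placeholders):

```csharp
using System.Collections;
using UnityEngine;

public class SpriteAnim : MonoBehaviour
{
    public string framePrefix = "Hero/attack_";   // path under a Resources folder
    public int frameCount = 24;
    public float fps = 12f;

    IEnumerator PlayOnce()
    {
        for (int i = 0; i < frameCount; i++)
        {
            // Load each frame only when it is about to be shown.
            Texture2D tex = Resources.Load(framePrefix + i, typeof(Texture2D)) as Texture2D;
            GetComponent<Renderer>().material.mainTexture = tex;
            yield return new WaitForSeconds(1f / fps);
        }
        // Nothing references the old frames any more, so this frees them.
        Resources.UnloadUnusedAssets();
    }
}
```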
The other method I am testing is the WWW method: I keep the references to the textures and load and destroy them every frame of the animation to keep memory clear. This one forces 32-bit textures (I'm using PNGs), so it takes up a bit more memory and the lag is more noticeable, but you have the advantage of keeping the assets in whatever folders you choose, and the initial load is faster since they don't live in the Resources folder. This is a more flexible method, but the price is laggier loading of textures.
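Roughly (again just a sketch; the folder layout and names are examples):

```csharp
using System.Collections;
using UnityEngine;

public class DiskFrameAnim : MonoBehaviour
{
    public string folder = "/AnimationFrames/";   // relative to the data folder, hypothetical
    public int frameCount = 24;
    public float fps = 12f;

    IEnumerator PlayOnce()
    {
        Texture2D previous = null;
        for (int i = 0; i < frameCount; i++)
        {
            // Pull each PNG from disk with a file:// URL instead of the Resources folder.
            WWW www = new WWW("file://" + Application.dataPath + folder + "frame" + i + ".png");
            yield return www;

            GetComponent<Renderer>().material.mainTexture = www.texture;
            if (previous != null) Destroy(previous);   // keep only one frame in memory at a time
            previous = www.texture;

            yield return new WaitForSeconds(1f / fps);
        }
    }
}
```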
I guess they both need more testing; it just depends on your needs. The Resources.Load method allows for a wider variety of image formats, but at the cost of a bigger build. The WWW method is better if you need more flexibility in terms of where your images are located (they can even be on a server if needed), but at the cost of speed and more memory.
I imagine AssetBundles would be the best of both worlds, but I don't have Unity Pro to confirm this. If anyone has tried asynchronous loading with lots of high-res textures in real time, I'd love to hear the results as well.
Thanks for sharing the details. I'm glad you found a solution that works for you. I wonder why the WWW method would be slower; it must be the cost of decompressing the PNG/JPG formats.
I decided to skip this part and optimize at the end of the project, so I have no tests to talk about yet.
I know this is an old post, but I happened upon it and thought I'd put a link here for future generations. I did a time test with the WWW class; see the results here.