Why does tex = new Texture2D(1280,1656) consume so much memory?

Calling tex = new Texture2D(1280,1656,TextureFormat.RGB24,false) eats up like 40 mb of memory!

When you import a 1280x1656 RGB24 texture into Unity by just putting an image in the Assets folder, it only comes out to around 6 MB… so why is Unity allocating 40 MB of memory on a new Texture2D call with the same exact format?

I have tried doing tex.Apply(false,false) in order to set its read flag to false and try clearing up memory, but that seems to do nothing. I tried calling Resources.UnloadUnusedAssets() afterwards; it did nothing… does calling new Texture2D(1280,1656,TextureFormat.RGB24,false) really just eat up 40 MB, and there's nothing you can do about it?

That doesn’t make any sense though: why would new Texture2D(1280,1656,TextureFormat.RGB24,false) in code eat up 40 MB, but a 1280x1656 RGB24 image imported through the Assets folder only use up 6 MB?

Ok I did a test.

I made an empty scene with a plane, no texture on it. Build and run: it takes up 20 MB of RAM.

Then I put a 1280x1656 RGB24 texture on that plane. Build and run: it takes up 70 MB of RAM!? Unity says this texture is only 6 MB in RGB24 format, so why is it taking up 70 MB at runtime?

You should always destroy the previous texture before allocating a new one; never just replace it with another one.
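For example, a minimal sketch of that destroy-before-replace pattern (the component and method names here are just illustrative, and it assumes the texture is assigned to a renderer's material):

```csharp
using UnityEngine;

public class GalleryImage : MonoBehaviour
{
    Texture2D current;

    // Swap in a new texture, destroying the old one first so its
    // memory is released immediately instead of lingering until
    // Resources.UnloadUnusedAssets() or a level load.
    public void SetTexture(Texture2D next)
    {
        if (current != null)
            Destroy(current);
        current = next;
        GetComponent<Renderer>().material.mainTexture = current;
    }
}
```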

But aside from that: I could at best explain 20 MB, and even that only on mobile.

Ready to be surprised?

1280 × 1656 × 24 / 1,000,000 = 50 MB.

Where does Unity tell you it takes 20 MB of RAM?

It might be compression. Your graphics card does not understand any compression but DXT, and will always expand the texture regardless of what its size is in your RAM.

Also, one thing to remember: even though current video cards can handle non-power-of-two textures, the texture still gets resized to a power of two, and it is slower for the card to do so. You might want to resize your image yourself, padding it out with some alpha, to get rid of the extra calculations etc.

There are many things that increase the RAM used on your GPU. Each extra light means the geometry has to be drawn again. If you have any screen effect like blur etc., the entire scene has to be drawn again.

Since you said you used an empty scene, I don’t know what is going on. See what it says in the profiler.

Except that’s way off. 24-bit = 3 bytes. (Also, 1 MB = 1,048,576 bytes.) 1280 × 1656 × 3 / 1,048,576 = 6.06 MB.

Aside from the possible power-of-two-resize thing, another thing is that even though you’re only storing 24 bits per pixel, it’s probably using 32 bits per pixel in RAM, for the sake of efficiency. The size Unity tells you is the size on disk, not necessarily the size in RAM.

–Eric

Well I have this working where it is releasing all of the memory it’s using.

Basically I am making an image gallery.

Unity by default uses about 22 MB of RAM on load. I then open my gallery, which on open allocates a new Texture2D and loads an image into it via WWW. Unity will peak at 80-90 MB and then drop down to around 50-60 MB in usage. Then I go to the next image: it peaks at 80-90, drops down to 50-60, etc. When I close my gallery it goes back down to 22 MB. So the memory is being cleared properly; I just did not imagine that loading a 1280x1656 jpg would eat up 30-40 MB of memory. I could understand the WWW call taking up 40 MB, but for it to remain at 40?!

And I don’t think there is anything to do about it, because new Texture2D(1280,1656,TextureFormat.RGB24,false) is what eats up the 40 MB, not necessarily the WWW call.
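One thing that might at least flatten the peak, as a hedged sketch: WWW.LoadImageIntoTexture decodes into an existing Texture2D instead of handing you a freshly allocated one per image (the class name and dimensions below are assumptions for illustration):

```csharp
using System.Collections;
using UnityEngine;

public class GalleryLoader : MonoBehaviour
{
    Texture2D tex;

    IEnumerator Load(string url)
    {
        WWW www = new WWW(url);
        yield return www;

        // Allocate once, then reuse the same texture for every image.
        if (tex == null)
            tex = new Texture2D(1280, 1656, TextureFormat.RGB24, false);
        www.LoadImageIntoTexture(tex);
        www.Dispose(); // release the download buffer as soon as possible
    }
}
```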

Textures loaded from outside have an overhead of 300%+; that’s normal and known.

I want to try to make a jpg reader, sort of how the Berkelium or Awesomium plugin works: it creates a single texture beforehand and then streams the data changes into it with SetPixels(). Only thing is I need to get my JPG into a 2D array. Would anyone by chance know a way to get a JPG into a 2D array?

Do you know why?

I am really curious to know what exactly is happening internally to spike RAM usage so much; if done properly, just decoding a jpg cannot possibly take that much.

I haven’t tried myself, but I would guess System.Drawing.Bitmap is a good starting point.

There isn’t any System.Drawing in Unity. Do you mean an array of pixels? That’s what GetPixels does. It’s 1D rather than 2D, but that doesn’t really matter.

–Eric

To use Texture2D.GetPixels I’d have to use WWW, which I want to see if I can work around to have a lighter way of loading in jpgs. I’m wondering if there is some other library that could stream a jpg into an array of pixels.

Can I just take the System.Drawing library out of Mono and link to it from within Unity? …that wouldn’t work on iOS though, would it?

But even with that, I guess it’s still not possible to get around new Texture2D calls eating no less than 20 MB… unless I wrote the jpg directly to the framebuffer with the GL class??

Maybe this is just hopeless…

It seems somewhat silly to me that it’s not possible to display a 200 KB jpg in Unity without it eating 40 MB of RAM, or 60 MB at the spike.

Couple of thoughts to try and explain the memory usage you are seeing.

Firstly, as stated, your current jpgs are not ‘power-of-two’ (POT), which means internally Unity will create two versions, both POT in size. One simply copies the source image in at a 1:1 ratio, leaving empty space where the source image ends. The other downsamples to the next nearest POT, so the image is scaled and detail is lost. At least I think it downsamples; I’ve never really looked into it, and can see arguments for both methods: downsample to use less memory, upsample to keep detail, though it is still going to be blurred slightly.

So your 1280x1656 image becomes 2048x2048 and 1024x1024, which at 24-bit come to 12 MB and 3 MB respectively. So that’s 15 MB, or 20 MB if Unity stores them as 32-bit.
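The arithmetic above, as a quick sketch (Mathf.NextPowerOfTwo is a stock Unity helper; the 3-bytes-per-pixel figure assumes RGB24 with no mipmaps, and the halved dimensions for the downsampled copy are an assumption):

```csharp
using UnityEngine;

public static class PotEstimate
{
    // Size in MB of a 24-bit texture at the given dimensions.
    static float MegaBytes(int w, int h) { return w * h * 3 / 1048576f; }

    public static void Log()
    {
        int potW = Mathf.NextPowerOfTwo(1280); // 2048
        int potH = Mathf.NextPowerOfTwo(1656); // 2048
        Debug.Log(MegaBytes(potW, potH));         // upsampled copy: 12 MB
        Debug.Log(MegaBytes(potW / 2, potH / 2)); // downsampled copy: 3 MB
    }
}
```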

Another copy may be kept if the default for downloading an image via WWW is to mark it as ‘readable’. So depending on what Unity keeps in the case of a non-POT jpg, that could be an additional 6-20 MB.

So even at the lowest estimate your 6 MB of image data could be taking up as much as 21 MB, or in the worst case 40 MB. Though just because the worst case here looks the same as what you are seeing, I don’t think we can assume it’s correct. There are likely to be other factors that will increase memory usage beyond what I’ve mentioned here, for example mipmapping, if Unity keeps the mips on the CPU, or WWW operations being cached in case you ask for the same image again.

It should then be possible to reduce memory requirements but will require some effort.

Resize all jpgs to 2048x2048, but do not rescale the image; simply copy it into the 2048x2048 image. You’ll have a border around the image, which you’ll have to remove through a material, a shader, or UV mapping on a plane, for example. This avoids Unity making two copies, in theory saving the memory from creating the 1024x1024 version.

Set the texture to use point sampling to remove mipmaps - though I’m unsure if that will really save CPU memory, since I can’t think of a reason for Unity to keep mips around once uploaded to the GPU.

Try using Apply(false, true) to force Unity to ditch the texture data once uploaded to the GPU (you may have to get/set a pixel first).
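The first and last suggestions above could be combined roughly like this (a sketch, assuming the source texture is readable; the border crop is done with Material.mainTextureScale rather than a custom shader, and the class name is illustrative):

```csharp
using UnityEngine;

public class PotWrapper : MonoBehaviour
{
    public void Show(Texture2D source, Renderer target)
    {
        // Copy the non-POT source into the corner of a 2048x2048 texture.
        Texture2D pot = new Texture2D(2048, 2048, TextureFormat.RGB24, false);
        pot.SetPixels(0, 0, source.width, source.height, source.GetPixels());

        // false = no mipmaps, true = discard the CPU-side copy after upload.
        pot.Apply(false, true);

        Material mat = target.material;
        mat.mainTexture = pot;
        // Hide the empty border by only sampling the used region.
        mat.mainTextureScale = new Vector2(source.width / 2048f,
                                           source.height / 2048f);
    }
}
```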

No, you can use Texture2D.LoadImage to load an image from a byte array.

The file size of the jpg is irrelevant, though, since it always has to be uncompressed in RAM regardless of whether you’re using Unity. 40MB does seem too high, but it’s always going to be far more than 200KB no matter what.
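A minimal sketch of that, assuming the jpg bytes have already been saved to disk (the file path is illustrative):

```csharp
using System.IO;
using UnityEngine;

public class LoadFromBytes : MonoBehaviour
{
    void Start()
    {
        byte[] jpgBytes = File.ReadAllBytes("photo.jpg"); // illustrative path
        Texture2D tex = new Texture2D(2, 2); // LoadImage resizes this
        tex.LoadImage(jpgBytes);             // decodes jpg (or png) data
        GetComponent<Renderer>().material.mainTexture = tex;
    }
}
```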

–Eric

Oh. Right, thank you.

I just realized the Texture2D.LoadImage won’t work how I want.

What I’m looking to do is allocate a new Texture2D at a size of 1024x1024 and then stream a jpg into it. I don’t know if this is even possible though… does a jpg have to be loaded into memory at its full resolution to be decoded properly? Or can you somehow get a stream of pixel colors out of it, so that you could decode pixel A (which for one pixel should take minimal memory and cycles) and then write that pixel to the Texture2D through SetPixels?

new Texture2D(1024,1024) seems to only use 7-10 MB of RAM, so if there were some way of writing jpg data into it without having to load the fully decompressed jpg into memory all at once, this would be a low-RAM solution to viewing jpg files of any size.

Edit: I was actually just reading on the wiki that jpegs are compressed in 8x8 pixel blocks… so I imagine it has to be possible to grab one 8x8 block at a time from a file. Does anyone know of a library that might aid me in doing this?

This might be worth trying, but the problem isn’t the size of the uncompressed jpg (that should be around 6 MB); it’s that Unity may have several copies or cached copies around - see my previous post above.

I suspect you might have to take the initial loading memory hit regardless; it depends on whether you can avoid Unity taking control of the texture and doing its thing. After that you should be able to reduce the memory overhead, for example by resizing the texture to be POT, or by copying the jpg texture into a larger POT texture.

Had a quick search online, and whilst there seems to be plenty of source code for jpg encoders, there is little to none for decoders.