I have a very large PNG file (35,000 x 18,000 : 92 megs compressed, 2.5 gigs uncompressed) and would like to show a screen’s worth of pixels (up to 4k res).
Currently I break the image up into square sprites (in asset bundles) and show/cull the parts the moving camera should be displaying (like Google Maps).
However, a friend suggested I read the byte[] data from a FileStream and update a texture accordingly. This way, if the camera moves one pixel to the left, I only need to grab the new column of pixels and update the array.
I’d obviously like to do this without keeping anything but the visible pixels in memory.
A dream function would be:
Color32 GetPixelFromPNGStream(int x, int y, FileStream imageFileStream);
…which would pluck the correct 4 bytes from the correct PNG chunk.
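Just to make the intent concrete, I’d imagine calling it roughly like this whenever the view moves (purely illustrative - viewOriginX/Y, visibleWidth/Height and imagePath are placeholders for however the camera maps onto source pixels):

// Illustrative sketch only (assumes using System.IO; using UnityEngine;).
// Fill a screen-sized texture by asking the dream function for every visible pixel.
Texture2D screenTex = new Texture2D(visibleWidth, visibleHeight, TextureFormat.RGBA32, false);
Color32[] buffer = new Color32[visibleWidth * visibleHeight];

using (FileStream fs = File.OpenRead(imagePath))
{
    for (int y = 0; y < visibleHeight; y++)
        for (int x = 0; x < visibleWidth; x++)
            buffer[y * visibleWidth + x] =
                GetPixelFromPNGStream(viewOriginX + x, viewOriginY + y, fs);
}

screenTex.SetPixels32(buffer);
screenTex.Apply(false); // push the updated pixels to the GPU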
A few questions:
After looking at the PNG format, it looks like getting the pixel data out of the chunks is a real fiddle and might require the entire PNG to be decoded into another format in code (which would cause memory issues when the uncompressed PNG is 2.5 gigs!) - has this been solved by any 3rd-party libraries? Is it a nightmare to do myself?
If my “GetPixelFromPNGStream” is achievable, does anyone have suggestions for the most performance-friendly way to update an array of pixels in a texture (or perhaps via GL)?
SparseTextures appear to be DX12 only (and high-end GL, which rules out most Macs), and Amplify seems a bit overkill for what I’m looking to do. Perhaps this is incredibly hard and that’s why it’s so expensive!
Ah yes, I’m doing something a little bit similar to TMS right now with the square sprite chunks - formalising the structure to TMS might be a good idea if I don’t go down the byte[] streaming route.
Sparse textures are also just based on tiles, so I would keep doing what you are doing now and work with smaller tiles.
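The culling side of that is just a bit of arithmetic. A rough sketch, assuming a GameObject per tile and whatever tileSize / tileCountX / tileCountY bookkeeping you already keep:

// Sketch, assuming fields: int tileSize, tileCountX, tileCountY; GameObject[,] tiles.
// Given the camera's view rectangle in source-image pixels, enable only the tiles
// that overlap it (or swap this for loading/unloading their asset bundles).
void UpdateVisibleTiles(int viewX, int viewY, int viewWidth, int viewHeight)
{
    int firstX = Mathf.Max(0, viewX / tileSize);
    int firstY = Mathf.Max(0, viewY / tileSize);
    int lastX  = Mathf.Min(tileCountX - 1, (viewX + viewWidth  - 1) / tileSize);
    int lastY  = Mathf.Min(tileCountY - 1, (viewY + viewHeight - 1) / tileSize);

    for (int ty = 0; ty < tileCountY; ty++)
        for (int tx = 0; tx < tileCountX; tx++)
            tiles[tx, ty].SetActive(tx >= firstX && tx <= lastX &&
                                    ty >= firstY && ty <= lastY);
}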
The idea of partially reading the PNG is nice, but the compression ruins it. PNG’s pixel data is DEFLATE (LZ77) compressed, so any byte might be a back-reference to any byte in the previous 32 KB, and every byte in that window might in turn be based on the 32 KB before it. Besides that, there is really no exact way to know where a given pixel is stored in a PNG file, so you’d have no idea where to start reading.
To top things off, PNG runs each scanline through a predictive filter based on up to three neighbouring pixels (left, above and above-left - which, again, might all point back to any byte in the 32 KB before them). And the filter type is allowed to change from one scanline to the next.
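To give a feel for that dependency chain, here’s the Paeth predictor from the PNG spec, just transcribed into C#: reconstructing any byte needs the already-reconstructed bytes to its left, above and above-left, and those in turn needed everything before them.

// Paeth predictor as defined in the PNG spec.
// a = reconstructed byte to the left, b = byte above, c = byte above-left.
static byte PaethPredictor(byte a, byte b, byte c)
{
    int p  = a + b - c;                 // initial estimate
    int pa = System.Math.Abs(p - a);
    int pb = System.Math.Abs(p - b);
    int pc = System.Math.Abs(p - c);
    if (pa <= pb && pa <= pc) return a; // pick the closest neighbour
    if (pb <= pc) return b;
    return c;
}

// Each filtered byte is then reconstructed (mod 256) as:
// recon = filtered + PaethPredictor(left, above, aboveLeft)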
I’d challenge that friend to come up with some pseudocode
Great explanation of the PNG compression. It’s exactly what I was hitting my head against.
I guess I could use a different format? I’m guessing there’s no compressed format that’s got a “friendly” structure? Wondering whether nabbing bytes off an uncompressed bitmap could do it? (Treating the .bmp like a swap file.)
I don’t really know of a compressed format that would be much easier. Most lossless formats are based on LZ77 or LZ78. JPG is different, but I could write a similar paragraph on why I wouldn’t try that. (Block structure, different block sizes for luminance and chrominance, again no way to know where to start looking, and each block’s average value is relative to the average value of the previous block.)
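For example, each block’s DC value is stored as a difference from the previous block’s, so (roughly, and purely as illustration - DecodeDcDiff and bitReader are hypothetical) recovering block n means decoding everything from the start of the scan:

// JPEG stores each block's DC coefficient as a difference from the previous block,
// so there's no way to jump straight to block n.
int dc = 0;
for (int block = 0; block <= n; block++)
    dc += DecodeDcDiff(bitReader); // hypothetical Huffman decode of one DC difference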
An uncompressed bitmap would work, but the right solution is really the tile-based one. It still lets you keep compression and keeps your data structured without needing to maintain information for each individual pixel.
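That said, if you do try the raw .bmp route, the offset maths is at least trivial. A minimal sketch, assuming an uncompressed 32-bit, bottom-up BMP in BGRA order (a real version should parse the header rather than trust those assumptions):

// Sketch only (assumes using System.IO; using UnityEngine;).
// pixelDataOffset is the little-endian uint at byte 10 of the file (bfOffBits).
static Color32 GetPixelFromBmpStream(int x, int y, FileStream fs,
                                     int width, int height, long pixelDataOffset)
{
    const int bytesPerPixel = 4;                    // 32bpp rows need no padding
    long row = height - 1 - y;                      // BMP stores rows bottom-up
    long offset = pixelDataOffset + row * width * bytesPerPixel + (long)x * bytesPerPixel;

    byte[] px = new byte[4];
    fs.Seek(offset, SeekOrigin.Begin);
    fs.Read(px, 0, 4);
    return new Color32(px[2], px[1], px[0], px[3]); // BGRA -> RGBA
}

Reading a whole visible row per Seek/Read would obviously be much kinder to the disk than a call per pixel.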