Technical Question: How textures are handled...

Hi!

It's a long shot to post this here, but maybe someone will be able to shed some light on this =)

In my company, we're making a program with Unity where thousands of photos fly around the user and you can navigate through them. It's all for a museum, so custom machines and known hardware.

And of course, we stumbled across some performance problems :wink:

Due to two facts:

  • Each quad where a photo (texture) is displayed is an independent quad: no static batching
  • Each texture is different, so no material batching…

Another fact: the textures are not necessarily power of two.

So in plain OpenGL this would be easy to make smooth: upload everything to VRAM (we have cards with plenty of VRAM), create lots of quads, and go.
But in Unity, due to the engine's complex optimisations aimed at video games, it starts to stutter, mainly, it seems, because it unloads textures from VRAM when a quad becomes invisible.

  • So is there a way to tell Unity: just don't unload this texture from VRAM, keep it there?

And while looking into this, I've stumbled across some facts that I'd be interested in understanding =)

First of all, a non-power-of-two texture occupies a lot more VRAM than its power-of-two counterpart. A texture that should occupy 1 MB of VRAM occupies 5 MB (with no mipmap generation enabled). I'm aware that textures get padded to a power of two in VRAM, but even padded, that shouldn't be that much. Does Unity make a power-of-two copy too?
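To put rough numbers on it (just my own back-of-the-envelope guess, not something I measured inside Unity):

// Hypothetical example: a 520x520 ARGB32 texture, no mipmaps.
// Raw size:    520 * 520 * 4 bytes   ~= 1.03 MB
// Padded to the next power of two:  1024 * 1024 * 4 bytes = 4 MB
int rawBytes    = 520 * 520 * 4;     // ~1,081,600 bytes
int paddedBytes = 1024 * 1024 * 4;   // 4,194,304 bytes -- already ~4x bigger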

I understand that my question doesn't concern the core audience of Unity, which is oriented toward games, but for learning purposes I'd just like to understand a bit how the engine handles all of this under the hood =)!

Thanks!

Try the built-in remote profiler to shed some light on why it stutters :wink:

Check that it is NOT readable [if you create it from script you can call Apply with a parameter to mark it as no longer readable].
Also, you can tell Unity how it should handle NPOT textures.

Also, 1 MB -> 5 MB might be pretty reasonable in some cases [imagine a 1024x1 texture padded to a square power of two - it becomes 1024x1024 ;-)], so try setting the NPOT setting to do nothing and check that non-square power-of-two textures are supported by the hardware.
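Roughly like this for a script-created texture (just a sketch; myPixels stands for whatever your loader gives you):

// Create the texture from script, fill it, then free the system-memory copy.
Texture2D tex = new Texture2D(1024, 1024, TextureFormat.ARGB32, false);
tex.SetPixels32(myPixels);   // Color32[] from your own loading code
tex.Apply(false, true);      // don't rebuild mipmaps, mark no longer readable

// For imported assets the NPOT handling is an import setting instead
// (texture importer "Non Power of 2": TextureImporterNPOTScale.None = do nothing).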

Well, as for the profiler, I used it, and the thing is that according to it, everything's fine.

My FPS is around 150-160 and never drops below 150. In the profiler I do get some spikes, but the worst is only 8 ms for a frame (which is still 125 FPS), so with VBlank (vsync) enabled, that should mean 60 FPS throughout the application.

But I can still see micro-freezes in the quads' movement. (1000 quads, textures created by script, each quad has a unique texture created for it, Apply is called with false for the mipmap update and true for makeNoLongerReadable, and the textures are power of two and square.)
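For context, the setup is roughly like this (simplified sketch; in the real code the pixels come from the photos on disk, and a plane primitive stands in for my quads here):

// 1000 independent quads, each with its own material and its own script-created
// texture, so neither static batching nor material batching can help.
for (int i = 0; i < 1000; i++)
{
    GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Plane);
    quad.transform.position = Random.insideUnitSphere * 50.0f;

    Texture2D tex = new Texture2D(1024, 1024, TextureFormat.ARGB32, false);
    // ... fill the pixels from the photo here ...
    tex.Apply(false, true);   // false = don't update mipmaps, true = not readable

    Material mat = new Material(Shader.Find("Unlit/Texture"));
    mat.mainTexture = tex;
    quad.GetComponent<Renderer>().material = mat;
}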

So can the profiler be trusted? Does it include the swap time? Or does it just start a timer at the beginning of a frame and stop it just before swapping?

EDIT:

Well well, if I compress them, the freezes seem to stop… compression alone wasn't doing anything, but combined with all the other options (not readable, etc.) it seems to have a great impact on performance!

Thanks for the advice! My question still stands about the profiler, which wasn't showing me any sign of slowdown in the render functions :stuck_out_tongue:

Double post, sorry, but a different question about textures :wink:

So, as I said, I get some micro-freezes when using uncompressed textures, and that's a blocker for my application :/

When I create 4 empty textures of 2048x2048, I get micro-freezes randomly here and there, and they don't appear in the profiler. When I compress them, I get a little freeze at compression time (since it's a blocking function), but after that, no freezes at all.
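For reference, the compression path I'm talking about looks roughly like this (just a sketch):

// Runtime compression of a script-created texture; Compress() is blocking,
// which is where the one-off freeze comes from.
Texture2D tex = new Texture2D(2048, 2048, TextureFormat.ARGB32, false);
// ... fill the pixels ...
tex.Compress(false);      // false = fast, lower-quality DXT compression
tex.Apply(false, true);   // then mark it no longer readable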

Sadly, in my case I need to load textures from outside the project, so compressing them at runtime produces a freeze during execution whenever an image gets loaded, which I don't want :confused: I've already written a DLL that reads my textures and uploads them to VRAM (OpenGL), which let me bypass a Texture2D function that was producing micro-freezes every time I loaded a texture with WWW…
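The C# side of that DLL is basically just this (the DLL name and the function name are mine, so treat them as placeholders):

using System.Runtime.InteropServices;
using UnityEngine;

public class NativeTextureLoader : MonoBehaviour
{
    // Hypothetical entry point of my own plugin: it reads the image file and
    // uploads the pixels with glTexImage2D into the given GL texture id.
    [DllImport("MyTextureLoader")]
    private static extern void UploadTextureFromFile(string path, int glTextureId);

    public Texture2D LoadInto(string path, int width, int height)
    {
        // Let Unity create (and own) the GL texture first...
        Texture2D tex = new Texture2D(width, height, TextureFormat.ARGB32, false);
        tex.Apply(false, true);   // allocate on the GPU, drop the CPU-side copy

        // ...then fill it directly from native code, bypassing Texture2D/WWW.
        UploadTextureFromFile(path, tex.GetNativeTextureID());
        return tex;
    }
}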

Is such a difference in rendering normal for only 4 textures in VRAM? I'm using something like 120 MB of my 1 GB of VRAM, and the graphics card is fairly recent (an NVIDIA, for the record). I understand that uncompressed textures are slower to render, but that much? Producing little freezes at random?

Well, it seems like the “freeze” is due to uploading textures to VRAM. The usual workaround is to draw a small rect with the texture in question while loading. The thing is, desktop graphics APIs don't let you control this, so the system-memory-to-VRAM upload usually happens on the first draw.
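Something along these lines (sketch):

// "Warm-up" trick: right after loading, draw the texture once into a tiny rect
// so the driver uploads it to VRAM now instead of stalling on the first real draw.
Texture2D justLoaded;   // set this right after the texture is created/loaded

void OnGUI()
{
    if (justLoaded != null)
    {
        GUI.DrawTexture(new Rect(0, 0, 1, 1), justLoaded);
        justLoaded = null;   // one frame is enough to force the upload
    }
}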

Thanks a lot Alexey for all the answers! =)

I still have some problems, though: for example, a scene composed only of 4 cameras (each with a quarter of the viewport) and 4 GUITextures with 2048x2048 textures has terrible performance on a GT 520… (The frame rate seems to drop past a certain window size, on 4 screens with 2 GPUs.)
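For clarity, the camera setup is just four normalized viewport rects like this (the variable names are mine):

// Each camera renders into one quarter of the window (Camera.rect is normalized).
cameraTopLeft.rect     = new Rect(0.0f, 0.5f, 0.5f, 0.5f);
cameraTopRight.rect    = new Rect(0.5f, 0.5f, 0.5f, 0.5f);
cameraBottomLeft.rect  = new Rect(0.0f, 0.0f, 0.5f, 0.5f);
cameraBottomRight.rect = new Rect(0.5f, 0.0f, 0.5f, 0.5f);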

I totally understand that there is a lot going on inside Unity for best performance in a game scene that will cost performance in more specialised applications where the raw graphics API would be faster, but having an inside view of how the engine works helps to understand and work around all that =)

Grateful for the help! =)

Hm, that sounds weird. On the other hand, multi-monitor setups are a bitch :wink:
Can you repro it on a single monitor? If so, please file a bug report and attach a repro case.

OK, I feel stupid; it seems it was a fill-rate problem… with a more powerful card, the FPS just goes through the roof…

BUT I've got another problem =D

To try to improve the frame rate, I tried using asset bundles. This lets me have precompressed textures.
But when I load my bundle with WWW, I sometimes get a HUGE spike in the profiler (up to 590 ms oO!) in Texture.AwakeFromLoad…
That's exactly what I was trying to avoid by loading my textures directly in C++ with gl* calls… I was hoping that with precompressed textures this wouldn't happen…

Why does this function have so much difficulty? My bundled texture is marked as non-readable.
Is this AwakeFromLoad from uploading to VRAM? If so, why didn't I get that much slowdown in my application when I was using my DLL that calls glTexImage2D?
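For reference, the loading code is basically the standard WWW/AssetBundle pattern for this Unity generation (simplified; the URL and the asset name here are placeholders):

// Coroutine inside a MonoBehaviour (needs using System.Collections;).
IEnumerator LoadPhotoBundle(string url)
{
    WWW www = new WWW(url);
    yield return www;

    AssetBundle bundle = www.assetBundle;
    // The Texture.AwakeFromLoad spike shows up around here in the profiler.
    Texture2D tex = bundle.Load("photo", typeof(Texture2D)) as Texture2D;

    // ... assign tex to the quad's material here ...
    bundle.Unload(false);
}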

Hi Guys!!

This is a difficult task for me… I want to modify a texture with OpenGL in Xcode and use it in Unity, but it doesn't work…
Unity texture → modify the texture with OpenGL in Xcode → use it in Unity

This is my code… can somebody help me?

Unity C#:

using UnityEngine;
using System.Runtime.InteropServices;

public class Bonjour : MonoBehaviour
{
    public GUITexture mio;
    public Texture2D m_Texture;

    void Awake()
    {
        // Create the texture and upload it once so Unity allocates a GL texture id.
        m_Texture = new Texture2D(640, 480, TextureFormat.ARGB32, false);
        m_Texture.Apply();
    }

    void OnGUI()
    {
        if (GUI.Button(new Rect(10, 80, 90, 30), "update_rgb_texture"))
        {
            Debug.Log("Bonjour.Buffer");
            // Pass the native GL texture id to the plugin so it can fill it.
            Bonjour.pluto(m_Texture.GetNativeTextureID());
            m_Texture.Apply();   // re-uploads Unity's CPU-side pixels to the GPU
            GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), m_Texture);
        }
    }

    public static void pluto(int number)
    {
        _updateTexture(number);
    }

    [DllImport("__Internal")]
    private static extern void _updateTexture(int nTexId);
}

Xcode (Objective-C, OpenGL ES 1.x):

void _updateTexture(int nTexId)
{
    // sampleBuffer is assumed to come from the AVFoundation capture callback.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    int bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
    int bufferWidth  = CVPixelBufferGetWidth(pixelBuffer);

    glClear(GL_COLOR_BUFFER_BIT);

    // Bind the texture id Unity passed in (GetNativeTextureID on the C# side)
    // and upload the camera frame into it.
    glBindTexture(GL_TEXTURE_2D, nTexId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer)); // my image

    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        NSLog(@"Error uploading texture. glError: 0x%04X", err);

    // Draw a textured quad with the fixed-function pipeline
    // (clip-space coordinates, assuming identity matrices).
    GLfloat quadVertices[8] = {  1.0f, -1.0f,
                                 1.0f,  1.0f,
                                -1.0f,  1.0f,
                                -1.0f, -1.0f };
    GLfloat textureCoord[8] = {  1.0f, 0.0f,
                                 1.0f, 1.0f,
                                 0.0f, 1.0f,
                                 0.0f, 0.0f };

    glEnable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, quadVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoord);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
iOS device

Thanks Alessandro