The last button is supposed to say “Restart”. But the scene started with the button’s GameObject disabled, and when I enabled it, the text came out like that. If the GameObject starts enabled when the scene loads, this bug doesn’t happen.
I even tried using Unity - Scripting API: WaitForEndOfFrame, but it doesn’t work.
@jukka_j Is there a workaround to this? This bug is really annoying. I can’t bind any texture before all UI is enabled, otherwise it will mess up the text.
A workaround would be to set the font asset to Unicode instead of Dynamic. But its final size might end up larger.
@De-Panther Where do I change that? There is no Dynamic option.
Oh, it’s for TextMesh Pro… for the normal Text component there’s still no workaround. Actually, it works even if the font is Dynamic. Thanks for the info!
Not in TMPro, in the font asset file’s inspector - the Character field.
I could only edit the font after installing TMP though - the LiberationSans font. The default Arial font that comes with the normal Text component is not editable. But you’re right, I can just use a font asset with the Character field set to Unicode together with the normal Text component, and it works. Thanks!
@De-Panther It’s not a bug with the Text component, and it’s not even the Character field. This bug only happens with the Arial font from Unity’s default resources. It may happen with other fonts as well, but using the LiberationSans font that comes with TMP works with the field set to either Dynamic or Unicode. Huh.
TMP uses other techniques for generating dynamic characters…
Not using TMP, just the regular Text component with the LiberationSans font (that comes with TMP).
It’s on the issue tracker. Apparently it’s a Windows 8.1-only issue.
@jukka_j Please take a look at this one!
Sorry for the delay on this one. Can you expand a bit on the title vs the content of the example? The post title mentions GLctx.bindTexture being used, though the post instead refers to starting a scene with a GameObject disabled?
If you are manually calling GLctx.bindTexture(), I would recommend doing that in a stack-guarded fashion: before calling GLctx.bindTexture(), call GLctx.getParameter(GLctx.TEXTURE_BINDING_2D), and after operating on the texture manually, restore the old binding. I.e.
var oldTexture = GLctx.getParameter(GLctx.TEXTURE_BINDING_2D);
GLctx.bindTexture(GLctx.TEXTURE_2D, myTexture);
// ... do my operations on the texture
GLctx.bindTexture(GLctx.TEXTURE_2D, oldTexture);
This is because the Unity GL renderer is not designed to expect that other actors might modify the active GL state while the renderer is operating.
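To illustrate why, here is a hypothetical sketch of such a GL state cache (not Unity’s actual code, just the general idea):
// Hypothetical sketch of GL state caching, roughly what a renderer might do internally:
var cachedTexture2D = null; // the renderer's idea of the currently bound 2D texture
function cachedBindTexture2D(tex) {
  if (tex !== cachedTexture2D) { // skip the GL call when it looks redundant
    GLctx.bindTexture(GLctx.TEXTURE_2D, tex);
    cachedTexture2D = tex;
  }
}
// If user code calls GLctx.bindTexture(GLctx.TEXTURE_2D, myTexture) directly,
// the real GL binding changes but cachedTexture2D does not, so a later
// cachedBindTexture2D() call can be skipped even though a rebind is actually needed.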
Or maybe I misunderstood the issue, since the post body did not refer to GLctx.bindTexture(), but to opening a scene with a disabled GameObject?
Calling bindTexture makes the content not appear correctly if a UI Text starts disabled and is then enabled. It only happens on Windows 8.1; it’s on the issue tracker.
This sadly didn’t work.
That is very peculiar. If the above guarding does not work, it does suggest something amiss in the browser’s WebGL implementation. Which browsers does it occur in on Windows 8.1?
I mean building the project on Windows 8.1. It happens in Chrome on desktop, and also on Android.
Ah, now I understand.
Can you file a bug report? I think we should now have somewhat more free cycles to look into this, since our devs are clear of a previous milestone deadline.
@jukka_j It’s already on the issue tracker.
Thanks! I was able to dig that up now, and by uncommenting the test case in there, I was able to reproduce the issue.
When I change the code to this, it works:
video.updateTexture = (function(timestamp) {
  var prevTex = GLctx.getParameter(GLctx.TEXTURE_BINDING_2D); // (*) Added
  GLctx.bindTexture(GLctx.TEXTURE_2D, GL.textures[video.texturePtr]);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, true);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_S, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_T, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_MIN_FILTER, GLctx.LINEAR);
  GLctx.texSubImage2D(GLctx.TEXTURE_2D, 0, 0, 0, GLctx.RGBA, GLctx.UNSIGNED_BYTE, video);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, false);
  GLctx.bindTexture(GLctx.TEXTURE_2D, prevTex); // (*) Added
  video.requestId = requestAnimationFrame(video.updateTexture);
});
video.addEventListener("loadeddata", (function() {
  GLctx.deleteTexture(GL.textures[video.texturePtr]);
  GL.textures[video.texturePtr] = GLctx.createTexture();
  GL.textures[video.texturePtr].name = video.texturePtr;
  var prevTex = GLctx.getParameter(GLctx.TEXTURE_BINDING_2D); // (*) Added
  GLctx.bindTexture(GLctx.TEXTURE_2D, GL.textures[video.texturePtr]);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, true);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_S, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_T, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_MIN_FILTER, GLctx.LINEAR);
  GLctx.texImage2D(GLctx.TEXTURE_2D, 0, GLctx.RGBA, GLctx.RGBA, GLctx.UNSIGNED_BYTE, video);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, false);
  GLctx.bindTexture(GLctx.TEXTURE_2D, prevTex); // (*) Added
}));
I.e. adding “scoped” guards for the 2D texture binding state preserves the GL state caching that the Unity engine does.
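If you do this in several places, the same pattern can be wrapped in a small helper (withTexture2DBinding is just a hypothetical name for illustration, not part of Unity’s generated code):
// Hypothetical helper: run fn with 'texture' bound to TEXTURE_2D, then restore
// whatever was bound before, so the engine's cached GL state stays valid.
function withTexture2DBinding(texture, fn) {
  var prevTex = GLctx.getParameter(GLctx.TEXTURE_BINDING_2D);
  GLctx.bindTexture(GLctx.TEXTURE_2D, texture);
  try {
    fn();
  } finally {
    GLctx.bindTexture(GLctx.TEXTURE_2D, prevTex); // restore even if fn throws
  }
}

// Example usage for the per-frame video upload above:
withTexture2DBinding(GL.textures[video.texturePtr], function() {
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, true);
  GLctx.texSubImage2D(GLctx.TEXTURE_2D, 0, 0, 0, GLctx.RGBA, GLctx.UNSIGNED_BYTE, video);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, false);
});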
Another variant that is sometimes seen is to instead scope guard the active texture unit:
video.updateTexture = (function(timestamp) {
  var prevActiveTexture = GLctx.getParameter(GLctx.ACTIVE_TEXTURE); // (*) Added
  GLctx.activeTexture(GLctx.TEXTURE15); // (*) Added
  GLctx.bindTexture(GLctx.TEXTURE_2D, GL.textures[video.texturePtr]);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, true);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_S, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_T, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_MIN_FILTER, GLctx.LINEAR);
  GLctx.texSubImage2D(GLctx.TEXTURE_2D, 0, 0, 0, GLctx.RGBA, GLctx.UNSIGNED_BYTE, video);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, false);
  GLctx.activeTexture(prevActiveTexture); // (*) Added
  video.requestId = requestAnimationFrame(video.updateTexture);
});
video.addEventListener("loadeddata", (function() {
  GLctx.deleteTexture(GL.textures[video.texturePtr]);
  GL.textures[video.texturePtr] = GLctx.createTexture();
  GL.textures[video.texturePtr].name = video.texturePtr;
  var prevActiveTexture = GLctx.getParameter(GLctx.ACTIVE_TEXTURE); // (*) Added
  GLctx.activeTexture(GLctx.TEXTURE15); // (*) Added
  GLctx.bindTexture(GLctx.TEXTURE_2D, GL.textures[video.texturePtr]);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, true);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_S, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_WRAP_T, GLctx.CLAMP_TO_EDGE);
  GLctx.texParameteri(GLctx.TEXTURE_2D, GLctx.TEXTURE_MIN_FILTER, GLctx.LINEAR);
  GLctx.texImage2D(GLctx.TEXTURE_2D, 0, GLctx.RGBA, GLctx.RGBA, GLctx.UNSIGNED_BYTE, video);
  GLctx.pixelStorei(GLctx.UNPACK_FLIP_Y_WEBGL, false);
  GLctx.activeTexture(prevActiveTexture); // (*) Added
}));
This is not necessarily as foolproof, as it assumes that the engine does not use texture unit 15 for anything. However, the first case assumes that texture unit 0 actually did have a 2D texture bound (as opposed to a 3D texture or a cube texture), so it is not 100% foolproof either - OpenGL is a bit messy like that.
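For completeness, the two approaches can also be combined so that both the active texture unit and the scratch unit’s 2D binding are saved and restored. This is only a sketch of the pattern, not something taken from the fix above:
// Hypothetical combined guard: switch to a scratch unit, remember both the
// previously active unit and that unit's old 2D binding, and restore both afterwards.
var prevActiveTexture = GLctx.getParameter(GLctx.ACTIVE_TEXTURE);
GLctx.activeTexture(GLctx.TEXTURE15);                       // scratch unit
var prevTex = GLctx.getParameter(GLctx.TEXTURE_BINDING_2D); // unit 15's old 2D binding
GLctx.bindTexture(GLctx.TEXTURE_2D, GL.textures[video.texturePtr]);
// ... pixelStorei/texParameteri/texImage2D calls as above ...
GLctx.bindTexture(GLctx.TEXTURE_2D, prevTex);               // restore unit 15's binding
GLctx.activeTexture(prevActiveTexture);                     // restore the active unit
That way, even if the engine does use unit 15 for something, that unit’s binding is put back before the engine runs again.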
Btw, thanks for the video in the bug repro, that made the issue very clear to examine.