Lightmap clamped when switching from Mac OS X to iOS

When my build target is set to Mac, the lightmap looks great, especially the hot areas (surfaces burned by sunlight, needed for artistic purposes).
When I switch to iOS those areas become darker (even if I set my uncompressed lightmap to a super high exposure); it seems values just get clamped past a certain exposure point. Is there any way to fix this problem so my lightmap on iPad looks similar to how it does on my Mac?

After all, they both have roughly the same Apple gamma curve, and note that in both cases the lightmap is set to truecolor, so there is no compression.

On platforms with programmable shaders we encode lightmaps in our RGBM format, which gives an encoded light range of [0,8]. On platforms like iOS, where the content could still potentially run on fixed function hardware, we have to resort to double-LDR encoding, which gives a range of [0,2].

So what you’re seeing is a clamped light value due to limitations of that different encoding.

A way to fix this is to make your floor texture much brighter (and re-adjust the light intensity afterwards). That way the lightmap won’t have to “pull up” the floor’s brightness as much, and maybe you’ll fit within the range.

I already tried this. To achieve the Mac OS X burned area, I had to brighten the floor a lot, to the point where it looked unrealistic and super washed out (in the shaded areas).
Is there another way to fix this, maybe in code, to enlarge the dynamic range a bit? Or any other solution?

How could that happen? You can’t even build a Unity project onto such an iOS device anymore. I think it’s time to get rid of the encoding type dichotomy.

You’d have to use a custom shader, and either use medium precision in the fragment shader or multiple lines of multiplication (which is faster).

The fix seems to be these modifications:

// Inside a custom lighting function of a surface shader:
fixed diff = max(0, dot(s.Normal, lightDir));

fixed3 ramp = tex2D(_Ramp, fixed2(atten, 0.5)).rgb;
fixed4 c;
c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2); // standard diffuse term
c.rgb = ((c.rgb * 2) * 2) + c.rgb; // crude range expansion using only multiplies and adds (scales by 5)
c.rgb *= ramp;
c.a = s.Alpha; // don't forget to set alpha
return c;

But where do you exactly place them?

You can still pick GLES 1.1 for both iOS and Android, which is fixed function.

Would we want to drop GLES 1.1? Yes, it would be much less code for us to maintain and test.

I agree, we should get rid of that someday.

But: there are both advantages and disadvantages to the different versions. I had problems with fog under GLES 2.0, and the solution at the time was to force GLES 1.1: http://forum.unity3d.com/threads/90649-Matching-Fog-in-iOS-to-Editor

Supporting multiple “things” is always tough, but may give more options.

What you did was a workaround. You shouldn’t have had to use obsolete tech to achieve your goals. People are not using OpenGL ES 1.1 because it offers any unique feature, so why is it still an option? I’ve seen people talking about performance problems, but if that’s the only reason, then Unity should sort that out and then remove the option. You can’t build to an iOS device with fixed-function hardware with Unity 4. Are there ARMv7 Android devices that don’t support OpenGL ES 2.0?

I get the idea that this is doable through OpenGL ES 1.1, but how? Where do I set this so it will apply to my iOS build?

Hello Aras, it’s me, from the future. It’s 2017 and Unity still uses doubleLDR on mobile, even though GLES 1.1 is not supported any more.

Please do something about it.

@AcidArrow This thread is over 4 years old.

Please do not resurrect old and buried threads. Please look at the date of the original post and the last post.

If the thread is inactive: Do Not Post.

Please create a new thread of your own. If you feel you need to reference the old post, do so as a link.

As such, I am closing this thread.

In this particular case, if you feel that there is a critical issue with the graphics pipeline that needs to be addressed, filing a complete bug report with a full description and reproducible content in a scaled-down project would help a lot.

Thanks!