Multiple light sources and accumulation buffers for GI

I am working on some global illumination stuff in Unity Pro. I have a super simple scene with a box (the Cornell box). I am trying to render that same scene multiple times (with the same camera parameters), and each pass uses a different point light source. Then I use ShaderLab to do a weighted average of each rendering pass, and present the result.

I am getting some ringing artifacts on the illumination which I can’t quite understand. My only guesses as to why this may be happening are:
- Somehow, RenderTextures are converted to integer precision at some point in the pipeline, so I have a precision problem.
- There are artifacts on the shadow maps, probably related to aliasing (maybe the shadow map doesn't have enough resolution, or its format is integer as well?), and when I add the passes together the artifacts accumulate and become more visible.

A simplified web plugin demo of my current prototype is located at http://www.instantradiosity.tk/ Right now, I am creating all of my point light sources at the same exact position (on the center of the top of the box) so I could have a clear look at these artifacts. If you press any key, a GUI with sliders appears: please leave Rho = 0 and move N (which is the number of lights). When N = 1 there is only one light source and no apparent artifacts, but when I increase N (and therefore I increase the number of point light sources and rendering passes, one light in each pass), I get these horrible artifacts.

(please don’t mind the low framerate, that is not a concern at this point)

I would appreciate any help!

Hmm, I need to step back a bit. Does Unity support floating-point color textures at all?

No, I don’t think so.

Not yet. Render textures are either 8bit/channel RGBA (fixed point), or single channel “depth” format (which is 32 bit floating point on D3D9, and depth buffer on OpenGL).

Just an idea: could I improvise my own "floating-point color textures" by using 4 depth buffer textures, or by encoding a floating-point value in the 4 integers of a regular ARGB32 texture? (I only need this floating-point texture to accumulate color across multiple passes with better precision than the current RenderTexture formats allow; the high precision only matters in the blending.) Of course, I would have to handle the encoding and blending in a shader myself.
Anyway, just brainstorming aloud. I would appreciate some feedback. I really need to find a way to get rid of the precision loss from integer math in my multi-pass blending.

The regular color texture doesn't have 4 ints.
It has 4 bytes, which together are the same size in bytes as a single float per pixel.

You would need an ARGB128 to store 4 float channels in a single texture (4x 32bit), which does not exist currently.

You could try to encode it across 4 textures, but that has a performance impact: you need 4 texture fetches, plus extra operations to treat the 4 distinct values as one, to do the same work a single texture would otherwise do.

Sorry, I used the wrong terms. I meant 4 integers in uchar format, using 32 bits in total.

Are there any plans to support this? Most non-ancient cards support GL_ARB_texture_float (GeForce 6 and up, Radeon X700 and up).

I am concerned about that, although texture writes shouldn’t be too bad of a performance hit. It’s the encoding function that worries me, since I am already rendering the scene dozens of times per presented frame.

Now, this is the shader that adds the current pass to my accumulation buffer:

Shader "Hidden/InstantRadiosity"
{
	Properties
	{
		_MainTex ("Base (RGB)", RECT) = "black" {}
		_OneOverN ("OneOverN", Float) = 0.01
	}
	SubShader
	{
		// Additive blending: each light's pass adds into the accumulation buffer
		Blend One One
		ZTest Always
		Cull Off
		ZWrite Off
		Fog
		{
			Mode Off
		}
		Pass
		{
			SetTexture [_MainTex]
			{
				// Scale the incoming pass by 1/N so that N passes average out
				ConstantColor ([_OneOverN], [_OneOverN], [_OneOverN], 1)
				combine texture * constant, constant
			}
		}
	}
	Fallback Off
}

_OneOverN is a float equal to 1/NumberOfPasses; I use it to average the passes. My impression is that when I multiply texture * constant, the result in each color channel is rounded and stored as a byte. As I add passes, that lack of precision turns into aliasing. If anyone has ideas on how to improve the precision, or insights into any other reason that may be causing my problem, please let me know.

EDIT: I just found this: GLSL float to RGBA8 encoder | JeGX's Lab

vec4 packFloatToVec4i(const float value)
{
  const vec4 bitSh = vec4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);
  const vec4 bitMsk = vec4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);
  vec4 res = fract(value * bitSh);
  res -= res.xxyz * bitMsk;
  return res;
}

float unpackFloatFromVec4i(const vec4 value)
{
  const vec4 bitSh = vec4(1.0/(256.0*256.0*256.0), 1.0/(256.0*256.0), 1.0/256.0, 1.0);
  return(dot(value, bitSh));
}

I’ll give this a try, but, if possible, I’d still like to get some input from you all on my ShaderLab accumulation buffer shader.