Can you help me understand this camera calibration between Kinect and PS Move?

I recently found a piece of code that magically combines PS Move and Kinect coordinates and fixes a displacement issue I had. It's nice that my problem is solved, but I have no clue how this calibration works! I had thought that calibration needs to happen using one of these algorithms.
But I am not sure how the whole problem was reduced to an AX = B problem :confused: Can someone point me to the algorithm this solution is based on? Btw, the code works like a charm and maps the PS Move onto the right hand of the Kinect skeleton!

	private void CalculateTransformation()
	{
		Matrix moveMatrix;
		Matrix kinectMatrix;
		// Move matrix of 50x4 and Kinect matrix of 50x3
		moveMatrix = Matrix.Zeros (samples_PSMove.Count, 4);
		kinectMatrix = Matrix.Zeros (samples_Kinect.Count, 3);

		for (int i = 1; i <= samples_PSMove.Count; i++) {
			moveMatrix [i, 1] = new Complex (samples_PSMove [i - 1].x);
			moveMatrix [i, 2] = new Complex (samples_PSMove [i - 1].y);
			moveMatrix [i, 3] = new Complex (samples_PSMove [i - 1].z);
			moveMatrix [i, 4] = new Complex (1.0f);
		}
		for (int i = 1; i <= samples_Kinect.Count; i++) {
			kinectMatrix [i, 1] = new Complex (samples_Kinect [i - 1].x);
			kinectMatrix [i, 2] = new Complex (samples_Kinect [i - 1].y);
			kinectMatrix [i, 3] = new Complex (samples_Kinect [i - 1].z);
		}

		// Perform a matrix solve Ax = B. We have to use transposes and an inverse
		// because moveMatrix isn't square (x, y, z, w).
		// The solution is the same with (A^T)Ax = (A^T)B -> x = ((A^T)A)^-1 (A^T)B
		Matrix transformMatrixSolution = (moveMatrix.Transpose() * moveMatrix).Inverse() * moveMatrix.Transpose() * kinectMatrix;
//		Matrix error = moveMatrix * transformMatrixSolution - kinectMatrix;
		transformMatrixSolution = transformMatrixSolution.Transpose();
		List<Vector3> orthogonalVectors = MathUtil.Orthonormalize(MathUtil.ExtractRotationVectors(MathUtil.MatrixToMatrix4x4(transformMatrixSolution)));
		rotationMatrix = CreateRotationMatrix(orthogonalVectors);
		transformMatrix = MathUtil.MatrixToMatrix4x4(transformMatrixSolution);

		coordinateSystem.SaveTransformDataToXML(xmlFilename, RUISDevice.PS_Move, RUISDevice.Kinect_1);
		coordinateSystem.SaveFloorData(xmlFilename, RUISDevice.Kinect_1, kinect1FloorNormal, kinect1DistanceFromFloor);
		Quaternion rotationQuaternion = MathUtil.QuaternionFromMatrix(rotationMatrix);
		Vector3 translate = new Vector3(transformMatrix[0, 3], transformMatrix[1, 3], transformMatrix[2, 3]);
		                   translate, rotationQuaternion, transformMatrix,
		                   RUISDevice.PS_Move, RUISDevice.Kinect_1);
		coordinateSystem.RUISCalibrationResultsDistanceFromFloor[RUISDevice.Kinect_1] = kinect1DistanceFromFloor;
		coordinateSystem.RUISCalibrationResultsFloorPitchRotation[RUISDevice.Kinect_1] = kinect1PitchRotation;
		kinect1ModelObject.transform.rotation = kinect1PitchRotation;
		kinect1ModelObject.transform.localPosition = new Vector3(0, kinect1DistanceFromFloor, 0);
		psEyeModelObject.transform.position = coordinateSystem.ConvertLocation(, RUISDevice.PS_Move);
		psEyeModelObject.transform.rotation = coordinateSystem.ConvertRotation(Quaternion.identity, RUISDevice.PS_Move);
		this.floorPlane.transform.position = new Vector3(0, 0, 0);
	}
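One step I think I can explain to myself is the `MathUtil.Orthonormalize` call: the 3x3 linear part that falls out of a least-squares fit is generally not an exact rotation (noise adds a bit of scale and shear), so I assume it gets snapped back to an orthonormal basis before the rotation quaternion is built. Here is a sketch of that Gram-Schmidt step in Python (my own helper names, not the actual MathUtil implementation):

```python
# Classic Gram-Schmidt on three 3D vectors: remove the components of each
# vector along the previously accepted axes, then normalize it.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def scale(a, s):
    return [x * s for x in a]

def normalize(a):
    length = dot(a, a) ** 0.5
    return scale(a, 1.0 / length)

def orthonormalize(vectors):
    basis = []
    for v in vectors:
        for b in basis:
            v = sub(v, scale(b, dot(v, b)))  # project out earlier axes
        basis.append(normalize(v))
    return basis

# Axes the way a noisy least-squares fit might return them:
# slightly scaled and sheared rather than a clean rotation.
axes = [[1.0, 0.02, 0.0], [0.03, 0.98, 0.01], [0.0, 0.02, 1.05]]
ortho = orthonormalize(axes)

# After orthonormalization every axis has unit length and the pairs
# are mutually perpendicular, so they form a valid rotation basis.
assert all(abs(dot(v, v) - 1.0) < 1e-9 for v in ortho)
assert abs(dot(ortho[0], ortho[1])) < 1e-9
assert abs(dot(ortho[1], ortho[2])) < 1e-9
```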

I guess my question could be simpler. I know that this is solving an AX = B type of problem. Why has the author added one more column (of ones) for PS Move and not for Kinect? Is he using homogeneous coordinates? Can we do AX = B when A is in homogeneous coordinates and B is in Cartesian?
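To make my question concrete, here is a small Python sketch of what I think the code is doing (illustrative names only, not the author's API): build A with a trailing column of ones, keep B Cartesian, and solve the normal equations for a 4x3 matrix X whose last row would then absorb the translation.

```python
# Fit a 4x3 affine matrix X from homogeneous samples A (n x 4) to
# Cartesian samples B (n x 3) via the normal equations:
#   X = (A^T A)^-1 A^T B

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def solve(M, Y):
    """Gauss-Jordan solve of M X = Y for a square M (here 4x4)."""
    n = len(M)
    aug = [list(M[i]) + list(Y[i]) for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# Ground truth: a 90-degree rotation about Z plus a translation (1, 2, 3).
def true_map(p):
    x, y, z = p
    return [-y + 1.0, x + 2.0, z + 3.0]

move_pts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [2, 0.5, -1]]
A = [[x, y, z, 1.0] for x, y, z in move_pts]   # homogeneous input, n x 4
B = [true_map(p) for p in move_pts]            # Cartesian output, n x 3

At = transpose(A)
X = solve(matmul(At, A), matmul(At, B))        # rows 0-2: linear part, row 3: translation

# Applying X to a fresh homogeneous point reproduces the true mapping,
# and the last row of X is exactly the translation (1, 2, 3).
p = [0.5, -0.25, 2.0, 1.0]
mapped = matmul([p], X)[0]
expected = true_map(p[:3])
assert all(abs(m - e) < 1e-6 for m, e in zip(mapped, expected))
assert all(abs(X[3][j] - [1.0, 2.0, 3.0][j]) < 1e-6 for j in range(3))
```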

OMG!!! I have been looking for this for a while, I can't believe I am looking at the code. I am in a similar situation. This code seems to be from some sort of SDK, not the Microsoft SDK, right? Or do you think this will work with Kinect v2 and the MS SDK?

I really need this to work. Do you have more information?