Graffiti app - Vive - Controllers only

Hi, we are building a graffiti app using only the Vive controllers as the spray cans. We're 3D printing cans that fit over the controllers. The user will "paint" onto an 80" monitor they are standing in front of. So I would like to not render to the HMD at all, and instead render a second camera view (the wall being painted on) to the big monitor.

I suppose I will still need to plug the HMD's HDMI cable in? So I will need a machine with two HDMI ports? Would I set the Camera (eye) Target Display to, say, Display 2, and my real main camera to Display 1? And then set both of their Target Eye settings to None (Main Display)?
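
For reference, a minimal sketch of what that camera setup could look like in code, assuming the standard Unity multi-display API (Camera.targetDisplay is zero-based, so Display 1 is index 0) and Camera.stereoTargetEye to keep both cameras out of the HMD render path:

```csharp
using UnityEngine;

public class MonitorCameraSetup : MonoBehaviour
{
    public Camera hmdEyeCamera;   // the [CameraRig] eye camera
    public Camera monitorCamera;  // the camera that frames the painted wall

    void Start()
    {
        // Activate the second display if it's connected.
        // (Multi-display activation only works in a standalone build, not the editor.)
        if (Display.displays.Length > 1)
            Display.displays[1].Activate();

        // Keep both cameras out of the HMD render path.
        hmdEyeCamera.stereoTargetEye = StereoTargetEyeMask.None;
        monitorCamera.stereoTargetEye = StereoTargetEyeMask.None;

        // targetDisplay is zero-based: 0 = Display 1, 1 = Display 2.
        hmdEyeCamera.targetDisplay = 0;   // Display 1
        monitorCamera.targetDisplay = 1;  // Display 2 (the 80" monitor)
    }
}
```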

Also - anyone have any thoughts on best placement of the lighthouses?

Wouldn’t the biggest issue be syncing the monitor location to be exactly where the controllers think it is? I can’t even imagine how you would start to do that.

LOL - Yes… I actually meant to include that in my questions above. I figure I'm going to attempt it through trial and error. The HMD will be mounted, so it's always in the same location, so I figure I can put a Quad in view and just keep changing its scale until it matches the monitor. Essentially we're just using the Vive to get accurate hand position. Two hands, to be exact. And then we also have input via the buttons… I don't think it's the best solution, but I haven't found a better one. Some of the Polhemus trackers might do the trick, but I have a Vive already.

The reason it came to mind in the first place is that the Vive doesn't hard-enforce which way is front or back. It doesn't even have anything to do with the lighthouse positions. The direction is set when the user calibrates their room. I've set up the same room multiple times, and sometimes it has me facing one way for forward, and another time it's 90 degrees different.

So I guess maybe you can do (heaps of) trial and error to find the exact spot, but that would be lost if the room is re-calibrated.

Thanks much for the input.

I wonder if there is any way to force the calibration, like modifying a file or something? You'd think there has to be a way to make the Vive come out set up the same way when you calibrate the same room.

You display a target on the monitor, and then move the controller to that spot and click a button. Do it a few more times and you have a reasonable calibration of where the monitor is within the room space.
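
A minimal sketch of that capture step, assuming a SteamVR_TrackedObject-style setup where you already have the controller's Transform. The IsTriggerPressedDown() call here is a hypothetical placeholder you'd replace with your actual controller input query:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class MonitorCalibration : MonoBehaviour
{
    public Transform controller;            // the controller (SteamVR_TrackedObject) transform
    private List<Vector3> cornerPoints = new List<Vector3>();

    void Update()
    {
        // Hypothetical input check - swap in your real trigger-press query.
        if (IsTriggerPressedDown() && cornerPoints.Count < 4)
        {
            // Record the controller's world position for this corner.
            cornerPoints.Add(controller.position);
            Debug.Log("Captured corner " + cornerPoints.Count + ": " + controller.position);
        }
    }

    bool IsTriggerPressedDown()
    {
        // Placeholder: replace with the SteamVR trigger-down check you use.
        return Input.GetKeyDown(KeyCode.Space);
    }
}
```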

Thanks! I can see that working. However, how do I align the camera / quad with the picked points? I did a quick test with a second camera outputting to Display 2, and then put an image target on a canvas. It works fine. I'm just not sure how I then align my drawing quad and a camera so that it's full screen and aligned square with the physical monitor.

I’m not fully sure what you’re after, or what the problem is, to be honest.
I’d be tempted to ask the user to click the controller at each corner of the screen. This now gives you the spatial location of the monitor. Force the values into a planar rectangle with a bit of math.

I’m assuming you want to display a wall on it? Just create a textured plane at those coordinates. Calculate the midpoint and the equation of a straight line coming out from the center, perpendicular to the surface. Pick a location for the camera along that line. Use trigonometry to find the angle from the camera position to the edge of your quad; double it, and that's the FOV for a camera exactly filled by the quad. Create a camera at that location and set its FOV. You might want to extend the quad a bit more to the sides to cope with possible aspect ratio changes, etc., but I hope this gives you some ideas (and that it works).
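
A minimal sketch of that camera placement, assuming the four corners are ordered UL, UR, LR, LL and that the camera distance is something you choose. Note that Unity's Camera.fieldOfView is the vertical FOV, so the half-angle is measured to the top/bottom edge of the quad:

```csharp
using UnityEngine;

public static class WallCameraPlacer
{
    // Corners ordered: upper-left, upper-right, lower-right, lower-left (world space).
    public static void Place(Camera cam, Vector3 ul, Vector3 ur, Vector3 lr, Vector3 ll, float distance)
    {
        Vector3 center = (ul + ur + lr + ll) * 0.25f;

        // Axes of the monitor plane and its normal.
        // Depending on which side the corners were captured from, you may need to negate the normal.
        Vector3 right = (ur - ul).normalized;
        Vector3 up = (ul - ll).normalized;
        Vector3 normal = Vector3.Cross(up, right).normalized;

        float height = Vector3.Distance(ul, ll);
        float width = Vector3.Distance(ul, ur);

        // Put the camera on the line through the center, perpendicular to the quad.
        cam.transform.position = center + normal * distance;
        cam.transform.rotation = Quaternion.LookRotation(-normal, up);

        // Vertical FOV: half-angle to the top/bottom edge, doubled.
        cam.fieldOfView = 2f * Mathf.Atan((height * 0.5f) / distance) * Mathf.Rad2Deg;

        // Horizontal coverage is governed by the aspect ratio;
        // match it to the quad so the quad fills the frame exactly.
        cam.aspect = width / height;
    }
}
```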

Thanks much, this was really helpful. I am now creating a mesh from the four points I get from clicking the controller at the monitor corners. However, the mesh is not created where the controller actually is; it's off by several units at least. I have to figure that out. I think the controller coords might be relative to the HMD?

https://gmrmarketing.box.com/s/0v1vxk16cm2pbkt9luw148wfqiu531f6

If you click the Box Link I have an image of what I’m getting. You can see where the controller is in the world compared to the quad I’m creating.

These are the four points I get from the controller. I am using trackedObject.transform.position

UL: -.5, 2.0, .6
UR: -.6, 2.0, 1.0
LR: -.6, 1.7, 1.0
LL: -.5, 1.7, .6

And if you look at the image above, with the Steam camera at 0,0,0, the rect I am generating from those points is nowhere near them… I am just setting the vertex list of the mesh to those points.

Ahhhh! Got it. The object I was using to render the mesh wasn’t at 0,0,0… my bad.
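
For anyone following along, here's a minimal sketch of building that quad mesh from the captured corner points. Mesh vertices live in the owning object's local space, which is why the object either needs to sit at the origin with no rotation/scale, or the world-space points need to be converted with InverseTransformPoint first:

```csharp
using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class MonitorQuadBuilder : MonoBehaviour
{
    // Build a quad from world-space corners ordered UL, UR, LR, LL.
    public void Build(Vector3 ul, Vector3 ur, Vector3 lr, Vector3 ll)
    {
        var mesh = new Mesh();

        // Convert world-space points into this object's local space so the quad
        // ends up exactly where the controller was, regardless of this transform.
        mesh.vertices = new[]
        {
            transform.InverseTransformPoint(ul),
            transform.InverseTransformPoint(ur),
            transform.InverseTransformPoint(lr),
            transform.InverseTransformPoint(ll)
        };
        mesh.uv = new[]
        {
            new Vector2(0, 1), new Vector2(1, 1),
            new Vector2(1, 0), new Vector2(0, 0)
        };
        // Two triangles; winding order determines which side is visible.
        mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
        mesh.RecalculateNormals();

        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```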

I have this mostly working. Thanks for all the help. @Innovine, wondering if I could get a little more help. Marking the four corners of the monitor with the Vive controllers sometimes works really well, and sometimes is a bit off, so the applied texture looks off as well. Not by much, but the corners aren't all square; really, they are never all square. Wondering if there's a good way to force square corners. I have the Y axis handled, but I'm not sure how the monitor will be positioned, so I'm not sure about forcing X and Z.

You might ask the user if the screen is a 16:9 aspect ratio; I don't know if there's a Unity API for that… Anyway, you know that in the perfect solution the four corner points should lie in the same plane, so you should probably reduce the problem to 2D as quickly as you can. Then the opposite sides of the screen are parallel, and the corner angles are 90 degrees. I don't have an exact algorithm to solve that, but there should be enough info there to deal with slightly incorrect reference points.
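
One way to approach that (a sketch, not a proven algorithm): build orthonormal right/up axes from the measured edges, then snap the corners to a centered rectangle whose width and height are averaged from the measurements. Something along these lines, assuming corners ordered UL, UR, LR, LL:

```csharp
using UnityEngine;

public static class RectangleFitter
{
    // Returns corrected corners (UL, UR, LR, LL) that form a true planar rectangle.
    public static Vector3[] Square(Vector3 ul, Vector3 ur, Vector3 lr, Vector3 ll)
    {
        Vector3 center = (ul + ur + lr + ll) * 0.25f;

        // Approximate plane axes from the measured edges, then orthonormalize.
        Vector3 right = ((ur - ul) + (lr - ll)).normalized;
        Vector3 up = ((ul - ll) + (ur - lr)).normalized;
        Vector3 normal = Vector3.Cross(right, up).normalized;
        up = Vector3.Cross(normal, right).normalized;   // force up perpendicular to right

        // Average the measured edge lengths for width and height.
        float width  = 0.5f * (Vector3.Distance(ul, ur) + Vector3.Distance(ll, lr));
        float height = 0.5f * (Vector3.Distance(ul, ll) + Vector3.Distance(ur, lr));

        Vector3 halfR = right * width * 0.5f;
        Vector3 halfU = up * height * 0.5f;

        return new[]
        {
            center - halfR + halfU,   // UL
            center + halfR + halfU,   // UR
            center + halfR - halfU,   // LR
            center - halfR - halfU    // LL
        };
    }
}
```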

Thanks! I’ve gotten the rect square, and now the aspect ratio is the problem. After generating the mesh and adjusting the camera, I am getting letterboxing on the left/right sides. Adjusting the FOV just cuts off more of the top/bottom. Can I adjust the aspect? I thought that was supposed to be set by the display resolution.
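
For what it's worth, Camera.aspect defaults to the display's width/height, but it can be overridden from script. So one possible fix (a sketch; quadWidth and quadHeight are assumed to be the measured monitor dimensions from your calibration):

```csharp
// Unity initializes Camera.aspect from the target display's resolution,
// but you can override it so the frustum matches the physical quad.
cam.aspect = quadWidth / quadHeight;

// Call cam.ResetAspect() later if you want to return to the display-driven value.
```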

LOL. Nice.