I’m new to Unity and have to use it for a school project. I was wondering if there is a way to make the camera see only one color, while rendering everything else black. I’ve been looking through the documentation on camera textures, but I’m not sure what specific terminology to look up for the effect I want. Refer to the attached image:
If you’re on a desktop platform, the easiest way is to create a simple post-processing shader that just discards (paints black) all the pixels whose color differs from the one you need.
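A minimal sketch of the fragment program for such an image effect, assuming the usual `UnityCG.cginc` `v2f_img` setup and driven by `Graphics.Blit` (the `_KeyColor` and `_Tolerance` property names are illustrative, not from any existing shader):

```hlsl
// Keep pixels near _KeyColor, black out everything else.
fixed4 frag (v2f_img i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    // keep the pixel only if it is close enough to the key color
    if (distance(col.rgb, _KeyColor.rgb) > _Tolerance)
        return fixed4(0, 0, 0, 1);
    return col;
}
```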
Yeah, that’s chroma keying (except in your case you’d take the inverse of it, discarding everything except what the chroma filter matches).
Take a look at how Open Broadcaster Software does it; their code is open source.
Pretty sure that can easily be done in a post processing step.
After looking at some examples of chroma-key shaders, I tried my hand at writing one myself, just reversed. But it’s not working (I get a black screen). I have this script on my camera that’s supposed to use the camera’s view as the texture:
using UnityEngine;

[ExecuteInEditMode]
public class CameraTexture : MonoBehaviour
{
    public Material EffectMaterial;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Graphics.Blit(src, dst, EffectMaterial);
    }
}
Here is the shader that’s supposed to ignore every color that isn’t rgb(1,0,1):
Shader "Custom/ColorPicker"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
        _VisableColor ("Visable Color", Color) = (1,0,1,1) // sets the color that the camera picks up
        _MainTex ("Albedo (RGB)", 2D) = "red" {}
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }

        CGPROGRAM
        #pragma surface surf Lambert alpha

        sampler2D _MainTex;
        fixed4 _VisableColor;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o)
        {
            half4 c = tex2D(_MainTex, IN.uv_MainTex); // read color from the texture
            half4 output_col = c;

            // ignore every color that isn't pure magenta (1,0,1)
            if (output_col.r != _VisableColor.r && output_col.g != _VisableColor.g && output_col.b != _VisableColor.b)
                discard;

            // output albedo and alpha just like a normal shader
            o.Albedo = output_col.rgb;
            o.Alpha = output_col.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
I’m sure that the color I’m trying to pick out is (1,0,1), from using the color picker to test the values. I’ve also placed the correct material with the shader onto the camera script. What am I doing wrong here? Thanks in advance.
It does pick out the magenta from the camera, but if I move the camera, the magenta sections from the previous instant stick around. Not sure what’s up, but it’s in the right direction.
Modified the code a bit. Replaced the frag function with this:
float4 frag (v2f i) : SV_Target
{
    float4 col = tex2D(_MainTex, i.uv);

    if ((col.r != _VisableColor.r) && (col.g != _VisableColor.g) && (col.b != _VisableColor.b))
        col -= float4(1,1,1,1); // sets all non-magenta pixels to invisible
    if ((col.r == _VisableColor.r) && (col.g == _VisableColor.g) && (col.b == _VisableColor.b))
        col += float4(1,1,1,1); // sets all magenta pixels to white

    return col;
}
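One caveat with exact comparisons like `col.r == _VisableColor.r`: they only match when the rendered pixel is *exactly* the keyed color, so any lighting, texture filtering, or anti-aliasing will break the match. A more tolerant sketch of the same frag function could compare against a threshold instead (the `_Tolerance` property is my own addition, declared alongside `_VisableColor`):

```hlsl
float4 _VisableColor;
float _Tolerance; // e.g. 0.1; in Properties: _Tolerance ("Tolerance", Range(0,1)) = 0.1

float4 frag (v2f i) : SV_Target
{
    float4 col = tex2D(_MainTex, i.uv);
    // distance in RGB space between this pixel and the keyed color
    float diff = distance(col.rgb, _VisableColor.rgb);
    // white where the pixel is close enough to the key, black everywhere else
    return diff < _Tolerance ? float4(1,1,1,1) : float4(0,0,0,1);
}
```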
Which works a lot better, and works the way I want if not for the light in the scene. If I delete the light, it works fine. But if I leave the light in, it renders the highlights on top of the image, which is not what I want. Here’s what I mean:
The left is what I want, and what I get when the light is off. But if I have the light on, then I get the right. I’m assuming it has something to do with the rendering queue, but I’m unsure. Any ideas? Thanks.
Two cameras right now: one normal camera for the overview, and one camera to see the cross section with the special reverse-chroma-key shader. I managed to bypass the problem by having a set of invisible models that can only be seen by the second camera, which has no light. Which is what I want. Here’s what I have currently, and it’s working decently:
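The “only seen by the second camera” trick is typically done with layers and culling masks. A hedged C# sketch of that setup (the layer name "Ultrasound" is my invention; it would have to exist in the project’s Tags & Layers settings):

```csharp
using UnityEngine;

// Restrict a camera to a single layer via its culling mask, so the
// "invisible" models show up only in that camera's render.
public class UltrasoundCameraSetup : MonoBehaviour
{
    public Camera ultrasoundCamera;
    public GameObject hiddenModel;

    void Start()
    {
        int layer = LayerMask.NameToLayer("Ultrasound");
        // render only objects on that layer
        ultrasoundCamera.cullingMask = 1 << layer;
        // put the hidden model on that layer so only this camera sees it
        hiddenModel.layer = layer;
    }
}
```

The main camera’s culling mask would exclude that layer, and a light’s own culling mask can likewise be set so it doesn’t illuminate the layer at all.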
Yup. That’s what I’m using to generate the stencil-looking fake ultrasound image. I don’t know of any other way to make an intersection show up. If there is a better way, lemme know, but this is all I can figure out for now, and it seems to be “good enough”.
BTW, related to this school project: is it possible to use two mice at the same time in Unity to control the same object? If so, how would I map the buttons in script? How would I differentiate between the two?
I’m pretty sure I’ve seen a lot of these intersection shaders already, but a quick search didn’t yield any usable results. Try searching for “Unity intersection shader” or something like that.
I’m not sure that this is even possible on Windows. If you connect two mice to the same PC, they will both control the same cursor, so by default they look like a single mouse to applications. So there’s nothing Unity-specific here; if something needs to be researched, it’s the OS in the first place (on Windows, the Raw Input API is what can tell individual devices apart).
I think a gamepad (an Xbox 360 one, or basically any USB/wireless gamepad) may be better for controlling something in your project, and Unity (as well as the OS itself) supports several gamepads simultaneously.
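With Unity’s legacy Input system, individual gamepads can be told apart through the per-joystick `KeyCode` values; a minimal sketch:

```csharp
using UnityEngine;

// Distinguishing two gamepads via the legacy Input system's
// per-joystick KeyCodes (button 0 is usually "A" on Xbox-style pads).
public class TwoPlayerInput : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Joystick1Button0))
            Debug.Log("Pad 1 pressed button 0");
        if (Input.GetKeyDown(KeyCode.Joystick2Button0))
            Debug.Log("Pad 2 pressed button 0");
    }
}
```

For analog sticks, each axis entry in the Input Manager has a “Joy Num” field that can be pinned to a specific joystick.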
That’s actually what I’m using, lol, just modified a bit.
Another question: is it possible to overlay the render from one camera on top of another with additive rendering? Would I need a shader for that, or is there an easier way?
EDIT: I think I can just use two cameras, render out two camera textures, and then use a shader to additively combine them onto a third camera. At least that’s the logic. Hope I’m going in the right direction.
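That logic can work. A possibly simpler route: give the overlay camera a higher Depth and set its Clear Flags to Depth Only so it draws over the first camera’s output directly. If you do go the two-RenderTexture route, the combine shader’s fragment program is essentially one extra texture sample (the `_OverlayTex` name is illustrative):

```hlsl
// Fragment of a combine shader: _MainTex is one camera's RenderTexture,
// _OverlayTex is the other's.
sampler2D _MainTex;
sampler2D _OverlayTex;

float4 frag (v2f i) : SV_Target
{
    float4 a = tex2D(_MainTex, i.uv);
    float4 b = tex2D(_OverlayTex, i.uv);
    return a + b; // additive combine; values clamp when written to the target
}
```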