I asked a question over on Unity Answers last week and no one is responding, so I'm trying the forums to see if they're more active. I have a VERY good knowledge of OpenGL and GLSL. I'm trying to do something very simple (for now) that would be very easy (almost trivial) to do with native code and OpenGL, but I want to learn Unity and get similar behavior out of the engine, so I need some enlightenment. None of the docs I've read have really answered my questions, so I'm here.

My setup is pretty simple. I have a "ground plane" and another piece of geometry (some mesh I'll download or whatnot; it's not in the scene yet). I have two cameras. The first camera renders the scene (the two pieces of geometry) and writes to a render target (in this case a texture). The second camera is then supposed to write the straight contents of that texture to the color buffer. THIS IS NOT A USEFUL EXAMPLE FOR ANYTHING. I'm just using it as a proof of concept so I can later do shadow mapping on iOS devices by encoding object depth from the first camera into an RGB texture.

I know how to get one camera to write to a render texture. My problem is that the material on the two game objects uses the same shader for both cameras, even though the behavior needs to be quite different in the two rendering "passes." I've read the "Replacement Shader" docs and the "Pass Tags" doc, but neither is very clear. How are the "tags" that a replacement shader keys off of specified on the game objects? How are the "queues" specified for rendering passes? All of the docs say "just use a tag of 'RenderType'='Opaque'" or "just set the queue to 'Transparent+1'" and so on, but nothing ever mentions HOW OPAQUE, TRANSPARENT, etc. ARE specified anywhere. I'm very confused. This should be a really simple setup. I just want the same geometry to use different shaders.
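
For reference, here's roughly how I'm driving the first camera into a render texture from a script. This is just a minimal sketch of what I have working so far; the class name, the 256x256 size, and the commented-out replacement-shader call are my own placeholders, not anything from a tutorial:

```csharp
using UnityEngine;

// Attached to the first camera: redirect its output into a RenderTexture
// instead of the screen. (Names and sizes are just my placeholders.)
public class RenderSceneToTexture : MonoBehaviour
{
    public RenderTexture sceneRT;

    void Start()
    {
        // Create the render target the first camera will draw into
        // (width, height, depth-buffer bits).
        sceneRT = new RenderTexture(256, 256, 16);

        Camera cam = GetComponent<Camera>();
        cam.targetTexture = sceneRT;

        // The replacement-shader docs show a call like this, which I assume
        // is where the "RenderType" tag name comes into play:
        // cam.SetReplacementShader(someDepthEncodingShader, "RenderType");
    }
}
```

The second camera would then just look at a quad whose material samples sceneRT, which is the part I'm treating as trivial for now.
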
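And this is the kind of SubShader block the docs keep showing, with the "RenderType" tag and a queue set on it. What I can't figure out is how a game object ends up associated with "Opaque" versus "Transparent" in the first place. This shader is just a made-up, trivial example to show the syntax I mean, not one of mine:

```shaderlab
Shader "Custom/ExampleTagged" {
    SubShader {
        // These are the tags the replacement-shader and queue docs refer to.
        Tags { "RenderType" = "Opaque" "Queue" = "Geometry" }
        Pass {
            // Trivial fixed-function pass, just so the shader compiles.
            Color (1, 1, 1, 1)
        }
    }
}
```

So is the answer simply that the tag lives on whatever shader the object's material uses, and the replacement pass matches on that? That's my best guess, but the docs never spell it out.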