
Help Wanted Any examples of Scene Color being used?

Discussion in 'Shader Graph' started by dgoyette, Jul 28, 2019.

  1. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    1,864
    Under the built-in render pipeline, I used to be able to create a lensing effect using (I believe) the shader shown here, which is the refractive glass shader that was/is included in Standard Assets:

    https://github.com/kw0006667/Unity3...rces/Shaders/Glass-Stained-BumpDistort.shader

    This allowed me to create a material that would refract the background behind an object. In this example, you can see the door behind the purple object appears to be bent:

    upload_2019-7-27_23-1-30.png

    This all seemed like magic to me. The refraction is happening on a plane that bisects the sphere here, and I never really understood how it didn't result in seams at the edges, but it worked really well.

    Unfortunately, under HDRP, there's no GrabPass, so this effect stopped working. I kept waiting for some replacement, and my understanding was that maybe the Scene Color node would be a similar replacement. I've tried using it, but I don't yet understand whether it will work similarly to grabpass. Specifically, with the grabpass approach, the refracting object would appear to be a completely clear object in the scene if no normal map was given.

    In trying to reproduce this under Shader Graph, I believe I need to start by creating a shader that scales and offsets the object's UVs based on its scale and position relative to the camera. I was wondering if anyone knows how that could be achieved. My first thought was to take the Object node, multiply or divide by the Scale, and use that as the Tiling override for the UV, but I'm not sure how to take the position of the object relative to the camera into account. So, in short, does anyone have an example of Scene Color being used such that it creates a mostly invisible object in the scene, which I could then extend and tweak?
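    For what it's worth, the screen-space UV that GrabPass-style shaders sample with is usually derived from the vertex's clip-space position rather than from the object's scale (the Screen Position node in Shader Graph exposes this directly). A minimal Python sketch of that math; the matrices and function names here are illustrative, not Unity API:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # A standard OpenGL-style perspective projection matrix.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def grab_pass_uv(world_pos, view, proj):
    # Project a world-space point and remap the result to the [0, 1]
    # UV range that a grab/scene-color texture is sampled with.
    clip = proj @ view @ np.append(world_pos, 1.0)  # clip space
    ndc = clip[:2] / clip[3]                        # perspective divide -> [-1, 1]
    return ndc * 0.5 + 0.5                          # remap to [0, 1]

view = np.eye(4)  # camera at the origin looking down -Z
proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)

# A point straight ahead of the camera lands at the screen centre:
uv = grab_pass_uv(np.array([0.0, 0.0, -5.0]), view, proj)
print(uv)  # uv == [0.5, 0.5]
```

    Distortion effects then perturb this UV per pixel (e.g. with a normal map) before sampling, which is what the Glass-Stained-BumpDistort shader does.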

    Thanks.
     
  2. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    1,864
    I'm making some progress on using Scene Color, but there's something I don't understand. Maybe someone can assist.

    The color output of the Scene Color node seems wrong. I created the following simple shader:

    upload_2019-7-28_13-9-29.png

    When I create a material with this shader and add it to a cube, the color of the cube is noticeably different from the background color. Here you can see that the lighter square is the cube using the Scene Color shader, while the darker blue is the background:

    upload_2019-7-28_13-10-46.png

    So that's confusing. Anyone know why that would be the case? I've tried changing as many settings as I could, and it doesn't change this appearance.
     
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    7,000
    Is your shader set to additive, or alpha blend?
     
  4. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    1,864
    I've tried Additive, Alpha Blend, and Pre-Multiply. Additive looks extra blown out, while the other two look like the screenshot. I've tried this in 2019.1 and 2019.3 alpha, but I get the same results.

    upload_2019-7-29_18-20-25.png

    It's also highly possible I don't have an understanding of how Scene Color should be used. I'm trying to recreate the effect I had when using GrabPass, where I more or less want all of the content of a chunk of the screen (the region beyond a plane facing the camera), which I can then distort arbitrarily via the shader. Maybe that's not what Scene Color is for...
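    The blown-out Additive result is what the blend math predicts: if Scene Color returns the pixel behind the object, Alpha blending at full alpha reproduces it exactly, while Additive sums it with what is already there. A rough per-channel sketch in plain Python, not Unity's actual pipeline:

```python
# Rough sketch of the two blend modes, per colour channel.
def alpha_blend(src, dst, alpha):
    # Alpha blend: src replaces dst in proportion to alpha.
    return src * alpha + dst * (1.0 - alpha)

def additive_blend(src, dst):
    # Additive: src is summed onto dst, so it can only brighten.
    return min(src + dst, 1.0)

background = 0.2   # the dark blue backdrop
scene_color = 0.2  # Scene Color returns the same backdrop pixel

# Fully opaque alpha blend reproduces the background exactly...
print(alpha_blend(scene_color, background, 1.0))  # 0.2
# ...while additive roughly doubles it, hence the blown-out look.
print(additive_blend(scene_color, background))    # 0.4
```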
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    7,000
    What version of the HDRP are you using? Testing locally with LWRP 5.16 in 2019.1 it "just works" and the object is completely invisible.

    However, with HDRP 5.16 it is too bright, and much, much more obviously so. Maybe the background color you have is just making it less obvious.
    upload_2019-7-30_10-51-19.png

    I've been complaining about the Unlit Master node's handling of alpha for a while, so it could be related.
     
  6. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    1,864
    I was using HDRP 5.16 under 2019.1, and 7.0.1 under 2019.3 (and found the results to be the same between both.)

    I haven't tried a LWRP project, to compare whether the behavior is different between LWRP and HDRP.

    In the case where you say it "just works", you created a new shader and plugged Scene Color into the Color output, created a material from that shader, (presumably set the material to transparent), and then just dropped the material onto a sphere?
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    7,000
    Yep. You could not tell there was a sphere there. I offset the UVs a little to make sure it actually was there, and it was.

    I think in the HDRP the Unlit Master node might be straight-up broken, which is weird. Changing between Additive and Alpha in HDRP 5.16 looked identical, which it absolutely should not: Additive should look way brighter. That means the Color value is blending as if it's Additive regardless of the blending mode?! Either the expectations for what the Color input does differ between LWRP and HDRP, or the HDRP is broken.


    Anyway, a workaround:
    Use the HDRP Unlit Master node (which you already are).
    Don't connect anything to the Color input and set it to solid black.
    Connect the Scene Color node to the Emission input.
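    If the unlit output is modelled as the Color input alpha-blended against the background, plus the Emission input added on top (an assumption about HDRP's behaviour, not confirmed here), the workaround falls straight out of the arithmetic:

```python
def unlit_output(color, emission, dst, alpha):
    # Simplified model: Color is alpha-blended against the existing
    # pixel (dst), then Emission is added on top.
    return color * alpha + dst * (1.0 - alpha) + emission

backdrop = 0.35  # whatever is already on screen behind the object

# Black Color, Scene Color wired into Emission, fully opaque:
out = unlit_output(color=0.0, emission=backdrop, dst=backdrop, alpha=1.0)
print(out)  # 0.35 -- identical to the backdrop, so the object is invisible
```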
     
  8. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    1,864
    Very cool. Connecting the Scene Color to Emission and setting Color to black gives the precise result I was expecting. I should have tried that. Thanks very much for the advice. I'll submit a bug for the unexpected behavior of connecting Scene Color to the main Color output.
     
  9. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    1,864
    I'm finally getting back to this, and I've hit a conceptual stumbling block that makes me wonder if my approach can work at all.

    Thinking back to how things worked with GrabPass, the magic there seemed to be that the GrabPass data contained the color of each pixel behind the object with the material on it. The color data was captured during the GrabPass, then provided to the next pass in the shader; the capture and the rendering were two independent steps.

    Using SceneColor, however, doesn't seem to work the same way. Instead of capturing what's "beyond" the object with the material on it, the current appearance of that object is reflected in SceneColor. The result is a feedback loop, with an effect similar to when old versions of Windows 95 would freak out and stop repainting the screen properly (with duplicates of the window being repainted all over the screen.) For example, here's a screenshot of the plane I have that's rendering just the SceneColor node. I've tagged the corners of the plane in red. You can see the feedback loop effect:

    upload_2019-8-8_21-2-47.png

    So, if I want SceneColor to contain what is beyond the plane, then I'd need to put the plane on a layer the camera can't see. But then the plane won't show up on the camera at all. And I believe that having multiple cameras (layers on top of each other) is no longer supported under HDRP.

    So, I'm left with the question: Is there a way to make this work at all? I don't believe ShaderGraph supports multiple passes, but it seems to me that the only way for this to work is if I capture SceneColor in one pass, and render the plane only in the second pass.
     
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    7,000
    The scene color node samples from the _CameraOpaqueTexture. The name of which should give a clue to the fact it contains everything from the opaque pass.

    Basically, the old GrabPass would copy the current render target’s contents into another texture so it could be sampled from later; the grab pass shader “pass” itself doesn’t actually render anything. In the case of named grab passes, it would only make a copy the first time that name appears; later grab passes with the same name are ignored.

    Copying the screen contents is slow, as it causes the whole GPU to stall for a moment, so grab passes are incredibly inefficient. Instead, the SRPs reuse the copy already being made after the opaque queue, which post processes like AO use. If you enable the opaque texture in the SRP’s asset, it keeps that copy around for transparent-queue objects to sample from, optionally applying some minor processing to it, like downsampling or otherwise lightly blurring it.
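    That ordering is also why transparent-queue objects sampling the opaque texture can't feed back on themselves: the copy is taken once, after the opaque queue and before any transparent object draws. A conceptual Python sketch of that frame order (a toy model, not actual SRP code):

```python
# Toy model of the SRP frame order that makes Scene Color feedback-free.
frame = {"target": ["sky", "walls", "door"],  # opaque queue already drawn
         "opaque_copy": None}

def copy_opaque_texture(frame):
    # The one copy made after the opaque queue: _CameraOpaqueTexture.
    frame["opaque_copy"] = list(frame["target"])

def render_transparent_glass(frame):
    # A transparent object samples the copy, then draws into the target;
    # the copy is never updated again this frame.
    sampled = frame["opaque_copy"]
    frame["target"].append("glass")
    return sampled

copy_opaque_texture(frame)
seen = render_transparent_glass(frame)
print("glass" in seen)  # False: the glass never sees itself
```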
     
  11. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    1,864
    I did end up getting something that almost looks reasonable, which required changing the render pass to "After Post-processing". (All of the other render pass options resulted in the camera "seeing" the mesh and causing feedback.) I haven't got the tiling and offset correct (assuming it's actually possible to get them correct in this case), but this sort of looks like what I'd want:

    upload_2019-8-9_0-4-20.png

    Weirdly, the quad is again rendering a little brighter than expected under After Post-processing, maybe because the rest of the scene has been affected by post-processing and the quad hasn't; I'm not sure. I am using the approach where I output the color to the Emission output rather than the Color output.

    Anyway, even if it did look good, unfortunately it captures stuff in the foreground, which definitely isn't what I want. Here the player's hand is being rendered into the quad:

    upload_2019-8-9_0-7-43.png

    So I'm not sure whether what I'm trying to accomplish (distorting everything behind a plane) is even possible at this point. Or maybe this is the wrong approach, and I should be trying to make use of the SRP's built-in distortion functionality.