Render Pipeline Advice

Discussion in 'General Graphics' started by McDev02, Jul 10, 2019.

  1. McDev02

    McDev02

    Joined:
    Nov 22, 2010
    Posts:
    664
Hello. I am planning a new project and I am unsure which way to go, so I'd like to get some opinions. Of course I will test both options in depth, but until I can do so with VR I'd like to do some research, and maybe you have some helpful insights. Basically, my options are:
    • Default deferred
    • HDRP
    • Custom Render Pipeline
As I don't have the slightest clue how I would make my own render pipeline, I feel that option is out of the question. What I might end up with is a few tweaks to the deferred pipeline, but I am not a graphics developer, so I guess I have to live with what is offered to me.

    Requirements
    • Should be able to achieve realistic lighting at different exposures.
    • All dynamic, cannot use any static features (lightmap, occlusion, batching)
    • Moderate use of decals (Albedo, M, S, Normals)
    • Ideally, draws many dynamic lights and shadows
    • Shall work with Xbox and PlayStation
    • Shall work in VR
I am aware that some of these points conflict with each other. I will do everything I can to reduce draw calls in VR.

    Currently, VR in HDRP does not support deferred rendering but as I have to disable many lights and shadows anyway, this might not be a big deal.
    Source
This sounds like I would need a separate VR build, right? Otherwise the graphics pipeline for non-VR would not support every feature.

One downside of HDRP is that using Asset Store assets is not that straightforward, but I plan to use as few as possible anyway. I also like to write custom shaders in a text editor; at least HDRP supports the node editor, but I am still very skeptical, as in the past I was missing nodes such as vertex data and manipulation.

    I am not sure if HDRP will make a big difference for my needs. But I like the way it handles lighting with Lux values and the (potential) performance benefits with the tile/cluster renderer. On the other hand it has more complex shaders.

    But I always cringe at this image, there is just no difference:

So my main concern is compatibility: I am not sure which platforms or hardware you would exclude when using HDRP. And moving from HDRP to the default render pipeline in the middle of a project is just insane. Upgrading from default to HDRP, on the other hand, could be less critical.
     
    Last edited: Jul 10, 2019
  2. McDev02

    McDev02

    Joined:
    Nov 22, 2010
    Posts:
    664
OK, I did some research in the meantime. But first, a more concrete question:

I want to edit the "legacy" deferred rendering pipeline. But is there a way to do that with an SRP, or do I simply replace the shaders in the Graphics settings?
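For the built-in deferred path, the lighting shader can indeed be swapped in Project Settings > Graphics, or from script via GraphicsSettings. A minimal sketch (the shader name "Hidden/Custom-DeferredShading" is a placeholder for your own modified copy of Internal-DeferredShading.shader, not an existing asset):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class DeferredShaderOverride
{
    // Replace the built-in deferred lighting shader with a custom copy.
    public static void Apply()
    {
        Shader custom = Shader.Find("Hidden/Custom-DeferredShading");
        GraphicsSettings.SetShaderMode(BuiltinShaderType.DeferredShading,
                                       BuiltinShaderMode.UseCustom);
        GraphicsSettings.SetCustomShader(BuiltinShaderType.DeferredShading, custom);
    }
}
```

The same pair of calls also works for the other built-in shader slots (deferred reflections, screen-space shadows, etc.) via the other BuiltinShaderType values.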

I also need to make changes to the render targets, and for that I need another default shader that gets applied to new materials and imported models.

Here I only see defaults for Lightweight and HDRP, so I guess there is no render pipeline template for the built-in deferred renderer, right? https://github.com/Unity-Technologies/ScriptableRenderPipeline

From what I see, more than 4 render targets should be OK nowadays. I want to try out a few things to shift more rendering work into screen space, and for that I need one extra G-buffer.

    Conclusions
At least for my needs, HDRP doesn't give me any advantage. It seemed to render faster at high-quality settings, e.g. with AO on, but this could also be because I use third-party image effects on the default pipeline.
It turned out to be significantly slower when all image effects were off. So if someone wants to achieve high FPS at lower graphics settings, HDRP is not the right option.

Visually the difference was hard to tell. HDRP seems to have better color depth and overall image quality, but this is not worth the extra issues: third-party assets have yet to be converted, and custom shaders are much easier to write in the built-in pipeline.

In HDRP, lights per pixel are limited to 24. That is a lot already, but I don't want to limit myself here. While shadows can be disabled, having many long-range lights is essential.

Here is a comparison; which one is HDRP can be told by two things. Both cars still use only the standard shaders, while the HDRP one has clearcoat.
    cars_comp.jpg
     
    Last edited: Aug 18, 2019
  3. ekakiya

    ekakiya

    Joined:
    Jul 25, 2011
    Posts:
    79
Adding an extra RT to the built-in deferred path is difficult, but you can get RT4 by adding a dummy directional light that uses shadow mask, then use RT4 for another purpose.
In built-in deferred, RT4 is read when the screen-space shadow texture is rendered, so you must also customize that shadow pass to ignore the shadow mask.
     
  4. McDev02

    McDev02

    Joined:
    Nov 22, 2010
    Posts:
    664
    Right, I don't need any Lightmapping or GI so I could use this render target, and maybe even remove some of t
So could RT3 be used when HDR is on? Or is RT3 being used until reflections are converted to emission? I noticed that when writing to RT3 in a command buffer, the reflections seemed to disappear; adding reflections only worked by writing to BuiltinRenderTextureType.CameraTarget instead.

I only need my shaders to write to the additional buffer as well. I once made a custom fragment shader, and I wonder: would it be enough to add the render target like this? Or do I have to declare somewhere else that this buffer is being used?
Code (CSharp):
void frag(
    v2f i,
    out half4 outDiffuse : SV_Target0,           // RT0: diffuse color (rgb), occlusion (a)
    out half4 outSpecSmoothness : SV_Target1,    // RT1: spec color (rgb), smoothness (a)
    out half4 outNormal : SV_Target2,            // RT2: normal (rgb), --unused, very low precision-- (a)
    out half4 outEmission : SV_Target3,          // RT3: emission (rgb), --unused-- (a)
    out half4 outCustomBuffer : SV_Target4       // RT4: custom buffer
    )
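Whether a fifth MRT is even available depends on the hardware, so a runtime check is worthwhile before relying on SV_Target4. A small sketch using Unity's SystemInfo API:

```csharp
using UnityEngine;

public class MrtSupportCheck : MonoBehaviour
{
    void Start()
    {
        // Built-in deferred already fills RT0-RT3 (plus depth);
        // a fifth color target needs MRT support of at least 5.
        if (SystemInfo.supportedRenderTargetCount < 5)
            Debug.LogWarning("Extra G-buffer target is not supported on this device.");
    }
}
```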
     
  5. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,526
I would make the game in the default RP and focus on the gameplay, the sound and all the rest. Then, if needed, you can always swap out your default shaders when upgrading to another pipeline. Going back to default from LW or HD is not recommended.
     
  6. McDev02

    McDev02

    Joined:
    Nov 22, 2010
    Posts:
    664
@Lars-Steenhoff I know my needs, and I would rather research what is possible and what is not before I start production, as this will affect the workflow a lot. I'd never even try to port from one render pipeline to another mid-project.

@Topic I can start simple with the alpha channel of the normals buffer: it has 2 bits, enough for one or even two masks: 0: nothing; 1: maskA; 2: maskB; 3: maskA+B
In case you wonder, I was trying to get deferred planar reflections, and it works quite well.
SSR has too many downsides and doesn't work with TAA anyway. In this particular case realtime planar reflections are even faster and, more importantly, do not scale with screen size.
There are limitations, but to some extent uneven terrain will work just fine with this.
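A sketch of how the two masks could be packed into the 2-bit alpha of the normals G-buffer (the helper names are my own, not from the thread; the channel is a 2-bit UNORM, so values are stored as n/3):

```hlsl
// Pack two 1-bit masks into a 2-bit UNORM alpha channel.
// Stored values map to: 0 = nothing, 1 = maskA, 2 = maskB, 3 = maskA+B.
half EncodeMasks(bool maskA, bool maskB)
{
    half v = (maskA ? 1.0 : 0.0) + (maskB ? 2.0 : 0.0);
    return v / 3.0; // normalized for the 2-bit channel in RT2.a
}

void DecodeMasks(half encoded, out bool maskA, out bool maskB)
{
    int v = (int)round(encoded * 3.0);
    maskA = (v & 1) != 0;
    maskB = (v & 2) != 0;
}
```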

So what I do is render the reflection in a command buffer, using the mask in the alpha of the normals RT. I use the inverse of the same mask to occlude reflection probes. The blue car, for example, has a realtime probe but doesn't add it to the ground.
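For reference, the command-buffer hookup for something like this could look as follows. This is only a minimal sketch: the camera event, the reflection render texture, and the composite material (which would read the mask from RT2.a and blend additively) are assumptions, not the exact setup used here.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class DeferredReflectionInjector : MonoBehaviour
{
    public RenderTexture planarReflectionRT; // pre-rendered planar reflection (hypothetical)
    public Material compositeMaterial;       // hypothetical shader that masks via RT2.a

    CommandBuffer cmd;

    void OnEnable()
    {
        cmd = new CommandBuffer { name = "Planar reflections (deferred)" };
        // Composite the reflection into the lighting buffer before Unity's
        // own deferred reflections pass runs.
        cmd.Blit(planarReflectionRT, BuiltinRenderTextureType.CameraTarget, compositeMaterial);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeReflections, cmd);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeReflections, cmd);
        cmd.Release();
    }
}
```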

    planarReflections.jpg
     
    richardkettlewell likes this.
  7. ekakiya

    ekakiya

    Joined:
    Jul 25, 2011
    Posts:
    79
You can output to RT3 from the material shader.
You cannot read RT3 in the lighting pass, because that shader outputs to cameraTarget, which is the same render texture as RT3.
Built-in deferred outputs all lights additively into RT3, which starts out black: the material shader adds emission and environment diffuse, the reflection pass adds environment reflection per reflection probe, and the light pass adds shading per light.
PPS's SSR also adds reflections to RT3, but since it has the separate environment-reflection render texture as input, it can replace reflections (subtract the old and add the new reflection).

As for the custom out-buffer, I didn't succeed.
You would need something like SetRenderTarget for a mesh renderer, but I didn't find one.
I ended up using the shadow mask's RT at the time, and I am now using a custom render pipeline.
     
    McDev02 likes this.