
Postprocessing issues with several cameras

Discussion in 'General Graphics' started by AlexBM, Mar 26, 2015.

  1. AlexBM


    Mar 26, 2015
    Hi everybody!

    I'm trying to set up simple frame post-processing, and unfortunately the built-in mechanism doesn't seem to handle several cameras well. I have the following camera setup in my 2D game:

    - The first camera renders the sky and clouds; lowest depth, clear flags set to Solid Color
    - The second renders the game scene itself; clear flags set to Depth Only
    - The third is the UI camera; clear flags set to Depth Only.

    Issues arise when I try to apply any post-processing effect to the second camera. In the Unity editor it works just fine, affecting the game world below it and leaving the UI untouched. However, when I run a build on an Android device (Samsung Galaxy Note 2, Android 4.1.2), I see only the UI on top of a black background.

    It only starts working when I switch the clear flags on the second camera to Solid Color or Skybox. However, that's hardly acceptable for my game, because I have a dynamic sky with clouds, stars and mountains, and it can't be rendered with one camera: camera zooming would then yield strange results, breaking the illusion of distant objects.

    I also tried overriding the OnRenderImage function on the first camera, just as an experiment. That also looked very weird: it worked fine on the device, but ignored all camera depth values in the Unity editor, rendering only the sky and UI and skipping the second camera entirely. That makes me think post-processing has some fundamental limitation on cameras with depth-only clear flags, leading to some sort of undefined behaviour, but unfortunately Unity guides and documentation on the matter are really scarce. Have you ever come across a similar issue? I just can't figure out a proper workaround, short of rendering all cameras to render textures and then combining them with one single camera. But I'm afraid that is quite heavy from a performance perspective. Any ideas?

    My Unity version is 4.6.2.

  2. AlexBM


    Mar 26, 2015
    Finally I've managed to nail it down. Quite obvious, once you actually know the right answer...
    So, in case somebody comes across a similar issue, here is the solution and an explanation.

    Post-processing simply can't work with a depth-only-clear camera, and the reason lies in the way it is implemented. As far as I understood, if a camera sees that a post-processing script is attached (one with an OnRenderImage function), it stops rendering itself to the screen buffer, creates a temporary RenderTexture target, and passes it to the OnRenderImage function as the src parameter. You then call the Blit function, which basically renders a full-screen quad into the screen buffer with this texture and your custom post-process shader. OK, everything is just fine... Or is it? Heh, here comes the problem. This temporary RenderTexture doesn't contain the current screen buffer content, so if a camera with the Depth Only clear flag renders into it, every piece of empty space gets filled with garbage from GPU memory, which won't produce the desired effect. So the folks at Unity decided to disable this post-processing mechanism for depth-only cameras entirely. Unfortunately, they neither explained this in the documentation nor even throw a warning in the console.

    What's worse, the OnRenderImage function still gets called, receiving in its src parameter some garbage that, by lucky coincidence, on some platforms happens to point at the screen contents. So it SEEMS to work fine sometimes, except that the camera keeps rendering itself to the screen buffer with this post-processing quad on top of it. And that leads to all sorts of strange issues, best summed up by a single expression: "undefined behaviour".
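
    For reference, the standard mechanism described above boils down to a script like this (a minimal sketch; the class and material names are placeholders, while OnRenderImage and Graphics.Blit are the actual Unity hooks):

    Code (CSharp):
    using UnityEngine;

    // Minimal sketch of the built-in post-processing hook.
    // Unity renders the camera into src, and whatever we Blit into dest
    // ends up on screen. With a depth-only camera, src never receives
    // the underlying screen content, hence the garbage described above.
    public class SimplePostProcess : MonoBehaviour
    {
        public Material PostProcessMaterial;

        void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            Graphics.Blit(src, dest, PostProcessMaterial);
        }
    }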

    So how to solve it? Here is a guide:
    1. Create a new camera that renders nothing (culling mask set to "Nothing")
    2. Attach the post-processing script to it and make the following modifications:
      1. Create a new RenderTexture the size of the screen
      2. Set this RenderTexture as the render target for both real cameras: foreground and background
      3. Override void OnPostRender() and call Graphics.Blit(yourRenderTexture, PostProcessMaterial) there
    3. Enjoy!
    What this basically does is render a quad with this render texture mapped onto it, after both your cameras have filled the texture in the right order. I suggest assigning the render targets to your cameras at runtime, so as not to spoil the game preview window while the game isn't running (not that it's much use as it is, however). Also note that this shouldn't add any performance overhead compared to the usual post-processing mechanism, since it uses a single RenderTexture, just as the usual OnRenderImage path does.

    Here is a code sample:

    Code (CSharp):
    public class PostProcessFilter : MonoBehaviour
    {
        public Material PostProcessMaterial;

        public Camera BackgroundCamera;
        public Camera MainCamera;

        private RenderTexture mainRenderTexture;

        // Use this for initialization
        void Start()
        {
            mainRenderTexture = new RenderTexture(Screen.width, Screen.height, 16, RenderTextureFormat.ARGB32);
            mainRenderTexture.Create();

            BackgroundCamera.targetTexture = mainRenderTexture;
            MainCamera.targetTexture = mainRenderTexture;
        }

        void OnPostRender()
        {
            // Blit to the screen (the active render target) through the post-process material.
            Graphics.Blit(mainRenderTexture, PostProcessMaterial);
        }
    }
    Actually, I don't understand why such an important limitation isn't mentioned anywhere in the docs or tutorials. I believe a two-camera setup is very common in game development, as are post-processing effects.

    Hope that helps.

    Thanks, Alex.
  3. hammadmobilesoft


    Jan 31, 2014
    Thanks a lot for sharing this stuff. It looks really promising.
    I have a case similar to this, but I have been stuck for almost a week now.
    I need to get a blurred background.

    I have 2 cameras in my full 3D scene.
    The 1st camera, with the greatest depth, renders everything else in front.
    The 2nd camera (Background Camera), with the lowest depth, renders the skybox and the background mesh (layer Background).

    The problem is that when I apply the Blur (Optimized) image effect to the background camera, I get 3 fps on my Android device.

    If I remove the second camera and apply the blur to just 1 camera, I get 60 fps on my device.

    I can relate to the problem you were facing, because the image effect on the second camera is causing the main problem.

    Please suggest a solution; I would really appreciate your help.
  4. KingOfColly


    May 25, 2013
    Worked great, thanks! I used the Unlit/Texture shader for the PostProcessMaterial.
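    In case it helps, assigning that shader at runtime might look something like this (a sketch; Shader.Find and the built-in Unlit/Texture shader are standard Unity, but the component lookup is just an assumption about your setup):

    Code (CSharp):
    // Build the post-process material from the built-in Unlit/Texture shader.
    var filter = GetComponent<PostProcessFilter>();
    filter.PostProcessMaterial = new Material(Shader.Find("Unlit/Texture"));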
  5. RyuMaster


    Sep 13, 2010
    Thank you so much! This helped me solve a persistent ObGrab bug that had been driving me crazy for a long time.
  6. neroziros


    Aug 25, 2012
    Hello there! I was wondering if anyone has experienced this same problem:

    I have been trying to implement a multi-camera deferred setup, to no avail. Even though I managed to get the desired visual effect using several cameras with different screen effects, I'm getting horrible artifacts whenever the player interacts with the UI elements.

    This is the script I'm using to merge the multiple RTs. (In the previous example I was using a single camera and the problem persisted.)

    Code (CSharp):
    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    [ExecuteInEditMode]
    public class PostProcessFilter : MonoBehaviour
    {
        public Material PostProcessMaterial;

        // Camera list
        public List<Camera> Cameras;

        // Control parameters
        private RenderTexture mainRenderTexture;

        private int width;
        private int height;

        // Use this for initialization
        void OnEnable()
        {
            this.GenerateRT();
        }

        void Update()
        {
            // Recreate the render texture if the screen size changed.
            if (width != Screen.width || height != Screen.height)
            {
                this.GenerateRT();
            }
        }

        void GenerateRT()
        {
            width = Screen.width;
            height = Screen.height;

            if (this.mainRenderTexture != null)
            {
                foreach (var camera in Cameras)
                    camera.targetTexture = null;
                this.mainRenderTexture.Release();
                DestroyImmediate(this.mainRenderTexture);
            }

            Debug.Log("NEW RT GENERATED WITH DIMENSIONS: " + Screen.width + " " + Screen.height);

            mainRenderTexture = new RenderTexture(Screen.width, Screen.height, 16, RenderTextureFormat.DefaultHDR);
            mainRenderTexture.Create();
            foreach (var camera in Cameras)
                camera.targetTexture = mainRenderTexture;
        }

        void OnPostRender()
        {
            if (this.PostProcessMaterial == null) return;
            Graphics.Blit(mainRenderTexture, PostProcessMaterial);
        }
    }
  7. plingativator


    May 2, 2013
    Thank you! I was so confused as to why the anti-aliasing post-processing was killing my depth effects, but this seems to have fixed it all. I don't think I would have figured out this solution on my own.
  8. Whatever560


    Jan 5, 2016