Single camera multiple display performance

Discussion in 'General Graphics' started by nathanielp, Nov 27, 2018.

  1. nathanielp

    nathanielp

    Joined:
    Aug 9, 2016
    Posts:
    6
    Hello,

    I'm working on a project where a requirement is to render a single camera view to two displays, with a different GUI on each display. I'm trying to find the most performant way to achieve this. I've made some progress, but not as much as I'd hoped, so any insight would be great; any performance I gain can go straight toward improving the visuals.

    The easiest solution was to duplicate the camera and direct each one to its corresponding display. This works well but absolutely destroys the framerate, no surprise.

    The other solution I've tried is to have a single camera render the scene while copying its render texture (Camera.activeTexture) into a RawImage visible on the second camera/display. This worked, but it felt like a bit of a hack and didn't work properly in the editor (it worked fine in builds, which is where it needs to work).

    Rough performance:
    Single Camera: 165 fps
    Dual Camera: 75 fps
    Cam + RawImage: 110 fps

    I feel like there is a lot of room for improvement but I haven't figured it out. Maybe it is simple and I'm just being obtuse. Does anyone know a better way to achieve a single camera view on multiple displays with different UI for each display?
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    This is mostly the right way to do this.

    It might be easier to have three cameras. One camera renders the scene into a target render texture; you can set this via script in Awake, or assign a render texture asset as the camera's target. The other two cameras just display the UI, and each has a command buffer that blits the main camera's texture in during one of the events prior to drawing the UI. Maybe CameraEvent.BeforeForwardAlpha if all your UI is transparent?

    You can also get a small extra perf win here by setting the UI cameras' Clear Flags to Don't Clear, since blitting the main camera's render texture effectively clears them. If your UI uses the depth buffer (i.e. it has opaque objects), maybe use Depth Only.
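    A minimal sketch of that three-camera setup for the built-in render pipeline; the camera and texture names here are placeholders, and the camera event may need adjusting per project:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class SplitDisplaySetup : MonoBehaviour
    {
        public Camera sceneCamera;          // renders the 3D scene once
        public Camera uiCamera1, uiCamera2; // one UI-only camera per display
        public RenderTexture sceneTexture;  // shared target for the scene camera

        void Awake()
        {
            // The scene camera renders into the shared texture, not to a display.
            sceneCamera.targetTexture = sceneTexture;

            // Blit the scene texture into each UI camera's active target
            // before its transparent (UI) pass runs.
            var blit = new CommandBuffer { name = "Copy scene view" };
            blit.Blit(sceneTexture, BuiltinRenderTextureType.CurrentActive);
            uiCamera1.AddCommandBuffer(CameraEvent.BeforeForwardAlpha, blit);
            uiCamera2.AddCommandBuffer(CameraEvent.BeforeForwardAlpha, blit);
        }
    }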
     
    nathanielp likes this.
  3. nathanielp

    nathanielp

    Joined:
    Aug 9, 2016
    Posts:
    6
    Thank you for your help :) I had much better results following your advice.

    My scene had changed since I last took some performance numbers so these aren't comparable to what I wrote in the OP, but this is what I saw testing it out this afternoon.

    Single Camera: 145 fps
    Dual Camera: 80 fps
    Blitted Cams: 130 fps

    I'm pretty happy with the result, and it works just fine in the editor. This is the first time I've tried using command buffers and related APIs, so if you don't mind, let me know if I made any glaring mistakes in my proof-of-concept code. For reasons I don't understand, I didn't have success with CameraEvent.BeforeForwardAlpha, but CameraEvent.AfterEverything did the job.

    Code (CSharp):
    public Camera camera1;
    public Camera camera2;

    private CommandBuffer buffer1;
    private CommandBuffer buffer2;

    void Start()
    {
        buffer1 = new CommandBuffer();
        buffer1.Blit(mainTexture, camera1.activeTexture);
        camera1.AddCommandBuffer(CameraEvent.AfterEverything, buffer1);
        buffer2 = new CommandBuffer();
        buffer2.Blit(mainTexture, camera1.activeTexture);
        camera2.AddCommandBuffer(CameraEvent.AfterEverything, buffer2);

        if (Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.OSXPlayer)
        {
            if (Display.displays.Length > 1)
                Display.displays[1].Activate();
            else
                Screen.fullScreen = true;
        }
    }
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    You're using camera1.activeTexture for both Blit() calls. You should be using BuiltinRenderTextureType.CurrentActive or CurrentTarget instead. I also don't know when mainTexture is being rendered to, but presumably it's the main camera's target, and that camera should also have the lowest Depth value to ensure it renders first.

    One minor thing that won't have any impact on performance: you can use the same command buffer on both cameras, especially once you make the change noted above (though it might already work as-is, since you're assigning the same target to both).

    Command buffers are lists of commands. In this case the command is general enough that you don't need a unique one for each camera; it's just "render a texture into the currently active render target when this command runs".
     
    nathanielp likes this.
  5. nathanielp

    nathanielp

    Joined:
    Aug 9, 2016
    Posts:
    6
    I forgot to include the declaration of mainTexture in the snippet, my bad. It's just a RenderTexture that the main camera renders into. I noticed my mistake of using camera1.activeTexture in both calls last night, whoops. It worked anyway, but using BuiltinRenderTextureType.CurrentActive makes much more sense, thanks.

    Again, big thanks for your help. :) The solution was very simple but wasn't obvious to me at all. I've mostly found the Unity documentation very easy to understand, but I was lost on this subject. I'll paste the corrected code here in case it helps someone in the future.

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class CameraCopier : MonoBehaviour
    {
        public RenderTexture mainTexture;
        public Camera camera1;
        public Camera camera2;

        void Start()
        {
            var buffer = new CommandBuffer();
            buffer.Blit(mainTexture, BuiltinRenderTextureType.CurrentActive);
            camera1.AddCommandBuffer(CameraEvent.AfterEverything, buffer);
            camera2.AddCommandBuffer(CameraEvent.AfterEverything, buffer);

            // Activate displays in builds, or set fullscreen if not using multiple displays
            if (Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.OSXPlayer)
            {
                if (Display.displays.Length > 1)
                    Display.displays[1].Activate();
                else
                    Screen.fullScreen = true;
            }
        }
    }
     
    dstrictxrlab and UltimateWalrus like this.
  6. megavoid-de

    megavoid-de

    Joined:
    Sep 29, 2020
    Posts:
    8
    For anyone using URP who stumbles across this: this solution won't work in URP!

    There is a way using a custom URP Blit Render Feature, which you can get here: https://github.com/Cyanilux/URP_BlitRenderFeature

    I have posted an example project on Github: https://github.com/megavoid/Unity-SingleCameraMultipleDisplayURP

    What you will need to get this to run:
    • 3 (or more) cameras in your scene
    • a Render Texture
    • a Shader Graph
    • a Material using the Shader Graph
    • a second Renderer in your URP Asset
    What you need to do:
    • Configure the Render Texture with the desired render resolution (you can use multiple Render Textures for various resolutions, or one Render Texture with dynamic resolution, but either way you'll need a dedicated script to set this up)
    • Create a Shader Graph with your Render Texture as input, connect it to a Texture2D Sampler, and wire the Sampler's RGBA to the Fragment Base Color. Save the shader.
    • Configure your Main Camera with your Render Texture as output. Use your normal URP Renderer for the Main Camera and set up your Culling Mask to render only what you need on both output displays (e.g. exclude GUI). The Main Camera needs a lower Priority (e.g. -1) than the Display Cameras.
    • Configure your Display Cameras to use your new second Renderer and a higher Priority (e.g. 0). Set the Culling Mask to show only what you need on this display (e.g. the GUI layer).
    • Set up the new second Renderer with only the Blit Render Feature. In the feature, select Before Render as the Event and use your Material with the Shader Graph as the Blit Material.
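    The scriptable part of those steps might look roughly like this; the Renderer Feature, Shader Graph, and second Renderer assignment still happen in the editor, and all names here are placeholders (in URP, the camera Priority shown in the inspector maps to Camera.depth):

    Code (CSharp):
    using UnityEngine;

    public class UrpDisplaySetup : MonoBehaviour
    {
        public Camera mainCamera;        // renders the scene into the texture
        public Camera displayCamera1, displayCamera2;
        public RenderTexture sceneTexture;

        void Start()
        {
            // Main Camera renders into the Render Texture and must run first,
            // so give it the lowest priority.
            mainCamera.targetTexture = sceneTexture;
            mainCamera.depth = -1;

            // Display Cameras (using the second Renderer with the Blit feature)
            // draw only their own GUI layer on top of the blitted scene.
            displayCamera1.depth = 0;
            displayCamera2.depth = 0;
            displayCamera2.targetDisplay = 1;

            if (Display.displays.Length > 1)
                Display.displays[1].Activate();
        }
    }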
    This way you render your scene only once for multiple displays, and you can even use layers / culling masks on your Display Cameras to add to the basic view.
    One thing to keep in mind is that the resolution of the Render Texture may differ from the display resolution, which has some undesired side effects: if the aspect ratios differ, the image will be stretched on one axis, and if the Render Texture's resolution is too low, the output will be blurry. In any case, the X and Y of Input.mousePosition (e.g. for raycasts) will have to be scaled by the factor between the Render Texture and display resolutions to work properly.
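    A minimal sketch of that mouse-position adjustment; the names are placeholders, and it assumes the scene camera renders into sceneTexture as described above:

    Code (CSharp):
    using UnityEngine;

    public class MouseRemap : MonoBehaviour
    {
        public RenderTexture sceneTexture; // texture the scene camera renders into

        // Scale the screen-space mouse position into the Render Texture's
        // coordinate space so raycasts through the scene camera line up.
        public Vector3 RemappedMousePosition()
        {
            Vector3 m = Input.mousePosition;
            m.x *= (float)sceneTexture.width / Screen.width;
            m.y *= (float)sceneTexture.height / Screen.height;
            return m;
        }
    }

    Pass the remapped position to ScreenPointToRay on the scene camera rather than the display camera.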
     
    GuenterWolf likes this.
  7. Al95

    Al95

    Joined:
    Nov 8, 2016
    Posts:
    6

    I couldn't get this URP version to work in my own project, even though your sample demo worked, until I found the missing step:
    • Set the background type of your Display Cameras to Uninitialized.

    Thanks for sharing your URP solution @megavoid-de !
     
    GuenterWolf likes this.