
How to get just live camera feed (without drawing it) in AR Foundation,

Discussion in 'AR' started by memoid, Jun 17, 2019.

  1. memoid

     Joined: Aug 30, 2014
     Posts: 10
    I'd like to get the live camera feed as a RenderTexture, without drawing it in the camera background. How can I do this?
    I've seen this thread, and I know I can add ARCameraBackground to my camera and then use a separate command to blit to my RenderTexture, but I don't want the camera feed drawn at all. In fact, I have my own custom background, which I draw using a UI RawImage, and if I add ARCameraBackground, it overrides my RawImage!

    Ideally I'm after a component that I can add to any GameObject, give a target RenderTexture, and have it fill that texture with the live camera feed.

    (I achieved my desired behaviour via a very hacky two-camera solution: one Camera with ARCameraBackground attached, not rendering any layers, plus a separate blit to a RenderTexture; and another MainCamera rendering everything, including my custom background on the RawImage. Visually it's the behaviour I want, but it feels quite hacky and inefficient.)
     
  2. tdmowrer

     Joined: Apr 21, 2017
     Posts: 605
    Your "hacky" solution isn't too bad. The camera feed is composed of multiple textures on some platforms (e.g., ARKit) which must be combined in a shader. The shader (on both ARCore and ARKit) also takes care of screen orientation. Rendering this to an offscreen camera is already pretty efficient, so if it works for you, I think that's fine.

    You could also look at getting the raw textures and manipulating them yourself. If you subscribe to the ARCameraManager's frameReceived event, it will provide the uncombined, unrotated textures in the ARCameraFrameEventArgs. You'd still need to execute the shader to get the correct image, though. The ARCameraBackground creates a material with the correct shader.
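
    Something like the following is a minimal sketch of that approach (assuming AR Foundation 2.x; the component and field names are mine, not part of the API, and it still relies on an ARCameraBackground in the scene to set up the material — its camera can render nothing, as in your setup):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Copies the composed camera image into a RenderTexture every frame,
// reusing the material (and shader) that ARCameraBackground configures.
public class CameraFeedToTexture : MonoBehaviour
{
    [SerializeField] ARCameraManager cameraManager;       // on the AR camera
    [SerializeField] ARCameraBackground cameraBackground; // supplies the combine shader
    [SerializeField] RenderTexture target;                // destination texture

    void OnEnable()  { cameraManager.frameReceived += OnFrameReceived; }
    void OnDisable() { cameraManager.frameReceived -= OnFrameReceived; }

    void OnFrameReceived(ARCameraFrameEventArgs args)
    {
        // The background material combines the platform textures
        // (e.g. Y + CbCr on ARKit) and handles screen rotation.
        Graphics.Blit(null, target, cameraBackground.material);
    }
}
```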
     
  3. memoid
    Oh ok, interesting, thanks for the info. I'm not terribly familiar with Unity yet (I'm more old-school low-level OpenGL) so I don't know my way round the API fully. I'm ok with the idea of rendering with the shader to get the final texture. What I'm a bit less comfortable with is having an extra camera in my scene, having to set layer flags etc. I feel like as the project becomes more complicated this is potential for confusion and bugs (at least for me :). Is there a way of doing all of this in code without an extra camera? i.e. a script which you attach to an empty GameObject, and it writes to a RenderTexture (with the shader)?
     
  4. memoid
    Unfortunately, it turns out my double-camera hack isn't working: the transformations of the 3D scene aren't correct. It's especially obvious when I rotate the phone, and I can't figure out a way to fix it :/

    The functionality I would like is:
    - make a copy of the original Camera texture into a RenderTexture for later processing, but don't display it. (I can do this with a Blit after an ARCameraBackground).
    - display the Camera feed processed with a shader, but don't affect the 3D objects in the scene with that shader (so I can't apply the shader as a post effect on the camera).
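
    For the second point, this is roughly what I have in mind: a full-screen RawImage showing the RenderTexture, with the effect applied as the RawImage's material so it never touches the 3D objects (sketch only; "blurMaterial" is my own effect shader and "feedTexture" is the RenderTexture from the first point):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Shows the copied camera feed on a full-screen RawImage; the effect
// shader is applied only to this image, never to the 3D scene.
public class ProcessedFeedDisplay : MonoBehaviour
{
    [SerializeField] RenderTexture feedTexture; // copy of the camera feed
    [SerializeField] Material blurMaterial;     // my effect shader
    [SerializeField] RawImage rawImage;         // full-screen UI RawImage

    void Start()
    {
        rawImage.texture = feedTexture;
        rawImage.material = blurMaterial; // applies only to this RawImage
    }
}
```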

    My current setup is:
    - a BGCamera under the ARSessionOrigin, which renders nothing itself but captures the ARCameraBackground and blits it to a RenderTexture, with:
      - culling mask: nothing
      - depth: -100
      - TrackedPoseDriver, ARCameraManager, ARCameraBackground
      - a post effect
    - a MainCamera, also under the ARSessionOrigin, with:
      - culling mask: everything
      - depth: 0
      - TrackedPoseDriver (I also tried removing the TrackedPoseDriver and making this a child of the BGCamera, but I got the same results).


    As a test I tried making the BGCamera render everything in the scene. If the setup were correct, I should see the camera background and the entire 3D scene blurred, with the 3D scene rendered sharp on top again. But instead I see the sharp 3D scene in a completely different place to the blurred 3D scene (much closer to me). And when I switch from landscape to portrait, the sharp 3D scene jumps much closer.

    Am I doing something wrong? How can I achieve the functionality I want? I think it should be quite simple (keep a copy of the camera background, and render it processed).
     