Weird Question: Where In Camera Space Is The Screen?

Discussion in 'General Graphics' started by Dreamback, Apr 12, 2019.

  1. Dreamback

    Dreamback

    Joined:
    Jul 29, 2016
    Posts:
    220
    I've been working on a sort of GPU-based raycaster. I've got the camera position and a ray coming from the camera at a specific angle. First I have the camera render the scene to fill the depth buffer. Then, in a post-processing shader, I'm trying to figure out which pixel on the screen that ray intersects. From that pixel I can read the depth texture, and from the depth I can figure out the world position of the object the pixel is rendering.
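    (For concreteness, here's a minimal sketch of the reconstruction step on the C# side; the names and the depth-sampling details are placeholders for my actual setup, and it assumes a conventional, non-reversed depth range:)

    using UnityEngine;

    public static class DepthReconstruction
    {
        // Sketch: given a pixel and the raw (non-linear) value sampled from the
        // depth texture there, recover the world position. Assumes 0 = near
        // plane and 1 = far plane (no reversed-Z).
        public static Vector3 WorldPositionAtPixel(Camera cam, Vector2 pixel, float rawDepth)
        {
            float near = cam.nearClipPlane;
            float far  = cam.farClipPlane;

            // Same math as the LinearEyeDepth() shader helper.
            float eyeDepth = near * far / (far - rawDepth * (far - near));

            // ScreenToWorldPoint expects z as the distance from the camera
            // along its forward axis, in world units.
            return cam.ScreenToWorldPoint(new Vector3(pixel.x, pixel.y, eyeDepth));
        }
    }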

    First question: am I crazy? Will this return a point on the first object struck by the ray as if it were cast from the camera in the actual geometry of the scene?

    Second question: to find the pixel on the screen that the ray intersects, I have to know where the screen is, and then do a simple plane-intersection test. Is the screen at the near clip plane? Somewhere else? Is this whole idea of the screen in camera-space nonsensical?

    Third question: Unity has a function that returns the ray from the camera through any pixel (ScreenPointToRay). Is there a function (or matrix) that does the opposite, i.e. returns which pixel lies at a specific angle from the camera?
     
    Last edited: Apr 13, 2019
  2. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    That's the basis of many screen-space shaders like AO or SSR, so it's not so crazy. The depth buffer is already the result of intersecting a ray from the screen with the scene; what you also want is the normal buffer. Screen-space effects are an approximation of the scene geometry: you won't be able to have rays pass behind objects, since the buffer is effectively a "heightmap", and you won't have off-screen objects. Since the depth and normal buffers give you the first hit for "free", you would need to trace through the buffer around the normal, which means raymarching the texture, which is expensive.

    Here's an example implementation: https://github.com/Patapom/GodComplex/tree/master/Tests/TestHBIL
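    (A very rough sketch of the marching idea, done on the CPU for clarity; a real version lives in a shader, and the depth-sampling delegate is a placeholder:)

    using System;
    using UnityEngine;

    public static class ScreenSpaceMarch
    {
        // Sketch of raymarching a depth buffer: step along the ray in world
        // space, project each sample to the screen, and stop once the sample
        // falls behind the scene depth stored at that pixel.
        public static bool March(Camera cam, Ray ray, Func<Vector2, float> sampleEyeDepth,
                                 int steps, float maxDistance, out Vector3 hit)
        {
            for (int i = 1; i <= steps; i++)
            {
                Vector3 p = ray.GetPoint(maxDistance * i / steps);
                Vector3 sp = cam.WorldToScreenPoint(p);

                // Off-screen samples carry no information in a screen-space effect.
                if (sp.x < 0 || sp.x >= cam.pixelWidth || sp.y < 0 || sp.y >= cam.pixelHeight)
                    continue;

                // sp.z is the sample's eye-space depth; the march "hits" once it
                // dips behind the scene depth at that pixel.
                if (sp.z >= sampleEyeDepth(new Vector2(sp.x, sp.y)))
                {
                    hit = p;
                    return true;
                }
            }

            hit = Vector3.zero;
            return false;
        }
    }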
     
  3. Dreamback

    Dreamback

    Joined:
    Jul 29, 2016
    Posts:
    220
    I think I'm missing something - wouldn't the normal buffer just give me the normal of the object at that particular pixel/view point? My problem is figuring out which pixel my camera ray is hitting in the first place to be able to get at that info. I have a very specific ray from the camera at an angle, not something I'm applying to every pixel on the screen.
     
    Last edited: Apr 13, 2019
  4. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Well, the depth buffer is a specific ray from the camera to the object at that specific pixel, so it's equivalent. Every pixel is basically the result of a ray from the camera, so reading a specific pixel's depth and normal is equivalent to obtaining the result of a ray intersection from the camera to the object.
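    (To illustrate the equivalence, a small sketch; eyeDepth stands in for the linearized depth you'd sample at that pixel:)

    using UnityEngine;

    public static class PixelRayEquivalence
    {
        // Sketch: a pixel's linear eye depth recovers the same point a ray
        // cast through that pixel would hit.
        public static Vector3 HitPoint(Camera cam, Vector2 pixel, float eyeDepth)
        {
            Ray pixelRay = cam.ScreenPointToRay(pixel);

            // Eye depth is measured along the camera's forward axis, so convert
            // it into a distance along the (angled) pixel ray first.
            float t = eyeDepth / Vector3.Dot(pixelRay.direction, cam.transform.forward);
            return pixelRay.GetPoint(t);
        }
    }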
     
  5. Dreamback

    Dreamback

    Joined:
    Jul 29, 2016
    Posts:
    220
    I understand that if I read the depth buffer at a particular screen coordinate I can figure out the world position at that pixel - I already have that working. But first I have to know which pixel to read the depth buffer from, and there's the problem. I know the camera world position, and I have a ray going up 10 degrees and left 18 degrees from there (but stored as a normalized vector). I want to find the screen coordinates that ray intersects so that I can read the depth value at that location, and thus find the object the ray intersects.

    So right now I'm trying to use the ray-intersection-with-plane formula, treating the near clip plane as the location the screen's plane would be. And I'm getting a result! But I'm not quite sure that result is correct, as I'm not sure that's where the screen really is located in relation to the camera.
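    (Roughly what I'm doing, as a sketch; the plane placement is the part I'm unsure about:)

    using UnityEngine;

    public static class NearPlaneIntersect
    {
        // Sketch of my current approach: intersect the ray with a plane placed
        // at the near clip distance, orthogonal to the camera's forward vector,
        // then map that point into screen coordinates.
        public static Vector3 IntersectNearPlane(Camera cam, Ray ray)
        {
            Transform t = cam.transform;
            Plane nearPlane = new Plane(t.forward, t.position + t.forward * cam.nearClipPlane);

            nearPlane.Raycast(ray, out float enter);
            return ray.GetPoint(enter);
        }
    }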
     
    Last edited: Apr 15, 2019
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    The issue you're running into is that the screen isn't a plane.

    The "screen" is a collection of rays in a volume in space, as defined by the projection matrix, it doesn't exist at any single plane. Each pixel is an individual ray within that projection. The rays start at a plane orthogonal to the camera's forward vector at the near clip, and end at another orthogonal plane at the far clip.

    So, if you have a ray that's coming from the camera, and you are using a perspective projection, and you want to find out which pixel you're closest to, then you just need to transform any arbitrary point in world space along your ray into screen space.
    https://docs.unity3d.com/ScriptReference/Camera.WorldToScreenPoint.html

    Vector3 screenPos = cam.WorldToScreenPoint(ray.GetPoint(1f));

    Round the screenPos.xy values to get the pixel coordinate. The screenPos.z is just as arbitrary as the ray's GetPoint(1f) and can be ignored.

    Again, this assumes the ray's starting point is the camera position, and you're using a perspective projection. If your camera is orthographic, then the pixel's rays aren't coming from the camera.
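    (Putting that together, a sketch of the full lookup; only WorldToScreenPoint and the rounding are doing real work here:)

    using UnityEngine;

    public static class RayToPixel
    {
        // Sketch: map a ray starting at a perspective camera's position to the
        // pixel it passes through. Any point along the ray projects to the same
        // pixel, so GetPoint(1f) is as good as any other.
        public static Vector2Int PixelForRay(Camera cam, Ray ray)
        {
            Vector3 screenPos = cam.WorldToScreenPoint(ray.GetPoint(1f));

            // Round x/y to the nearest pixel; screenPos.z is arbitrary here
            // and ignored.
            return new Vector2Int(Mathf.RoundToInt(screenPos.x),
                                  Mathf.RoundToInt(screenPos.y));
        }
    }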
     
    neoshaman likes this.