Using a camera to do a fan of raycasts

Discussion in 'Shaders' started by UziMonkey, Jan 16, 2018.

  1. UziMonkey

    Joined:
    Nov 7, 2012
    Posts:
    206
    I am trying to implement an FOV cone that will scale to tens of enemies on screen at a time. The code I've seen for FOV cones uses raycasts, but to make the cone look smooth you need a very large number of raycasts per cone; I estimate each cone would need 128(!) raycasts per frame. Obviously this won't scale to tens of enemies.
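    For reference, the approach I keep seeing looks roughly like this (an illustrative sketch, not my actual code; rayCount, viewDistance, and conePoints are placeholder names):

    Code (CSharp):
    // One raycast per cone segment, swept across the field of view.
    void CastConeFan(int rayCount, float fov, float viewDistance, Vector3[] conePoints)
    {
        for (int i = 0; i < rayCount; i++)
        {
            // sweep from -fov/2 to +fov/2 in equal angular steps
            float angle = -fov * 0.5f + fov * i / (rayCount - 1f);
            Vector3 dir = Quaternion.Euler(0f, angle, 0f) * transform.forward;

            if (Physics.Raycast(transform.position, dir, out RaycastHit hit, viewDistance))
                conePoints[i] = hit.point; // blocked: the cone edge sits at the obstacle
            else
                conePoints[i] = transform.position + dir * viewDistance;
        }
    }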

    So, I thought I'd do something clever: use a camera to do 128 raycasts at once. I've set up a camera that renders to a 128x1 render texture with a fragment shader that stores each fragment's distance to the camera in the red channel. I also tried using the fragment's depth instead, but that wasn't quite working either.

    This is what it produces. It's almost correct, but there seems to be some perspective distortion going on.



    And this is what it looks like against a straight wall.



    This is the shader I'm using.

    Code (CSharp):
    Shader "FOV Depth"
    {
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                };

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float4 worldPos : TEXCOORD0;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                    return o;
                }

                // store the camera-to-fragment distance in the red channel
                float frag (v2f i) : SV_Target
                {
                    return distance(i.worldPos, _WorldSpaceCameraPos);
                }
                ENDCG
            }
        }
    }
    So there's not much going on there. Assuming the fragment world space positions are accurate, this should be doing more or less exactly what a raycast does, albeit against visible geometry rather than collision geometry.

    And the script I'm using for testing. Sorry, it's a bit messy, but the relevant methods are RenderEye and DebugDrawEye at the bottom.

    Code (CSharp):
    1. using System.Collections;
    2. using System.Collections.Generic;
    3. using UnityEngine;
    4.  
    5. namespace FovShader1
    6. {
    7.  
    8. public class FieldOfView : MonoBehaviour
    9. {
    10.     [SerializeField] private int numRays;
    11.     [SerializeField] private Shader depthShader;
    12.     [SerializeField] private Camera eye;
    13.     [SerializeField] private float fov = 90f;
    14.     [SerializeField] private float viewDistance = 5f;
    15.  
    16.     private Texture2D pixelReader;
    17.     private RenderTexture eyeRenderTexture;
    18.  
    19.     private void Awake()
    20.     {
    21.         CreateTextures();
    22.         SetupEye(eye);
    23.     }
    24.  
    25.     private void CreateTextures()
    26.     {
    27.         eyeRenderTexture = new RenderTexture(numRays, 1, 0, RenderTextureFormat.ARGBFloat, RenderTextureReadWrite.Default);
    28.         eye.targetTexture = eyeRenderTexture;
    29.  
    30.         pixelReader = new Texture2D(numRays, 1, TextureFormat.RGBAFloat, false);
    31.     }
    32.  
    33.     private void SetupEye(Camera eye)
    34.     {
    35.         eye.farClipPlane = viewDistance;
    36.         eye.nearClipPlane = 0.01f;
    37.         eye.depthTextureMode = DepthTextureMode.Depth;
    38.         eye.targetTexture = eyeRenderTexture;
    39.         eye.aspect = numRays;
    40.         eye.fieldOfView = Mathf.Rad2Deg*2*Mathf.Atan(Mathf.Tan((fov*Mathf.Deg2Rad)/2f)/eye.aspect);
    41.     }
    42.  
    43.     private void Update()
    44.     {
    45.         RenderEye(eye);
    46.         DebugDrawEye(eye);
    47.     }
    48.  
    49.     private void RenderEye(Camera eye)
    50.     {
    51.         var shadowDistance = QualitySettings.shadowDistance;
    52.         QualitySettings.shadowDistance = 0;
    53.  
    54.         eye.RenderWithShader(depthShader, null);
    55.  
    56.         QualitySettings.shadowDistance = shadowDistance;
    57.     }
    58.  
    59.     private void DebugDrawEye(Camera eye)
    60.     {
    61.         RenderTexture.active = eyeRenderTexture;
    62.         pixelReader.ReadPixels(new Rect(0, 0, numRays, 1), 0, 0);
    63.         pixelReader.Apply();
    64.  
    65.         var rot = Quaternion.Euler(0, -(fov / 2) + (fov / eyeRenderTexture.width / 2), 0);
    66.         for(int i = 0; i < eyeRenderTexture.width; i++)
    67.         {
    68.             var depth = pixelReader.GetPixel(i, 0).r;
    69.             if(depth == 0)  depth = eye.farClipPlane;
    70.  
    71.             var start = eye.transform.position;
    72.             var end = rot * Vector3.forward * depth;
    73.             end = eye.transform.TransformPoint(end);
    74.  
    75.             Debug.DrawLine(start, end, Color.red, 0f);
    76.  
    77.             rot *= Quaternion.Euler(0, fov / eyeRenderTexture.width, 0);
    78.         }
    79.     }
    80. }
    81.  
    82. }
    The project is also attached to this post.
     


  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,366
    This is the problem (line #77 above):

        rot *= Quaternion.Euler(0, fov / eyeRenderTexture.width, 0);

    That line of code (along with the initial rotation setup at #65) assumes each "ray" has a regular angular spacing. That's not how rasterization works: each ray is equidistant from the next along the view plane, not separated by an equal angle. In other words, in your second image, the points where the rays hit the wall should be evenly spaced.

    This image from Scratchapixel is a good example of how you should think about it:


    Instead of rotating a ray direction, you need to calculate the pixel positions on an arbitrary plane. Also note the left- and right-most depths are inset by half a pixel width rather than sitting exactly on the frustum edges.
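    As a minimal sketch of that mapping (halfWidth, numRays, and i are stand-in names here):

    Code (CSharp):
    // Direction through pixel i on a plane 1 unit in front of the camera.
    // Pixels are evenly spaced in x on that plane, not evenly spaced in angle,
    // and the first and last are inset by half a pixel (the i + 0.5f).
    float halfWidth = Mathf.Tan(fov * 0.5f * Mathf.Deg2Rad);
    float x = -halfWidth + (i + 0.5f) * (2f * halfWidth / numRays);
    Vector3 rayDir = new Vector3(x, 0f, 1f); // local space, depth of exactly 1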

    Going forward you might also want to understand the difference between depth and distance. You use both terms interchangeably: your shader explicitly returns a distance, but you refer to it as a depth in the script. Depth is measured along the camera's forward axis only; distance is the straight-line length from the camera's position. Not really a problem for your code, but it's an important distinction, especially for what you're trying to do (and for what I'm about to do to that code).
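    If it helps, here's that distinction in C# terms (a little sketch; cam and worldPoint are just placeholder names):

    Code (CSharp):
    // Distance: straight-line length from the camera position to the point.
    // Depth: that same offset measured along the camera's forward axis only.
    Vector3 toPoint = worldPoint - cam.transform.position;
    float dist = toPoint.magnitude;
    float depth = Vector3.Dot(toPoint, cam.transform.forward);
    // Dead ahead the two are equal; toward the frustum edges, dist > depth.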


    My recommendation is actually to have the shader return view depth rather than distance anyway. That'll make things a little simpler and faster on the C# side for reasons you’ll hopefully understand in a moment.

    Code (CSharp):
    Shader "FOV Depth"
    {
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                };

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float viewDepth : TEXCOORD0;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);

                    // view space in the shader is -z forward
                    o.viewDepth = -UnityObjectToViewPos(v.vertex).z;
                    return o;
                }

                float frag (v2f i) : SV_Target
                {
                    return i.viewDepth;
                }
                ENDCG
            }
        }
    }
    Now in the C# we need to figure out what direction each "ray" in the texture points. To do that we need to pick an arbitrary plane to project a grid onto; for simplicity, we can choose a plane at a depth of 1 unit.

    To start, let's find where the left edge of the view frustum is at that 1 unit depth. The tangent of half the FOV angle gives us half the frustum's width there, so a local space position of new Vector3(-Mathf.Tan(fov / 2f * Mathf.Deg2Rad), 0f, 1f) is that left edge.

    We then need the distance between each pixel, which is the width of the view frustum at that 1 unit depth divided by the number of pixels: Mathf.Tan(fov / 2f * Mathf.Deg2Rad) * 2f / (float)numRays. For example, with fov = 90 the half width is Tan(45°) = 1, so the plane spans x = -1 to +1, and with numRays = 128 each step is 2 / 128 ≈ 0.0156.

    Now, instead of rotating a quaternion for each ray, you just add that step to the x component of the vector. And as I mentioned earlier, the first "ray" is inset by half a pixel, so add half of that step to the x component before you start iterating over the depth values.

    Next let's talk about how to apply the depth value we're getting from the shader to the vector we just made. I mentioned before that I switched the shader from distance to depth. Here's why: if it still returned distance, we'd have to normalize the vector we calculated and then multiply it by the distance. That works, but since the vector we constructed already has a depth of exactly 1, we can simply multiply it by the depth from the shader and get the same point as if we had normalized and multiplied by the distance, for much cheaper.
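    To see the equivalence with made-up numbers (a quick sketch, not part of the script):

    Code (CSharp):
    Vector3 rayDir = new Vector3(0.5f, 0f, 1f); // depth (z) component is exactly 1
    float viewDepth = 3f;                       // value read back from the shader
    Vector3 a = rayDir * viewDepth;             // (1.5, 0, 3)

    // the normalize-then-scale-by-distance route lands on the same point
    float dist = rayDir.magnitude * viewDepth;  // |hit point| = |rayDir| * depth
    Vector3 b = rayDir.normalized * dist;       // also (1.5, 0, 3)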

    Those changes, plus some additional optimizations, get us this:
    Code (CSharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    namespace FovShader1
    {

    public class FieldOfView : MonoBehaviour
    {
        [SerializeField] private int numRays;
        [SerializeField] private Shader depthShader;
        [SerializeField] private Camera eye;
        [SerializeField] private float fov = 90f;
        [SerializeField] private float viewDistance = 5f;

        private Texture2D pixelReader;
        private RenderTexture eyeRenderTexture;

        private void Awake()
        {
            CreateTextures();
            SetupEye(eye);
        }

        private void CreateTextures()
        {
            eyeRenderTexture = new RenderTexture(numRays, 1, 0, RenderTextureFormat.ARGBFloat, RenderTextureReadWrite.Default);
            eye.targetTexture = eyeRenderTexture;

            pixelReader = new Texture2D(numRays, 1, TextureFormat.RGBAFloat, false);
        }

        private void SetupEye(Camera eye)
        {
            eye.farClipPlane = viewDistance;
            eye.nearClipPlane = 0.01f;
            eye.targetTexture = eyeRenderTexture;

            // The target is a numRays x 1 strip, so set the aspect to match, and
            // derive the (vertical) fieldOfView so the horizontal FOV equals 'fov'
            eye.aspect = numRays;
            eye.fieldOfView = Mathf.Rad2Deg*2*Mathf.Atan(Mathf.Tan((fov*Mathf.Deg2Rad)/2f)/eye.aspect);
        }

        // changed to late update so the movement will have been applied before rendering
        private void LateUpdate()
        {
            RenderEye(eye);
            DebugDrawEye(eye);
        }

        private void RenderEye(Camera eye)
        {
            // shadows are useless for a depth-only render, so disable them for this camera
            var shadowDistance = QualitySettings.shadowDistance;
            QualitySettings.shadowDistance = 0;

            eye.RenderWithShader(depthShader, null);

            QualitySettings.shadowDistance = shadowDistance;
        }

        private void DebugDrawEye(Camera eye)
        {
            RenderTexture.active = eyeRenderTexture;
            pixelReader.ReadPixels(new Rect(0, 0, numRays, 1), 0, 0);
            pixelReader.Apply();

            // Get all pixels in one call rather than multiple individual calls. Much faster
            Color[] depths = pixelReader.GetPixels(0);

            // Get the camera's world space position and direction vectors for reuse later
            var start = eye.transform.position;
            var forward = eye.transform.forward;
            var right = eye.transform.right;

            // Calculate width and steps between "rays" at 1 unit depth
            float viewHalfWidth = Mathf.Tan(fov / 2f * Mathf.Deg2Rad);
            float viewWidth = viewHalfWidth * 2f;
            float rayStepSize = viewWidth / (float)(numRays);

            // Calculate the ray step as a world space vector
            var rayStep = right * rayStepSize;

            // Calculate starting ray vector from half of the view width and inset by half a step
            var rayDir = forward - right * (viewHalfWidth - rayStepSize * 0.5f);

            for(int i = 0; i < eyeRenderTexture.width; i++)
            {
                var depth = depths[i].r;

                if(depth == 0)
                    depth = eye.farClipPlane;

                // Calculate end position by multiplying the ray vector (which has a depth of 1) by
                // the depth from the pixel value, then adding the start position.
                var end = start + rayDir * depth;

                Debug.DrawLine(start, end, Color.red, 0f);

                // Apply step vector
                rayDir += rayStep;
            }
        }
    }

    }
    And we're done.
    [Attached image: gpu raycast.png]
     
    Invertex and UziMonkey like this.
  3. UziMonkey

    Joined:
    Nov 7, 2012
    Posts:
    206
    Thank you, thank you so much! It's real late but I'll dive into this tomorrow!
     
  4. ScetticoBlu

    Joined:
    Nov 26, 2020
    Posts:
    21
    Sorry for resuming this! Very interesting topic, I'd say...
    I have a question not related to the shader itself: how can I use this kind of raycast for game logic?
    I mean, enemies will have the depth information of their surroundings, but without knowing what is a "target" and what is a "wall". How can I extend this to get useful information out of it?
    Thanks! Great visuals, by the way!
     
  5. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,366
    You could use the resulting rays to test if they overlap with the player’s position on the CPU. Or instead of rendering depth you could render the world in black and your target players as white.
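    For the first option, something along these lines could work (a sketch reusing eye, fov, numRays, and pixelReader from the script above; CanSee is a hypothetical helper, and the 0.05 bias is arbitrary):

    Code (CSharp):
    // Shadow map style test: find which depth column covers the target,
    // then compare the target's view depth against the sampled scene depth.
    bool CanSee(Vector3 targetPos)
    {
        Vector3 local = eye.transform.InverseTransformPoint(targetPos);
        if (local.z <= 0f)
            return false; // behind the eye

        float halfWidth = Mathf.Tan(fov * 0.5f * Mathf.Deg2Rad);
        float x = local.x / local.z; // position on the 1 unit plane
        if (Mathf.Abs(x) > halfWidth)
            return false; // outside the view cone

        int column = Mathf.Clamp(
            Mathf.FloorToInt((x + halfWidth) / (2f * halfWidth) * numRays),
            0, numRays - 1);

        float sceneDepth = pixelReader.GetPixel(column, 0).r;
        if (sceneDepth == 0f)
            sceneDepth = eye.farClipPlane; // nothing rendered in that column

        return local.z <= sceneDepth + 0.05f; // small bias for float precision
    }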

    But there's another question you should probably ask first … is doing this on the GPU actually faster than just doing it on the CPU?

    The answer is … probably not. It's an entertaining exercise, but mostly pointless. It'll be more efficient to do a few targeted raycasts from each enemy to the player's position (or to a few positions on the player or within their collider) to check for line of sight than to render the depth values for their entire field of view and copy them back from the GPU to the CPU.
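    In other words, something like this per enemy (a sketch; obstacleMask is a placeholder LayerMask for your level geometry):

    Code (CSharp):
    // One targeted raycast: the target is visible if nothing solid
    // sits between the eye and the target position.
    bool HasLineOfSight(Vector3 eyePos, Vector3 targetPos, float maxDistance, LayerMask obstacleMask)
    {
        Vector3 toTarget = targetPos - eyePos;
        if (toTarget.magnitude > maxDistance)
            return false; // beyond view distance

        return !Physics.Raycast(eyePos, toTarget.normalized, toTarget.magnitude, obstacleMask);
    }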
     
    ScetticoBlu likes this.