Resolved Is it possible to position a camera based on shader vertex coordinates?

Discussion in 'Shaders' started by MMeyers23, Feb 9, 2021.

  1. MMeyers23

    MMeyers23

    Joined:
    Dec 29, 2019
    Posts:
    101
    Hi,

    So basically I have a material on a plane that is just a simple texture. It's a black background with a white dot. The texture is a render texture, though, and at runtime the white dot moves around the plane (the white dot represents the position of an object seen by a different camera). My goal is simple, but I cannot achieve it: I want another camera, facing this white dot, to move with the white dot.

    Can this be achieved?
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    Sure.

    But not how you're thinking about it. A camera's position needs to be determined before rendering, so a shader can't determine the position of the camera that's rendering it.

    You can, however, move the plane itself with the shader, assuming you know where the object is going to be seen. But probably the easiest option would be to calculate the screen space position of the object in the render texture camera's view in C#, and align the second camera to that relative position on the plane, again in C#.
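
    A rough sketch of that second approach (rtCam, target, plane, and closeUpCam are all placeholder names for your own objects, and the viewport-to-plane mapping assumes Unity's default 10x10 Plane mesh):
    Code (csharp):
    // sketch only: rtCam renders the dot into the render texture, target is the
    // object the dot represents, plane shows the texture, closeUpCam follows the dot
    Vector3 vp = rtCam.WorldToViewportPoint(target.position); // (0,0) bottom left, (1,1) top right

    // map the viewport position onto the plane's local extents; Unity's default
    // Plane mesh is 10x10 units, so local X/Z run from -5 to +5
    // (axes may need swapping depending on how the plane is oriented)
    Vector3 localOnPlane = new Vector3((vp.x - 0.5f) * 10f, 0f, (vp.y - 0.5f) * 10f);
    Vector3 worldOnPlane = plane.transform.TransformPoint(localOnPlane);

    // hover the second camera a short distance above that point, looking straight down at it
    float hoverDistance = 0.5f;
    closeUpCam.transform.position = worldOnPlane + plane.transform.up * hoverDistance;
    closeUpCam.transform.rotation = Quaternion.LookRotation(-plane.transform.up, plane.transform.forward);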
     
  3. MMeyers23

    MMeyers23

    Joined:
    Dec 29, 2019
    Posts:
    101
    Unfortunately, the object that is represented by the white dot is actually the sun, and it is not an object but part of the skybox texture. I have a camera that renders the whole scene black except for this sun (a white circle). Since everything in the scene is black except this sun, I get a white circle that becomes half a white circle (or 1/4, etc.) when an object partially occludes it from this camera's view. But ultimately I don't want this white circle (or half circle, etc.) moving around the black background; I want it always at the center and taking up the entire texture (i.e. fitting the square render texture as tightly to the size of the circle as possible). My solution was going to be to have another camera moving with the white dot (drawn onto a plane) so that it was always at the center (and taking up the entirety) of this FINAL camera's view.

    Bottom line: the final image of the white circle will be reduced to its lowest mipmap level to give me a value between white and black, depending on whether the sun is entirely visible or entirely occluded. It is an occlusion buffer that will drive the behavior of a lens flare. I have everything working except the movement of this third camera.
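
    For reference, the mip read-back end of that could look something like this (a sketch with assumed names: occlusionRT is a square power-of-two render texture created with useMipMap and autoGenerateMips enabled, flareBrightness is a placeholder for whatever drives the flare, and AsyncGPUReadback needs platform support):
    Code (csharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // index of the 1x1 mip, assuming a square power-of-two render texture
    int lastMip = (int)Mathf.Log(occlusionRT.width, 2f);

    // read that single averaged pixel back without stalling the GPU
    AsyncGPUReadback.Request(occlusionRT, lastMip, request =>
    {
        if (request.hasError) return;
        // red channel = fraction of the sun still visible, 0 (occluded) to 1 (visible)
        flareBrightness = request.GetData<Color32>()[0].r / 255f;
    });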
     
  4. MMeyers23

    MMeyers23

    Joined:
    Dec 29, 2019
    Posts:
    101
    I want a camera to face that black screen up close so that the white dot takes up its full view, and I want that camera to move around so that the moving white dot always stays at the very center of its view.
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    The sun in the skybox is based on the main directional light’s angle. If you know that, you can get the exact position of it on screen, because it’ll always be that direction relative to the camera.

    Put your render texture camera at your main camera's position, and aim it in the opposite direction the main directional light is pointing, using Quaternion.LookRotation:
    Code (CSharp):
    renderTexCam.transform.rotation = Quaternion.LookRotation(-mainDirectionalLight.forward, Vector3.up);
     
  6. MMeyers23

    MMeyers23

    Joined:
    Dec 29, 2019
    Posts:
    101
    I am using a custom skybox shader that uses a cubemap; the location of the sun is unfortunately independent of the directional light :(. I think I can achieve this, though, if I can look up where in my texture the white pixels are. I am trying to figure out how color arrays work. I figure if I can get an answer like "the white pixels are between x,y and x,y" in some space, then I can eventually work with that to map the movement of the camera. I have no idea how to find pixel locations based on their color, though. What do you think about this?
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    If it’s a cube map, then you can still easily get the direction. Open up the cube map in an external program, then get the pixel position and calculate the direction vector from that. Then use that in the above bit of code. For example, if you have a layout like one of these:

    [image: standard cube map layouts, with each face labeled by the axis it corresponds to (+X, -X, +Y, -Y, +Z, -Z)]

    Use +1 or -1 on the component for the axis named in the above diagram. Then the other two components are the pixel's distance from the center of that face, divided by half the face resolution; or, measuring from a corner of the face instead, the pixel position divided by half the face resolution, minus 1.

    So, let's say your cube map is 512x512 pixels per face. The sun appears on the -Z face at 405, 322 pixels from the bottom right corner of that face. The vector you want to use in the above code is:
    Code (csharp):
    Vector3 sunDir = new Vector3(
        (405.0f / 256.0f) - 1f, // 256 == 512 / 2
        (322.0f / 256.0f) - 1f,
        -1f // -z face
    );
    Which components the two pixel positions map to will change depending on the face, so you might need to do some trial and error to figure out which ones to use. Some might be flipped as well, including in my example, so you may need to use the negative of the value you calculate for a component. It shouldn't be too hard to figure out, especially if you keep the render texture camera's view wide while you're getting it positioned, so you can see which direction it's moving.
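
    If it helps, here is that mapping written out as one helper. This is a sketch: the axis orientations follow the standard cube map face convention (with x, y measured from the top left of a face), so per the caveat above, individual faces may still need a component flipped:
    Code (csharp):
    Vector3 CubeFacePixelToDirection(CubemapFace face, float x, float y, float faceSize)
    {
        float half = faceSize * 0.5f;
        float u = (x / half) - 1f; // -1 .. +1 across the face
        float v = (y / half) - 1f;
        switch (face)
        {
            case CubemapFace.PositiveX: return new Vector3( 1f, -v, -u);
            case CubemapFace.NegativeX: return new Vector3(-1f, -v,  u);
            case CubemapFace.PositiveY: return new Vector3( u,  1f,  v);
            case CubemapFace.NegativeY: return new Vector3( u, -1f, -v);
            case CubemapFace.PositiveZ: return new Vector3( u, -v,  1f);
            default:                    return new Vector3(-u, -v, -1f); // NegativeZ
        }
    }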

    If you have an equirectangular image that you're letting Unity convert to a cube map, you can still do the same kind of thing. Find the position on the texture where the sun appears, and then do this:
    Code (csharp):
    Vector3 sunDir = new Vector3(
        Mathf.Sin((916.0f / 2048.0f) * Mathf.PI * 2.0f), // texture is 2048 pixels wide, Sin takes radians
        (653.0f / 512.0f) - 1.0f, // texture is 1024 pixels high, 1024 / 2 = 512
        Mathf.Cos((916.0f / 2048.0f) * Mathf.PI * 2.0f) // same angle as x, but using cosine
    );
    I might have the sine and cosine swapped, but that should get you close, as long as you're not rotating the skybox.
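
    If you want the exact version, note that the shorthand above skips the cos(latitude) scaling of the horizontal components, which matters more the further the sun is from the horizon. A sketch, assuming v = 0 is the bottom row of the texture and the skybox isn't rotated:
    Code (csharp):
    // equirectangular pixel position -> world direction
    Vector3 EquirectPixelToDirection(float px, float py, float width, float height)
    {
        float longitude = (px / width) * Mathf.PI * 2f;      // 0 .. 2pi around
        float latitude = ((py / height) - 0.5f) * Mathf.PI;  // -pi/2 .. +pi/2 up
        float cosLat = Mathf.Cos(latitude);
        return new Vector3(
            Mathf.Sin(longitude) * cosLat,
            Mathf.Sin(latitude),
            Mathf.Cos(longitude) * cosLat);
    }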
     
  8. MMeyers23

    MMeyers23

    Joined:
    Dec 29, 2019
    Posts:
    101
    Thanks for the reply! I am having trouble understanding the concept. For example, why would I want to point my render texture camera directly toward the sun? The render texture camera is used to recreate the exact same view as the regular camera, but with everything rendered black except for the sun. If I point it in a different direction than the main camera, then I can no longer achieve this.

    I am definitely interested in finding the screen position of the sun (in reference to the main camera's view). I could definitely use this value to map out where the third camera should position itself relative to the plane that is displaying the render texture. I unfortunately need more explanation, though, as I am still very confused. I have an equirectangular image that is being converted to a cube map. I must admit I do not understand the application of the math used in your code, although I have no doubt it is correct. How would I derive the screen position? Also, I noticed that as the camera moves extreme distances without rotating, the sun gradually changes its position on screen. This is in contrast to the editor, where it is always in the same spot if I don't rotate, regardless of how much I change position. This makes me question whether I even grasp how skyboxes work.

    An easier concept for me to grasp is to look up the coordinates of a white pixel on the render texture. Theoretically, couldn't I loop through the color array of the render texture, and if a pixel's color is white, return which pixel it is in the array, then do some math to find where that position is in UV coordinates?
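
    Something like this is what I have in mind (a rough sketch; rt is the render texture, which has to be copied into a readable Texture2D first):
    Code (csharp):
    Texture2D tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    tex.Apply();
    RenderTexture.active = null;

    Color32[] pixels = tex.GetPixels32(); // laid out left to right, bottom to top
    for (int i = 0; i < pixels.Length; i++)
    {
        if (pixels[i].r > 200) // "white enough"
        {
            Vector2 uv = new Vector2((i % rt.width) / (float)rt.width,
                                     (i / rt.width) / (float)rt.height);
            break;
        }
    }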
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    Sure, but that's stupidly slow to do. A 1920x1080 image is >2 million pixels.

    (note, not calling you stupid here, just using it as a qualifier on how slow it is. It's a totally logical and straightforward way to think about the problem.)


    And it's completely unnecessary, because you already know where the sun is on screen. That handy vector code above gives you the relative world direction; you can place a point at some position in front of the camera (multiply the direction vector by 10 and add the camera position) to make sure it's past the near plane, and use the camera.WorldToViewportPoint function to get the normalized viewport coordinate.
    Code (csharp):
    // a world position at the center of where the sun will be visible in the camera view
    Vector3 sunWorldPosition = sunDir * 10f + camera.transform.position;

    // the (0.0, 0.0) bottom left to (1.0, 1.0) top right position of the sun on screen
    Vector3 sunViewportPos = camera.WorldToViewportPoint(sunWorldPosition);
    There's also no reason to re-render the entire camera view. Pointing a camera in the right direction with a narrow FOV is going to get you about the same relative coverage estimation as re-rendering the entire camera view and grabbing the portion of the screen the sun is at. Plus it works even if the sun is off screen, and it's cheaper because you can render at a much lower resolution.
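
    That simpler version is just something like this (occlusionCam, mainCam, and occlusionRT being placeholder names for whatever you've set up, with sunDir from the earlier code):
    Code (csharp):
    // a dedicated low resolution occlusion camera that just stares at the sun
    occlusionCam.transform.position = mainCam.transform.position;
    occlusionCam.transform.rotation = Quaternion.LookRotation(sunDir, Vector3.up);
    occlusionCam.fieldOfView = 5f; // narrow enough that the sun disc fills most of the view
    occlusionCam.targetTexture = occlusionRT; // e.g. a 32x32 render texture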

    If you really, really want it to be a perfect match, you could calculate a shifted perspective matrix.
    https://medium.com/@hirumakazuya/im...spective-projection-on-the-unity-c9472a94f083
    To match what that article is doing, you could put a quad on the screen and use its corner positions for the math. You don't really need the quad at all, but it might make debugging a little easier.

    Take a quad mesh and make it a child of your camera.
    Code (csharp):
    // get the camera space position of the sun
    // sunDir from the above example code
    Vector3 quadPosition = camera.transform.InverseTransformDirection(sunDir);

    // scale it so it sits just past the near clip plane
    quadPosition *= (camera.nearClipPlane + 0.001f) / quadPosition.z;

    // move the quad to that local position, assuming it's a child of the camera game object
    quad.transform.localPosition = quadPosition;

    // not really needed, just make sure the quad's local rotation is zeroed out before you start
    // quad.transform.localRotation = Quaternion.identity;
    Then manually scale the quad to a size that covers the sun, and you'll have a visual representation of the corners to use for the perspective projection's near plane.
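
    For completeness, building the shifted matrix from those corners could look something like this (a sketch, not the article's exact code: it assumes Unity's built-in Quad mesh, whose corners sit at +/-0.5 in local X/Y):
    Code (csharp):
    // the quad's bottom left and top right corners in camera space
    Vector3 bl = camera.transform.InverseTransformPoint(quad.transform.TransformPoint(new Vector3(-0.5f, -0.5f, 0f)));
    Vector3 tr = camera.transform.InverseTransformPoint(quad.transform.TransformPoint(new Vector3(0.5f, 0.5f, 0f)));

    // project the corner offsets onto the near plane and build an off-center frustum
    float near = camera.nearClipPlane;
    float scale = near / bl.z;
    camera.projectionMatrix = Matrix4x4.Frustum(bl.x * scale, tr.x * scale, bl.y * scale, tr.y * scale, near, camera.farClipPlane);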


    If you really are set on doing it the way you already are: that WorldToViewportPoint example I gave above is the relative UV position on the texture where you'll find the sun, as well as the relative position on the quad (from the bottom left corner) where you'd need to put the other camera to see it.
     
    Last edited: Feb 10, 2021
  10. MMeyers23

    MMeyers23

    Joined:
    Dec 29, 2019
    Posts:
    101
    Thank you once again, that does work very well. I appreciate you going into detail for me!