How to accurately report how much world space is seen by the bounds of the camera

Discussion in 'Scripting' started by MikePOD, Aug 28, 2019.

  1. MikePOD

    MikePOD

    Joined:
    Jul 10, 2017
    Posts:
    22
    I have an accurate Earth in my scene. What I'm trying to do is make a script that accurately reports how much of the Earth the camera is seeing. For instance, when fully zoomed out, the script would report 6378137m x 6378137m. When the camera is zoomed in on, say, Iceland, it would report 101,826m x 101,826m, or however much of that area is within the viewport. Any ideas on how to do this? Thanks.
     
  2. Munchy2007

    Munchy2007

    Joined:
    Jun 16, 2013
    Posts:
    1,735
    If you know the visible area when fully zoomed out and the visible area when fully zoomed in, couldn't you use InverseLerp to calculate the visible area for any given zoom value?
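    Something along these lines, for instance. This is just a rough sketch: the zoom limits and the two reference areas are made-up placeholder values you'd replace with your own measurements, and a plain linear interpolation is only a first approximation.
    Code (CSharp):
    using UnityEngine;

    public class VisibleAreaEstimate : MonoBehaviour
    {
        // Placeholder values - replace with your real zoom limits and the
        // visible areas you measured at those limits.
        public float minZoom = 10f;            // fully zoomed in
        public float maxZoom = 10000f;         // fully zoomed out
        public float areaZoomedIn = 1.0e10f;   // visible area (m^2) when fully zoomed in
        public float areaZoomedOut = 1.3e14f;  // visible area (m^2) when fully zoomed out

        public float EstimateVisibleArea(float currentZoom)
        {
            // Normalise the current zoom to a 0..1 value between the two extremes...
            float t = Mathf.InverseLerp(minZoom, maxZoom, currentZoom);
            // ...and interpolate between the two known areas (linear, so only a rough estimate).
            return Mathf.Lerp(areaZoomedIn, areaZoomedOut, t);
        }
    }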
     
  3. Yoreki

    Yoreki

    Joined:
    Apr 10, 2019
    Posts:
    2,605
    You could try casting 4 raycasts from the position of the camera, based on the FOV and so on, such that these raycasts accurately match the 4 lines that outline the camera's field of vision in the editor.

    When zoomed in you now get 4 collision points on a sphere - or a rectangle, depending on how you wanna look at it and use it. Using these points, you should be able to calculate the area between them using math.

    The only thing you need to find some workaround for is when you are zoomed out so much that one or more raycasts do not hit the sphere. In this case you'd have to somehow calculate or approximate how much of your FOV is covered by the sphere, so you can get your results based on that. I believe you can get the pixels an object takes up on the screen somehow in shaders, so it's probably doable.
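    Untested, but roughly something like this for casting the four corner rays (class and method names here are just placeholders):
    Code (CSharp):
    using UnityEngine;

    public class ViewCornerRays : MonoBehaviour
    {
        // Returns the world-space points where the four viewport-corner rays hit
        // the planet's collider, or null if any of them misses the sphere.
        public static Vector3[] GetCornerHits(Camera cam)
        {
            Vector2[] corners = { new Vector2(0, 0), new Vector2(1, 0), new Vector2(1, 1), new Vector2(0, 1) };
            Vector3[] hits = new Vector3[4];
            for (int i = 0; i < corners.Length; i++)
            {
                Ray ray = cam.ViewportPointToRay(corners[i]);
                if (Physics.Raycast(ray, out RaycastHit hit))
                    hits[i] = hit.point;
                else
                    return null; // this corner ray misses the sphere - needs the workaround mentioned above
            }
            return hits;
        }
    }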
     
  4. MikePOD

    MikePOD

    Joined:
    Jul 10, 2017
    Posts:
    22
    Thanks for the idea, I decided to try this out. I looked up the area of a rectangle overlaid on a sphere and came up with
    Code (CSharp):
    area = 4 * Mathf.Asin(Mathf.Tan(length / 2) * Mathf.Tan(width / 2));
    So the final code is
    Code (CSharp):
    topRight = Camera.main.ViewportPointToRay(new Vector3(1, 0, 0));
    botLeft = Camera.main.ViewportPointToRay(new Vector3(0, 1, 0));
    if (Physics.Raycast(topRight, out hitTR) && Physics.Raycast(botLeft, out hitBL))
    {
        length = Mathf.Abs(hitBL.point.x - hitTR.point.x);
        width = Mathf.Abs(hitTR.point.y - hitBL.point.y);
        area = 4 * Mathf.Asin(Mathf.Tan(length / 2) * Mathf.Tan(width / 2));
    }
    But the values I'm getting seem wildly inaccurate.
     
  5. Yoreki

    Yoreki

    Joined:
    Apr 10, 2019
    Posts:
    2,605
    Define wildly inaccurate? Too high, too low, by how much? That would be helpful when thinking about what's going wrong. Keep in mind that the area of this rectangle on a sphere is a bit bigger than the area on a flat map would be, so if it's a couple of percent off (rather than orders of magnitude), that may explain it.

    Did you draw a rectangle using your two points? That way you can check that you got the rays right.

    Also, I believe you actually need 4 points, since your camera may be rotated, which would lead you to calculate a different area than what you'd expect, wouldn't it? Imagine rotating the camera in your head. The y-distance (or width) between your points gets smaller, which decreases the overall area, if I'm not missing anything here.

    Keep in mind you can calculate the distance between two vectors by using Vector3.Distance.
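    For example, something like this in Update() would draw the two rays and the line between the hit points, and log the straight-line distance (assuming the hitTR/hitBL from your snippet are in scope):
    Code (CSharp):
    Debug.DrawLine(Camera.main.transform.position, hitTR.point, Color.green); // the two corner rays
    Debug.DrawLine(Camera.main.transform.position, hitBL.point, Color.green);
    Debug.DrawLine(hitTR.point, hitBL.point, Color.red);                      // straight line between the two hit points
    Debug.Log("Distance between hit points: " + Vector3.Distance(hitTR.point, hitBL.point));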
     
  6. MikePOD

    MikePOD

    Joined:
    Jul 10, 2017
    Posts:
    22
    So by wildly inaccurate, I mean when the camera is directly over China, it measures the distance it's seeing as 151.5287 meters. When zoomed out a bit more, it comes to 127.735 meters somehow. Zooming out a bit more gives me NaN.
    Also I'm not sure how I would utilize all four points. Subtracting minX from maxX should get me the length, and subtracting minY from maxY should get me the width. That should only take two points. What would I use the other two for?
     
  7. Yoreki

    Yoreki

    Joined:
    Apr 10, 2019
    Posts:
    2,605
    Sorry for this late answer.

    First of all, you should either use a double or, if decimal precision is not required, an int to store your area. Floats basically only have about 7 significant digits with a dynamically placed decimal point, so with numbers in your expected range you may run into precision problems, NaN, or whatever.

    As for the second point, imagine your camera view to be a rectangle with its corners labelled (x1,y1) through (x4,y4) going around it, and then imagine rotating that rectangle:

    Let's say for convenience that the old distance between x1 and x3 (length) was 20, and the old distance between y1 and y3 (width) was 10. Now that we rotated it, the distance between x1 and x3 became a lot shorter, while the other distance changed as well. This should cause problems, since we could theoretically keep rotating it until x1 and x3 have a distance of 0 (because the two corners end up directly above each other, on the same x coordinate).
    That means the distance between y1 and y3 would now be the diagonal of the rectangle, which for the example values of length/width = 20/10 should be something like ~22. For the normal rectangle area calculation with length*width, this would result in 0*22 = 0, instead of the expected 200 we'd get with 20*10.

    Using the distance (Vector3.Distance) between (x1,y1) and (x2,y2), as well as between (x1,y1) and (x4,y4), would net you the expected results of 20 and 10 for length and width, which would then result in 20*10 = 200 area.

    Other than that, I'd use width*length for a good approximation first, until we get the result right. Then you can try to calculate the actual area on a sphere, since I'm horrible at trigonometry and can't tell if the formula you are using returns the right result.
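
    Put together, a rough sketch of what I mean (the corner names are mine, and length*width is just the flat approximation):
    Code (CSharp):
    // Raycast three viewport corners (bottom-left, bottom-right, top-left) and take the
    // distances between adjacent hit points as length and width, so camera roll doesn't matter.
    Camera cam = Camera.main;
    Ray bottomLeft  = cam.ViewportPointToRay(new Vector3(0, 0, 0));
    Ray bottomRight = cam.ViewportPointToRay(new Vector3(1, 0, 0));
    Ray topLeft     = cam.ViewportPointToRay(new Vector3(0, 1, 0));

    if (Physics.Raycast(bottomLeft, out RaycastHit hitBL) &&
        Physics.Raycast(bottomRight, out RaycastHit hitBR) &&
        Physics.Raycast(topLeft, out RaycastHit hitTL))
    {
        double length = Vector3.Distance(hitBL.point, hitBR.point); // bottom edge of the visible rectangle
        double width  = Vector3.Distance(hitBL.point, hitTL.point); // left edge
        double area   = length * width;                             // flat approximation of the visible area, stored as double
        Debug.Log(length + "m x " + width + "m = " + area + "m^2");
    }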