
Question Looking for assistance with Scanning function

Discussion in 'Scripting' started by RobertFitzgibbon, Aug 31, 2023.

  1. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    I'm in the process of creating a 3D scanning function. It works fine if I don't add rotation to the mix (a 2D scan), but once I rotate the object I can't seem to produce anything but a spherical scan.

    The function works as follows, or at least is intended to: I have a 'resolution' variable, so to speak, called Scanner_Pixel_Size. When the scan starts, we begin at the top of the screen, move down one resolution unit, move over one resolution unit, raycast the object, place a new GameObject at the hit position, and child it to the scanned object. Then the object spins one degree, we raycast again, and so on until the entire screen is covered. In theory, as the scanned object rotates, the child GameObjects should stay put with it, but for some reason something else is going on.

    I have two pictures below to help capture the issue I'm facing. The GameObjects located at the hit points are shown as yellow dots. I've also posted the majority of the script below -

    Code (CSharp):
    1. for (float zhigh = 12; zhigh >= 0; zhigh = zhigh - Scanner_Pixel_Size) {
    2.             ScanCam.transform.position = new Vector3(ScanCam.transform.position.x, zhigh, ScanCam.transform.position.z);
    3.  
    4.             for (float xhigh = 12; xhigh >= -12; xhigh = xhigh - Scanner_Pixel_Size) {
    5.                 ScanCam.transform.position = new Vector3(xhigh, ScanCam.transform.position.y, ScanCam.transform.position.z);
    6.              
    7.                 RotatingPlatform.transform.rotation = Quaternion.identity;
    8.                 for (float r = 0; r < 360; r = r + Scanner_Pixel_Size) {
    9.  
    10.                     int layer_mask = LayerMask.GetMask("Imported");
    11.  
    12.                     RaycastHit hit;
    13.  
    14.                     Vector3 noPos = Vector3.forward;
    15.                     Ray ray = ScanCam.ScreenPointToRay(noPos);
    16.  
    17.                     if (Physics.Raycast(ray, out hit, 100.0f, layer_mask)) {
    18.                          
    19.                         if (hit.transform.gameObject.layer == 6) {
    20.  
    21.                             Renderer rend = hit.transform.GetComponent<Renderer>();
    22.                             MeshCollider meshCollider = hit.collider as MeshCollider;
    23.                             Texture2D tex = rend.material.mainTexture as Texture2D;
    24.                             Vector2 pixelUV = hit.textureCoord;
    25.                             pixelUV.x *= tex.width;
    26.                             pixelUV.y *= tex.height;
    27.  
    28.                             GameObject g = Instantiate(Blank,hit.point,RotatingPlatform.transform.rotation, RotatingPlatform.transform);
    29.                             Color ScanColor = tex.GetPixel((int)pixelUV.x, (int)pixelUV.y);
    30.                             AllHits.Add(g);
    31.                             AllColors.Add(ScanColor);
    32.  
    33.                         }
    34.                     }
    35.  
    36.                     RotatingPlatform.transform.rotation = Quaternion.Euler(RotatingPlatform.transform.eulerAngles.x,r,RotatingPlatform.transform.eulerAngles.z);
    37.              
    38.                 }
    39.                 RotatingPlatform.transform.rotation = Quaternion.identity;
    40.          
    41.             }
    42.      
    43.          
    44.      
    45.         }
    [Attached images: one.png, two.png]
     
  2. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    36,563
    That sounds to me like it is...

    Time to start debugging! Here is how you can begin your exciting new debugging adventures:

    You must find a way to get the information you need in order to reason about what the problem is.

    Once you understand what the problem is, you may begin to reason about a solution to the problem.

    What is often happening in these cases is one of the following:

    - the code you think is executing is not actually executing at all
    - the code is executing far EARLIER or LATER than you think
    - the code is executing far LESS OFTEN than you think
    - the code is executing far MORE OFTEN than you think
    - the code is executing on another GameObject than you think it is
    - you're getting an error or warning and you haven't noticed it in the console window

    To help gain more insight into your problem, I recommend liberally sprinkling
    Debug.Log()
    statements through your code to display information in realtime.

    Doing this should help you answer these types of questions:

    - is this code even running? which parts are running? how often does it run? what order does it run in?
    - what are the names of the GameObjects or Components involved?
    - what are the values of the variables involved? Are they initialized? Are the values reasonable?
    - are you meeting ALL the requirements to receive callbacks such as triggers / colliders (review the documentation)

    Knowing this information will help you reason about the behavior you are seeing.

    You can also supply a second argument to Debug.Log(), and when you click the message it will highlight the object in the scene, such as
    Debug.Log("Problem!",this);


    If your problem would benefit from in-scene or in-game visualization, Debug.DrawRay() or Debug.DrawLine() can help you visualize things like rays (used in raycasting) or distances.
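    Applied to the scanner in the first post, a per-ray visualization might look like the sketch below (ScanCam, noPos, and layer_mask are the names from that snippet; this is only an illustration, not a drop-in fix):

    Code (CSharp):
        Ray ray = ScanCam.ScreenPointToRay(noPos);
        bool didHit = Physics.Raycast(ray, out RaycastHit hit, 100.0f, layer_mask);
        // Draw the ray in the Scene view for 2 seconds: green if it hit, red if it missed.
        Debug.DrawRay(ray.origin, ray.direction * 100f, didHit ? Color.green : Color.red, 2f);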

    You can also call Debug.Break() to pause the Editor when certain interesting pieces of code run, and then study the scene manually, looking for all the parts, where they are, what scripts are on them, etc.

    You can also call GameObject.CreatePrimitive() to emplace debug-marker-ish objects in the scene at runtime.

    You could also just display various important quantities in UI Text elements to watch them change as you play the game.

    Visit Google for how to see console output from builds. If you are running a mobile device you can also view the console output. Google for how on your particular mobile target, such as this answer for iOS: https://forum.unity.com/threads/how-to-capturing-device-logs-on-ios.529920/ or this answer for Android: https://forum.unity.com/threads/how-to-capturing-device-logs-on-android.528680/

    If you are working in VR, it might be useful to make your own onscreen log output, or integrate one from the asset store, so you can see what is happening as you operate your software.

    Another useful approach is to temporarily strip out everything besides what is necessary to prove your issue. This can simplify and isolate compounding effects of other items in your scene or prefab.

    If your problem is with OnCollision-type functions, print the name of what is passed in!

    Here's an example of putting in a laser-focused Debug.Log() and how that can save you a TON of time wallowing around speculating what might be going wrong:

    https://forum.unity.com/threads/coroutine-missing-hint-and-error.1103197/#post-7100494

    "When in doubt, print it out!(tm)" - Kurt Dekker (and many others)

    Note: the
    print()
    function is an alias for Debug.Log() provided by the MonoBehaviour class.
     
  3. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32

    Thanks for the suggestion Kurt, I've been trying since yesterday but I haven't had any luck in finding where I'm going wrong unfortunately.
     
  4. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    36,563
    Strip it down to a simpler form, something that does only (for example) one of the angles around the circle (like 0, instead of all 360 degrees).

    The idea is by stripping it down you can produce a small set of output debug data that can be glanced all-at-once and reasoned about.
     
  5. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    If I'm not mistaken(which I usually am), you want the spheres to go around the object and map out its vertices? Basically getting the data and putting it into a matrix?


    wow, my head almost exploded trying to comprehend that idea. Not saying it's impossible, just extremely complicated, as it would be far simpler to just get the mesh and extract its data.

    So what exactly is the goal of this?
     
    RobertFitzgibbon likes this.
  6. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    You're exactly correct! Basically, I'm recreating the outer skin of the mesh via rays and GameObjects. The end goal is to get the color of each pixel, which I have working, and then keep the physical world-space location of that pixel for later use. So I figured I have two approaches: either raycast, or, as you mentioned, get the mesh data directly. But if I go the mesh-data route, I'm not sure how to get the physical location of each pixel, if that makes sense.
     
  7. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    Technically, that makes no sense, as a pixel is just part of the texture that's skinned onto said mesh using UVs - which, like you mention, is just a color. I thought you were looking for positional data.

    Not sure, as my memory is failing me, but I think there is a way to raycast and return a pixel color, from a skinned mesh. I'll have to dig into that some.
     
  8. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
  9. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    I have that part completed - I can get the pixel color. Let me re-explain; maybe this will help:
    1) Look at the mesh texture and gather every distinct color (in this case it's just white) via GetPixels.
    2) Raycast the mesh and get the pixel color at the hit point, like the link you shared.
    3) Take the hit point from step 2, find which color from step 1 it matches, and track the hit point's location by spawning a GameObject there.
    4) For each hit point (now a GameObject) collected per color, I then need a separate object (not related to this problem at all) to move toward its location - this is why I need the physical location of each pixel (each mesh triangle with that color pixel applied). I know that doesn't technically translate, but I hope I can provide enough understanding to make it sound right haha

    So for example, if I had a mesh with a texture applied that's half red, half blue - the script would separate the locations of the red/blue (in physical Unity space), and then another object could come in and interact with each blue hit point if I wanted, etc.
     
    Last edited: Aug 31, 2023
  10. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    Simplistically, if it's just red/blue, I would make two lists -
    List<Vector3> red
    - and another for blue. Then the raycast that gets the color also gets the position, and if the color matches, the appropriate list adds that Vector3 hit point.

    However, I feel like you may mean hundreds of colors in your final result, so then I would use a list of lists and keep track of which parent-list index represents which color. That mapping comes from reading the texture and your GetPixel results (index 0 = red, 1 = blue, ..., 57 = greenish blue, etc...).
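    A sketch of that bookkeeping, using a Dictionary keyed by color instead of a list of lists (hitsByColor and RecordHit are hypothetical names, not from the thread):

    Code (CSharp):
        Dictionary<Color, List<Vector3>> hitsByColor = new Dictionary<Color, List<Vector3>>();

        void RecordHit(Color c, Vector3 point)
        {
            // Create the bucket for this color the first time we see it.
            if (!hitsByColor.TryGetValue(c, out List<Vector3> points))
            {
                points = new List<Vector3>();
                hitsByColor[c] = points;
            }
            points.Add(point);
        }

    One caveat: exact Color equality can be too strict for sampled textures, so quantizing each channel before using the color as a key may be necessary.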

    Again, sounds very complicated.. lol, but if there is a will, there is a way.
     
  11. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    Well, that is pretty much what I have going now. The problem comes when I add rotation (on only one axis) to the scan: I'm not good enough at math to figure out how to translate the hit-point Vector3 to its correct spot relative to the rotation of the hit object. This is why I spawn GameObjects at the Vector3 hit points and child them to the object, in hopes they would move with it while it finishes rotating 360 degrees. If I scan at 0 degrees with no rotation, it works great, as in the first attached picture. But once I turn the object 1 degree, the previous hit points are no longer in the correct location, or something like that. When I pause the scene and manually rotate the object, the points do move with it, and this is where I get confused.

    All those yellow dots are raycast hit points converted to GameObjects and drawn as gizmos on screen. How can they be in mid-air around the house in a spherical shape? That shouldn't be possible. The house has a MeshCollider, and a ray can't hit where there's no collision, so to speak. Yet in code they are all spawned at the hit-point position - how can a hit-point position not be in contact with the object? I believe each one was in contact at some point in time, given the camera moving top to bottom and right to left and the object spinning, but for some reason they're not moving with the object, and it comes out as the spherical-looking shape in the second picture.

    Also, it's not just something like the gizmos failing to update - I literally have GameObjects in mid-air around the house object.
     
    Last edited: Aug 31, 2023
  12. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    This is because you're only getting the world position of the hits, so if the scan object rotates, they're no longer at those positions. You would have to use the positions in the scanned object's local space. So either child the gizmos to the scan object, or create an empty object at the scan object's root position, math out the local points from the scan object, and child the gizmos to the empty object - any child will rotate as its parent does.

    Which leads me to ask: why does it need to rotate at all? Trying to keep track of those positions, modify them to rotate relative to distance, and re-update the list with that sounds like a nightmare to do...

    But for sure, using local space and childing them would be far easier than doing the math.
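    A minimal sketch of the local-space idea, assuming RotatingPlatform is the spinning parent as in the first post:

    Code (CSharp):
        // Convert the world-space hit point into the platform's local space at scan time...
        Vector3 localHit = RotatingPlatform.transform.InverseTransformPoint(hit.point);
        // ...and later recover the world position for whatever the platform's current rotation is.
        Vector3 worldHit = RotatingPlatform.transform.TransformPoint(localHit);

    Childing a GameObject to the platform does the same bookkeeping implicitly, since a child's localPosition is fixed relative to its parent.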
     
  13. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    Yeah, but doesn't creating a GameObject and then setting its parent to the spinning object do just that? I'm not gizmo'ing the hit positions, I'm gizmo'ing the GameObject positions, which change when the rotating parent rotates - so how do I end up with hit-point GameObjects floating in space? I would understand if I took the hit-point position and it never changed, but these are GameObjects that are children of the spinning object; how does one end up in space? In theory, when the hit point is made, the GameObject is made and its parent set to the spinning object, so how could it ever not be in contact with the spinning object?
     
  14. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    I must have read over that, lol, but surely if the gizmos are childed to the scan object, and you don't have any other code manipulating their positions, they should easily stay with the parent.

    Yes, once an object is childed to another object, it stays with it locally. Are you sure no other code is also playing with their positions?
     
  15. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    The only code I have referring to movement is moving the camera and rotating the parent of the point positions - lines #2, 5, 7, 36, and 39 in the code above. No Rigidbodies, no physics other than a MeshCollider; everything but drawing the gizmos themselves is in the code above. The gizmo drawing is just below that: foreach GameObject in AllHits, draw a gizmo.
     
    Last edited: Aug 31, 2023
  16. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    I just made a test scene, just to make sure I wasn't crazy, with a cube that spins and another object that shoots a ray at it and spawns a small prefab at the hit position:
    Code (CSharp):
    public class RaycastSpin : MonoBehaviour
    {
        public Transform spinner;
        public GameObject objPrefab;
        int timer;

        private void Start()
        {
            Application.targetFrameRate = 60;
        }

        void Update()
        {
            spinner.Rotate(0, 1.0f, 0);
            timer++;

            Ray ray = new Ray(transform.position, -transform.right);
            Debug.DrawRay(ray.origin, ray.direction * 10, Color.yellow);

            if (timer > 60)
            {
                if (Physics.Raycast(ray, out RaycastHit hit, 100))
                {
                    Instantiate(objPrefab, hit.point, Quaternion.identity, spinner);
                }
                timer = 0;
            }
        }
    }
    And it does exactly what it looks like: it spawns the prefab at the hit position while childing it to the spinning object, and the child prefabs move perfectly with the spinning object.

    So I'm not sure what you have different, but in all reality, it works just fine.
     
  17. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    Well, after doing some comparisons with yours, I noticed the only real difference was the for loop. When I insert a for loop into your Update in place of the timer, it breaks the script, and you too will get the spherical shape. So thank you so much for helping solve it! Do you know why the for loop breaks it?
     
  18. zulo3d

    zulo3d

    Joined:
    Feb 18, 2023
    Posts:
    510
    Your scanner will only work with basic convex objects.

    Instead of scanning the object with raycasts to get a world position and color, it's possible to scan the texture of the object and convert the UV positions into world positions using barycentric coordinates. This is how light mappers work. It would be 100x more efficient.
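    As a rough sketch of that UV-to-world conversion (all names here are hypothetical: p0/p1/p2 and uv0/uv1/uv2 would be one triangle's vertex positions and UVs pulled from the mesh, and uv the texel being converted):

    Code (CSharp):
        // Solve for barycentric weights of 'uv' within the triangle (uv0, uv1, uv2).
        Vector2 d1 = uv1 - uv0, d2 = uv2 - uv0, dp = uv - uv0;
        float denom = d1.x * d2.y - d2.x * d1.y; // 2D cross product; near zero means a degenerate UV triangle
        float w1 = (dp.x * d2.y - d2.x * dp.y) / denom;
        float w2 = (d1.x * dp.y - dp.x * d1.y) / denom;
        float w0 = 1f - w1 - w2;
        // 'uv' lies inside this triangle only when all three weights are in [0, 1].
        Vector3 localPos = w0 * p0 + w1 * p1 + w2 * p2;
        // Then transform.TransformPoint(localPos) gives the world-space position of that texel.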

    But perhaps you're just doing this for some sort of visual effect and aren't too bothered about accuracy?
     
  19. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    Not sure, as I've not tested that yet. But I could easily assume it has to do with timing; for loops are very fast, so maybe something doesn't get "set" right before the next iteration happens.
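    One known Unity behavior that matches this timing hypothesis: unless Physics.autoSyncTransforms is enabled in the project, Transform changes made mid-frame aren't pushed to the physics engine until the next physics update, so every raycast inside a single-frame for loop can query the collider at its stale, pre-rotation pose. A hedged sketch of the possible fix, applied to the rotation line from the first post:

    Code (CSharp):
        RotatingPlatform.transform.rotation = Quaternion.Euler(RotatingPlatform.transform.eulerAngles.x, r, RotatingPlatform.transform.eulerAngles.z);
        // Push the new pose to the physics engine so the next Physics.Raycast sees it.
        Physics.SyncTransforms();

    This is only a guess at the cause, but it is cheap to verify: add the one call after each rotation and re-run the scan.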

    I'll play around with it, and see what I can figure out.
     
  20. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    Why will it work with only basic convex objects? Thank you for sharing that though, this will help tremendously!
     
  21. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    If something isn't convex, collisions won't happen - basically, the area to check against is inside-out if it's not convex.
     
  22. RobertFitzgibbon

    RobertFitzgibbon

    Joined:
    Mar 2, 2015
    Posts:
    32
    How would I go about separating triangles that share a UV point if I'm using barycentric coordinates?
     
  23. zulo3d

    zulo3d

    Joined:
    Feb 18, 2023
    Posts:
    510
    It won't work with overlapping UVs.