
How to ensure newly instantiated objects are ready in time?

Discussion in '2D' started by lordumbilical, Dec 5, 2022.

  1. lordumbilical

    lordumbilical

    Joined:
    May 24, 2020
    Posts:
    38
    I am instantiating some objects and setting their positions (using transform.position), but then if I immediately run a Collider2D overlap check, it detects that all the objects are still at the origin... the new positions haven't taken effect, even though I can print the positions to the console correctly. I have to run the overlap detection a full frame later to get proper detection, as if the physics system is lagging behind. Can anyone explain why, or how I can avoid this? It's very inconvenient to write in some code to wait after creating objects before I can start using them properly. I'm guessing I'm making a beginner mistake, but I can't find anything about this in the manual or other help topics. Many thanks.
     
  2. MelvMay

    MelvMay

    Unity Technologies

    Joined:
    May 24, 2013
    Posts:
    11,486
    Ignoring the fact that you should not use the Transform to change the position of colliders: when you change a Transform, only the Transform changes. Yep, nothing else in Unity knows about this, including renderers, physics or anything else, until that system runs. Renderers read it when they render. Physics knows about it when the simulation runs (by default during the FixedUpdate, not per-frame) and is forced to recreate the collider at the new position. There is no internal callback that notifies other systems when the Transform changes. It has been like this for many, many years, to remove side-effects (for Jobs) when changing Transforms, to remove hidden costs, etc.

    You should be using a Rigidbody2D to change position/rotation. Colliders don't move; a Rigidbody2D does. The Transform is not the authority on position/rotation; the Rigidbody2D is, which is why it writes to the Transform.
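    Something like this is what that looks like (a minimal sketch, assuming a Rigidbody2D on the object; "targetPosition" is just a placeholder field):

    using UnityEngine;

    public class MoveViaBody : MonoBehaviour
    {
        public Vector2 targetPosition; // placeholder target, set however you like

        Rigidbody2D body;

        void Awake()
        {
            body = GetComponent<Rigidbody2D>();
        }

        void FixedUpdate()
        {
            // Move through the physics system so colliders and queries stay in sync.
            body.MovePosition(targetPosition);
        }
    }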

    So, as to your problem, there's a position/rotation argument on the Instantiate method for a reason. This sets the Transform before any other components are activated. When the object is finally activated, things like physics (and other stuff) can use the Transform as the initial pose.

    In short: pass the position/rotation when you instantiate. The same goes for specifying a parent Transform should you need it.
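    For example (a minimal sketch; "wallPrefab" and "spawnPoint" are placeholder names):

    using UnityEngine;

    public class Spawner : MonoBehaviour
    {
        public GameObject wallPrefab; // placeholder prefab reference
        public Transform spawnPoint;  // placeholder spawn location

        void SpawnSection()
        {
            // Position/rotation (and optionally a parent) are applied before the clone activates,
            // so physics sees the correct pose from the very first simulation step.
            Instantiate(wallPrefab, spawnPoint.position, spawnPoint.rotation, transform);
        }
    }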
     
  3. lordumbilical

    lordumbilical

    Joined:
    May 24, 2020
    Posts:
    38
    Thank you. That worked immediately. 8 hours wasted because of my ignorance... but now I've learnt something. Cheers.
     
  4. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,735
    Melv is really good that way. :)

    To build on top of what Melv says, Unity is super-ultra-awesome because it almost always functions in a single-threaded way. Almost everything you do is RIGHT NOW, and when it comes back you're good to go.

    The main exceptions are Destroy() and Load/UnloadScene() stuff: those happen at end of frame, and of course any of the explicitly async calls (network, etc.)
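    A quick sketch of the Destroy() exception (the object sticks around until the end of the frame it was destroyed in):

    using UnityEngine;

    public class DestroyTiming : MonoBehaviour
    {
        void Start()
        {
            GameObject temp = GameObject.CreatePrimitive(PrimitiveType.Cube);
            Destroy(temp);

            // Destroy is deferred, so this still prints False on the same frame;
            // by the next frame the object is gone.
            Debug.Log(temp == null);
        }
    }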

    This might also help expand your brain a bit:

    Here is some timing diagram help:

    https://docs.unity3d.com/Manual/ExecutionOrder.html

    Two good discussions on Update() vs FixedUpdate() timing:

    https://jacksondunstan.com/articles/4824

    https://johnaustin.io/articles/2019/fix-your-unity-timestep
     
  5. lordumbilical

    lordumbilical

    Joined:
    May 24, 2020
    Posts:
    38
    Thanks, Kurt-Dekker.

    In a similar vein (now I've passed that hurdle): these same objects are being serialized and saved to file. Upon reloading, I find that any objects that were moving when saved are all disjointed and spread apart, as if they were saved at different points in time. I'm guessing this is the same issue: I'm saving and reloading the transform positions of objects when I should be using the rigidbody? The complication is that these objects are children and don't have Rigidbody2Ds of their own; they are wall sections of a parent body that each have their Rigidbody2D removed when they are built into the wall, so they all act as one with the wall (a technique I saw on YouTube). Same issue?
     
  6. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,735
    Sounds like it's time to start debugging! Here's how:

    You must find a way to get the information you need in order to reason about what the problem is.

    Once you understand what the problem is, you may begin to reason about a solution to the problem.

    What is often happening in these cases is one of the following:

    - the code you think is executing is not actually executing at all
    - the code is executing far EARLIER or LATER than you think
    - the code is executing far LESS OFTEN than you think
    - the code is executing far MORE OFTEN than you think
    - the code is executing on another GameObject than you think it is
    - you're getting an error or warning and you haven't noticed it in the console window

    To help gain more insight into your problem, I recommend liberally sprinkling Debug.Log() statements through your code to display information in real time (see the sketch after the questions below).

    Doing this should help you answer these types of questions:

    - is this code even running? which parts are running? how often does it run? what order does it run in?
    - what are the values of the variables involved? Are they initialized? Are the values reasonable?
    - are you meeting ALL the requirements to receive callbacks such as triggers / colliders? (review the documentation)

    Knowing this information will help you reason about the behavior you are seeing.
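    As a sketch of what that sprinkling can look like (the method and parameter names here are just placeholders):

    using UnityEngine;

    public class SaveDebug : MonoBehaviour
    {
        void SaveObject(Transform section)
        {
            // Answers "is this running?", "how often?", and "what are the values?"
            Debug.Log($"SaveObject frame={Time.frameCount} name={section.name} pos={section.position}");
        }
    }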

    You can also supply a second argument to Debug.Log() and when you click the message, it will highlight the object in the scene, such as:
    Debug.Log("Problem!",this);


    If your problem would benefit from in-scene or in-game visualization, Debug.DrawRay() or Debug.DrawLine() can help you visualize things like rays (used in raycasting) or distances.
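    For instance (a sketch; "target" is a placeholder reference):

    using UnityEngine;

    public class RayViz : MonoBehaviour
    {
        public Transform target; // placeholder

        void Update()
        {
            // Visible in the Scene view (and in the Game view if Gizmos are enabled).
            Debug.DrawLine(transform.position, target.position, Color.green);
            Debug.DrawRay(transform.position, Vector3.right * 2f, Color.red);
        }
    }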

    You can also call Debug.Break() to pause the Editor when certain interesting pieces of code run, and then study the scene manually, looking for all the parts, where they are, what scripts are on them, etc.

    You can also call GameObject.CreatePrimitive() to emplace debug-marker-ish objects in the scene at runtime.
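    Something along these lines (a sketch; "DebugMarkers" is a made-up helper, not a Unity API):

    using UnityEngine;

    public static class DebugMarkers
    {
        // Drops a small sphere at a world position so you can see where something happened.
        public static void Place(Vector3 position)
        {
            GameObject marker = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            marker.transform.position = position;
            marker.transform.localScale = Vector3.one * 0.25f;
            Object.Destroy(marker.GetComponent<Collider>()); // keep it out of the physics sim
        }
    }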

    You could also just display various important quantities in UI Text elements to watch them change as you play the game.

    If you are running on a mobile device you can also view the console output. Google how to do it for your particular mobile target, e.g. this answer for iOS: https://forum.unity.com/threads/how-to-capturing-device-logs-on-ios.529920/ or this answer for Android: https://forum.unity.com/threads/how-to-capturing-device-logs-on-android.528680/

    If you are working in VR, it might be useful to make your own onscreen log output, or integrate one from the asset store, so you can see what is happening as you operate your software.

    Another useful approach is to temporarily strip out everything besides what is necessary to prove your issue. This can simplify and isolate compounding effects of other items in your scene or prefab.

    Here's an example of putting in a laser-focused Debug.Log() and how that can save you a TON of time wallowing around speculating what might be going wrong:

    https://forum.unity.com/threads/coroutine-missing-hint-and-error.1103197/#post-7100494

    When in doubt, print it out!(tm)

    Note: the print() function is an alias for Debug.Log() provided by the MonoBehaviour class.