I have a simulation (using a lot of raycasts) that would be too compute-intensive to run in real time. So at startup I compute the various raycasts and generate hit-visualizing geometry in "time buckets" (GameObjects that hold the geometry for a particular time period of the simulation). I can then replay in real time by simply enabling the time bucket GameObjects corresponding to the current replay time. This all works fine when the geometry of my scene is static.

I should admit at this point that I think I am abusing the way Unity likes to think about the world: I do my raycasts and geometry creation from a singleton GameObject script / MonoBehaviour, specifically from its Awake() method. So that roughly looks like this:

    Awake:
        for each time period
            calculate all the rays needed
            do a raycast for each ray, hitting all objects in the scene
            for each hit, create some hit-visualizing geometry in a time bucket GameObject

Once the Awake() precomputation is finished, I have a little time slider and play button UI for setting the current simulation time to show. Whenever the simulation time changes (via the user/slider, or via the play button in Update), the appropriate time buckets with cached hit-geometry visualizations get turned on or off. That all works fine.

Until I started allowing moving objects in my simulation, that is. Inside the Awake() method that does the precomputation, I now change the transform.position of various GameObjects in the scene before computing the raycasts:

    Awake:
        for each time period
            calculate all the rays needed
            update the location of any moving objects (by setting their transform.position)
            do a raycast for each ray, hitting all objects in the scene
            for each hit, create some hit-visualizing geometry in a time bucket GameObject

This works somewhat: for some number of time buckets my moving objects get hit by rays, but then it appears the rays no longer hit them.
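For concreteness, here is a stripped-down sketch of what my singleton manager does. Names like numTimePeriods, movingObjects, PositionAtTime, and CalculateRays are placeholders for my actual simulation code:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class SimulationManager : MonoBehaviour
{
    public int numTimePeriods = 100;
    public List<Transform> movingObjects;         // objects repositioned per time period

    // One container GameObject per simulation time period.
    private readonly List<GameObject> timeBuckets = new List<GameObject>();

    void Awake()
    {
        for (int t = 0; t < numTimePeriods; t++)
        {
            var bucket = new GameObject("TimeBucket_" + t);
            timeBuckets.Add(bucket);

            // Move any dynamic objects to where they should be at time t.
            foreach (var mover in movingObjects)
                mover.position = PositionAtTime(mover, t);

            // Cast all rays for this time period and cache the hit visuals.
            foreach (var ray in CalculateRays(t))
                foreach (var hit in Physics.RaycastAll(ray))
                    CreateHitMarker(hit.point, bucket.transform);

            bucket.SetActive(false); // hidden until replay reaches time t
        }
    }

    // Called from the slider / play button to show exactly one time period.
    public void ShowTime(int t)
    {
        for (int i = 0; i < timeBuckets.Count; i++)
            timeBuckets[i].SetActive(i == t);
    }

    void CreateHitMarker(Vector3 point, Transform parent)
    {
        var marker = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        marker.transform.localScale = Vector3.one * 0.1f;
        marker.transform.position = point;
        marker.transform.SetParent(parent);
    }

    // Simulation-specific; stubbed out here.
    IEnumerable<Ray> CalculateRays(int t) { yield break; }
    Vector3 PositionAtTime(Transform mover, int t) { return mover.position; }
}
```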
The rest of the scene (with non-moving objects) still gets intersected by raycasts just fine.

Could this be because the physics engine that performs the raycasts does not get a chance to re-cache object locations and whatever internal data structures it maintains (axis-aligned bounding boxes, etc.)? I am not giving Unity a chance to update physics within my Awake() method. Usually object positions are updated in Update or FixedUpdate callbacks, with Unity having a chance to run PhysX update routines in between.

Sorry this is such a long explanation. I have tried googling for something related to this and also searching the forums, but I don't really know what search terms might be productive in this context.

Have people used Unity to precompute PhysX-based calculations (like raycasts)? If so, what would be a "best practices" way to do this (rather than a synchronous, uninterrupted tight inner loop in my game manager's Awake())?

Alternatively, is there a way I can explicitly ask the PhysX engine to update its internals (after I update my moving object locations each time through the loop)?

If you have read this far, thank you for your patience. If you have any advice, it would be most appreciated!
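To clarify what I'm hoping exists, here is a minimal sketch of that last idea. My understanding (which may be wrong, and is part of what I'm asking) is that in recent Unity versions Physics.SyncTransforms() flushes pending Transform changes into the physics engine, and that Physics.autoSyncTransforms makes this happen automatically before queries; the positions and ray below are made-up illustration values:

```csharp
using UnityEngine;

public class PrecomputeWithSync : MonoBehaviour
{
    public Transform movingObject;

    void Awake()
    {
        for (int t = 0; t < 100; t++)
        {
            // Reposition the moving object for simulation time t.
            movingObject.position = new Vector3(t * 0.5f, 0f, 0f);

            // The call I'm hoping for: push the transform change into
            // PhysX's internal structures (broadphase AABBs etc.)
            // so the next raycast sees the object at its new position.
            Physics.SyncTransforms();

            Ray ray = new Ray(new Vector3(t * 0.5f, 10f, 0f), Vector3.down);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // ... create hit-visualizing geometry for time t ...
            }
        }
    }
}
```

Is that the right call, or is there a better pattern (e.g. stepping the simulation manually) for this kind of offline precomputation?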