Hey there, I'm currently working on an open-source curriculum learning project where an agent needs to cross a 3D level to reach the goal. The level becomes more and more complex as the agent improves. The curriculum goes as follows:

- 1st difficulty level: move toward the goal (the goal is on the same platform).
- 2nd difficulty level: switch your physics to wood (using the green button) in order to cross the wooden bridge.
- 3rd difficulty level: cross the fire, either by crossing with rock physics or by waiting for the fire to go out.
- 4th difficulty level: cross the rotating wooden bridge.

I do not want to use vision, first because it implies a GPU, and second because for the replays I want to use assets I'm currently making that are nicer than the training environment (work in progress). However, because this agent does not have any vision, I use 3D ray perception sensors, and since the placement of the wooden bridge is random, it fails miserably to learn to cross the bridge.

My current game observations:
- transform.InverseTransformDirection(rigidbody.velocity)
- isRock: bool (if false, it means the agent is made of wood).

My current raycast:

I have four questions:
- Do you have some ideas on how I can handle that? I was thinking about creating a game object below, called and tagged "void", and allowing the sensors to detect it. Is that a good strategy?
- Does the 3D ray perception sensor get the position of the detected object?
- Do you think it's a good idea to add the goal position as an observation?
- Do you think it's a good idea to stack the raycast observations?

Again, thanks for your help. Unity ML-Agents is a really amazing tool for doing RL.
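For context, here is roughly how I collect the observations above, plus the goal-relative position I'm considering adding. This is just a sketch of my setup, not the full agent: the `goal` field and the class name are placeholders, and I express the goal position in the agent's local frame so it stays invariant to where the agent is in the world:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

public class BridgeAgent : Agent
{
    public Transform goal;   // placeholder reference to the goal object
    public bool isRock;      // false means the agent is made of wood
    Rigidbody rb;

    public override void Initialize()
    {
        rb = GetComponent<Rigidbody>();
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Velocity in the agent's local frame (current observation)
        sensor.AddObservation(transform.InverseTransformDirection(rb.velocity));

        // Current physics state (current observation)
        sensor.AddObservation(isRock);

        // Candidate extra observation: goal position relative to the agent,
        // expressed in the agent's local frame
        sensor.AddObservation(transform.InverseTransformPoint(goal.position));
    }
}
```

The ray perception sensor itself is a `RayPerceptionSensorComponent3D` configured in the Inspector, so it doesn't appear in this script.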