
Using Ray Perception 3D or GridSensor observation for a dodging and shooting casual game?

Discussion in 'ML-Agents' started by simonini_thomas, Feb 3, 2021.

  1. simonini_thomas


    Joined:
    Apr 30, 2015
    Posts:
    33
    Hey there!

    I'm working on a demo to see how RL can help casual-game studios make NPCs using RL (instead of Behavior Trees, FSMs, etc.). So I modified the Tanks game made by Unity in 2015.



    It's a multi-agent environment where you have 2 tanks (for the prototype; later 4) that each need to kill their opponent. I use the PPO (Proximal Policy Optimization) RL algorithm.


    I've already made a multi-agent environment with this kind of shooting-and-dodging system, "Snowball Fight" (using Ray Perception Sensor 3D), but it was much easier: the snowball had no physics (it travels in a straight line, and its z position is frozen).



    The difference in this new experiment is that the bullet has physics, so it moves along the x, y and z axes, as you can see here:


    Plus, contrary to Snowball Fight, death isn't defined by OnCollisionEnter but by the impact of the explosion (the closer you are to the impact point, the more damage you take).
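    To make that falloff concrete, here's a minimal sketch of distance-based explosion damage. This is my own illustration, not the project's actual code: the method name, radius and max-damage parameters are all assumptions.

    ```csharp
    // Hypothetical helper: damage scales linearly from maxDamage at the
    // explosion centre down to zero at the edge of the blast radius.
    float CalculateDamage(Vector3 targetPosition, Vector3 explosionCentre,
                          float explosionRadius, float maxDamage)
    {
        float distance = Vector3.Distance(targetPosition, explosionCentre);
        // 1 at the centre of the blast, 0 at its edge, negative outside it.
        float relativeDistance = (explosionRadius - distance) / explosionRadius;
        return Mathf.Max(0f, relativeDistance * maxDamage);
    }
    ```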


    To simplify a little bit, I discretized the action space: the agent can only shoot at 3 different speeds (instead of a continuous range of speeds).
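    As a sketch of how that discretization might look in ML-Agents (assuming a recent release with the ActionBuffers API; the speed values and the Fire helper are made up for illustration):

    ```csharp
    public override void OnActionReceived(ActionBuffers actions)
    {
        // One discrete branch of size 4: 0 = hold fire, 1-3 = fire at a preset speed.
        int shoot = actions.DiscreteActions[0];
        if (shoot > 0)
        {
            float[] launchSpeeds = { 15f, 20f, 25f }; // assumed values
            Fire(launchSpeeds[shoot - 1]);            // hypothetical firing helper
        }
    }
    ```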

    So I've made a very simple version using the Eidos Grid Sensor. The ELO is increasing and I see the agents getting better, but I suspect the agent has no information about the distance or height of the bullet, which is problematic if it wants to dodge the enemy's bullets.


    I have 3 questions:
    • What I need is a perception system that detects the bullet game object, but also its transform.position (x, y, z). By stacking observations (for instance 4), my agent would be able to see if the bullet is coming towards it (like in Snowball Fight), but also, based on the bullet's height (transform.position.y), to know if the bullet is about to touch the floor and hence explode. I used the GridSensor, but after reading the Eidos GitHub issue and Unity's blog post about it, I think the GridSensor does not detect position, only whether a detectable game object is on a specific cell of the grid, right?
    • On the other hand, does the Ray Perception Sensor 3D get transform information about detected objects? Some elements of this sensor are confusing; I think it would help to have in-depth docs about the Ray Perception Sensor 2D/3D and the GridSensor. I don't know if that's on the roadmap, but I think it would help a lot of people using ML-Agents.
    • Finally, do you think it's a good idea to add the fire transform's position (the cannon position) to the observations, to help our agent know what it is facing?
    Again, thanks for this amazing library and the documentation that allows us to use RL in Unity.

    Thanks for your help,

    Oh, and if you want to follow this open-source RL project, you can follow my Twitter account.

    Have a nice day,
     
  2. Luke-Houlihan


    Joined:
    Jun 26, 2007
    Posts:
    303
    I'm not sure the RayPerceptionSensor3D or the Grid Sensor are the best choices for tracking the shells. RayPerception is modeled after robotics applications, to simulate something like LIDAR plus some game-specific wrappings like tags, and the Grid Sensor appears to be geared toward high-level awareness of surroundings, or a sort of performant drop-in replacement for camera input. You can make either work for this use case; however, they are probably more complicated than you need, which increases training time.

    See docs for GridSensor here - https://github.com/Unity-Technologi...~/Grid-Sensor.md#example-of-grid-observations

    On a casual glance, no, I don't see anything about positions or velocities being observed by this sensor. You'll have to add those observations yourself.

    The ray does contain positional data; however, I don't believe the RayPerceptionSensor3D provides it as an observation to the agent.

    No, the cannon position would not help. You would want to consider providing the rotation of the cannon instead, as this would allow the agent to infer where it is aiming.

    If I were you I'd just use the RayPerceptionSensor3D or Grid Sensor to track walls and enemy tanks. The shells can just be added manually as vector observations (position relative to the agent, and velocity). That's all the agent needs to infer their trajectories.
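    For reference, a hedged sketch of what those observations could look like in CollectObservations. The field names (shell, shellRigidbody, cannon) are placeholders I'm assuming, and expressing everything in the agent's local frame is a common convention rather than a requirement:

    ```csharp
    public override void CollectObservations(VectorSensor sensor)
    {
        // Shell position and velocity relative to the agent (3 + 3 floats),
        // so the policy doesn't have to learn absolute world coordinates.
        sensor.AddObservation(transform.InverseTransformPoint(shell.position));
        sensor.AddObservation(transform.InverseTransformDirection(shellRigidbody.velocity));

        // Cannon rotation so the agent can infer where it is aiming (4 floats).
        sensor.AddObservation(cannon.localRotation);
    }
    ```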
     
  3. simonini_thomas


    Joined:
    Apr 30, 2015
    Posts:
    33
    Hi @Luke-Houlihan, first of all thank you very much for your answers. I didn't know that the mlagents.extensions subfolder had its own documentation, so my apologies for the first two questions; the doc is really well made, and your explanations made everything clear.

    I'm going to try your strategy of using vector observations for the shells; I hadn't thought about it, and it seems the best one given the environment.

    Anyway, I will post the results of this strategy when it's done. Thanks again for your feedback,
     