
Stacking raycast sensors to get depth of field.

Discussion in 'ML-Agents' started by m4l4, Jul 30, 2020.

  1. m4l4

     Joined:
     Jul 28, 2020
     Posts:
     81
    Hi, I don't know if this is a new idea (I doubt it, but I haven't found it anywhere).

    Raycast sensors are awesome for getting an idea of the surrounding environment, but they give you a very limited 2D view of what's around. You put 10 tags in one, then your agent heads toward the apple and eats it, only to discover there was a dragon behind it... lame.

    I get nice results using a single raycast sensor, but lately I've started using them in stacks in my Carnivore vs. Herbivore simulation, and the results are amazing.

    Three raycast sensors on the same agent, each one working on 1-2 specific layers: the first detects friendly entities, the second is for enemies, the third is for food and water. That way your agent gains a depth of field, giving it the chance to make better decisions.
    (e.g. two foods at the same distance, but one has an enemy behind it = ...easy choice)
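    The core trick can be sketched in plain Python (this is not the Unity/ML-Agents API; `ray_hits`, `sensor_obs`, and the world layout are made up for illustration). Each sensor only raycasts against its own layers, so an object on one layer no longer occludes an object on another, and the agent's observation is just the concatenation of all three sensors:

    ```python
    import math

    # Hypothetical stand-in for stacked ray sensors: each sensor only
    # "sees" objects on its own layers, so several overlapping objects
    # along the same ray can all be reported.

    def ray_hits(origin, angle_deg, objects, layers, max_dist=10.0):
        """Normalized distance to the nearest object on `layers` along
        the ray, or 1.0 if nothing is hit (mimics a miss)."""
        best = max_dist
        for (x, y, layer) in objects:
            if layer not in layers:
                continue  # this sensor ignores every other layer
            dx, dy = x - origin[0], y - origin[1]
            dist = math.hypot(dx, dy)
            obj_angle = math.degrees(math.atan2(dy, dx))
            if abs(obj_angle - angle_deg) < 5.0 and dist < best:
                best = dist
        return best / max_dist

    def sensor_obs(origin, angles, objects, layers):
        # One value per ray; a real sensor would also one-hot the hit tag.
        return [ray_hits(origin, a, objects, layers) for a in angles]

    # World: an apple at distance 2 with a dragon right behind it at
    # distance 4, both along the same 0-degree ray from the agent.
    world = [(2.0, 0.0, "food"), (4.0, 0.0, "enemy")]
    agent, rays = (0.0, 0.0), [-30.0, 0.0, 30.0]

    # Three stacked sensors, each filtered to its own layer(s):
    friend_obs = sensor_obs(agent, rays, world, {"friend"})
    enemy_obs  = sensor_obs(agent, rays, world, {"enemy"})
    food_obs   = sensor_obs(agent, rays, world, {"food"})

    # The agent's full observation is simply the concatenation.
    obs = friend_obs + enemy_obs + food_obs

    # A single sensor would stop at the apple and never see the dragon;
    # here the enemy sensor still reports it behind the food.
    print(food_obs[1], enemy_obs[1])  # 0.2 0.4
    ```

    With one combined sensor, the center ray would return only the apple; with the stack, the network sees both the apple at 0.2 and the dragon at 0.4 on the same ray.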

    Yes, you increase the number of inputs (still way fewer than using camera pixels as inputs), but after some training the difference is plain to see.
    Both herbivores and carnivores are able to maneuver far more efficiently.
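    A rough back-of-the-envelope on the input count (assumed layout: each ray emits one value per detectable tag plus two extras, a hit flag and a normalized distance; the exact per-ray format may differ between ML-Agents versions, so treat the numbers below as illustrative):

    ```python
    # Assumed per-ray layout: one slot per detectable tag + 2 extras
    # (hit flag, normalized distance). Check your ML-Agents version.
    rays_per_sensor = 11        # e.g. 5 rays per side + 1 center ray
    tags_per_sensor = 2         # each stacked sensor watches 1-2 tags
    obs_per_ray = tags_per_sensor + 2

    one_sensor = rays_per_sensor * obs_per_ray
    three_sensors = 3 * one_sensor
    camera = 84 * 84 * 3        # a typical small RGB camera observation

    print(one_sensor, three_sensors, camera)  # 44 132 21168
    ```

    Even tripled, the stacked sensors stay two orders of magnitude smaller than a small camera input.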

    Actions look less random and more coordinated. Carnivores are starting to surround herbivores to get easy prey (I should make them share the food to encourage cooperation), and herbivores roam in packs to better defend against enemies (they can shoot and freeze opponents at short range, so if a carnivore enters a pack of herbivores, it'll be frozen to death).

    Mix that with tags and your agent will be able to detect good food behind a frozen enemy, behind a poisoned friend...

    Not sure if it's new or not, but maybe it'll be interesting to someone.
    I'm excited about the results and wanted to share :)