
Obtain a custom visual observation (semantic segmentation) from Unity to train an online RL agent

Discussion in 'ML-Agents' started by yangpengzhi, Aug 15, 2022.

  1. yangpengzhi

    Joined:
    Nov 22, 2021
    Posts:
    15
    Sorry to bother you guys.

    In my project, the RL agent requires semantic segmentation as an observation, and I found this issue mentioning a way to do that.

    However, after adding the
    ImageSynthesis
    component to the agent, I found that the repo only provides a way to save the segmentation images to disk, for training a supervised model offline. I need the images to be used directly as part of the observation for online RL training in Python (similar to this issue in the forum).

    So I tried:
    1. Adding a
    Camera Sensor
    or
    Render Texture Sensor
    to the agent. Neither solved the problem: the observation didn't contain the right images.
    2. Modifying the source code directly to send the images through a
    SideChannel
    to Python. But this is not straightforward to implement, since the
    OutgoingMessage
    class doesn't provide a method for sending images.
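    For anyone else trying the side-channel route: the payload ultimately arrives in Python as a flat list of floats, so one workaround is to write the image dimensions followed by the pixel values as a float list on the C# side, then reshape on the Python side. Below is a minimal Python-side sketch of that decoding, assuming a hypothetical [height, width, pixel, pixel, ...] layout that you would define yourself in your custom side channel; it is not an official ML-Agents API.

```python
import numpy as np

def decode_image(payload, channels=3):
    # Rebuild an image from a flat float list, assuming the hypothetical
    # [height, width, pixel, pixel, ...] layout described above.
    h, w = int(payload[0]), int(payload[1])
    pixels = np.asarray(payload[2:], dtype=np.float32)
    return pixels.reshape(h, w, channels)

# round-trip check with a dummy 2x2 RGB "segmentation" image
img = np.arange(12, dtype=np.float32).reshape(2, 2, 3)
payload = [2.0, 2.0] + img.flatten().tolist()
restored = decode_image(payload)
print(restored.shape)  # (2, 2, 3)
```

    The reshaped array can then be fed straight into the RL training loop as an observation.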

    Any suggestions or guidance would be appreciated. Thank you!
     
    Last edited: Aug 15, 2022
  2. yangpengzhi

    Joined:
    Nov 22, 2021
    Posts:
    15
    I've worked it out. Please take a look at this issue if you run into similar problems.