
Question: Headless mode and Rendering

Discussion in 'ML-Agents' started by dewet99, Feb 16, 2023.

  1. dewet99

    Joined: Jun 13, 2021
    Posts: 7
    Good day,

    I have an ML-Agents project where each agent uses visual observations. I'd like to avoid physically rendering the images to a display and just collect the pixel values to pass as observations to my training algorithm, but I'm not quite sure whether that is what headless mode will allow me to do.

    Essentially, I want to be able to train my agents without having them render what they see.

    I hope my explanation is clear enough. Does anyone have an idea of how I'd go about doing that?

    Thanks in advance :)
     
  2. SF_FrankvHoof

    Joined: Apr 1, 2022
    Posts: 780
    You'd need to render to have pixel values in the first place :p.

    But of course you don't need to render to a screen...
    Unity - Scripting API: RenderTexture (unity3d.com)
    Great description by the way: "Render textures are textures that can be rendered to."
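
    For instance, a minimal sketch of pointing a camera at a RenderTexture so it renders offscreen rather than to the display (the class and field names here are just placeholders; only Camera.targetTexture is the actual API):

    Code (CSharp):
    using UnityEngine;

    public class OffscreenObservationCamera : MonoBehaviour
    {
        // Assign both of these in the Inspector.
        public Camera observationCamera;
        public RenderTexture observationTexture; // e.g. a small 84x84 RenderTexture asset

        void Awake()
        {
            // With targetTexture set, the camera renders into the texture
            // instead of to the screen.
            observationCamera.targetTexture = observationTexture;
        }
    }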
     
  3. hughperkins

    Joined: Dec 3, 2022
    Posts: 191
    AFAIK, dedicated server mode still renders graphics unless you use the `--no-graphics` option. However, this is just what I've understood from reading the manual. You should probably just try a small PoC and see what happens :)
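
    If it helps, one way to check in such a PoC whether graphics really are disabled is to log the graphics device at startup; when the player is launched without a graphics device (e.g. with -batchmode -nographics), Unity reports a null device type. A rough sketch:

    Code (CSharp):
    using UnityEngine;

    public class GraphicsCheck : MonoBehaviour
    {
        void Start()
        {
            // Prints "Null" for the device type when no graphics device
            // was created, e.g. launched with -batchmode -nographics.
            Debug.Log($"Graphics device: {SystemInfo.graphicsDeviceType}");
            Debug.Log($"Batch mode: {Application.isBatchMode}");
        }
    }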
     
  4. hughperkins

    Joined: Dec 3, 2022
    Posts: 191
    by the way, the rendering quality is different by default for normal vs headless:

    [Two screenshots comparing the default rendering quality of a normal build vs. a headless one]
     
  5. dewet99

    Joined: Jun 13, 2021
    Posts: 7
    Thanks for the reply.

    I'm not sure I understand how the RenderTexture sensor will improve performance (improving performance being the whole point of disabling rendering), but it's definitely worth a try. For implementing a RenderTexture sensor, is my thinking as described in the steps below correct?

    1. Add a RenderTextureSensorComponent to the Agent's body
    2. Create a RenderTexture asset, called e.g. TargetTexture
    3. Assign TargetTexture as the Output Texture of the Agent's Camera
    4. Assign TargetTexture as the RenderTextureSensorComponent's Render Texture
    The camera will then render to the texture instead of the screen, and the agent will use that texture as its observations. Then, when running training, the observations in decision_steps.obs will contain the render texture's pixel values? (A rough code version of this wiring is sketched below.)

    Is my understanding correct, or am I missing a step or two?
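
    In code, the wiring from steps 1-4 would look something like this (assuming the RenderTextureSensorComponent API from com.unity.ml-agents; field names are illustrative, and the same assignments can be made directly in the Inspector instead):

    Code (CSharp):
    using UnityEngine;
    using Unity.MLAgents.Sensors;

    public class VisualObservationWiring : MonoBehaviour
    {
        // Assign these in the Inspector.
        public Camera agentCamera;
        public RenderTexture targetTexture;                       // step 2
        public RenderTextureSensorComponent renderTextureSensor;  // step 1, on the Agent's body

        void Awake()
        {
            // Step 3: the camera renders into the texture instead of the screen.
            agentCamera.targetTexture = targetTexture;

            // Step 4: the sensor reads its observation from that same texture.
            renderTextureSensor.RenderTexture = targetTexture;
        }
    }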
     
  6. SF_FrankvHoof

    Joined: Apr 1, 2022
    Posts: 780
    Yes, that sounds about right ;)
     
  7. dewet99

    Joined: Jun 13, 2021
    Posts: 7
    An update, for anyone who might want to know in the distant future.

    Using render textures instead of camera observations gave a boost of about 10 fps (in the editor), but running training with RenderTexture observations on a server build yielded, to the best of my knowledge, no observations for the agent to train from. I will look into using xvfb, as mentioned in this post.
     