Question: Best practice for ML-Agents learning on 2020.3 LTS and AMD 6700XT GPU?

Discussion in 'ML-Agents' started by RR7, Jun 2, 2023.

  1. RR7

    RR7

    Joined:
    Jan 9, 2017
    Posts:
    254
    I'm new to this, please be gentle.

    What would be the best practice for a new AI project (must be 2020.3 for now)? If I upgrade the ML-Agents package in the karting demo to 2.0.1, it no longer compiles. Should I follow the very latest guide? Is there a newer karting demo that doesn't deploy to 2020.3 by default?

    The karting tutorial comes with the ML-Agents 1.0.8 package; I set up mlagents 0.16 as per the older guide and it would learn.

    However I, maybe incorrectly, thought that I'd be able to use my AMD 6700XT card for a performance boost. I replaced TensorFlow with the DirectML version, which detects the DX12 GPU, but I'd say it's slower than the CPU learn. Does that seem right? How would I confirm it's actually using the GPU anyway?
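
    The closest I've got to a check is something like this (a rough sketch assuming the tensorflow-directml fork, which is based on TF 1.15 - the "DML" device naming is taken from its docs and may differ by version):

        # sanity check for tensorflow-directml (pip install tensorflow-directml)
        # DirectML adapters are listed as "DML" devices rather than "GPU"
        import tensorflow as tf
        from tensorflow.python.client import device_lib

        print(device_lib.list_local_devices())  # look for device_type: "DML"

        # pin a heavy op to the DirectML device and log actual placement
        with tf.device("/device:DML:0"):
            a = tf.random.uniform((2048, 2048))
            b = tf.matmul(a, a)
        config = tf.compat.v1.ConfigProto(log_device_placement=True)
        with tf.compat.v1.Session(config=config) as sess:
            sess.run(b)

    If the placement log shows the matmul on DML:0 and training is still slower than the CPU, then the GPU is being used and the bottleneck is elsewhere - small networks and batch sizes just don't feed a GPU well.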

    "should have got an nvidia gpu" may well be correct but i didn't as i didn't know, and while i don't expect the AMD card to be as fast as nvidia, i did expect it to outperform the CPU learn quite a bit. am i way off base with that too?
     
  2. RR7

    RR7

    Joined:
    Jan 9, 2017
    Posts:
    254
    Okay, so I used the latest 2021.3 LTS version for testing. The latest ML-Agents toolkit uses PyTorch, not TensorFlow.

    (Loading that into 2020.3 seems to be the best way to solve that issue if needed.)

    There is a pytorch-directml package, which apparently uses any DX12 GPU rather than CUDA. Does anyone know how I can make mlagents-learn use this instead? Or is it some override in torch itself?
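
    From its docs, basic usage looks something like this (a sketch assuming the torch-directml package - I don't see where mlagents-learn would accept the resulting device):

        # sketch assuming the torch-directml package (pip install torch-directml)
        import torch
        import torch_directml

        dml = torch_directml.device()  # first DirectML-capable DX12 adapter
        print(dml)                     # reports as "privateuseone:0", not "cuda:0"

        # tensors have to be moved onto the device explicitly
        x = torch.randn(2048, 2048).to(dml)
        y = x @ x                      # this matmul runs on the DX12 GPU

    As far as I can tell, mlagents-learn's --torch-device flag only takes a plain string like "cpu" or "cuda:0", so there's no obvious way to hand it this device object without patching the trainer.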
     
  3. unikum

    unikum

    Joined:
    Oct 14, 2009
    Posts:
    58
    Did you ever solve this?
     
  4. RR7

    RR7

    Joined:
    Jan 9, 2017
    Posts:
    254
    Nah. With the older toolkit you can replace TensorFlow with the DirectML version and it uses DirectML by default. The newer ML-Agents releases use PyTorch and there isn't a default override, so you'd need to modify the ML-Agents trainer itself to target the DirectML version of PyTorch and launch it with the correct command-line parameters.
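
    If anyone wants to try, the rough shape of the patch would be something like this - completely untested on my side, and it assumes recent ml-agents releases resolve their device through default_device() in mlagents.torch_utils, which may not match your installed version:

        # untested sketch - assumes ml-agents picks its torch device via
        # mlagents.torch_utils.default_device(); verify against your version
        import importlib
        import torch_directml
        import mlagents.torch_utils as torch_utils

        dml = torch_directml.device()

        # patch the lookup in both the package and its torch submodule,
        # *before* the trainer modules import it
        torch_utils.default_device = lambda: dml
        importlib.import_module("mlagents.torch_utils.torch").default_device = lambda: dml

        # launch training the same way the mlagents-learn entry point does;
        # command-line args (config yaml, --run-id, etc.) pass through as usual
        from mlagents.trainers.learn import main
        main()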

    For now I'm using the CPU; the GPU only helps once you start using a large number of agents.

    I've not yet managed to get any real 'joy' out of machine learning for what I want to do; I end up wasting hours playing with things, then training and not gaining much.

    I'll come back to it. But really, the best advice I can give is: don't buy an AMD graphics card unless slightly cheaper gaming is the only requirement.