Hi, I am using ML-Agents 1.0 and I managed (hypothetically, at least) to set up TensorFlow with the GPU:

```
tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5
coreClock: 1.62GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
```

But in practice, even when I have 20 agents training in parallel, my CPU is at 30% and my GPU is at 3%. Any suggestions on how to use the GPU as much as possible? Thanks.
FYI, this might be useful (though it covers a somewhat different case): https://github.com/Unity-Technologies/ml-agents/issues/1246. In general you are better off using the CPU for training and the GPU for inference (running the trained model). The policy networks ML-Agents trains are typically small, so the per-batch GPU compute is outweighed by the CPU-side environment simulation and the overhead of shuttling data to and from the GPU, which is why your GPU sits nearly idle.
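If you want to force training onto the CPU and verify it worked, here is a minimal sketch using plain TensorFlow (not ML-Agents-specific code); it relies on the standard `CUDA_VISIBLE_DEVICES` mechanism and assumes TF >= 1.14 for the device-listing call:

```python
# Hide the GPU from TensorFlow. CUDA_VISIBLE_DEVICES must be set
# before TensorFlow initializes CUDA, i.e. before the first import.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

# With the GPU hidden, TensorFlow reports no GPU devices and places
# all ops on the CPU.
print(tf.config.experimental.list_physical_devices("GPU"))  # expected: []
```

Setting the same environment variable in your shell before launching `mlagents-learn` has the same effect on the trainer process.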