Training speed is crucial in deep learning, but there seems to be some confusion (at least on my part) about how it can be optimized with ML-Agents. My initial thinking was that I'd need to invest in a bigger GPU, but after reading this discussion (https://github.com/Unity-Technologies/ml-agents/issues/4129) I got the impression that, while GPUs can help with rendering when using visual observations, they don't otherwise accelerate training. Or do they? This post (https://forum.unity.com/threads/cpu-vs-gpu.918869/#post-6018836), on the other hand, seems to suggest that GPUs can be leveraged for training, given the right TensorFlow version and CUDA drivers.

I'd also be interested in how GPU-accelerated training (if it is possible) compares to CPU training with multiple executables.

Finally, there's the time-scale issue: referring to these posts (https://forum.unity.com/threads/can...an-training-acceleration.919295/#post-6021062 https://forum.unity.com/threads/speed-of-the-training.907058/#post-5977154), I wonder whether there are any real-world data on if and when high time scales become a problem for training.

It would be great to have all the relevant info on training speed in one place. Thanks!
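
P.S. For reference, this is roughly the kind of setup I mean by "CPU training with multiple executables" plus a raised time scale. I'm assuming the --env, --num-envs and --time-scale options of mlagents-learn here (the config path, build path and run ID are just placeholders, and my flag names may not match older releases):

    # Train against a built executable, running several environment instances
    # in parallel on the CPU and speeding up the simulation clock.
    # --no-graphics skips rendering, so it only makes sense without visual observations.
    mlagents-learn config/trainer_config.yaml \
        --env=Builds/MyEnv \
        --num-envs=4 \
        --time-scale=20 \
        --no-graphics \
        --run-id=speed_test

Is something like this the right baseline to compare a GPU-accelerated run against?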