Hi there, I'm currently attempting a fairly involved training setup: 3 behaviours training simultaneously, 10 agents per behaviour per env, and num-envs=4, on a PC I custom built for ML experimentation (i9-9900K). Training is decently fast, but I can see my CPU is still being underutilized: each Unity environment sits at ~3% or so, and Python at about 10%, so total CPU usage peaks at 20-40% according to Windows' performance monitor. What's also interesting is that the Unity environments show significant frame drops when I watch them, even with timescale = 1 (presumably because they're waiting on the Python side of things?).

Does anyone have any tips for getting more CPU utilization? I'm using what might be a small buffer size of 20480 (batch size 1024) per behaviour; should that be increased for the number of environments? Should I be using more or fewer envs? It's interesting that I can run other programs with no problem while training is running. I'd really like to max out the speed I get, as each training run can take a week in this instance *cracks whip*. Any hints or suggestions welcome!

P.S. I recently updated to ML-Agents 1.0 from 0.5 or so, and the improvements in usability and interfaces are enormous. Thanks so much for all the work that has been put into this; IMO it's one of the highest-quality components in the entire Unity ecosystem.
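For reference, my per-behaviour trainer settings look roughly like the sketch below (flat trainer_config.yaml style from around the 1.0 release; the behaviour names and the `trainer` line are placeholders, and only `buffer_size` and `batch_size` are my actual values):

```yaml
# Hypothetical behaviour names -- only buffer_size/batch_size reflect my real setup
BehaviourA:
  trainer: ppo
  batch_size: 1024
  buffer_size: 20480   # is this too small for 4 envs x 10 agents per behaviour?
BehaviourB:
  trainer: ppo
  batch_size: 1024
  buffer_size: 20480
```

I launch it with something like `mlagents-learn trainer_config.yaml --env=MyBuild --num-envs=4 --run-id=run01` (the env name and run-id are just examples), so bumping `--num-envs` is one of the knobs I'm wondering about.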