Hi, I am currently working on a Reinforcement Learning project and have been using ML-Agents for some time. I am trying to set up several playing fields for my agents to play in simultaneously (to speed up training), but I realized that in the Python API the agent_ids are not initialized sequentially per playing field. As a result, I cannot tell which agents belong to the same field (the agent ids are jumbled up). I need this information because I am implementing multi-agent RL.

I can think of two approaches. Firstly, I could assign different team ids so that the fields can be differentiated. Secondly, I could hard-code the agent_ids that appear in each DecisionSteps.

So my first question is: if I set up the playing fields with different teams (i.e. field 1 has teams 1 and 2, field 2 has teams 3 and 4, etc.), will training still be consolidated across fields? I ask because doing this creates more brains, and I am not sure whether their training will be consolidated into a single behavior.

My second question is: is it possible to hard-code the agent_ids? I believe this initialization happens on the Unity side rather than in the Python API. If it is possible, how do I access it?

Any help would be appreciated. Thank you!
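To illustrate what I mean by grouping teams into fields, here is a minimal sketch of the first approach. It assumes the convention I have seen in the Python API where the team id is appended to the behavior name as a "?team=N" suffix, and that I assign teams to fields in consecutive pairs; the function names are just placeholders of my own:

```python
def parse_team(behavior_name: str):
    # Split a behavior name like "Striker?team=3" into ("Striker", 3).
    # Assumes the "?team=N" suffix convention used by the ML-Agents Python API.
    name, _, team = behavior_name.partition("?team=")
    return name, int(team)

def field_for_team(team_id: int, teams_per_field: int = 2) -> int:
    # With fields assigned teams (1, 2), (3, 4), ... map a team id to its field.
    return (team_id - 1) // teams_per_field + 1

# Example: work out which playing field a behavior belongs to.
name, team = parse_team("Striker?team=3")
field = field_for_team(team)  # teams 3 and 4 share field 2
```

This would let me recover the field from the behavior name alone, without relying on the order of the agent_ids.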