Hello, I'm currently working on an AI for a first-person shooter game using ML-Agents. As part of the project, I need to track benchmark metrics (accuracy, for example) on the Python side to evaluate the performance of my trained models. The existing observation and reward structure works well for training, but it doesn't fit this use case: these benchmark metrics don't influence the agent's learning, yet they are vital for my evaluation.

I have been exploring the possibility of extending the communication protocol to accommodate this. My plan is to modify the Protobuf definitions to include the additional benchmark metrics in either the AgentInfoProto or AgentActionProto message, regenerate the Protobuf classes for both C# and Python, and then modify the relevant Unity and Python code to send and receive this data. However, I'm not entirely sure about the process and consequences of these changes, especially regenerating the Protobuf classes. I'm also considering adding a new method to the Agent interface that would send these metrics to the Communicator.

Could you provide some guidance or suggestions on this? Are there any potential issues I should be aware of? Or is there perhaps an upcoming feature or a more recommended approach that would help with this use case? Thank you in advance for your help.
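To make the plan concrete, here is a rough sketch of the kind of Protobuf change I have in mind. The field name, type, and field number are placeholders I made up for illustration, not anything taken from the actual ML-Agents `.proto` files:

```proto
// Hypothetical addition to AgentInfoProto -- names and numbers are
// placeholders, not the real ML-Agents schema.
message AgentInfoProto {
  // ... existing fields would stay as-is ...

  // Keyed by metric name, e.g. "accuracy" -> 0.87. The field number
  // would need to be chosen so it doesn't collide with existing fields.
  map<string, float> benchmark_metrics = 100;
}
```

The idea is that after regenerating the C# and Python classes with protoc, Unity would populate this map each step, and my Python evaluation code would read it from the received AgentInfoProto.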