Hello. Unfortunately I don't have a device which supports ray-tracing handy to test this. Just to confirm, you are able to use the ML-Agents...
Unfortunately we currently do not support ending training in inference mode using the max_steps. Your idea of setting the learning rate to zero...
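If you do try the zero-learning-rate workaround, a sketch of where it would sit in a trainer config (the behavior name and surrounding values are illustrative; check the keys against your release's documentation):

```yaml
behaviors:
  MyBehavior:                        # hypothetical behavior name
    trainer_type: ppo
    hyperparameters:
      learning_rate: 0.0             # freeze the policy: updates are applied with zero step size
      learning_rate_schedule: constant   # hold it at zero rather than decaying toward it
    max_steps: 500000
```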
The keep_checkpoint setting has no impact on training performance. It is used to determine how many old models are saved during the training process. See...
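For illustration, here is roughly where that setting lives in a trainer config (behavior name and values are placeholders; note that recent releases spell the key `keep_checkpoints`):

```yaml
behaviors:
  MyBehavior:                   # hypothetical behavior name
    trainer_type: ppo
    keep_checkpoints: 5         # retain at most 5 old checkpoint files on disk
    checkpoint_interval: 500000 # steps between checkpoint saves
```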
Can you share more about what you modified from Walker to implement your agent? It could be possible that the body of your robot does not...
Hello. Yes, a high standard deviation corresponds to the agent having a variety of different final rewards in the training episodes. For tasks...
It looks like the DecisionPeriod is indeed a public field on the requester, so you should be able to directly modify it:...
Hi. As per the message, it may be the case that the example you are running is not compatible with later versions of ML-Agents. I would recommend...
You should be able to store the received float information in a public variable in your SideChannel class, and then have your agent query it, if the...
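A minimal sketch of that pattern, written here in plain Python without the ML-Agents base classes (the class and field names are illustrative): the channel keeps the last float it received in a public attribute, and anything that needs the value simply reads that attribute later.

```python
class FloatSideChannel:
    """Illustrative stand-in for a custom SideChannel subclass."""

    def __init__(self):
        # Public field holding the most recently received value.
        self.last_value = 0.0

    def on_message_received(self, raw_float):
        # In a real SideChannel this would decode an incoming message;
        # here we just store the float so it can be queried later.
        self.last_value = raw_float


channel = FloatSideChannel()
channel.on_message_received(3.5)   # a message arrives from the other process
print(channel.last_value)          # prints 3.5 — the stored value can now be queried
```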
Hello, Since agents can call RequestDecision on their own, and all agents can do so at completely different code-defined times, there is no...
Hello, Are you running a training session, or are you simply performing inference with a heuristic method? If it is inference with a heuristic,...
Having a large buffer with SAC is not an issue, so long as your machine has the available RAM.
Hello. Can you confirm that you do not see a similar increase in memory usage when running SAC without demonstrations? Because SAC uses a large...
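As a rough way to budget for that RAM, replay-buffer memory grows linearly with `buffer_size`; the per-experience float count below is purely illustrative:

```python
def replay_buffer_bytes(buffer_size, floats_per_experience, bytes_per_float=4):
    """Rough lower bound on replay-buffer RAM: total floats stored x bytes per float."""
    return buffer_size * floats_per_experience * bytes_per_float

# e.g. 1,000,000 experiences, each holding ~100 floats
# (observation + next observation + action + reward):
size = replay_buffer_bytes(1_000_000, 100)
print(size / 1024**2)  # roughly 381 MiB
```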
Hi @spolg, We do not currently support connecting between agents and environments which are hosted on different machines. If you'd like to do...
Hi. You are right that by default the WallJump does not require curriculum and does not use it either. We do provide a curriculum yaml, as you...
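For reference, a curriculum definition of the shape used for WallJump might look like this (lesson names, thresholds, and values are illustrative, and the exact schema varies between ML-Agents releases):

```yaml
environment_parameters:
  big_wall_height:
    curriculum:
      - name: Lesson0              # start with a low wall
        completion_criteria:
          measure: progress        # advance based on fraction of max_steps completed
          behavior: BigWallJump
          threshold: 0.3
        value: 0.0
      - name: Lesson1              # final lesson: full-height wall
        value: 8.0
```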
I am glad you have been able to arrive at a model which can solve the task. The behavior of the model breaking during training is still a strange...
Hello. You can use the resume option to continue training a model on a new environment, but I would recommend having many different agents learning on...
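For reference, resuming reuses the original run id (flag names as in recent ML-Agents releases; check your version's CLI help):

```shell
# Continue training the existing model under the same run id,
# pointed at the new environment build:
mlagents-learn config.yaml --run-id=MyRun --resume --env=NewEnvBuild
```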
Thank you for trying these experiments. I see that you also used a modified version of the RollerBall config, which is unfortunately not very...
Hello. The stacked vectors are initialized to zeros.
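A small sketch of what that zero-initialization means in practice (names and sizes are illustrative): before the first observation arrives, every slot in the stack holds a zero vector, and each new observation pushes the oldest entry out.

```python
from collections import deque

# Observation stacking sketch: a fixed-length stack of the last N vectors.
obs_size, stack_size = 3, 4
stack = deque([[0.0] * obs_size for _ in range(stack_size)], maxlen=stack_size)
print(list(stack))  # at episode start: four zero vectors

stack.append([1.0, 2.0, 3.0])  # first real observation evicts one zero vector
print(list(stack))
```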
Hi mbaske. This seems like a very cool extension of the GridSensor. Thanks for sharing it!
Hello. Here we have a documentation page which explains the configuration file parameters:...