I was looking into this the other day. ML-Agents is doing what it says it's doing (it sets Time.timeScale to the value passed for --time-scale)....
Hi @The-Trope, I talked with some folks that are more familiar with goal signals, and they still think it should help with training in this sort...
The team is still working on a submission to arXiv, and there's nothing else to share right now. When the arXiv paper is available, I'll update you.
These sorts of questions are why we don't offer any support on this :) It sounds like it would be straightforward to match the output of the...
Great, the new version came out on Tuesday. Note that the Match3 code is now part of com.unity.ml-agents (not the extensions package), and also...
Which version of the python library were you using? Was the checkpoint generated from the same version? (there's a check for this but I don't see...
You don't need to change the action mask, but I don't think there's any way for gym to use it, so you should provide it as an observation instead....
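To make that workaround concrete, here is a minimal sketch (function names are my own for illustration; they are not part of ML-Agents or gym): the mask is appended to the observation vector so the policy can see it, and invalid actions are given zero probability when sampling.

```python
import math
import random

def augment_observation(obs, action_mask):
    """Append the action mask (as 0/1 floats) to the observation vector,
    since gym has no native notion of masked discrete actions."""
    return list(obs) + [1.0 if valid else 0.0 for valid in action_mask]

def sample_masked(logits, action_mask):
    """Softmax-sample a discrete action, excluding masked-out actions."""
    # Invalid actions get zero probability mass before normalization.
    weights = [math.exp(l) if valid else 0.0
               for l, valid in zip(logits, action_mask)]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for action, w in enumerate(weights):
        acc += w
        if r < acc:
            return action
    return len(weights) - 1
```

Whether appending the mask to the observation actually helps will depend on the task; the sampling-time masking is the part that guarantees invalid actions are never taken.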
Hi, A few thoughts on your configuration: * I don't think you need curiosity for a racing game, I would recommend removing it. * With extrinsic...
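As a sketch of that suggestion (the behavior name and hyperparameter values below are placeholders, not taken from the thread), dropping curiosity from an ML-Agents trainer config leaves only the extrinsic reward signal:

```yaml
behaviors:
  RacerAgent:            # placeholder behavior name
    trainer_type: ppo
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
      # curiosity: ...   removed; a racing game's extrinsic reward
      #                  (e.g. progress/checkpoints) is usually enough
```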
Hi, We added a feature in the latest release called "Goal Signals" that might help with this. The section in the documentation is here. Note that...
Hi, This should be possible, but it's not an area where we can give much guidance or help with debugging. Assuming you can run an ONNX model on...
Hi, Are you using the example scene here? I think what's happening is gym has no concept of "masked" discrete actions. We use these a lot for the...
Hi, If you have a RayPerception Sensor Component attached to the Agent, the observations will automatically be collected and processed by the...
This isn't something we support right now - if you don't specify --env, mlagents-learn will assume that you're trying to connect to the editor,...
Sorry for the delayed response, but glad you got it sorted out...
The summary steps should be very fast; they just compute averages on a few arrays of numbers, and then pass the results off to things like...
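As a rough sketch of what such a summary step amounts to (names are illustrative, not the actual ML-Agents internals):

```python
def summarize_stats(stat_buffers):
    """Reduce each buffered list of values to its mean: the cheap
    aggregation a summary step performs before handing the results
    off to writers such as TensorBoard."""
    return {
        name: sum(values) / len(values)
        for name, values in stat_buffers.items()
        if values  # skip empty buffers to avoid dividing by zero
    }
```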
Hi @eseller - No problem, your English is great :) You should just need to remove the "network_settings:" section underneath the...
Thanks for catching the soccer setup problem. The agents shouldn't have their MaxStep set. A PR to fix that is here...
This is a good feature request, but not something that's easy to add right now. In the meantime, I would recommend using the existing curriculum...
I talked to the research folks some more. The high-level explanation is that POCA trains a group of agents to maximize a shared common reward. It...
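To make "shared common reward" concrete, a heavily simplified sketch (my own illustration, not the actual POCA implementation): every agent in the group receives the same group-level reward, so the group is trained to maximize it jointly.

```python
def distribute_group_reward(agent_ids, group_reward):
    """Give every agent in the group the same shared reward, i.e. the
    quantity a POCA-style trainer asks the group to maximize jointly."""
    return {agent_id: group_reward for agent_id in agent_ids}
```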
Hi @xogur6889, The POCA algorithm was developed by the ML-Agents research team. They're working on an arxiv submission, but it's not ready yet....