Hi, I am using PPO to solve an environment using release 4. I can switch to release 10 with no issues and the learning process is the same....
Hi, I am wondering which mlagents version to use. Currently using the ml-agents release 4 which means: com.unity.ml-agents (C#) v1.2.0 ml-agents...
Hi, I wonder if anyone has tried to wrap their OWN environment so that it works with Unity, meaning they wrote a wrapper using BaseEnv. I have...
Hi, As the title asks, is the ENTIRE experience buffer cleared when using curriculum learning? If not, we can get the same state once with...
Hi, I am developing using the Ubuntu editor. 1. Is any VR headset supported with Unity and Ubuntu together? 2. I currently have a Varjo headset...
You can recover the values by using the Perceive method in the RayPerceptionSensor class. You will need GetRayPerceptionInput() from the class...
You didn't configure the behavior name correctly; I bet the agent's behavior name in the editor is not the same as in the configuration file....
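A minimal sketch of the mismatch described above, assuming a release-10-style trainer config: the key under `behaviors:` must match the Behavior Name field on the agent's Behavior Parameters component in the editor. The name `MyAgent` here is a placeholder.

```yaml
# trainer_config.yaml -- the key below must match the "Behavior Name"
# set on the agent's Behavior Parameters component in the Unity editor.
# "MyAgent" is a placeholder name.
behaviors:
  MyAgent:
    trainer_type: ppo
```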
my current "naive" plan is to "break" the self-play by not changing the learning team (keeping it at 0 always), and then every n episodes, activate...
I wanted to use unity self-play mechanism to train two agents against each other, let's say two tanks in an empty world. However, it reduced to...
@andrewcoh_unity It seems to me that in certain situations, without normalizing, the network collapses ("converges") really fast to some weird local...
I had this weird issue with continuous control, and changing normalize: false to true solved it. Please tell me if that helped
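For reference, a sketch of where that setting lives, assuming a release-10-style trainer config (the behavior name `MyBehavior` is a placeholder):

```yaml
# "MyBehavior" is a placeholder behavior name.
behaviors:
  MyBehavior:
    trainer_type: ppo
    network_settings:
      normalize: true   # running-mean/std normalization of vector observations
```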
According to https://blogs.unity3d.com/2020/02/28/training-intelligent-adversaries-using-self-play-with-ml-agents/, it is not a multi agent...
I just want to make sure that if I have 2 agents in the same scene, both use the same "behavior name" but one behavior type is inference and the...
From quick digging I think you might need to change the sensor code: ml-agents/com.unity.ml-agents/Runtime/Sensors/RayPerceptionSensor.cs which...
Hey, Self-play uses snapshots of past policies to play against. Is there a way to insert my own .nn file or another policy format as one of these...
Thank you for your reply. I understand the concept of curriculum learning, but my issue is with advancing the lessons. I use a simple +1 -1...
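Lesson advancement like the poster describes is driven by completion criteria in the curriculum section of the trainer config. A hedged sketch, assuming the release-10-style format; the parameter name, behavior name, and values are placeholders:

```yaml
# All names and values here are placeholders.
environment_parameters:
  difficulty:
    curriculum:
      - name: Lesson0
        completion_criteria:
          measure: reward       # advance based on mean reward
          behavior: MyBehavior
          threshold: 0.8        # with a sparse +1/-1 reward this may be hard to reach
        value: 0.0
      - name: Lesson1
        value: 1.0
```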
Hi, 1. Question about mean reward in self-play: maybe a silly question, but is it correct that the reward values written to TensorBoard...
@henrypeteet thank you for taking the time to answer my question. As you recommended I am trying to avoid "breaking" the flow, as not knowing the...
If you look at the training configuration file documentation, some parameters are associated with exploration, such as beta in PPO...
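A sketch of where beta sits in a release-10-style PPO config (the behavior name is a placeholder):

```yaml
# "MyBehavior" is a placeholder behavior name.
behaviors:
  MyBehavior:
    trainer_type: ppo
    hyperparameters:
      beta: 5.0e-3   # entropy regularization strength; higher values encourage exploration
```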
Hi, My agent has a script that manipulates its movement during FixedUpdate (CatDynamics.cs). I have another script (CatAgent.cs) that inherits...