Looking for some strategies to go from replay game data to uploading a replay to social media. I'm using Unity 2019.
@Jribs The 'download a bunch at once' part: is it possible to do that a single time, or do they have to download them all every time they reopen the app?
I am trying to better understand Addressables/Asset Bundles. I want the user to be able to reach points in my game where another, say, 100 levels is...
@Thorce I was only able to cut 30 minutes off a 9-hour run by using the GPU, and I have a much larger network than you, so I'd think I would see even...
Interesting, @Thorce, I will have to try that out.
@Luke-Houlihan it's been my understanding that CPU > GPU for PPO; am I wrong?
I am training agents to go from the start of my level to the end of the level. I am trying to figure out how to calculate an optimal reward...
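One common starting point for this kind of start-to-end task is a progress-shaped reward: a small per-step time penalty plus a reward proportional to distance covered toward the goal, with a bonus on completion. The sketch below is my own illustration, not from the thread; the function name and all scales are assumptions you would tune.

```python
# Sketch of a distance-based shaped reward for a start-to-end level task.
# All names and scale values here are illustrative assumptions, not from the thread.

def step_reward(prev_dist, curr_dist, level_length, reached_end,
                step_penalty=-0.001, progress_scale=1.0, end_bonus=1.0):
    """Reward forward progress toward the level end on each step.

    prev_dist / curr_dist: remaining distance to the goal before/after the step.
    level_length: total start-to-end distance, used to normalize progress.
    """
    reward = step_penalty  # small time penalty discourages stalling
    # Positive when the agent moved closer to the end, negative when it backtracked.
    reward += progress_scale * (prev_dist - curr_dist) / level_length
    if reached_end:
        reward += end_bonus  # terminal bonus for actually finishing the level
    return reward
```

Normalizing progress by the level length keeps the total shaped reward on a similar scale across levels of different sizes, which makes a single reward cap (like the 3.1 mentioned later in the thread) easier to reason about.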
@GamerLordMat I think you are correct. I think I am getting a boost on the first half of the level from the auto-normalization because the bounds...
https://forum.unity.com/threads/how-to-get-value-of-distance-between-agent-obstacles-with-using-rayperceptionsensor3d.1043338/ @Luke-Houlihan in...
I'm curious why they would not normalize these using the ray distance. I think I must be getting a large benefit from that.
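Normalizing by the ray distance just means dividing the hit distance by the maximum ray length so the observation lands in [0, 1] regardless of how long the rays are. A minimal sketch of that idea (my own helper, not an ML-Agents API):

```python
# Sketch: normalizing a raycast hit distance by the ray's maximum cast length
# so the observation lies in [0, 1]. This is an illustrative helper, not part
# of the ML-Agents RayPerceptionSensor API.

def normalize_hit_distance(hit_distance, ray_length, hit=True):
    # No hit: report 1.0 (nothing within range) instead of a raw distance,
    # so "clear" and "obstacle at max range" look similar to the network.
    if not hit:
        return 1.0
    return min(hit_distance, ray_length) / ray_length
```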
I haven't been able to find much about how to go about configuring the architecture for the reward signal networks. In fact, I cannot understand at...
I assume that they are in fact normalized, but I am just puzzled how I am getting better results from using the Python normalization versus doing...
Yes, but I am talking about the RaySensor 2D component that comes packaged with Unity ML-Agents.
I have been manually normalizing all my vector observations and wasn't using the normalization hyperparameter. I tried flipping on normalization...
So I used to think that lambda affected this, but I don't think it's so simple, and I would suggest not touching it just yet. I also have seen some...
When training stalls out at a certain point in my levels, I'm trying to diagnose what is happening so I can try and tweak hyperparameters accordingly.
I have always normalized my observations myself. Is there some extra benefit to using the normalize hyperparameter over just normalizing yourself?
I want to train on EC2 instances, mainly for the purpose of hyperparameter tuning, so that I can test many different sets of hyperparameters on...
Below is an image of the Cumulative Reward graph. The max reward is 3.1 (end of the level); as you can see, the agent is reaching the 3.1 point at...
And what guides your choice of time_horizon setting? When I originally looked at time_horizon, I was excited because I need my agents to 'understand'...
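For anyone following along, time_horizon is set per behavior in the ML-Agents trainer config YAML. A hedged sketch of where it lives (the behavior name and values below are illustrative, not from this thread):

```yaml
behaviors:
  MyLevelAgent:            # illustrative behavior name
    trainer_type: ppo
    time_horizon: 128      # steps of experience collected per agent before
                           # the value estimate is used to bootstrap the return
    network_settings:
      normalize: true      # trainer-side running normalization of vector obs
```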