
Agent won't move after upgrade from 0.13 to 0.15

Discussion in 'ML-Agents' started by n3wb13, Mar 28, 2020.

  1. n3wb13

    n3wb13

    Joined:
    Jul 12, 2017
    Posts:
    24
    I decided to upgrade my project because 0.15 is a release candidate, but I think the upgrade broke it. I don't use deprecated methods, and I also adopted the changes from 0.14, but for some reason my agent won't move in either Heuristic or Inference mode (the sample project works fine), and CPU usage is also high. There was an error in the editor that disappeared after I restarted Unity. I don't have the full error message, but I managed to salvage some of it: "Could not load signature of MLAgents.DemonstrationRecorder:InitializeDemoStore... Could not load file or assembly 'System.IO.Abstractions..."

    What I've done so far
    • InitializeAgent() to Initialize()
    • AgentAction() to OnActionReceived()
    • AgentReset() to OnEpisodeBegin()
    • Done() to EndEpisode()
    • Create a new MonoBehaviour and store the data in it instead of inheriting from Academy
    • Move the InitializeAcademy code to MonoBehaviour.Awake()
    • Use Barracuda 0.6.2, 0.6.1, 0.6.0 and 0.5.0
    • Check Academy.k_ApiVersion ("0.15.0")

    What I failed to do
    • Check UnityEnvironment.API_VERSION in environment.py (I can't find its location)
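
    For reference, the method renames in the first list look roughly like this in 0.15 (a minimal sketch; MyAgent and the empty method bodies are just placeholders):

    Code (CSharp):
        using MLAgents;

        public class MyAgent : Agent
        {
            public override void Initialize()                            // was InitializeAgent()
            {
            }

            public override void OnEpisodeBegin()                        // was AgentReset()
            {
            }

            public override void OnActionReceived(float[] vectorAction)  // was AgentAction()
            {
                // apply actions and rewards here; call EndEpisode() (was Done())
                // when the episode should finish
            }
        }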
     
  2. christophergoy

    christophergoy

    Unity Technologies

    Joined:
    Sep 16, 2015
    Posts:
    735
    Hi @n3wb13,
    Have you added a DecisionRequester to your Agents? Please see the migration guide from 0.13.0 to 0.14.0. If you are still having issues, can you post your terminal output from your run of mlagents-learn? Thanks.
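
    In case it helps, here is a minimal sketch of adding the component from code (assuming ML-Agents 0.14+, where the MLAgents.DecisionRequester component exists; AgentSetup is just an illustrative name). Normally you would simply add the DecisionRequester in the Inspector, on the same GameObject as your Agent:

    Code (CSharp):
        using MLAgents;
        using UnityEngine;

        // Illustrative helper: attach to the same GameObject as the Agent,
        // or skip this and add the DecisionRequester in the Inspector.
        public class AgentSetup : MonoBehaviour
        {
            void Awake()
            {
                // Make sure a DecisionRequester is present so decisions are
                // requested automatically at a fixed interval.
                if (GetComponent<DecisionRequester>() == null)
                {
                    gameObject.AddComponent<DecisionRequester>();
                }
            }
        }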
     
    n3wb13 likes this.
  3. n3wb13

    n3wb13

    Joined:
    Jul 12, 2017
    Posts:
    24
    DecisionRequester was the answer. Thank you so much.
     
    christophergoy likes this.
  4. ChrissCrass

    ChrissCrass

    Joined:
    Mar 19, 2020
    Posts:
    31
    The DecisionRequester is a band-aid component to help people switch to the new system, where calling RequestDecision() is a requirement.

    On-demand decision making is more difficult to get right when you are making decisions at variable intervals, but it is also much more efficient, and much more interesting from a generalized learning perspective.

    In any case, I recommend that people create simple counters in their agent classes which can track fixed update ticks and request decisions according to a set interval. Here's an example:
     
    Last edited: Mar 31, 2020
  5. ChrissCrass

    ChrissCrass

    Joined:
    Mar 19, 2020
    Posts:
    31
    Code (CSharp):
        // Members of a class deriving from MLAgents.Agent:
        public int maxSteps = 512;
        public int decisionSparsity = 10;  // request a decision every N FixedUpdate ticks
        public int currentStep;
        public int decisionCounter = 0;

        public void RunDecisionCounter()
        {
            if (currentStep > maxSteps)
            {
                EndEpisode();
                decisionCounter = decisionSparsity;
            }
            else if (decisionCounter <= 0)
            {
                RequestDecision();
                decisionCounter = decisionSparsity;
            }
            decisionCounter -= 1; // decrements the decision counter every fixed update
        }

        public override void OnEpisodeBegin() // formerly AgentReset()
        {
            currentStep = 0;
        }

        public override void OnActionReceived(float[] vectorAction)
        {
            // agent action / reward stuff here, e.g. AddReward(points);
            currentStep++;
        }

        public void FixedUpdate()
        {
            RunDecisionCounter();
        }
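
    One caveat with this approach: if you request decisions manually like this, you generally would not also keep a DecisionRequester on the same agent, since both would end up issuing RequestDecision() calls.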