
Question How mlagents tackle the curse of dimensionality problem in multiagent reinforcement learning

Discussion in 'ML-Agents' started by Hsgngr, Aug 25, 2020.

  1. Hsgngr

    Joined:
    Dec 28, 2015
    Posts:
    61
    In a well-known article, "Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms", it is stated that "the joint action space that increases exponentially with the number of agents may cause scalability issues, known as the combinatorial nature of multi-agent reinforcement learning (MARL)".

    Another article, "A survey and critique of multiagent deep reinforcement learning", says that "One way to tackle the curse of dimensionality challenge within multiagent scenarios is the use of search parallelization". Do we have anything like this in ML-Agents? Does it count as parallelization when agents work in the same scene but in different environment areas, for instance something like the sketch below?
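
    (Just to illustrate what I mean by "different environments in the same scene": a minimal sketch that duplicates a self-contained training-area prefab several times so many agents collect experience in parallel. The class, prefab, and field names here are my own illustrative choices, not part of the ML-Agents API.)

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: spawn several copies of a self-contained training
    // area in one scene, each with its own agent, so experience is gathered
    // in parallel. "trainingAreaPrefab", "areaCount" and "spacing" are
    // illustrative names, not ML-Agents API.
    public class TrainingAreaSpawner : MonoBehaviour
    {
        public GameObject trainingAreaPrefab; // prefab containing agent + environment
        public int areaCount = 8;             // number of copies to spawn
        public float spacing = 25f;           // offset so the areas never overlap

        void Awake()
        {
            for (int i = 0; i < areaCount; i++)
            {
                Instantiate(trainingAreaPrefab,
                            new Vector3(i * spacing, 0f, 0f),
                            Quaternion.identity);
            }
        }
    }
    ```

    And as far as I understand, mlagents-learn also has a --num-envs option that launches multiple copies of the built executable, which seems like a different kind of parallelization again.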

    Thanks