Delivering Personalised Gameplay Through ML-Agents: Approaches

Discussion in 'ML-Agents' started by joelognn, May 5, 2020.

  1. joelognn

    joelognn

    Joined:
    Aug 8, 2018
    Posts:
    9
    Hi,

    I work at an educational games startup, and we have been thinking about ways to deliver performance-based content to our users. Reinforcement learning in ML-Agents appears to be a good fit: the agent could suggest certain problems to the user and be rewarded or penalised according to whether the user answers correctly. In theory, this would let the game adapt to how each user plays, as the agent tries to maximise reward for a given player while avoiding penalties.
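    To make the idea concrete, here is a minimal sketch of that reward loop outside ML-Agents, framed as an epsilon-greedy bandit over problem categories. All of the names and the simulated player are hypothetical; this is an illustration of the "reward correct answers" signal, not ML-Agents code:

    ```python
    import random

    def select_problem(q_values, epsilon=0.1):
        """Pick a problem category: usually the best-known one, sometimes a random one."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda i: q_values[i])

    def update(q_values, counts, chosen, reward):
        """Incremental-mean update of the estimated reward for the chosen category."""
        counts[chosen] += 1
        q_values[chosen] += (reward - q_values[chosen]) / counts[chosen]

    # Simulated player: answers category 1 correctly 80% of the time, others 30%.
    random.seed(0)
    q_values = [0.0, 0.0, 0.0]
    counts = [0, 0, 0]
    for _ in range(2000):
        chosen = select_problem(q_values, epsilon=0.1)
        p_correct = 0.8 if chosen == 1 else 0.3
        reward = 1 if random.random() < p_correct else 0  # reward correct answers
        update(q_values, counts, chosen, reward)

    best = max(range(3), key=lambda i: q_values[i])  # category the agent settles on
    ```

    The same shape carries over to a full RL setup: the "state" would also include what the player has seen so far, rather than just per-category averages.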

    Ideally, we could train an ML-Agents agent with reinforcement learning on-device during gameplay, adjusting the weights dynamically and persisting the model. However, it seems the only way to train an agent is through a Unity instance coupled to a Python environment. Is this correct? If so:

    1. Is there any way to collect data from players and train later? In essence, we would ship a minimally trained model to the user at the start, retrain it (perhaps daily, based on their play data), and then deliver the retrained model back to them. It appears, though, that training in Python is coupled to sensors in the Unity editor environment?

    2. Failing that, are there any means by which multiple Unity instances (probably through WebGL) can be linked to a single Python instance? I understand that this would limit the degree of personalisation of our model, but it may be an interesting experiment.
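    For question 1, one client-side approach (independent of ML-Agents' own demonstration-file format) would be to log each interaction as a simple (state, action, reward) record on the device and upload the log for an offline retraining pass. A minimal sketch in Python, with the record schema entirely hypothetical:

    ```python
    import json
    import io

    def log_interaction(stream, state, problem_id, correct):
        """Append one gameplay interaction as a JSON line (state, action, reward)."""
        record = {"state": state, "action": problem_id, "reward": 1 if correct else 0}
        stream.write(json.dumps(record) + "\n")

    def load_interactions(stream):
        """Read logged interactions back for an offline training pass."""
        return [json.loads(line) for line in stream if line.strip()]

    # In-memory stand-in for a log file uploaded from the player's device.
    buf = io.StringIO()
    log_interaction(buf, {"level": 3, "streak": 2}, problem_id=7, correct=True)
    log_interaction(buf, {"level": 3, "streak": 0}, problem_id=9, correct=False)

    buf.seek(0)
    batch = load_interactions(buf)  # feed this batch to an offline learner
    total_reward = sum(r["reward"] for r in batch)
    ```

    The open question is then whether ML-Agents can consume such a batch offline, or whether a separate offline learner is needed on the server side.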

    Of course, ML-Agents may not be the best way to solve this problem, and I am open to any other suggestions on how you would approach this.
     
  2. christophergoy

    christophergoy

    Unity Technologies

    Joined:
    Sep 16, 2015
    Posts:
    735
    At the moment, there is no way to do this other than from the editor with demonstration files. This is something we are thinking about, but we don't have a short-term solution for collecting remote data from player sessions.

    We don't officially support WebGL with ML-Agents. If you changed the code yourself you could probably get it to work, but it might require significant changes.