
What is the best approach using ML-Agents for eventual use on a physical robot arm?

Discussion in 'Editor & General Support' started by krotovd, Oct 14, 2019.

  1. krotovd

    krotovd

    Joined:
    Sep 23, 2014
    Posts:
    35
    I've been working with Unity for some time from a gaming standpoint and have only recently started getting into reinforcement learning. I eventually want to use a trained model to move a physical robot arm. Since I was already familiar with Unity3D, it made sense to check out their ML-Agents.

    My hope in posting here is to get input from anyone who has already gone through this. Is Unity's ML-Agents a good option when the end goal is applying the trained model to a physical robot? I currently have a 4-DOF model robot arm with a camera attached. I am controlling the joints using "AddTorque" and "Discrete" actions, with 5 branches where each branch has a size of 2; each branch then determines whether to apply torque in the positive or negative direction for that joint. For my use case, is "Discrete" or "Continuous" recommended, or is there not really a recommended option?
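
    In rough terms, that torque mapping could look something like this (a minimal sketch assuming a recent ML-Agents package; the joints array, torque value, and rotation axis are placeholders, and older releases use an AgentAction(float[]) override instead of OnActionReceived):

    Code (CSharp):
    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using UnityEngine;

    public class ArmAgent : Agent
    {
        // One rigidbody per controllable joint (4 DOF + gripper = 5 discrete branches).
        public Rigidbody[] joints;
        public float torque = 10f;

        public override void OnActionReceived(ActionBuffers actions)
        {
            // Each branch has size 2: 0 = apply negative torque, 1 = apply positive torque.
            for (int i = 0; i < joints.Length; i++)
            {
                float sign = actions.DiscreteActions[i] == 0 ? -1f : 1f;
                joints[i].AddTorque(joints[i].transform.up * sign * torque);
            }
        }
    }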

    I am using a camera for visual observations, but I am letting Unity process the image in its own way. In a real-world situation I would have a live camera feed coming in, on which I would perform object detection. At this point I am not sure whether there is a way to do that in a simulated environment within Unity.
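
    As a rough idea of what the real-feed path could look like, the simulated camera can be read back into raw pixels and handed to the same object-detection code that would run on a physical camera, instead of letting ML-Agents consume the camera directly (a hypothetical sketch; the class name, armCamera field, and 84x84 resolution are made up):

    Code (CSharp):
    using UnityEngine;

    public class CameraGrabber : MonoBehaviour
    {
        public Camera armCamera;          // camera mounted on the arm/gripper
        public int width = 84, height = 84;

        // Renders the camera into a temporary RenderTexture and returns the raw pixels,
        // which could then be fed to an external object-detection pipeline.
        public Color32[] Capture()
        {
            var rt = RenderTexture.GetTemporary(width, height, 24);
            var prevTarget = armCamera.targetTexture;
            armCamera.targetTexture = rt;
            armCamera.Render();

            var prevActive = RenderTexture.active;
            RenderTexture.active = rt;
            var tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
            tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
            tex.Apply();

            // Restore state and release temporaries.
            RenderTexture.active = prevActive;
            armCamera.targetTexture = prevTarget;
            RenderTexture.ReleaseTemporary(rt);

            var pixels = tex.GetPixels32();
            Destroy(tex);
            return pixels;
        }
    }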

    I would love to hear anyone's input on the above project. I am using this as a personal proof of concept so I can eventually show my workplace that we can build simulators for various real-world projects and apply RL to them. However, so far my robot arm has not been able to fully learn its task of picking up a cube from one platform, rotating around, and dropping it on another.

    I am going to look at using some imitation learning, as well as curriculum training, to see if that helps. I'm not sure whether these will give better results, but tweaking and testing is all part of this game. Any advice would be greatly appreciated!
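
    For the imitation learning part, recording demonstrations mostly comes down to adding a DemonstrationRecorder next to the Agent and driving the arm by hand (a sketch assuming a recent ML-Agents release; the component's field names have shifted a little between versions, and the name/directory values here are just examples):

    Code (CSharp):
    using Unity.MLAgents.Demonstrations;
    using UnityEngine;

    public class DemoSetup : MonoBehaviour
    {
        void Awake()
        {
            // Records a .demo file while the scene plays (e.g. while driving the
            // arm through the Agent's Heuristic method) for GAIL / behavioral cloning.
            var recorder = gameObject.AddComponent<DemonstrationRecorder>();
            recorder.Record = true;
            recorder.DemonstrationName = "ArmPickAndPlace";
            recorder.DemonstrationDirectory = "Assets/Demonstrations";
        }
    }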
     
  2. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,778
    Firstly, do you have an IK solution in place?
    You want to be able to just care about the end effector of the gripper, and then apply the logic/AI to the end effector. You will also need some form of position feedback tracking, depending on how accurately you want to match the real world, but simply the gripper position and orientation should be more than enough for most cases.
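
    As a very rough sketch of how that end-effector approach could look in ML-Agents, assuming some IK solver elsewhere drives the joint chain toward an ikTarget transform (every name and value here is a placeholder):

    Code (CSharp):
    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    public class EndEffectorAgent : Agent
    {
        public Transform ikTarget;    // target the IK solver tries to reach
        public Transform gripper;     // actual end effector, for position feedback
        public float moveSpeed = 0.5f;

        public override void CollectObservations(VectorSensor sensor)
        {
            // Position/orientation feedback for the end effector plus the current target.
            sensor.AddObservation(gripper.localPosition);
            sensor.AddObservation(gripper.localRotation);
            sensor.AddObservation(ikTarget.localPosition);
        }

        public override void OnActionReceived(ActionBuffers actions)
        {
            // Three continuous actions nudge the IK target along x/y/z.
            var move = new Vector3(
                actions.ContinuousActions[0],
                actions.ContinuousActions[1],
                actions.ContinuousActions[2]);
            ikTarget.localPosition += move * moveSpeed * Time.fixedDeltaTime;
        }
    }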
     
  3. krotovd

    krotovd

    Joined:
    Sep 23, 2014
    Posts:
    35
    Antypodish, first off I must thank you for this idea. I have 4 separate hinge joints, plus the grip movement, that I have been trying to keep track of so far. Your idea of setting up IK is awesome! I must say I'm embarrassed not to have thought of that myself. I have not messed much with IK in Unity, but being able to just control the position of the grip and have everything else follow is brilliant! I will definitely set that up tomorrow and then test the new approach. I am interested in this for a real-world implementation, but at this point I do not have an actual robot to apply it to yet. However, accuracy will be important. Thanks again, mate!
     
  4. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,778
    There are plenty of examples out there using robotic manipulators in Unity.

    Here is mine, quite old now, and in an unfinished / experimental state, but functional for what I wanted to show back in the day.