
Understanding ROS and Unity Architecture

Discussion in 'Robotics' started by BuildEverything, Sep 19, 2021.

  1. BuildEverything

    BuildEverything

    Joined:
    Jul 30, 2012
    Posts:
    9
    Hey there! I am a game developer working my way into robotics simulation through the Robotics Hub tutorials. ROS is completely new to me, so please bear with me.

    Does it make sense to treat Unity as an authoritative server / control interface, with multiple ROS-controlled devices as clients? What would be your approach to this?

    The tutorials seem to be pretty locked into a one-sim-environment-to-one-device setup, so I'm not sure if I'm trying to fit a square peg into a round hole.
     
  2. LaurieUnity

    LaurieUnity

    Unity Technologies

    Joined:
    Oct 23, 2020
    Posts:
    77
    Hi there; I'm not too sure what you're asking, since robotics systems are not generally structured like games.
    Yes, if you're testing a robot in a simulated environment, one typical setup would be to have Unity simulate the world, and publish information about the results of that simulation for other components to consume.
    As for what's "authoritative"... if one of your other components is, for example, controlling a real robot, then that's probably the most authoritative source of data. It really depends on what you're trying to do. ROS doesn't generally follow a strict client-server pattern - any component can communicate with any other component.
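    To make the pub/sub point concrete, here's a minimal sketch of the "Unity publishes, anything subscribes" idea in Python with rclpy (ROS 2). The topic name and message type are placeholders I picked for illustration, not anything from the tutorials:

    Code (Python):

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class WorldStatePublisher(Node):
        """Toy stand-in for the Unity side publishing simulation results."""
        def __init__(self):
            super().__init__('world_state_publisher')
            # Any node may publish to this topic; any number may subscribe.
            # There is no client/server hierarchy baked into the transport.
            self.pub = self.create_publisher(String, 'world_state', 10)
            self.timer = self.create_timer(0.1, self.tick)

        def tick(self):
            msg = String()
            msg.data = 'simulation results would go here'
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(WorldStatePublisher())

    if __name__ == '__main__':
        main()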
     
  3. BuildEverything

    BuildEverything

    Joined:
    Jul 30, 2012
    Posts:
    9
    Hi Laurie, thanks for the reply.

    It's pretty interesting, because it seems like the fundamental design principle is more like a peer-to-peer network, which (if I may speculate) makes sense given the focus on each unit being autonomous. What's a bit different about my case is that it targets collaboration between devices. For example, one robotic arm orienting and holding an object while another arm traces the edges.

    I'm trying to approach this carefully so as to conform to established design philosophies rather than reinventing the wheel or building something too naively.

    Going off your thought of Unity simulating a world and publishing to multiple other components: taking the Nav2-SLAM tutorial project as a starting point, how would you go about connecting 0..N components to one instance of Unity? I've seen it suggested to use a distinct port for each component, but I have concerns about the scalability of that approach.

    I've been theorycrafting around doing something like the following (rough sketch after the list), but need a sanity check:
    • Maintain a DB with basic information (device ID, last known state, etc.)
    • Scale ROS Docker instances with Kubernetes using info from the DB
      • Each Docker instance is fed a device ID from its DB record
      • In the wild, the device would be deployed with a predefined device ID and initiate a connection on startup
    • Device connects to Unity with its device ID
    • Unity provides initial information to the device
    • Logic on the Unity side acts as coordinator, sending instructions to devices and waiting for feedback
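
    And here's the rough sketch: the coordinator logic I have in my head, written as Python. Everything in it is invented (names, command format, registry shape), and the real logic would of course live on the Unity side:

    Code (Python):

    from dataclasses import dataclass, field

    @dataclass
    class Device:
        device_id: str
        last_state: dict = field(default_factory=dict)  # mirrored in the DB
        busy: bool = False

    class Coordinator:
        """Invented sketch of the Unity-side coordinator."""
        def __init__(self):
            self.devices = {}  # device_id -> Device

        def on_connect(self, device_id):
            # The device announces itself with its predefined ID on startup.
            self.devices[device_id] = Device(device_id)
            return {'cmd': 'Init', 'world': 'initial environment info here'}

        def on_feedback(self, device_id, feedback):
            # Persist the last known state, then hand out the next instruction.
            dev = self.devices[device_id]
            dev.last_state = feedback
            dev.busy = False
            return self.next_instruction(dev)

        def next_instruction(self, dev):
            dev.busy = True
            return {'cmd': 'MoveTo', 'pos': (3, 1, 2)}  # placeholder command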

    Contrary to my randomly chosen name, I'm very open to changing this plan if there's a better approach that meets these needs, and I'd greatly appreciate any feedback on what I can do better.
     
  4. shuo_unity

    shuo_unity

    Unity Technologies

    Joined:
    Sep 25, 2018
    Posts:
    4
    Hi @StubbornDeveloper, I would like to better understand your use case first.
    1. Could you explain what you mean by "ROS-controlled device"? Are they robots, or are they ROS publishers/subscribers/services?
    2. What do you mean by using Unity as an authoritative server? It sounds like a config file or a DB could be useful for storing shared information/instructions.
    3. Could you elaborate on why you need Kubernetes for scaling? Are you running multiple groups of Unity Docker containers and Device 0..Device N with different configurations?

    You may also reach out to the Robotics team via unity-robotics@unity3d.com.
     
  5. BuildEverything

    BuildEverything

    Joined:
    Jul 30, 2012
    Posts:
    9
    Excellent questions!

    1. The terminology should definitely be clarified. By ROS-controlled device, I ultimately mean robots with an onboard computer controlled by ROS. Right now we are doing simulations, but we want to keep communication with the robots as close to a live environment as possible.
    2. In this case, I mean that Unity will publish environment information to the robots as well as determine which actions each robot should take, in the form of commands, e.g., [MoveTo {pos:<3, 1, 2>}], [GrabObject {type:Tile pos:<5, 1, 3>}] (see the sketch just below this list). Robots will provide feedback but lack significant autonomy. I know, it's not nearly as cool, but in this case we need reliability and verification over speed or cool factor.
    3. While I'm working on getting more comfortable with Docker, I am far from an expert. It could very well be that I don't need Kubernetes to scale the number of containers; it just seems like the proper tool for scaling out containers to simulate 0..N robots. Do you think it's unnecessary complexity? Ultimately we will have multiple groups of containers, but that will require an entirely different system to scale anyway.
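
    Here's the rough command/feedback envelope I'm picturing for point 2. Field names are invented just to show the shape; the real thing would presumably be proper ROS messages:

    Code (Python):

    import json

    # Command from Unity (coordinator) to a robot.
    command = {
        'device_id': 'arm-01',
        'seq': 42,  # lets Unity match feedback to the command it answers
        'cmd': 'GrabObject',
        'args': {'type': 'Tile', 'pos': [5, 1, 3]},
    }

    # Feedback from the robot once the command completes (or fails).
    feedback = {
        'device_id': 'arm-01',
        'seq': 42,
        'status': 'done',  # or 'failed' / 'in_progress'
        'state': {'pos': [5, 1, 3], 'gripper': 'closed'},
    }

    print(json.dumps(command))
    print(json.dumps(feedback))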
    Thanks for the email, and much thanks for the support. :D
     
  6. mrpropellers

    mrpropellers

    Unity Technologies

    Joined:
    Jul 10, 2019
    Posts:
    13
    Hey @StubbornDeveloper! Could you describe the control loop you expect to be executing with a little more detail? Sounds like you want Unity to hold the autonomy logic that decides how the robots should interact with the environment, but what information is the code that lives external to Unity responsible for providing? If Unity says "MoveTo: (x,y,z)" -- what does the robot send back? Does it send absolute position updates, some output from a motor controller to be interpreted inside the simulation, or is everything in Unity being simulated as an immutable sequence, with the external robot code simply acknowledging that commands and sensor information were received?
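
    To illustrate the distinction, these would be three very different contracts for what comes back after a "MoveTo" (field names entirely made up):

    Code (Python):

    # 1. Robot reports ground truth; the simulation reconciles against it.
    absolute_update = {'kind': 'pose', 'pos': [3.0, 1.0, 2.0]}

    # 2. Raw controller output that the simulation must integrate itself.
    motor_output = {'kind': 'motor', 'wheel_vel': [0.40, 0.38]}

    # 3. A bare acknowledgement; the simulated state stays authoritative.
    plain_ack = {'kind': 'ack', 'seq': 42}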
     
  7. JeffDUnity3D

    JeffDUnity3D

    Unity Technologies

    Joined:
    May 2, 2017
    Posts:
    14,446
    @StubbornDeveloper I'm doing something similar: I'm developing the control loop with help from MathWorks Consulting in Simulink. Once I have that done, I plan to include Unity in the loop. I'm controlling a ROS VTOL rocket like Elon's Falcon 9, but much smaller :)



    The rocket has an onboard Jetson Nano that publishes its position and trajectory using a ZED2 camera, and receives flight commands via ROS. Once in flight, it will fly autonomously. The long-term goal is a "self-driving" jetpack, Uber meets Iron Man!
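
    In rough Python terms, the onboard side looks something like this (a simplified sketch assuming ROS 2/rclpy; topic names and rates are placeholders, and the real pose would come from the ZED2 tracking pipeline rather than a stub):

    Code (Python):

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PoseStamped
    from std_msgs.msg import String

    class FlightNode(Node):
        """Simplified stand-in for the Jetson Nano's onboard node."""
        def __init__(self):
            super().__init__('flight_node')
            self.pose_pub = self.create_publisher(PoseStamped, 'pose', 10)
            self.cmd_sub = self.create_subscription(
                String, 'flight_commands', self.on_command, 10)
            self.timer = self.create_timer(0.05, self.publish_pose)

        def publish_pose(self):
            msg = PoseStamped()
            msg.header.stamp = self.get_clock().now().to_msg()
            # Real values would come from the ZED2 tracking pipeline.
            self.pose_pub.publish(msg)

        def on_command(self, msg):
            self.get_logger().info(f'received command: {msg.data}')

    def main():
        rclpy.init()
        rclpy.spin(FlightNode())

    if __name__ == '__main__':
        main()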