
Docker-compose

Discussion in 'Unity Accelerator' started by MonkeyDevD, Aug 3, 2020.

  1. MonkeyDevD

    MonkeyDevD

    Joined:
    Feb 28, 2018
    Posts:
    13
It would be nice if the Docker page included a few examples of how to set up the Accelerator in some common configurations.
After the first two reads, I still have no idea which options are required for just Asset Pipeline v2.

    So I will just share this here and see if we can work this out.

Bare-bones docker-compose for just a quick & easy deploy
Code (yaml):
version: '3.3'
services:
    accelerator:
        image: 'unitytechnologies/accelerator:latest'
        ports:
            - '80:80'
            - '443:443'
            - '10080:10080'
        environment:
            - UNITY_ACCELERATOR_DEBUG=true
        volumes:
            - '/path/to/local/folder:/agent'
Those ports are undocumented -- what are they used for?
Which ones are actually required?
Which port does the editor use?

As general feedback on writing documentation: assume the reader has no prior knowledge of the thing you are describing. At the very least, explain what some of the bigger items above are (for example, which dashboard are you referring to with the username and password?).

I will just bump the HTTP and HTTPS ports out of the way a bit and see if it works; that would seriously reduce the trouble of dealing with port collisions.
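
For example, a minimal sketch of that idea (the host ports 8080 and 8443 are arbitrary choices, not required values):
Code (yaml):
version: '3.3'
services:
    accelerator:
        image: 'unitytechnologies/accelerator:latest'
        ports:
            - '8080:80'     # dashboard over plain HTTP, moved off host port 80
            - '8443:443'    # dashboard over TLS, moved off host port 443
            - '10080:10080' # custom protocol port the editor talks to
        volumes:
            - '/path/to/local/folder:/agent'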
     
  2. gregoryh_unity

    gregoryh_unity

    Unity Technologies

    Joined:
    Oct 1, 2018
    Posts:
    50
Thanks for the feedback -- we are working on the documentation all the time, so hopefully we can improve it as we go.

For the ports, I think it's just thoroughness. The HTTP server inside the Accelerator runs on port 80 (inside the Docker container) when running without TLS and on 443 with TLS -- this HTTP server provides the built-in dashboard as well as a /metrics endpoint, for example. Port 10080 (inside the container) is the adbv2 port -- the one you'd put in the editor -- and it runs a custom protocol.

If you're having port collision issues, that would have to be outside the container (since all the ports are available inside the container), and we have no way of knowing which ports your host is using. You can change the first number in those mappings to indicate which port to use on the host, if I remember my Docker options correctly.
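
In docker-compose's short port syntax the mapping reads 'HOST:CONTAINER', and you can optionally pin a host IP in front. A sketch (the port numbers here are arbitrary examples):
Code (yaml):
ports:
    - '8080:80'            # host port 8080 forwards to container port 80
    - '127.0.0.1:8443:443' # bound only on loopback: host 8443 -> container 443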
     
  3. MonkeyDevD

    MonkeyDevD

    Joined:
    Feb 28, 2018
    Posts:
    13
    Thanks for the information, I guessed as much from context.
    But it is nice to get confirmation :D

    Looking forward to checking this out further.
     
  4. MonkeyDevD

    MonkeyDevD

    Joined:
    Feb 28, 2018
    Posts:
    13
    For other readers:

Port collisions happen when other containers (or local services like a webserver) want to use the same ports.
There are ways of dealing with this, the easiest being rebinding them to other ports.

Another option is setting up a reverse proxy that uses the context of a request to find the correct local port to relay to.
That is a bit more setup, but it allows multiple services to run on a single device --
which is exactly what Docker makes easy :D

- '8080:80'
- '8443:443'

would bump the ports out of the normal HTTP and HTTPS spots,
allowing us to bind them on the local host without collisions.
You can then still reach the services on those ports using the hostname.
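
A sketch of the reverse-proxy approach using the community nginxproxy/nginx-proxy image, which routes by the request's Host header via a VIRTUAL_HOST variable (the hostname below is a placeholder, and only the HTTP dashboard can be proxied this way):
Code (yaml):
version: '3.3'
services:
    proxy:
        image: 'nginxproxy/nginx-proxy:latest'
        ports:
            - '80:80'
        volumes:
            # nginx-proxy watches the Docker socket to discover containers
            - '/var/run/docker.sock:/tmp/docker.sock:ro'
    accelerator:
        image: 'unitytechnologies/accelerator:latest'
        environment:
            # placeholder hostname; requests for it are relayed to this container's port 80
            - VIRTUAL_HOST=accelerator.example.local
        ports:
            # the adbv2 port runs a custom (non-HTTP) protocol, so it still needs a direct mapping
            - '10080:10080'
        volumes:
            - '/path/to/local/folder:/agent'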

Alternatively, you could set up Docker to bind a virtual network adapter that takes up another IP on the local network and gets its own unique hostname (which is probably easier to work with in the long run).
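
A sketch of that approach using Docker's macvlan network driver (the parent interface, subnet, and address are placeholders for your own network; note that compose typically assigns a static IP here rather than leasing one via DHCP):
Code (yaml):
version: '3.3'
services:
    accelerator:
        image: 'unitytechnologies/accelerator:latest'
        networks:
            accel_net:
                ipv4_address: 192.168.1.50 # placeholder address on the LAN
        volumes:
            - '/path/to/local/folder:/agent'
networks:
    accel_net:
        driver: macvlan
        driver_opts:
            parent: eth0 # placeholder: the host NIC to attach to
        ipam:
            config:
                - subnet: 192.168.1.0/24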