3d to Image comparison, getting errors

Discussion in 'ML-Agents' started by JPhilipp, Feb 25, 2020.

  1. JPhilipp

     Joined: Oct 7, 2014
     Posts: 56
    Background: I want to build a 3D object out of a set of 15 base shapes, adjusting their rotation, color and so on via actions, with the goal of matching a face photo. This creates the (perhaps problematically high) number of 15 * 14 = 210 continuous actions. The agent's only observation is an 84x84 camera pointed at the randomized photo on a quad. The reward measures how closely a snapshot from a second camera, pointed at the 3D creation, matches the photo, using the summed per-pixel color distance. I'm ending each episode immediately via Done() after handing out the reward (not sure if that's even appropriate); ResetOnDone then repeats the same process.
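    For reference, the reward metric can be sketched in plain Python (an illustrative stand-alone mirror of my C# code below, not anything ML-Agents provides; the 100/0.01 constants are the ones from my agent):

```python
def color_distance_reward(source, rendered, offset=100.0, scale=0.01):
    """Offset minus the scaled sum of per-channel absolute differences.

    source/rendered: lists of (r, g, b) tuples with components in [0, 1].
    """
    distance = sum(
        abs(s[channel] - r[channel])
        for s, r in zip(source, rendered)
        for channel in range(3)  # R, G, B
    )
    return offset - distance * scale

# Identical images give the maximum reward:
pixels = [(0.2, 0.5, 0.9)] * 4
print(color_distance_reward(pixels, pixels))  # 100.0
```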

    The Error: Even when trying various config YAML settings (e.g. time_horizon: 1, but also much higher, more typical values), I keep getting errors in the Anaconda/TensorFlow prompt after some steps. What might I be doing wrong? Error below. Thanks!

    Code (csharp):
    INFO:mlagents.trainers: Main: ClayxelFaceMatcher: Step: 1000. Time Elapsed: 22.805 s Mean Reward: -65.020. Std of Reward: 0.000. Training.
    Process Process-1:
    Traceback (most recent call last):
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\multiprocessing\process.py", line 258, in _bootstrap
        self.run()
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\multiprocessing\process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 132, in worker
        env.step()
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\timers.py", line 262, in wrapped
        return func(*args, **kwargs)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\environment.py", line 326, in step
        self._update_state(rl_output)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\environment.py", line 283, in _update_state
        agent_info_list, self._env_specs[brain_name]
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\timers.py", line 262, in wrapped
        return func(*args, **kwargs)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\rpc_utils.py", line 127, in batched_step_result_from_proto
        _process_visual_observation(obs_index, obs_shape, agent_info_list)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\timers.py", line 262, in wrapped
        return func(*args, **kwargs)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\rpc_utils.py", line 73, in _process_visual_observation
        for agent_obs in agent_info_list
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\rpc_utils.py", line 73, in <listcomp>
        for agent_obs in agent_info_list
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\timers.py", line 262, in wrapped
        return func(*args, **kwargs)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\mlagents_envs\rpc_utils.py", line 51, in process_pixels
        image.load()
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\PIL\ImageFile.py", line 250, in load
        self.load_end()
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\PIL\PngImagePlugin.py", line 677, in load_end
        self.png.call(cid, pos, length)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\PIL\PngImagePlugin.py", line 140, in call
        return getattr(self, "chunk_" + cid.decode('ascii'))(pos, length)
      File "e:\_misc\programs\anaconda\envs\ml-agents\lib\site-packages\PIL\PngImagePlugin.py", line 356, in chunk_IDAT
        raise EOFError
    EOFError
     
  2. celion_unity

     Joined: Jun 12, 2019
     Posts: 289
    That's pretty strange; it looks like the visual observation is getting truncated or corrupted. Two things that might help debug this:
    1) I assume you're using a CameraSensorComponent - can you set the compression type to None? That will send the data as floats instead of PNG, so it'll bypass PIL (the Python image library raising the exception).
    2) Do you mind saving a Demonstration file with your observations and attaching it here? We can hopefully load it and see if something is wrong with the visual data. You'll probably need to set your agent to the Heuristic behavior type (since it won't be able to do inference, and training would crash).

    Also, which versions of PIL and/or Pillow do you have installed via pip?
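    For context, here's roughly the decode path that's failing, as a stand-alone sketch (a simplification of what mlagents_envs does with each PNG observation, not the actual source):

```python
from io import BytesIO

from PIL import Image  # the library raising the EOFError in your trace

def decode_visual_observation(png_bytes):
    """Decode a PNG observation into rows of (r, g, b) floats in [0, 1].

    Image.load() forces a full decode; a PNG with truncated IDAT data
    raises EOFError there, which matches the bottom of your traceback.
    """
    image = Image.open(BytesIO(png_bytes))
    image.load()  # full decode happens here, not in Image.open()
    width, height = image.size
    return [
        [tuple(c / 255.0 for c in image.getpixel((x, y))[:3]) for x in range(width)]
        for y in range(height)
    ]
```

    With compression set to None, the observation is sent as raw floats and this whole decode path is skipped.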
     
  3. JPhilipp

     Joined: Oct 7, 2014
     Posts: 56
    Thanks! Yes, I'm using a CameraSensorComponent. I'm still on ML-Agents 0.13, and it looks like the compression option was only added in 0.14, so I guess I should try upgrading first (I had 0.14 before but had to downgrade because it didn't play along with whatever Python libraries I had, I think). By the way, I'm using the Anaconda setup route, as I haven't yet gotten the newly suggested plain-Python route to work (the Anaconda route worked fine for me).

    For reference, my versions:
    ml-agents: 0.13.0
    ml-agents-envs: 0.13.0
    Communicator API: API-13
    TensorFlow: 1.7.1
    pip: 18.1
    Python: 3.7 (I have a bunch of other versions installed, too)
    Barracuda: 0.4.0 (0.6.0 throws errors with my setup)

    Is my agent subclass what you mean by a Demonstration file? If upgrading to 0.14 fails to solve this, I can also share my full setup, but for starters, below is my agent subclass.

    Code (CSharp):
    using UnityEngine;
    using MLAgents;

    public class ClayxelsAgent : Agent
    {
        [SerializeField] Transform photoQuad = null;
        [SerializeField] RenderTexture clayxelsRenderTexture = null;

        Clayxel clayxelWrapper = null;
        ClayObject[] clayObjects = null;
        const int actionsPerClayObject = 14;
        int maxActions = -1;

        string[] imageNames = null;
        const string resourcesPath = "E:\\Projects\\Shapemaker\\Assets\\Resources";
        Material photoMaterial = null;
        Texture2D photoTexture = null;

        public override void InitializeAgent()
        {
            clayxelWrapper = GetComponent<Clayxel>();
            clayObjects = GetComponentsInChildren<ClayObject>();
            maxActions = clayObjects.Length * actionsPerClayObject;

            GetImageNames();

            Renderer renderer = photoQuad.GetComponent<Renderer>();
            photoMaterial = renderer.material;

            SetRandomImageOnPhotoQuad();
        }

        public override void CollectObservations()
        {
            // None needed, only CameraSensorComponent is automatically observed.
        }

        public override void AgentAction(float[] actions)
        {
            DoActions(actions);
            HandleReward();
            Done();
        }

        void DoActions(float[] actions)
        {
            for (int i = 0; i < clayObjects.Length; i++)
            {
                ClayObject clayObject = clayObjects[i];
                Transform clayTransform = clayObject.transform;

                int n = i * actionsPerClayObject;

                clayTransform.localPosition = new Vector3(
                    actions[n++] * 2.5f,
                    actions[n++] * 2.5f,
                    actions[n++] * 1.5f
                );

                const float maxAngle = 180f;
                clayTransform.localEulerAngles = new Vector3(
                    actions[n++] * maxAngle,
                    actions[n++] * maxAngle,
                    actions[n++] * maxAngle
                );

                const float minScale = 0.1f;
                const float maxScale = 3f;
                clayTransform.localScale = new Vector3(
                    minScale + actions[n++] * (maxScale - minScale),
                    minScale + actions[n++] * (maxScale - minScale),
                    minScale + actions[n++] * (maxScale - minScale)
                );

                clayObject.color = new Color(
                    (actions[n++] + 1f) * 0.5f,
                    (actions[n++] + 1f) * 0.5f,
                    (actions[n++] + 1f) * 0.5f,
                    1f
                );

                clayObject.blend = actions[n++] * 1.5f;

                float roundness = (actions[n++] + 1f) * 0.5f * 0.5f;
                const float mirrorXOption = 2.0f;
                clayObject.attrs = new Vector4(roundness, 0f, 0f, mirrorXOption);
            }

            clayxelWrapper.needsUpdate = true;
            clayxelWrapper.Update();
            // Clayxel.reloadAll();
        }

        void HandleReward()
        {
            float colorDistance = 0f;

            Texture2D renderTexture2D = new Texture2D(
                clayxelsRenderTexture.width, clayxelsRenderTexture.height,
                TextureFormat.RGBA32, false
            );
            RenderTexture.active = clayxelsRenderTexture;
            renderTexture2D.ReadPixels(
                new Rect(0, 0, clayxelsRenderTexture.width, clayxelsRenderTexture.height), 0, 0
            );
            renderTexture2D.Apply();

            int width = clayxelsRenderTexture.width;
            int height = clayxelsRenderTexture.height;
            Color[] colorsSource = photoTexture.GetPixels(0, 0, width, height);
            Color[] colorsClayxels = renderTexture2D.GetPixels(0, 0, width, height);

            for (int i = 0; i < colorsSource.Length; i++)
            {
                colorDistance +=
                    Mathf.Abs(colorsSource[i].r - colorsClayxels[i].r) +
                    Mathf.Abs(colorsSource[i].g - colorsClayxels[i].g) +
                    Mathf.Abs(colorsSource[i].b - colorsClayxels[i].b);
            }

            float reward = 100f - colorDistance * 0.01f;
            // print(reward);
            AddReward(reward);
        }

        public override void AgentReset()
        {
            SetRandomImageOnPhotoQuad();
        }

        public override float[] Heuristic()
        {
            float[] actions = new float[maxActions];

            float randomizeMax = Input.GetKey(KeyCode.Space) ? 1f : 0.1f;
            for (int i = 0; i < actions.Length; i++)
            {
                actions[i] = UnityEngine.Random.Range(-randomizeMax, randomizeMax);
            }

            return actions;
        }

        void SetRandomImageOnPhotoQuad()
        {
            int randomIndex = UnityEngine.Random.Range(0, imageNames.Length);
            string path = imageNames[randomIndex];
            photoTexture = Resources.Load(path) as Texture2D;
            photoMaterial.mainTexture = photoTexture;
        }

        void GetImageNames()
        {
            imageNames = System.IO.Directory.GetFiles(resourcesPath + "\\Faces", "*.jpg");
            for (int i = 0; i < imageNames.Length; i++)
            {
                imageNames[i] = imageNames[i].Replace(resourcesPath + "\\", "");
                imageNames[i] = imageNames[i].Replace(".chip.jpg", ".chip");
            }
        }
    }
     
    Last edited: Feb 26, 2020
  4. JPhilipp

     Joined: Oct 7, 2014
     Posts: 56
    In Anaconda it still shows "ml-agents: 0.13.0" (causing a mismatch with Unity's 0.14), even though I've just spent hours doing all the Python, Conda and pip upgrades, uninstalling and upgrading TensorFlow, TensorBoard and the rest, removing my old ML-Agents, grabbing the 0.14 one, restarting Windows 10 multiple times, and so on.

    I also did a complete uninstall of all Python versions, then re-installed the suggested Python 3.7 and did various PATH additions and restarts, trying to get the newly suggested non-Anaconda install route to work, but I'm still getting errors during "pip3 install mlagents": "Could not find a version that satisfies the requirement tensorflow<2.1,>=1.7 (from mlagents) (from versions: none)". (There were also pip 19-to-20 upgrade warnings, which I got rid of after several tries and restarts.) I'm now back to trying to install via Anaconda, but running into the same issue. Using the alternative "pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.7.1-py3-none-any.whl" did something, but now I get
    "ImportError: No module named '_pywrap_tensorflow_internal'" when running "mlagents-learn config/trainer_config.yaml --run-id=Main --train" in Anaconda. And now I'm getting "Could not find conda environment: ml-agents" when trying "activate ml-agents".

    The whole Python dependency chain of the otherwise terrific Unity ML-Agents is a big stumbling block for me. Is there something on the roadmap that would create a nearly one-click Windows install for the uninitiated like me who just care about the Unity C# side of it? I wonder if my best option might simply be to wait for that.
     
    Last edited: Feb 26, 2020
  5. celion_unity

     Joined: Jun 12, 2019
     Posts: 289
    Sorry for the Python troubles. The "Could not find a version that satisfies the requirement tensorflow<2.1,>=1.7 (from mlagents) (from versions: none)" error sounds like you might have gotten Python 3.8 instead of 3.7; TensorFlow doesn't currently support 3.8. Can you run "python --version" to check?
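    As a quick sanity check you can run something like this (the supported-version set below is my understanding of which Python versions the tensorflow<2.1 wheels were built for, not something from the ML-Agents docs):

```python
import sys

# TensorFlow <2.1 only published wheels for these Python versions, which is
# why pip reports "from versions: none" on Python 3.8.
TF1_SUPPORTED = {(3, 5), (3, 6), (3, 7)}

def supports_tf1(major, minor):
    """True if pip should be able to resolve a tensorflow<2.1 wheel."""
    return (major, minor) in TF1_SUPPORTED

print(sys.version.split()[0])            # what "python --version" reports
print(supports_tf1(*sys.version_info[:2]))
```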

    If you can get back to the original problem - what I was looking for by "Demonstration file" was the output from adding a Demonstration Recorder to the Agent: https://github.com/Unity-Technologi...mitation-Learning.md#recording-demonstrations That will capture the observations of the Agent so we can try to load it back up.

    And when you do get up and running again, can you tell me the output of "pip3 show pillow" and "pip3 show pil"?
     
  6. JPhilipp

     Joined: Oct 7, 2014
     Posts: 56
    Hi! Thanks for the reply. The Anaconda command line reports Python 3.7.4 for "python --version". (For what it's worth, I had installed and then uninstalled Python 3.8 earlier that day.)

    If there were a single Unity-made exe that installed all the ML-Agents dependencies, I'd give it system-wide rights and tick any box saying "this will get rid of any other Python installed", just to have it solve all problems! :) But that might be impossible to implement, I guess. And I suppose there's also no way to magically auto-convert TensorFlow to C# to ease setup.
     
  7. mbaske

     Joined: Dec 31, 2017
     Posts: 473
    I'm no Python expert, but I always assumed that creating a fresh environment and running a setup-file-based install would take care of all dependencies. I installed ml-agents quite some time ago, back when using conda was still the recommended practice. Since then I've only updated my existing environment, and I'm currently working with 0.12.
    On the other hand, I wonder whether virtualenv and conda environments are completely self-contained. If they are, maybe we could reserve a space on the forum or on GitHub for people to zip and share their working ml-agents Python environments? Not sure if that would make sense or be practical, just an idea.