
New Learning Environment

Discussion in 'ML-Agents' started by dani_kal, Apr 23, 2020.

  1. dani_kal

    Joined: Mar 25, 2020
    Posts: 52
    Hello!!!
    I need your help!
    I have created my own environment and just want one agent to walk from one point to another.
    I have tried implementing the existing Unity ML-Agents examples to understand what exactly has to be implemented, and those worked when I followed the tutorial instructions. But in my own environment I am doing something wrong.
    If someone could help me, I would be very grateful!!!
    Thank you in advance!!!
    When I try to train it, I get this huge error message:

    Traceback (most recent call last):
    File "c:\python36\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
    return fn(*args)
    File "c:\python36\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
    File "c:\python36\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
    File "c:\python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
    [[Node: softmax_cross_entropy_with_logits/Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Softmax_4, softmax_cross_entropy_with_logits/concat_1)]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File "C:\Python36\Scripts\mlagents-learn-script.py", line 11, in <module>
    load_entry_point('mlagents', 'console_scripts', 'mlagents-learn')()
    File "c:\ml-agents\ml-agents\mlagents\trainers\learn.py", line 417, in main
    run_training(0, run_seed, options, Queue())
    File "c:\ml-agents\ml-agents\mlagents\trainers\learn.py", line 255, in run_training
    tc.start_learning(env)
    File "c:\ml-agents\ml-agents\mlagents\trainers\trainer_controller.py", line 202, in start_learning
    n_steps = self.advance(env_manager)
    File "c:\ml-agents\ml-agents-envs\mlagents\envs\timers.py", line 263, in wrapped
    return func(*args, **kwargs)
    File "c:\ml-agents\ml-agents\mlagents\trainers\trainer_controller.py", line 269, in advance
    new_step_infos = env.step()
    File "c:\ml-agents\ml-agents-envs\mlagents\envs\subprocess_env_manager.py", line 175, in step
    self._queue_steps()
    run_metadata)
    File "c:\python36\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
    [[Node: softmax_cross_entropy_with_logits/Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Softmax_4, softmax_cross_entropy_with_logits/concat_1)]]

    Caused by op 'softmax_cross_entropy_with_logits/Reshape_1', defined at:
    File "C:\Python36\Scripts\mlagents-learn-script.py", line 11, in <module>
    load_entry_point('mlagents', 'console_scripts', 'mlagents-learn')()
    File "c:\ml-agents\ml-agents\mlagents\trainers\learn.py", line 417, in main
    run_training(0, run_seed, options, Queue())
    File "c:\ml-agents\ml-agents\mlagents\trainers\learn.py", line 233, in run_training
    options.multi_gpu,
    File "c:\ml-agents\ml-agents\mlagents\trainers\trainer_util.py", line 91, in initialize_trainers
    multi_gpu,
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\trainer.py", line 75, in __init__
    seed, brain, trainer_parameters, self.is_training, load
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\policy.py", line 40, in __init__
    brain, trainer_params, reward_signal_configs, is_training, load, seed
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\policy.py", line 91, in create_model
    trainer_params.get("vis_encode_type", "simple")
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\models.py", line 55, in __init__
    self.create_dc_actor_critic(h_size, num_layers, vis_encode_type)
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\models.py", line 255, in create_dc_actor_critic



    File "c:\ml-agents\ml-agents-envs\mlagents\envs\subprocess_env_manager.py", line 168, in _queue_steps
    env_action_info = self._take_step(env_worker.previous_step)
    File "c:\ml-agents\ml-agents-envs\mlagents\envs\timers.py", line 263, in wrapped
    return func(*args, **kwargs)
    File "c:\ml-agents\ml-agents-envs\mlagents\envs\subprocess_env_manager.py", line 268, in _take_step
    brain_info
    File "c:\ml-agents\ml-agents\mlagents\trainers\tf_policy.py", line 126, in get_action
    run_out = self.evaluate(brain_info)
    File "c:\ml-agents\ml-agents-envs\mlagents\envs\timers.py", line 263, in wrapped
    return func(*args, **kwargs)
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\policy.py", line 162, in evaluate
    run_out = self._execute_model(feed_dict, self.inference_dict)
    File "c:\ml-agents\ml-agents\mlagents\trainers\tf_policy.py", line 151, in _execute_model
    network_out = self.sess.run(list(out_dict.values()), feed_dict=feed_dict)
    File "c:\python36\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
    File "c:\python36\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
    feed_dict_tensor, options, run_metadata)
    File "c:\python36\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\models.py", line 255, in create_dc_actor_critic
    for i in range(len(self.act_size))
    File "c:\ml-agents\ml-agents\mlagents\trainers\ppo\models.py", line 255, in <listcomp>
    for i in range(len(self.act_size))
    File "c:\python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1869, in softmax_cross_entropy_with_logits_v2
    labels = _flatten_outer_dims(labels)
    File "c:\python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1616, in _flatten_outer_dims
    output = array_ops.reshape(logits, array_ops.concat([[-1], last_dim_size], 0))
    File "c:\python36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6980, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
    File "c:\python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
    File "c:\python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
    File "c:\python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

    InvalidArgumentError (see above for traceback): Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
    [[Node: softmax_cross_entropy_with_logits/Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Softmax_4, softmax_cross_entropy_with_logits/concat_1)]]
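
    The frames that matter are the ppo/models.py ones: create_dc_actor_critic builds one softmax cross-entropy term per discrete action branch listed in self.act_size, so a branch of size 0 hands TensorFlow zero-width logits, and the Reshape inside _flatten_outer_dims cannot infer its -1 dimension. A minimal sketch of that failure mode, assuming TF 1.x (the placeholders below are illustrative, not ML-Agents source):

        import numpy as np
        import tensorflow as tf  # TF 1.x, as used by ML-Agents 0.10

        # A discrete action branch of size 0 yields zero-width logits/labels.
        logits = tf.placeholder(tf.float32, shape=[None, 0])
        labels = tf.placeholder(tf.float32, shape=[None, 0])
        loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)

        with tf.Session() as sess:
            # Raises InvalidArgumentError: "Reshape cannot infer the missing
            # input size for an empty tensor unless all specified input sizes
            # are non-zero" -- the same failure as in the traceback above.
            sess.run(loss, feed_dict={logits: np.zeros((1, 0), np.float32),
                                      labels: np.zeros((1, 0), np.float32)})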
     
  2. vincentpierre

    Unity Technologies

    Joined: May 5, 2017
    Posts: 160
    Hi,

    You seem to be using an older version of ml-agents. Can you try again on the latest release and tell us if you still see this error?
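
    For a pip-installed setup, upgrading the Python package is usually just the line below (for a cloned repo, check out the latest release tag and reinstall instead):

        pip install --upgrade mlagents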
     
  3. dani_kal

    Joined: Mar 25, 2020
    Posts: 52
    Yes, I am using ML-Agents 0.10.
    Fortunately, I have found the solution!!!
    With this version I had not defined the Brain parameters correctly in the Inspector window (Vector Action -> Branch descriptions).
    Thank you for your time!!!
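
    For anyone who lands here with the same error: with a discrete Vector Action space, the Branches Size array set on the Brain in the Inspector becomes self.act_size on the Python side, so a branch left at size 0 produces exactly the empty-tensor Reshape failure above. A quick sanity check before training, sketched against the 0.10-era mlagents.envs API (verify the import path against your installed version):

        # Print each Brain's discrete branch sizes; a 0 entry (or an empty
        # list) means the Vector Action settings are incomplete.
        from mlagents.envs import UnityEnvironment

        env = UnityEnvironment(file_name=None)  # None: connect to the running Editor
        for name, brain in env.brains.items():
            print(name, brain.vector_action_space_size)
        env.close()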