Using UIToolkit for an editor window ... hiding objects?

Discussion in 'UI Toolkit' started by imaginaryhuman, Nov 17, 2021.

  1. imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Let's say I'm making a non-game app which can run in the Unity editor, built with UIToolkit/UI Builder. I understand I can use render textures to draw stuff, which can then show up in a UI element.

    My question, though, is: if I built an in-editor app, i.e. an editor window, and this app wants to do a bunch of rendering, which includes meshes and materials/shaders, animation over time, etc., all of which would ultimately be rendered into said render texture(s), what's the normal approach to separate or hide temporary objects so they don't clutter up or show up in the user's scene/hierarchy/project?

    Obviously I have to create meshes SOMEWHERE, they have to have custom materials and shaders applied, there has to be a whole bunch of data stored, various game objects flying around, etc. How do you generally keep this stuff "separate" from the user's files?

    And along similar lines, how do you position such custom geometry without it showing up in the user's scene/cameras or being affected by the user's rendering settings? Is the only option to create stuff and use 'hide flags' to make it invisible to the user? So really I'm putting a bunch of procedural stuff into the hierarchy and into the current scene, and then hiding it from user access, i.e. only used internally? What happens when they switch scenes? Is hijacking a camera 'layer' the only way to do this?

    Related to that, how would someone then render a custom 'private' scene to the render texture? Do you have to use just the GL commands and things like Graphics.DrawMesh, or can you get a camera involved and have it see only your 'private' objects?

    This all seems a bit fuzzy to me and not well documented.

    Is HideFlags.HideAndDontSave the only tool to handle this?

    Can you create a kind of temporary internal 'scene' in which all your meshes and objects etc are located, physics simulated, rendered, etc, separate from the user's current scene? And switch between them every frame, or?
     
    Last edited: Nov 17, 2021
  2. imaginaryhuman
    I see this on Graphics.DrawMeshNow() (or maybe a command buffer or something) ...

    "camera - If null (default), the mesh will be drawn in all cameras. Otherwise it will be rendered in the given Camera only."

    I looked at layers and it seems rather impossible to create a layer separated from the user's layers in a friendly/reliable way. So if Graphics.DrawMesh() lets a mesh render ONLY on a specific camera, then this somewhat bypasses the layer system, right? And therefore I should be MANUALLY RENDERING all my objects into my render texture, passing in materials and a specific custom camera to Graphics.DrawMeshNow() and GL commands for all my "gui geometry" meshes?

    So far it sounds like the only way that will work is to create hidden objects with hide flags, which contain custom meshes representing all my custom dynamic geometry/objects that I want to show up in my editor, and then do a bunch of Graphics calls to manually render them in a specific order. Is this the only way to get stuff rendering into an editor window without messing with the user's scene/settings?
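
    A rough sketch of the manual rendering I'm imagining (assuming 'mesh', 'material' and 'rt' already exist; all names made up):

    ```csharp
    // Immediate-mode draw into a RenderTexture, no camera involved.
    var prev = RenderTexture.active;
    RenderTexture.active = rt;                  // draw into our texture
    GL.Clear(true, true, Color.clear);          // wipe color + depth
    GL.PushMatrix();
    GL.LoadProjectionMatrix(Matrix4x4.Ortho(0, rt.width, 0, rt.height, -1, 1));
    material.SetPass(0);                        // bind the material's first pass
    Graphics.DrawMeshNow(mesh, Matrix4x4.identity);
    GL.PopMatrix();
    RenderTexture.active = prev;                // restore the previous target
    ```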

    Wouldn't there be some kind of bottleneck this way, in that by issuing immediate-mode rendering commands the GPU work is tied to the CPU, and the CPU is held up until each GPU action completes?

    Should I be using a scriptable rendering pipeline with/and/or command buffers and Graphics.ExecuteCommandBuffer()? Seems this would be more efficient.

    (thinking out loud here) ... so basically we have to build a custom scene through various command buffers which are then manually executed, outputting to a render texture? So anything in the scene, or anything dynamic in the 'editor' not built with UIToolkit, I have to build as real meshes and materials, right?
     
    Last edited: Nov 17, 2021
  3. SimonDufour

    Unity Technologies

    Joined:
    Jun 30, 2020
    Posts:
    567
    The good news is that UI Toolkit can use a render texture as the texture for an element transparently, so if you update the texture it should update the UI accordingly.

    The bad news is that it may be tricky to do a full rendering without a camera, scene, etc. What is the mesh in question? Are you trying to have shadows, or something more basic displayed? With shadows, you won't be able to use DrawMeshNow.

    If the data is mostly static, could you use a scene to generate the visuals, save them to a texture and then cycle the images later? This could be used to generate the texture once and not mess with the scene later on.

    I already did something similar where I had a scene with a camera and objects on a specific layer loaded at the same time (LoadSceneMode.Additive) as the real game scene. This simplified the management, as the scene could be fully functional and tested separately from the game. We had a dummy script that would check that the layers of all objects were correct, and that also disabled any colliders, etc. One other tip is to keep the camera disabled and render it on demand via code (see https://docs.unity3d.com/ScriptReference/Camera.Render.html)
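
    The disabled-camera trick looks roughly like this in code (a sketch; 'previewCamera', 'rt' and 'previewLayer' are made-up names):

    ```csharp
    // Keep the camera disabled so it never renders on its own,
    // then trigger exactly one render when you actually need it.
    previewCamera.enabled = false;
    previewCamera.targetTexture = rt;               // output to our RenderTexture
    previewCamera.cullingMask = 1 << previewLayer;  // only see objects on our layer
    previewCamera.Render();                         // one explicit on-demand render
    ```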
     
  4. imaginaryhuman
    Thanks. Basically there will be 'mostly' 2D stuff being rendered and animated in realtime, objects flying around, various meshes/geometry being rendered, layers of blending and so on. Possibly some 3d elements but not likely to be using 3d shadows or lighting. In one way or another it'll have to output to a render texture.

    I did see that there are command buffers... where I could store up a list of various graphics operations and then blast them all at once efficiently. That's less likely to hold up the CPU than immediate-mode graphics calls. It seems that a command buffer can output to a specific camera ONLY, thereby alleviating the need to use layers at all or mess with the user's scene. I also see I can use hide flags to create hidden objects that are not in the scene, but whose meshes could be rendered via the command buffer. This is likely the approach I will try, to see what happens. Unless anyone knows of a better way?
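
    Something like this, I imagine (just a sketch, assuming 'rt', 'mesh' and 'material' exist; names made up):

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Batch the draws in a CommandBuffer, then execute the whole list at once.
    var cmd = new CommandBuffer { name = "EditorWindowRender" };
    cmd.SetRenderTarget(rt);                            // our RenderTexture
    cmd.ClearRenderTarget(true, true, Color.clear);     // clear depth + color
    cmd.DrawMesh(mesh, Matrix4x4.identity, material, 0, 0);
    Graphics.ExecuteCommandBuffer(cmd);                 // no camera involved
    cmd.Release();
    ```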
     
    Last edited: Nov 17, 2021
    SimonDufour likes this.
  5. imaginaryhuman
    If I understand it right, a command buffer has no concept of a camera or a game object transform as such, right? You have to specify the projection and view matrices that cameras are based on in the graphics library, and then pass in the renderers or meshes that you have stored in game objects for it to draw geometry. So it's a bit lower-level than the usual object/camera system? Does this mean that it will ignore culling layers and global lighting and other such things as well, making the command buffer pretty isolated/private in its output? I.e. it's basically just a way of RENDERING and doesn't really deal with game object culling or optimizing draw calls or splitting rendering between layers, etc.?
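
    Presumably supplying the matrices would look something like this (a sketch; the z-flip follows Unity's camera convention, and 'cmd', 'camPos', 'camRot', 'width' and 'height' are made-up names):

    ```csharp
    // A CommandBuffer has no camera, so you set view/projection yourself.
    // Unity's view matrices negate the z axis, hence the (1, 1, -1) scale.
    var view = Matrix4x4.TRS(camPos, camRot, new Vector3(1, 1, -1)).inverse;
    var proj = Matrix4x4.Perspective(60f, width / (float)height, 0.1f, 100f);
    cmd.SetViewProjectionMatrices(view, proj);
    ```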
     
  6. Xarbrough

    Joined:
    Dec 11, 2014
    Posts:
    1,188
    Sounds like you want something quite elaborate that's not been done before very often, so probably there's no "right" way, but for simpler things I've been using Unity's PreviewRenderUtility.

    It's not really documented, so probably not officially supported and info online is scarce, but here's a nice page.

    You could look at the source code and see how Unity does their preview, but basically the approach is to construct your scene in code and assign hideFlags to all objects, then simply render. Not sure how they prevent currently open scenes from interfering, though.
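
    The basic usage pattern is something like this (undocumented API, so treat the details as a sketch; 'mesh', 'material' and 'rect' are made-up names):

    ```csharp
    // Typical PreviewRenderUtility flow inside an editor window's OnGUI.
    var preview = new PreviewRenderUtility();
    preview.BeginPreview(rect, GUIStyle.none);
    preview.DrawMesh(mesh, Matrix4x4.identity, material, 0);
    preview.camera.Render();
    Texture result = preview.EndPreview();
    GUI.DrawTexture(rect, result);
    // When the window closes: preview.Cleanup();
    ```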
     
  7. imaginaryhuman
    Thanks. Does the preview scene have to be rebuilt from scratch every frame? Can a single scene persist and be modified over time? It appears in the example that OnGUI is used to draw the texture in the GUI, which would be called many times as GUI events occur. And then the preview is cleaned up later if destroyed.

    The editor utility to produce a preview scene is, I presume, only available IN the editor and not in a build or at runtime/play mode?
     
    Last edited: Nov 18, 2021
  8. Xarbrough
    You would generally create the preview scene once and reuse it as long as the window is open (and the C# domain wasn't reloaded). The OnGUI call is used to simply render just in time and then display the RenderTexture. And the utility code is Editor only.

    If you need runtime capabilities, I would recommend still going with hideFlags and building your own scene in code. Then render on demand. E.g. OnEnable > Create Preview Scene, OnDisable > Destroy Preview Scene, OnUserInput/AnimationTimer/WhateverEvent > Render.

    It actually shouldn't be much of an issue except how to separate the preview scene from any regular scene, e.g. lights or objects at the same position. I believe Unity's preview utility must have a solution, since it "just works", but that would be something to look out for in a custom implementation.
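
    That lifecycle might be sketched like this (class and field names made up):

    ```csharp
    using UnityEditor;
    using UnityEngine;

    // Sketch of the create/destroy/render-on-demand lifecycle described above.
    public class MyToolWindow : EditorWindow
    {
        RenderTexture rt;
        GameObject previewRoot;

        void OnEnable()
        {
            rt = new RenderTexture(512, 512, 24);
            previewRoot = new GameObject("PreviewRoot")
            {
                hideFlags = HideFlags.HideAndDontSave  // keep out of hierarchy & saves
            };
            // ...build preview objects under previewRoot...
        }

        void OnDisable()
        {
            DestroyImmediate(previewRoot);
            rt.Release();
        }

        // Call from user input, an animation timer, or whatever event fits.
        void RenderPreview()
        {
            // ...draw previewRoot's meshes into rt...
        }
    }
    ```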
     
  9. imaginaryhuman
    From the docs it sounds like objects in the preview scene are only rendered in that scene, especially given that camera.scene is set so that the camera only renders objects in that scene. I THINK that means the preview scene is 'hidden' and totally separate from the main scene, especially since the code is moving objects into the preview scene, which would otherwise be empty.

    Yes, it's separating the user's own scene from my 'internal scene' that is my main concern. How to do it neatly and not interfere with their work. I don't want to dump tons of geometry into their scene or mess with their camera or lighting or layers, etc. It sounds like either the preview scene might work, or making a sort of custom renderer from command buffers and/or a custom scriptable render pipeline. But that's the rendering side... making sure the objects are hidden seems to be a matter of using the hideFlags.
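
    If the preview-scene route works the way the docs suggest, the setup would be roughly this (editor-only API; 'myObject' and 'previewCamera' are made-up names):

    ```csharp
    using UnityEditor.SceneManagement;
    using UnityEngine.SceneManagement;

    // An isolated preview scene plus a camera locked to it.
    var previewScene = EditorSceneManager.NewPreviewScene();
    SceneManager.MoveGameObjectToScene(myObject, previewScene); // lives only here
    previewCamera.scene = previewScene;  // camera renders only this scene
    // When finished: EditorSceneManager.ClosePreviewScene(previewScene);
    ```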
     
  10. imaginaryhuman
    Part of what complicates it for me is that I'm actually trying to build an 'editor' (non-game app) which acts as an editor in Unity while you're using it, and can then also be built as a standalone desktop app. So I have to juggle the environments somehow.
     
  11. Xarbrough
    If you use UI Toolkit it shouldn't be as much hassle as it used to be. It's worth spending enough time up front designing what you really need though to avoid just coding the same thing twice.

    In the past I've decided to build a similar tool only for runtime and used it within the editor via the Game View instead of editor-only code. That made it simpler, but luckily, with UI Toolkit it's relatively easy to throw all runtime stuff into an editor window or share a lot between both.

    However, without giving any legal advice, I know of a case in which a company wasn't allowed to distribute a content creation tool similar to the way Unity works as a standalone app made with Unity because it violated the terms of use. Maybe you also need to look into this part of the idea if what you're building is a content-creation tool with a viewport and scene editing capabilities.
     
  12. imaginaryhuman
    Yeah, it depends how far I take it as to whether it would be seen as a competitor. Also, if you look at assets on the Asset Store, many of them are editors (some are even game builders, game templates, level builders, and so on), which in some ways 'compete' with Unity itself, yet are allowed. I don't know if the rule changes based on the context, i.e. once you build an app and distribute it, whether that is then considered to fall into the realm of the EULA and poses more of a risk of a conflict of interest. Either way, I don't plan to replicate Unity, but I do have plans for a mainly 2D game thing... not sure yet whether it's just for art creation/animation or whether to go further into object management, collision detection, user input, etc.