RMGUI; High Performance, Code Based GUI

Discussion in 'Assets and Asset Store' started by LacunaCorp, Jan 13, 2019.

  1. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks, yeah this is something I've been thinking about a lot. I'll probably do some sort of custom-built DSL with two layers; one similar to CSS for styling to make it really easy to define states, animations, etc., and a main layer to define the actual hierarchy. I won't say anything final here because it's going to be one of the last things I work on (so everything else is ready first and I don't have to tweak much), but I'm also thinking about maybe having some sort of node editor in Unity so designers can visually graph out UI controls, which would then generate C# classes for use in the codebase.
     
    Mauri and zyzyx like this.
  2. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    upload_2019-5-24_22-57-52.png

    Another kinda weird example as these things go, but I've been doing a major overhaul of the shader backend. You can now add geometry, hull and domain shaders without any hassle- literally just write a method in the HLSL file named "geometry", "hull", or "domain", and the backend will compile it and inject it in all the right places. Multipass shaders are now also supported, and you can add your own properties (like ShaderLab).

    The above is a simple worldspace geometry shader test, where each face is duplicated, offset along its normal, and alpha faded down. I'm sure people will be able to come up with some really cool uses for this!
     
  3. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    upload_2019-6-2_22-21-31.png
    Just keeping the thread up-to-date.

    I've been doing backend work around depth buffering. The past few days have involved a lot of matrix restructuring (not sure if I mentioned, but for abstraction purposes RMGUI uses its own Mtx4x4 class instead of Matrix4x4) to get things in line with Unity, plus a new upsampling system to grab Unity's depth buffer and blow it up if we're supersampling.
    You can't have a render target (colour buffer) bound with a depth buffer of a different resolution, so if RMGUI's supersampling is enabled, we have to upscale the depth buffer after Unity renders the scene. I've managed to get this down to be fairly cheap; this is still running at about 1200FPS, fully animated, before optimisation.

    There were a lot of hiccups getting this to match Unity so I've been on this for about 6 days straight, but it's an important piece wrapped up. The next step is extending this to geometry shaders, at which point I'll come back with a demo showing the edges of 9-sliced sprites being extruded in full 3D (this is already done, just need to fix the depth write issues). This will more or less complete the worldspace core, allowing you to create incredibly complex 3D GUI systems entirely through code.
     
  4. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147


    Been unexpectedly pulled away for a few days to fix some system changes, so I've only just wrapped up the waiting worldspace work. There's a lot to cover since the last update, but it pretty much all revolves around visual improvements, so hopefully you can see a marked increase in render quality since the last few videos.

    One of the fun things shown here stems from RMGUI providing edge state data to the GPU. A simple geometry shader on the slider containers is extruding a 9-sliced sprite's edges without interfering with any internal faces, i.e. we can now turn simple 2D UI into full 3D surfaces very easily. Honestly I have no idea if anyone will use these features, but I think it's pretty neat.
     
  5. Turnipski

    Turnipski

    Joined:
    Jun 27, 2015
    Posts:
    9
    This looks really amazing, I wish I could get hold of it right now!

    I'm currently developing a model-view binding framework (a little bit like React or Angular) for my game so that users can customize the UI however they want. Being able to render all UI procedurally would be much nicer than having to make prefabs and monobehaviours for every view-component type, as well as being able to use something like the Roslyn C# interpreter to let users make their own views!

    Do you have any idea when you might release this publicly? You mentioned work on the XML/CSS specification of GUI is low priority so would you be up for collaborating to get it done sooner?

    Thanks and amazing work!
     
  6. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks, really appreciate the feedback!

    Sounds like a very interesting idea which should fit in natively. There are currently 2-way bindings (kind of like property notifications in WPF) on offer, which should work well for the general use case. You can also bind data in either direction- set up a string property, for example, and have it automatically posted to a linked label when dirty, or set up a float property which is automatically updated when its linked slider is dragged- which should hopefully offer enough to easily bind to a custom framework.
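
    As a rough sketch of the two directions (member names here are illustrative, not the final API);

    Code (CSharp):
        // One-way out: a string property posted to a linked label when dirty.
        var health = new BindableData<string>("100 HP");
        hudLabel.Text.Bind(health);

        // One-way in: a float property updated when its linked slider is dragged.
        var volume = new BindableData<float>(0.5F);
        volumeSlider.Value.Bind(volume);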

    Honestly I'm not sure when it will be released at this point, but this is now my full-time project. Sooner rather than later, let's say, but as ever with software I've run into a lot of unexpected variables which have caused lots of changes/revisions and slowed things down. So if things hold well, I hope to have the rest of the release features wrapped up in a few weeks, but it could be more.

    Just to be clear on the code-free side of things, it will definitely be going in; it's just a bit early because the API is still changing a lot. I'm still tweaking the way animations and keyframes are handled, for example, so I want to hold off on the DSL until it's clear what form everything will take. Rather than XML, I'll be building a flat structure to keep it clean- think no tags, just braces for organisation, keywords to define types (states, animations, etc.), and then method definitions to actually make the API draw calls. Realistically though, it should only take a few hours at the end, if that.
     
  7. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Label Positioning.png

    I'm back onto controls now, and I'm planning to continue until I have a full suite in place. The current set is based around buttons, progress bars, and sliders (a few others- toggles, slide toggles, radio buttons, dropdowns, etc.- are already in, but I'm going to go back over them).

    Usability has been a core consideration; shown above is just one of the ways RMGUI makes layouting quick and easy. By setting the inherited UserLabelPosition and/or ControlLabelPosition members before making an API call, you can override the placement of both user labels (the optional label you pass in- "CORE_X" as shown above) and control labels (the one we actually change above- the secondary percentage value optionally generated by progress bars).
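
    As a quick sketch (only the two position members are named above- the enum and draw call are stand-ins);

    Code (CSharp):
        // Override placements before issuing the draw call.
        UserLabelPosition = LabelPosition.Left;        // the label you pass in ("CORE_X")
        ControlLabelPosition = LabelPosition.TopRight; // the auto-generated % label
        ProgressBar("CORE_X", coreXValue);             // drawn with the overridden layout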
     
    Matchstick21 likes this.
  8. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Fairly important stuff wrapped up today;

    I mentioned a while back that a depth buffer upsampling system was necessary to get supersampling drawing properly in line with Unity. As it turns out, after revising the way the supersampling backend works, it isn't, and I've managed to completely strip the upsampling step this afternoon, along with up to 2 downsampling steps. Now, whether you're using 2x, 4x or 8x supersampling, there is no upsampling step, and only ever one downsampling step, as there are separate dedicated downsampling shaders for 2x (averages 2*2 pixel blocks), 4x (convolution kernel) and 8x (another convolution kernel). I'm probably going to implement a lowpass filter before downsampling to improve results, but it's now working in one step.

    My editor window size is currently 1420*636px, and I'm supersampling at 2x. Unity draws the scene, and then the native backend renders RMGUI. First, we draw to a double-resolution (2840*1272) superbuffer, which has its own (clear) depth buffer;

    upload_2019-6-29_18-33-11.png
    upload_2019-6-29_18-33-29.png

    Note that the depth buffer is being rebalanced (i.e. it's been made brighter so you can see it- it's almost black otherwise) for display; it's a lot more accurate than that.

    Downsampling then deals with 2*2 pixel blocks to produce a single pixel in the final RT, and also takes a single depth value from the source (there's not really any benefit in supersampling depth, so we can speed this part up) as our pixel output depth. That way, we can not only downsample the colour buffer in a single step, but also merge our depth buffer with Unity's (which is important for worldspace, of course, but it's also handy if you want to postprocess it with something like DOF).

    Furthermore, downsampling the depth means that we can do any depth discards in the downsampling pass, as opposed to upsampling Unity's existing scene depth information and still handing it back over later on. This also lets us properly blit transparency into the scene when downsampling.

    upload_2019-6-29_18-39-40.png

    The blit result;

    upload_2019-6-29_18-26-3.png

    All in all, this is a massive supersampling optimisation, and should also bring quality improvements, once I've tweaked the kernels and tested out a lowpass filter.

    I've also made progress on controls (which I was originally going to focus on, but I noticed some depth issues and ended up going back over that side of things). Anyhow, with progress bars and sliders now fully standardised, I've gone back over toggles and slide toggles to get everything in line (in terms of how skinning and callbacks are handled). You can even have the idle and active states use different sizes/layouts, and the bounds will be properly respected, as shown below.



    (...and yes, that 2 should be subscript, not superscript- working on it)​
     
    Kiupe and DeadNinja like this.
  9. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    upload_2019-7-1_22-33-29.png

    Just another little update.

    While working on the demo I decided to update the initial data binding work. It was done a while back, and didn't reflect any of the major changes made around the workflow since that point, so it was something on my list anyhow.

    I think this is about the cleanest way possible to handle data binding without separating the systems. Basically I didn't want to have separate properties for bound/unbound data, so there's some implicit casting going on here, but as you can see from the screenshot, you shouldn't actually notice any difference when accessing properties. Plus, code completion will show you which members you can bind to, as they will be wrapped with BindableData<T>.

    You can attach a delegate void(T value) to ReadBinding and ReadWriteBinding, which will be invoked whenever the property changes (unless it was also set by a WriteBinding or a ReadWriteBinding). You can call WriteBinding::Set(T value) or ReadWriteBinding::Set(T value) to set the bound value of an Entity- if you want to abstract this you can proxy it;

    upload_2019-7-1_23-2-15.png
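
    In plain code, the binding surface looks roughly like this (a sketch- the entities are invented, and the exact signatures may differ);

    Code (CSharp):
        // ReadBinding: be notified whenever the bound value changes.
        slider.Value.ReadBinding += value => Debug.Log("Slider is now " + value);

        // ReadWriteBinding: push a new value into the Entity from your code.
        label.Text.ReadWriteBinding.Set("Updated from the model");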

    Please do leave me your thoughts on this if any come to mind; I want to get this part right for everyone since data binding is a big part of the workflow for many UI teams.
     


  10. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Had some fun today!

    CP2077_0.png
    While going back over the controls, I decided to move back into screenspace to test these core systems. This warranted a new demo, and it happened that I'd found a great reference a couple of weeks back. I'm replicating the character creation interface from the Cyberpunk 2077 demo, mainly because I think it's nice and clean, and encompasses a good control set, but also to give some industry indication of what the final product looks like. Obviously I'm just getting started with this, but I was really pleased with the initial results, and decided to share the first shot.

     
  11. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147


    This has been a really handy demo for me as it's highlighted a lot of edge cases, most of which I've fixed up today. It also brought up ideas for some new features, which are now in;

    • You can call BeginSelectionGroup to start a new grouping of clickable entities, and pass a number to define how many entities in the group can be selected at once. Once called, anything you draw before calling EndSelectionGroup will be put into the group, which automatically manages their states for you. If you wanted to draw radio buttons, for example, where you can select up to 3 options, you'd call BeginSelectionGroup(3), draw your buttons, and then end the group. Then, without any extra work, the backend would make sure that only 3 of these buttons are selected at once (you can also define overflow behaviour, letting you choose to either ignore new clicks if the selection limit has already been reached, or to deselect the oldest entity in the group). You can query the group itself to get an array of selected entities for easy usage (see the sketch after this list).
    • You can call LockState and UnlockState to temporarily disable the skinning of an entity by interaction. In other words, if you have a button skin with states for idle, hover, and selected, you can call LockState(InteractionState.Active) when selected to force it to stay active until you unlock it. This is useful for a whole bunch of systems (this is the same system the SelectionGroup uses to keep entities selected). You can still listen for incoming clicks, though- it doesn't stop you from using the interaction system.
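
    A quick sketch of both features (the group/state calls are from the points above- RadioButton and myButton are placeholders);

    Code (CSharp):
        BeginSelectionGroup(3);               // up to 3 selected at once
        for (int i = 0; i < 6; i++)
            RadioButton("Option " + i);
        EndSelectionGroup();

        // Pin a button into its active skin state; clicks still come through.
        myButton.LockState(InteractionState.Active);
        // ...later...
        myButton.UnlockState();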
     
    michelapt and SirDoombox like this.
  12. zach_peoplefun

    zach_peoplefun

    Joined:
    May 16, 2019
    Posts:
    2
    This looks great! Would you mind sharing either the entire script or a snippet of the code used to create the cyberpunk demo? Wanting to get an idea of what a more complicated UI looks like with your framework. Overall I'm really liking the direction you seem to be going with it.
     
  13. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks, no problem- just bear in mind I'm currently working on API usability so this will get a lot neater with time. I've extended it a little since the video and added some notes on the more niche features. It's fairly compact but I'm still revising a lot of the skinning areas, so some of the declarations are currently more cumbersome than they will end up, but anyhow, you can see the current state used with the Cyberpunk demo here. As it was built as a frontend mockup, it doesn't yet cover the data binding pipeline, but I'll be going back over it to link backend player data once the character creator demo is set up.
     
    Last edited: Jul 8, 2019
  14. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Looks really good! I'm keeping an eye on your work, and I really hope you will add a non-coder layout feature such as XML or HTML, like Unity is currently doing with UIElements. It would really ease the process, and most of all it would let designers create the layout and leave the binding part to the coders.
     
    LacunaCorp likes this.
  15. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thank you! This keeps coming up, so while I'm working on usability, I'm just going to bite the bullet and get something like this added over the next couple of days. Rather than XML- because there needs to be a way to cleanly define animations, states, and so on- I'm probably going to build a clean, simple DSL/custom scripting language to handle it, which will then be built out to generated C# classes. Not sure exactly what form it will take yet, but I'm going to do the groundwork today and find a direction for it.
     
    Kiupe likes this.
  16. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Ok, so, as much as I hate to go back on myself;

    I did go over the groundwork this afternoon and drew up a lot of specs for this. It would be easy to build a system where you can define states with what is basically CSS and draw the rest with calls using those states. There are also many paths for exporting data bindings and general hooks for programmers. The issues come in when you need to define custom behaviour, as there are so many options (bear in mind the goal here was a system for programmers to build UI, so I'm trying to work this in backwards) that the end result is going to be a fairly deep scripting language, where you might as well use C# anyway.

    Some of the ideas covered included doing the base layout work in the DSL and then generating partial C# stubs for you to write the code-behind, but that defeats the purpose of having the designer do it.

    I have to very quickly put this back on hold again as it's opened up a whole can of worms, but I'm going to do some tutorial videos pretty soon showing how easy it is to build interfaces with this. I'd be confident enough to say that anyone could pick it up quickly- you just need to know some basic C# syntax, which I'll cover. My earlier demo was a quick knock-up mixing logic with the view, but things get very simple when we're purely dealing with draw calls: it's just a series of state definitions (which already look like CSS, plus you can define animations in the same way), and then draw calls, where we use the states to create controls. I'll see how things go, but I'm going to put some demo videos out first so people know exactly what we're talking about here, because I've probably previously made it look way more complicated than it actually is.
     
  17. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    You do what you think is best- I'm pretty sure it will be great.
     
  18. Egi

    Egi

    Joined:
    Apr 16, 2018
    Posts:
    4
    "If you're familiar with IMGUI, you'll feel right at home here."
    I thought that you go for a IMGUI route with that lib, now you turn to databinding? why is that?
     
  19. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The call structure has an IMGUI-like syntax, but it's never actually been an immediate-mode GUI- it's retained. You can still use it like you would IMGUI, but data binding means that you only have to issue the drawcall for an element once, set its binding, and then you don't have to worry about tracking the value manually.
     
  20. Egi

    Egi

    Joined:
    Apr 16, 2018
    Posts:
    4
    I see. I just hope it performs very well :)
     
    LacunaCorp likes this.
  21. nickk729

    nickk729

    Joined:
    Jul 28, 2019
    Posts:
    1
    Looks really good. I know back in June you expected a few weeks plus some to release- has that changed? Should we expect it before the year ends, or even sooner?
     
  22. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks, glad to hear you like it.

    It's tricky to say because a lot of extra work keeps popping up unexpectedly, so while the library itself is almost finished, there's still quite a bit to do on the native side, and probably more which I'm not aware of yet. I've been a bit quiet the past few weeks due to backend work, meaning no new material to demo, but I should have some progress updates soon.
     
  23. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Time for a bit of a catchup.

    I've spent the past few weeks entirely on backend work, and while the screenshots aren't very exciting, there's been a lot of progress. To quickly sum it up;

    • Totally new command core. Took the rendering pipeline from a per-mesh drawing system to a high-performance command buffer structure (users never touch this; it's in the background). When dirty, the buffer is rebuilt and pruned, resulting in a minimum-work series of graphics calls which can be executed in a single context.
    • Shader Properties. You can now define properties in custom shaders (shaders are written in raw HLSL, not ShaderLab/Cg, but RMGUI has a custom compiler which lets you control blending, passes, etc. in a similar way to Unity). These are generated to a const database, making it very easy to work with;
    upload_2019-8-9_19-17-48.png
    When you define shader properties...
    upload_2019-8-9_19-20-10.png
    ...you can easily use them from the generated library- no strings, completely safe​

    • Material Instancing. If you mark a shader property with an [Instanced] attribute, you'll be able to set it on individual entities without adding full drawcalls (only a cbuffer is updated). When a material is instanced, it allocates memory for its properties, which is injected into a constant buffer.
    • Complete masking system rebuild. While I'm not supporting mobile at launch, I'm planning to roll out support ASAP. I've already got the memory alignment ready for ARM architecture (there are very strict rules for pointers to floating point types on ARM processors), and things should work fine as-is with OpenGL/ES 3.0+. There are still features which won't work on GLES 2.0- which I may have to drop on older devices- but the masking has been rebuilt to work on any hardware. Outside of the tech specs, masking is very robust now. You can use absolutely any Entity as a mask, whether that's a simple sprite, a text object, a filled texture or even procedural geometry.

    upload_2019-8-9_18-49-31.png
    Demo case showing a masked vertical scroll with 20 nested masked horizontal scrolls as children. Currently running at ~3200FPS on my test bench
    • Static analysis for shaders. Custom-built HLSL analyser which strips unused methods, then does a final pass to strip any unused variables. This ensures that anything imported from a shader #include which you don't end up using doesn't end up in the output (the bytecode won't include unused methods anyway, of course, but things like additional textures or unused buffers that are brought in would still go into the output, so this prevents wasted slots).
    • Aspect ratio flag. Styles let you set MaintainAspectRatio to force any layouts which do auto-sizing (for example, if you don't set the width or height of an Entity, a layout may expand it to fill any free space available) to maintain the aspect ratio of the source. This also applies to anchoring, meaning that you can set anchors as a basepoint, but if they would cause stretching, the most appropriate axis is taken and resized to keep the ratio correct.
    There's also been a bunch of fixes, minor changes, and general quality improvements.

    I've had a few questions off-site as well about recent progress and potential release dates, and I really just can't say yet given how much unexpected work keeps appearing. I won't be releasing until I'm happy that the core is solid, and from there I'll be porting to other graphics APIs, platforms, and adding features as things progress.

    Thanks,
    -Josh
     
    Prodigga, zyzyx, michelapt and 2 others like this.
  24. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147

    Just a quick follow-up since I didn't actually show the masks last time. A vertical ScrollPanel parent with a mask, 20 horizontal ScrollPanel children with their own masks, each with 5 of their own children. Running at ~3200FPS on top of the existing Cyberpunk demo.

    Also showing the Scrollbar, which is handled in this case simply by providing a skin;

    upload_2019-8-12_14-7-58.png
    The Scrollbar is then automatically laid out, sized appropriately, and data bound to the ScrollPanel.
     
    michelapt likes this.
  25. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    PC liquid cooling loop literally exploded a couple of days back thanks to a totally out-of-the-blue pressure buildup, so I'm out of office for a little while. Currently waiting on delivery of some obscure parts so things are gonna be dead for a few more days until I can put together a whole new loop.

    Anyhow, had been making good progress up until that point.

    Lots of friction with Unity has been sorted out, critically a memory leak (maybe some irony there?) caused by their handover of textures from the engine when the window is resized, which has basically forced me to override their reference counting for D3D.

    Turns out Unity's backend also tracks the size of the viewport separately from the managed layer, meaning that when the window is resized, for exactly two frames the screen resolution reported (and used) by Unity is different to the native game view RT, which makes UI elements jitter a hell of a lot while resizing. Interestingly, UGUI has this problem too, as it uses the incorrect resolution data- if you resize the window you'll notice that UI components jump around all over the place for a couple of frames.

    I've worked around this by detecting the resize natively in the rendering loop, and then breaking rendering for that frame if the render target resolution has changed. The true resolution can't be detected until rendering commences due to the engine's internal RT handling (plus any other approach would introduce more sync points, so it's definitely worth the tradeoff). This means that for frames where the window is resized, there might be slightly more work to do if the UI already needs to be rebuilt (still getting >2000FPS while resizing with the Cyberpunk demo): if the UI is already dirty (say you clicked a button), it will be rebuilt and passed to the GPU, rendering will be dropped because of the resize, and then it will be built again for that frame, at which point rendering succeeds.

    Tl;dr version is, window resizing is now buttery smooth, while UGUI's is not.

    Hopefully I should be back to work within a week or so, depending on delivery times; for now I'm doing what I can on my laptop (which isn't much).
     
    michelapt, oxysofts and psuong like this.
  26. oxysofts

    oxysofts

    Joined:
    Dec 17, 2015
    Posts:
    124
    I first saw this on your Reddit post a while ago. At first I only thought it looked neat, but after personally using UGUI in our project and seeing the progress of RMGUI, it looks like this could be hugely beneficial for productivity- a very solid solution while Unity is stuck with UGUI.

    After using UGUI in our project, it is turning out to be a massive pain in the ass for me.
    * Way too many complicated scene hierarchies for VERY LITTLE to gain.
    * Hysterical concepts like hierarchy-based sorting of UI elements. (???)
    * The layout components (vertical/horizontal/grid/etc.) make sense in theory, but in practice they work like crap and are completely unpredictable.

    Even though I mostly understand UGUI now, it still feels like stumbling in the dark and dumb guess-work. It's a mystery to me how things could have turned out like this, since hierarchical UI is hardly new- I had a blast working with JavaFX in the past. I think Unity developers understand that UGUI was a mistake, which is why they are planning to bring the new editor UI system to the runtime.

    On top of this, in our project it's likely that the UI is going to be very 3D (lots of depth like Persona 5) and this is already very difficult to achieve with UGUI. Judging by your demos, RMGUI seems to handle this quite well already.

    I have a few questions I would like to ask:

    1. In the Cyberpunk demo, it seems like the class is extending Control. This doesn't seem like a MonoBehaviour since it is calling a base constructor. How then is the UI added to the scene?

    2. Is it easy to build re-usable components out of RMGUI? E.g. right now we can create a prefab in UGUI with a script to manage that piece of UI, and then nest this prefab inside of an existing UGUI hierarchy. It seems like we would just extend from Control for every re-usable piece of UI?

    3. Can it do proper depth-sorting between UI elements? Judging by those depth buffer shots you showed above I think it can, but I want to be sure.

    a) If yes, can you also easily specify UI elements to render always on top, e.g. rendering the UI in layers?


    4. How does the layout system work? From the Cyberpunk demo code, I can see there's already some stuff for horizontal/vertical stacking, and it looks like a grid layout as well.

    a) Also, I see some panels arranged around a circle in one of your examples above. Was the positioning done manually with maths for that demo?

    b) If weird/stylized/specific positioning is needed, is it possible to get into the nitty-gritty and handwrite the positioning of the UI elements, possibly as a layout implementation just like horizontal/vertical?

    c) Can you easily re-use existing UI components/controls of your game by nesting them in another UI's layout? E.g. in the Cyberpunk demo, making a re-usable control out of lines 166-193 (horizontal box part with the portrait image on the right) and nesting it inside of other UIs in the game with any random container. (a grid cell, anchor panel, horizontal stack, etc.)

    5. I'm guessing it won't be free since you said you were working on it full-time. What pricing do you have in mind?

    6. We're currently in the middle of production, and adopting any recent/new tool that hasn't been put through the test of time is a gamble. Can you talk about the current stability/usability of RMGUI, and what it is expected to be like on release? Does it S*** itself if you use it wrong or do weird stuff, or does it feel well-built and predictable? Does it usually lead to difficult debugging? Was it smooth sailing when you made the Cyberpunk UI demo or the speedometer?

    7. It seems you are doing your own text rendering. Have you thought about integrating TextMeshPro which is already very powerful and featureful?

    8. Would you be open to sharing RMGUI with other developers a bit before release? That way it can be tested by others as well for various purposes and you can receive feedback. Definitely would like to try its API myself.

    9. Which features/enhancements do you think stand between now and release?

    Keep up the great work! As I said, we are in the middle of production, so the chances we adopt this decrease with every passing day, but depending on how much is left it may be wise to adopt it when it comes out. If RMGUI is painless it might not even take long to transition some of our existing UI to it.
     
    Last edited: Sep 12, 2019
    LacunaCorp likes this.
  27. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks! Sounds like we've run into a lot of the same issues with UGUI.

    Ok, from the top;

    1. Correct, Control is a base class provided by RMGUI. The name is actually a little misleading, so I might change it- it's a holdover from when I first started. A Control is basically a UI panel. Each Control generates its own mesh and tracks its own references, and it provides a way for you to create interfaces. You might want to have a HUD as a single control, and a pause menu as a separate control, for example.

    2. Following on from the first question, this doesn't mean that you can't create reusable "components". Since everything is drawn with API calls, you can very easily create some sort of wrapper class which proxies these calls when created, and exposes a way to populate whatever you're making. I'll definitely show an example of this at some point- there's nothing builtin for it, since you can just wrap it.

    3. The depth system is split into a few parts. First, each control has a depth spacing value which defines the default z-depth (in worldspace units) between each hierarchy level. For example, with a depth of 1, if you draw a layout and then give it some children, the control will have depth n, and its children will all have depth n+1 (and so on). From there, depth offsets are applied automatically for certain internal components where necessary (you don't have to worry about any of this, though), but on top of this, you can set an individual Entity's Vector3 offset z position to apply a depth offset. This recursively affects all of its children, i.e. if you set offset.z to 1, all of its descendants will also have their depth offset by 1 unit.

    4. In short, the current layouts are;

    • AnchorPanels- general-purpose layouts where you manually position any children via anchor values.
    • StackPanels- horizontal/vertical layout groups.
    • ScrollPanels- the same as StackPanels, but you can have them overflow and optionally use mouse drag, the scrollwheel, or a scrollbar to scroll their children.
    • Grids- WPF-style Grids, not Unity-style grids (I might add Unity-style grids and rename these to tables). Tl;dr, you supply row and column widths and heights to create a Grid, then set the Entity.Row/Column/RowSpan/ColumnSpan of any of its children to have them automatically laid out into the grid- you can even anchor them to grid panels from here and use margin/padding values.
    • RadialGroups- as you mentioned, this is the one shown in one of the demos. You just supply a radius, start angle and spacing angle, then add children- they're automatically positioned around the pivot.

    There's a rough sketch of the Grid and depth systems at the end of this post.

    a. As above, this is mostly automatic- you just supply metadata for the layout and everything is calculated for you.

    b. For the most part, high-level layouts only touch Entity._rect, so if all you want to do is create a new layout system to position children, it could be as simple as iterating the children and setting their rect values (x min, y min, x max and y max). This would work, but internally things like the StackPanels are doing a lot more, including spatial culling to do early clipping of Entities and the like, so it's doable, but I'd really encourage using the builtin tools, as they've been tuned for performance around what's going on under the hood. Obviously there are always niche use cases, but I'm relatively confident in saying that you should be able to build just about anything out of the box through a combination of layouts and manual positioning where necessary.
    Having said that, I'll be honest: editing the library core is not going to be easy. A lot of the core code is straight-up raw pointers to unmanaged memory- RMGUI uses its own Mesh system for performance, which is essentially just a chunk of memory of vertex buffer structures- plus there's a lot of interop code for the C++ rendering backend. In places it's a case of: accidentally mess up a single bit, and things will segfault (it is handled cleanly though, so at worst the native backend will abort with a usually-helpful error message). So if you do want to go deep into this side (which I really don't think anybody will need to), it's going to involve manual memory management, mirroring any applicable changes from the managed layer to the C++ backend, and being very comfortable with pointers/"unsafe" C#.

    c. I think this mostly goes back to question 2- you can't nest Controls themselves, but you can create reusable wrappers to share "prefabs" of a sort between Controls.

    6. This is an ongoing thing, but generally it will find a way to draw what you tell it to. If you leave out important style data (say, not providing any width, height, or anchors to deduce it from), then the library will do its best to work out an acceptable value. In a StackPanel, there's an option to automatically expand children, for example, which will evenly divide the free space between any children which need it. As far as anything going majorly wrong, you can't really break anything, but you might get some unexpected results from time to time. Put it this way- in the vast majority of test cases so far, I've been able to write the code, hit play, and see that everything is where I wanted it. Building the demos has been a great help on this side of things, because it wasn't all smooth sailing, and that's highlighted areas for improvement (which I've mostly addressed). Again, this is all an ongoing process, and I'll be putting a heavy focus on this going into the future, but I'm happy to say that at this point it's probably about as predictable as a general IMGUI library.

    7. I have actually thought about this, but haven't done any work on it so far. It's something I'll at least bear in mind in future, as I'm sure people will want things like textured fonts at some point, or even 3D effects. I'd sort of planned for people to work on this type of lettering externally and import it as a graphic, but it may be beneficial to see if it's possible to get TMP playing nicely with it all. The key benefit of RMGUI's text engine is that it doesn't actually use any strings internally (not that the end user sees any of this- it feels the same as setting a UGUI text label, except you can also data bind the text value), so for things like control labels, there is zero garbage allocated when they are updated. For example, Sliders internally update their linked label's NativeString char* data, using RMGUI's builtin char pointer utilities (I've included utilities such as String.Format which work on raw pointers), without ever allocating additional string memory.

    8. I definitely want to do some sort of testing phase, though I'm not sure exactly what form it will take. I'm interested in finding out what people think of the general usability, especially how intuitive people find the State system, since the whole library revolves around this method of providing Entity properties. I'll have more info on this closer to release, since there's still a lot liable to change.

    9. Haha, good question- I'm very nitpicky with anything close to release, so I'm sure I'll obsess over a lot more than this, but all I really need to do is implement the rest of the core control suite- dropdowns still aren't implemented, and I need to redo text input. This is the main focus, alongside a lot of minor additions and tweaks. Most of the unforeseen time comes from getting Unity to play nicely with everything (I'm currently wrapping up another instance of this before I get back to the core), so it's hugely variable. Once I have the controls in (I'm also definitely verging on renaming the user Control base class now), I'll be going over usability everywhere and streamlining the API, and then final general testing.

    Thanks for all the questions- you've highlighted some really helpful stuff here. Hopefully this gives you a good general idea of what's going on- of course, let me know if I've missed anything.
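
    As promised above, here's a rough sketch of the Grid and depth behaviour from answers 3 and 4 (the construction calls are invented for illustration- only the member names come from the answers);

    Code (CSharp):
        // Lay a sprite into a WPF-style Grid cell, then push it 1 unit deeper.
        var grid = Grid(2, 3);                   // hypothetical: 2 rows, 3 columns
        var icon = Sprite(iconTexture);          // hypothetical draw call
        icon.Row = 1;
        icon.Column = 2;
        icon.Offset = new Vector3(0F, 0F, 1F);   // depth offset, inherited by its descendants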
     
  28. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Hello,

    You mentioned different types of panels (anchor, stack and scroll)- do you plan to add more? A Flexbox panel which would behave like a CSS flexbox would be great, given that CSS flexbox is really powerful.
     
  29. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The StackPanel actually contains a lot of this functionality, and you can toggle different parts of it (things like automatic expansion of children, full control over alignment (left/centre/right, bottom/centre/top), etc.), but what I didn't include is the ability for stacks to do line wrapping. This would be a really handy feature in some cases, so thanks for bringing this to my attention- I'll add the ability to have StackPanels (and subsequently ScrollPanels, which inherit from StackPanel) wrap to the next line/column if they overflow.

    Just a note- the axes are implicit with this, so if you do BeginHorizontal, the stack can automatically expand any of its children to take up the max height available (if their height isn't already set, i.e. it won't mess up any manual heights you assign), and then share any free space along the horizontal axis between any entities with unset widths. The opposite applies for BeginVertical, where widths can be automatically expanded to use the max space available, and any free vertical space can be shared among any Entities with unset heights.
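
    In code form (BeginHorizontal is named above; the closing call and children are illustrative);

    Code (CSharp):
        BeginHorizontal();
        Button("A");        // no height set -> expanded to the stack's max height
        Button("B");        // unset widths  -> A and B share the free horizontal space
        EndHorizontal();    // assumed closing call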
     
  30. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    @LacunaCorp how does the Dear ImGui approach compare to what you are doing?
    it's visually pleasing and has many UI/X components already- have you considered using that as a backend? Having proper user-friendly access to it in Unity would be awesome, but that's probably not what you are using, is it?
     
  31. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The best way I can describe RMGUI is: it feels like you're using an IMGUI system, except every call returns an Entity which you can modify whenever you want. This also allows you to easily deal with animation and keyframing, without any headache, and lets you use data binding instead of polling controls every frame.

    There's no external backend- everything has been built from scratch. RMGUI is almost totally standalone outside of Unity (things like Vector3 and Cameras from Unity are used for convenience), and works by taking your calls to the API and outputting vertex and index buffer streams. Because data is retained, this approach also removes a lot of potential garbage allocation per frame, and is significantly faster than similar immediate mode systems as all we need to rebuild in a frame is what is dirty- if a button was clicked, for example, only that button has to be rebuilt, while everything else can be reused from the previous update.
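
    To put that in code (illustrative names- the point is the retained handle);

    Code (CSharp):
        // Issue the draw call once, IMGUI-style...
        var fps = Label("FPS: 0");

        // ...then keep the returned Entity and mutate it whenever you like,
        fps.Color = Color.green;

        // or bind it and stop polling it every frame.
        fps.Text.Bind(fpsValue);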
     
  32. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Quick follow-up from something I mentioned the other day;



    Long story short, UGUI jitters when resizing because of a logical issue in the handling of resolutions in the native engine source. RMGUI provides a workaround for this- here you can see a comparison between the resizing behaviour of UGUI and RMGUI.
     
    psuong likes this.
  33. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    that makes sense- though I'd forgive a small GC alloc / perf bump here and there. For constantly-on runtime UI which is part of the game, that is probably a very welcome approach
    it would be a stronger contender if UIElements weren't heading for runtime ;/, but I imagine that will still take some time
    I'll keep an eye on this meanwhile -]
     
    oxysofts likes this.
  34. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    2019-09-22_20-33-35.gif

    Just a quick update while I have the chance to grab some visuals to post. I'm deep in backend work, so there's not a lot I can screenshot/record, but with text work starting to wrap up, I can at least showcase some of the recent changes. Note that the small font on the left is going to be reworked- I'm just running them all through the same material here to demo. I'll still offer bitmap text over MSDF if you don't need the additional quality for large text, or effects.

    Also on this side of things, the shader compiler now outputs optimal structures for any properties you define. To prevent straddling of GPU registers, 16-byte alignment is automatically ensured for generated cbuffers. When you compile any RMGUI shaders, their properties are automatically reorganised to try to adhere to these boundaries- if this can't be done with the properties you're using, silent fillers will be used to pad out either 4, 8, or 12 byte ranges to ensure that there are no alignment issues.

    I think I already mentioned this, but it's been finalised- you can mark shader properties with an [Instanced] attribute to set them on a per-entity basis with minimal overhead. This means that you don't have to create lots of materials to control the properties of individual entities. They can share the same material, and will automatically be drawn with the same non-instanced properties (if you set a non-instanced property on an entity, any entities sharing the material will also use the updated value), but if you set an instanced property, that entity will automatically post it to its own separate buffer without affecting any other entities sharing its material.
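
    For example (a sketch- the setter and property names are invented, the behaviour is as described);

    Code (CSharp):
        // a and b share one material.
        a.SetShaderProperty(Props.Tint, Color.red); // non-instanced: b picks this up too
        b.SetShaderProperty(Props.Glow, 2.0F);      // [Instanced]: posted to b's own buffer-
                                                    // a is unaffected, and no drawcall is added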
     
    Prodigga, michelapt and Matchstick21 like this.
  35. MostHated

    MostHated

    Joined:
    Nov 29, 2015
    Posts:
    1,235
    After having dabbled in WPF, then moving on to a Go/Dart/Flutter combo, something like this sounds fantastic. Any sort of extremely rough ETA before some sort of initial release? End of the year, sometime next year?
     
    Prodigga likes this.
  36. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks!

    I should know by now that I never make deadlines, but I'm working on this non-stop, so I'm hoping before the end of the year. Honestly I was hoping to have the first release out at least a couple of months ago, but things keep unexpectedly popping up, and the main thing I'm focusing on is making sure this is solid before I send it out to anyone. Maybe sooner, maybe later, but I really can't say anything definite just yet.
     
  37. MostHated

    MostHated

    Joined:
    Nov 29, 2015
    Posts:
    1,235
    Hey, no worries. That is why I made sure to state it was only hoping for an extremely rough ETA. "Sometime possibly, maybe, but also possibly not the end of the year" is exactly the kind of answer I was hoping for, lol. Keep up the great work, I look forward to seeing updates!
     
    LacunaCorp likes this.
  38. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    upload_2019-9-26_14-16-36.png

    Added a couple more new systems while working on the text revamp.

    The default shader now has a define to enable subpixel rendering, the result of which is shown above. Given that monitors are made up of bands of red, green and blue subpixels (the vast majority of general consumer monitors are in RGB order, left to right), the edges of glyphs can effectively triple their horizontal resolution by using only one of the RGB channels (and zeroing the other two) to draw into only a third of a physical pixel. If you zoom in (as demonstrated below), you'll see what looks like chromatic aberration, but it's very hard to spot when zoomed out, as your brain combines the signals, so there's no noticeable colour distortion.

    upload_2019-9-26_14-17-44.png
    Subpixel rendering, RGB left-to-right, blown up 500%
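
    For anyone curious, the trick reduces to something like this (a purely conceptual CPU-side sketch- the real work happens behind the shader define);

    Code (CSharp):
        using UnityEngine;

        static class SubpixelDemo
        {
            // 'coverage' returns glyph opacity at a sample point in pixel space.
            static Color SubpixelSample(System.Func<float, float, float> coverage, float x, float y)
            {
                float r = coverage(x - 1F / 3F, y); // left third   -> red band
                float g = coverage(x,           y); // middle third -> green band
                float b = coverage(x + 1F / 3F, y); // right third  -> blue band
                return new Color(r, g, b, 1F);
            }
        }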

    I've decided to split text rendering into three user-definable paths- MSDF (ideal for large or animated text, and also lets you use the builtin text effects), Bitmap (uses pre-rendered, hinted bitmaps for a set range of text sizes, and provides crystal-clear small text, although you can't currently use effects with these), and Hybrid, a mode which automatically switches between MSDF and Bitmap rendering depending on the final physical size of the text. I can see the Hybrid mode being useful for cases where you don't want any FX or fancy animation- you just want text automatically drawn at the highest possible quality without worrying about tweaking the settings.

    upload_2019-9-26_13-52-31.png

    There's also a handy addition to RMGUI's shader syntax, in case you decide to write your own shaders. You can add a [Range(min, max)] attribute to have the library automatically clamp property ranges (totally on the CPU- there are no shader checks).

    There's still quite a lot to do, but I should hopefully have some final screenshots of the text engine in the next few days.
     


    Last edited: Sep 26, 2019
    psuong likes this.
  39. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Lots going on, just wanted to keep this up to date (and remind myself, it's a big list!).

    I was wrapping up text when I considered that people are going to want all sorts of different effects within the same string- think RTF, where you might have some characters bold, some in a different colour, etc. This is currently possible, but it requires a cbuffer update, meaning multiple drawcalls.

    There were also some legacy issues which would mean RMGUI can't run on platforms which don't support integer usage. Most of those can emulate ints with floats, but I still want to make sure it's not going to be an issue.

    Material instances also need to post to cbuffers, adding additional drawcalls. The cost associated with this isn't down to any large switch- the work is just mapping and memcpying to a cbuffer before issuing a new call. The thing with text is that the buffers are often small (maybe only a few tris), and while they're very quick to render on the GPU, the actual piping of that data is relatively slow. In other words, the CPU can't feed the GPU quickly enough.

    I ripped out the entire rendering pipeline on Sunday and did a redesign, which I'm currently implementing. It fixes all of the above issues, and the end result will be that entities with different materials, and strings with completely different properties for individual characters, can be drawn in one drawcall, flat. That's up to 65535 vertices for strings. The only exception is when strings are sandwiched between control geometry, as they need to be rendered in the correct order for transparency, but I have an idea of how to address this in the future.

    Some details for the nerds;

    • Vertices were 88 bytes, because of how much data I had to pack into the legacy system. I've reduced this to 24 bytes, an immediate reduction to ~27% of the original vertex buffer size. Naturally, this means significantly less data to memcpy and bus to the GPU.
    • All of my internal data has been moved from the vertex structure to the property system. In other words, my internal shaders are now written on the same system you can write your own shaders, so everything will be coherent. All of that per-vertex data is now per Entity, meaning probably about 64 bytes max for my own stuff (padded and aligned to 128 bits to prevent straddling cache lines on the GPU, as these are put into a StructuredBuffer) for between 4 and 16 vertices (4 for a normal Entity, 16 for a sliced Entity).
    • CBuffer usage has been dropped in favour of my EBuffer system. A GlobalEBuffer is generated, which holds all of the possible properties, and each Entity is given one of these in a StructuredBuffer.
    • Material properties occupy the same system, meaning no CBuffers are required for material data. It is simply piped to an EBuffer.
    • Each vertex has a buffer ID. The vertex shader uses this to pull an EBuffer from the global heap.
    • You don't see any of this. The ShaderCompiler analyses your HLSL to find any properties you use. It then generates a local EBuffer structure and injects it into your pixel shader input. Any methods where you use properties are analysed- if there is a MainPixelIn structure in the signature, its argument name is pulled; otherwise, a structure is injected. The rest of the method is then swapped out, so that any properties you use are taken from the EBuffer. For example, any usages of the color property are replaced with inputVariableName.ebuff.color. The end result is that you can type as though you're using CBuffer properties, but you're actually accessing injected members which aren't present until after code generation.
    For example, look at the internal Sample method;

    upload_2019-10-1_12-38-41.png

    color, tile, and uvQuad are declared as follows;

    upload_2019-10-1_12-26-23.png

    Which is internally output to;

    upload_2019-10-1_12-27-58.png

    All in all, this is going to be an enormous improvement over the old system. I was still getting a solid 2800FPS with those string test cases, but this should keep it up there for lots of strings, where you can set absolutely everything from text weight and colour to shadowing and outline, per character, in one drawcall. Again, this applies to all geometry, so Entities are going to benefit as well, meaning that we can pack completely different materials into the same drawcall. This is the same system you can use to write your own shaders, meaning that you can add to these properties, and set them per Entity, which should give users massive control over VFX. I'd love to see some burn/dissolve shaders running on this!
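
    If it helps to picture the vertex change, the new layout boils down to something like this (field names are mine- only the sizes and the per-vertex buffer ID come from the notes above);

    Code (CSharp):
        using System.Runtime.InteropServices;
        using UnityEngine;

        [StructLayout(LayoutKind.Sequential)]
        struct SlimVertex                 // 24 bytes, down from 88
        {
            public Vector3 position;      // 12 bytes
            public Vector2 uv;            //  8 bytes
            public uint ebufferId;        //  4 bytes- the vertex shader uses this to
        }                                 //  pull the Entity's EBuffer from the global heap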

    I'll update once this is finished. Fingers crossed the new pipeline will be in by the end of the week, then I can do some benchmarks.
     


    Last edited: Oct 1, 2019
    psuong, Matchstick21 and Prodigga like this.
  40. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Sorry for the radio silence, there's been a lot going on the past few weeks.

    The backend is now switched out, and the shader compiler has undergone a major overhaul to reflect these changes.

    To summarise the previous notes, RMGUI now has a system in place so you can add properties to shaders, and when the shaders are compiled, a class is generated with global ShaderProperty types you can use to easily set them (i.e. there are no strings involved, and no typing out shader property names by hand).

    More importantly, you can set those properties per Entity without adding any drawcalls. You could have 100 different Entities in the same mesh, with completely different shader properties, and they'd all be drawn in a single drawcall. There's no extra setup for you, as this is builtin to the core of the library- it just works! These properties can be accessed from Vertex, Pixel, or Geometry shaders (I've removed hull and domain support for now, as I can't really imagine anybody using them and it's going to take a lot of extra work with the compiler to get them in, but I'd revisit it in future if people request it).

    I'm now back onto the library itself- the rendering backend is done for the time being.
     
  41. oxysofts

    oxysofts

    Joined:
    Dec 17, 2015
    Posts:
    124
    Will it be possible to render the UI with standard Unity shaders? Or shaders generated by Amplify Shader Editor? Big deal-breaker if not.
     
  42. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The system is based on a custom HLSL abstraction (like ShaderLab) and precompiler built specifically for RMGUI, so external shaders would have to be converted. There are many reasons for this, including stripping and optimisation of HLSL, but following on from recent posts, it also injects the necessary code for the property system to work in a single drawcall.

    Can I ask what sort of use case you have in mind? It supports Unity-style defines so you can set blend modes, culling, etc., and other than that it's just normal HLSL. This is the default builtin shader, which I've already set up as a quick intro.
     
    Last edited: Nov 13, 2019
  43. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Making animation and styling as simple as possible was an important goal for me when building the library core. I hadn't originally designed the shader property system to follow this idea- it was just a way to let users send values to their own shaders- so I decided to extend the code generator this morning, while I'm still tweaking style and animation management. These changes should be a huge improvement when working with custom shaders.

    Properties are created in the shader, as usual

    Here we set up a float, a Vector3 (float3 is automatically mapped to Vector3), and a Color32 (the [PublicType(Type)] attribute has been added to let you force some conversions automatically- in this case, generating a float4 as a Color32 instead of a Vector4). A list of valid conversions is available in RMGUI.hlslinc.

    upload_2019-11-19_10-49-16.png

    When the shaders are compiled, partial extensions to the library are generated

    Shader properties are turned into real, managed C# properties, and injected into Entity (more on that in a minute) and Style. This means that your custom shader properties can be accessed directly as properties of Entities, or inside of Style objects, letting you seamlessly create animation clips with them, like the rest of the properties in the library.

    Styles;

    Each shader property receives a matching managed property in Style;

    upload_2019-11-19_11-1-1.png
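    Roughly, assuming the generated TestFloat property from above:

    Code (CSharp):
        var hover = new Style();
        hover.TestFloat = 1.0F;   // custom shader property, set like any built-in style field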

    Animations;

    We can use the generated Style code to easily animate shader properties;

    upload_2019-11-19_11-1-47.png
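    Something along these lines- the exact clip API shown here is a stand-in, but the point is that a generated shader property can sit in a Style target like any built-in property:

    Code (CSharp):
        var target = new Style { TestFloat = 1.0F };
        entity.Animate(target, 0.25F);   // hypothetical call- animate to the target style over 0.25s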

    Properties;

    Or we can always just access the generated properties by hand;

    upload_2019-11-19_11-0-2.png
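    For example (names hypothetical, as before):

    Code (CSharp):
        var entity = new Entity();
        entity.TestFloat = 0.5F;            // set directly on the Entity
        float current = entity.TestFloat;   // read it back the same way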

    When dealing with properties, you might not want to dump all of the generated code into Entity. This is just the default- you can choose a target location for the generated code with the [TargetClass(Type)] shader property attribute. For example;



    This results in TestFloat being accessible to Label (and its descendants), but not to any of its ancestors;

    upload_2019-11-19_11-8-6.png
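    A quick sketch of the result (Label and TestFloat as placeholder names):

    Code (CSharp):
        var label = new Label();
        label.TestFloat = 2.0F;      // generated on Label (and its descendants)

        var entity = new Entity();
        // entity.TestFloat = 2.0F;  // won't compile- Entity doesn't get the property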



    All in all, the end result is that your custom shaders are now very tightly integrated with the core library, thanks to codegen. Animating everything from Entity position to shader effect params is now stupid simple!
     
    Last edited: Nov 19, 2019
    Rtyper, Peter77 and Prodigga like this.
  44. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Another little update from me.

    I've basically put the Value system on steroids, and I think it's going to be a very welcome change. Since it's used in so many places, this expansion means that you have a lot more control over how Entities are positioned, and it's easier than ever to do so.



    Value is used throughout the library for positioning, whether you're specifying a width, height, margin, anchors... you name it.





    Construction

    Values can be constructed, with integers being interpreted as pixel Values, and floats being interpreted as normalised Values. Normalised Values are relative to an Entity's parent (or the window, if the Entity is the root), where 0.1F, for example, means 10% of the parent's width or height, depending on where the Value is being used.
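    For instance (assuming a simple constructor- the exact signature may differ):

    Code (CSharp):
        var pixelWidth = new Value(200);    // int -> pixel Value (200px)
        var halfParent = new Value(0.5F);   // float -> normalised Value (50% of parent)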





    For convenience, integers are implicitly converted to pixel Values, and floats are implicitly converted to normalised Values.
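    So in practice:

    Code (CSharp):
        entity.Width = 200;      // implicit: int -> 200px
        entity.Height = 0.25F;   // implicit: float -> 25% of the parent's height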




    You can explicitly specify the ValueType if you want to use an integer to specify a normalised Value, or a float to specify a pixel Value.
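    Roughly (the ValueType member names here are assumptions):

    Code (CSharp):
        var fullWidth = new Value(1, ValueType.Normalised);   // int, but forced to mean 100%
        var halfPixel = new Value(12.5F, ValueType.Pixel);    // float, but forced to mean 12.5px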





    There are also factory-style construction methods. Point is, there are lots of ways to initialise a Value, so there is a clear and concise solution available for every situation.
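    For example (the method names below are hypothetical placeholders):

    Code (CSharp):
        var a = Value.Pixels(16);
        var b = Value.Normalised(0.5F);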





    Combining Values

    There may be times where you want to mix pixel and normalised Values. Normally when building GUI, this would mean anchoring the Entity, then offsetting it, but RMGUI provides a way to combine both operations into one.

    You can simply combine Values with math operators, and they will be evaluated when the UI is built.


    For example, if you wanted to anchor the right edge of an Entity to 50% of its parent's width, then dial it back by 16 pixels, it's just a matter of subtracting a pixel Value from a normalised Value.
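    A minimal sketch, assuming an Anchors.Right property (a stand-in name for wherever the anchor Value actually lives):

    Code (CSharp):
        // Right edge at 50% of the parent's width, pulled back by 16 pixels.
        entity.Anchors.Right = new Value(0.5F) - 16;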





    Again, just to be clear- Values are evaluated whenever the UI is built. This means that you don't have to keep them up to date. The UI is also rebuilt when the screen is resized, so Values will never be out of sync.



    Min, Max, And MinMax

    A lot of the ideas for RMGUI have come from changing things I don't like about how UI is traditionally built. Something I always hated was the idea of having properties such as Width, MinWidth, and MaxWidth. They can easily get detached, and working with duplicate properties in the backend is a headache. I tried out a few systems like that earlier on, but never ended up with anything I liked.

    When going over Value, I realised that this could be tied into the Value system directly.

    Let's say we wanted to dynamically resize the width of something. We could of course just use a normalised Value, but what happens if the UI should stop growing past a certain size? As an example, let's use a root panel, which should aim to be 50% of the screen width, but should never be larger than 500 pixels across.

    The MinValue system lets us specify its width to satisfy these constraints. It simply takes the smallest of two Values: when 50% of the screen width is less than 500 pixels, we'll use the normalised Value of 0.5F; otherwise, we'll use the pixel Value of 500.
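    Something like this, assuming MinValue takes the two candidate Values directly:

    Code (CSharp):
        // Aim for 50% of the screen width, but never exceed 500 pixels.
        rootPanel.Width = new MinValue(0.5F, 500);   // the smaller of the two wins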





    Simple as that! We can do the opposite with the MaxValue system, which takes the greatest of two Values.
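    Again as a sketch:

    Code (CSharp):
        // Use 25% of the parent's width, but never drop below 150 pixels.
        entity.Width = new MaxValue(0.25F, 150);   // the larger of the two wins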





    And finally, if we want to clamp a target Value between a min and a max Value, the MinMaxValue system lets us do just that. Here, we clamp a target normalised Value of 10% between a minimum 32 pixel Value and a maximum 64 pixel Value.
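    Sketched out (the argument order is an assumption):

    Code (CSharp):
        // Clamp a 10% normalised width between 32px and 64px.
        entity.Width = new MinMaxValue(0.1F, 32, 64);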




    Hopefully this post has been clear and easy to follow- it's sometimes difficult to get the detail in without it turning into a huge wall of text. I'm planning to start work on demo/tutorial videos for RMGUI before too long- not sure when exactly- but the idea is to get some footage together to give a better view of the library in use. I'm definitely looking forward to covering these systems in individual videos, where I can give a full, in-depth look at everything.
     
    Last edited: Nov 27, 2019
    Rtyper and Matchstick21 like this.
  45. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Implemented a Unity-style Grid. You can set the starting corner and axis (horizontal/vertical), with some optional parameters;

    - CellWidth and CellHeight can be used to force each element in the grid to a defined size.
    - ColumnCount and RowCount can be used to force a certain layout, or to calculate cell sizes automatically. If CellWidth isn't set, for example, ColumnCount must be, and the Grid will divide the free space (accounting for any spacing values you set) between that number of columns, then position cells accordingly.

    upload_2019-12-5_13-6-21.png

    The HorizontalOverflow and VerticalOverflow properties (which are shared with other Layouts) let you fine-tune the Grid's behaviour even further. Here we use Clamp overflow, which, as you can see below, squishes any Entities that are only partially overflowing, so they smoothly disappear.

    upload_2019-12-5_13-9-5.gif

    Clip overflowing cuts out the smoothness to make sure we can always see every Entity;

    upload_2019-12-5_13-17-15.gif

    We can also tell it to just Overflow anyway;

    upload_2019-12-5_13-19-27.gif
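    In code, switching between these would look something like this (the enum name is a placeholder):

    Code (CSharp):
        grid.HorizontalOverflow = OverflowMode.Clamp;      // squish partially-overflowing Entities
        grid.HorizontalOverflow = OverflowMode.Clip;       // hard cut at the boundary
        grid.HorizontalOverflow = OverflowMode.Overflow;   // just let them spill out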

    Just a couple more ideas;

    Fixed Grid Layout
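    A minimal sketch, assuming hypothetical corner/axis enum names:

    Code (CSharp):
        var grid = new Grid(GridCorner.TopLeft, GridAxis.Horizontal);
        grid.CellWidth = 64;    // every cell forced to 64x64px
        grid.CellHeight = 64;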




    Dynamically Scaling Grid
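    And the same idea, letting the Grid derive cell sizes from the counts:

    Code (CSharp):
        var grid = new Grid(GridCorner.TopLeft, GridAxis.Horizontal);
        grid.ColumnCount = 4;   // cell width = free width (minus spacing) / 4
        grid.RowCount = 3;      // cell height derived the same way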

     
    Last edited: Dec 5, 2019
  46. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The shader still needs antialiasing, but I'm really happy with this system.

    I shared an early example of this waaay back, on the old backend. I reimplemented the fill system on the new pipeline, and the grid demo seemed like a good layout to show how versatile it is.
    As with the rest, this is done in one drawcall.

    upload_2019-12-6_0-9-35.gif
    You can create FillModes with control over everything from smoothing values to start and end angles, and even the centre of the effect. This is just the clockwise radial system- I'm going to go back over counterclockwise and linear fills tomorrow.

    Point is, you aren't limited to things like 0->360 degree fill, bottom corner fill, etc. You can create fills at any point on an Entity, with total control over the parameters.
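    As a sketch of the kind of control I mean (the parameter set and names are placeholders, not the final API):

    Code (CSharp):
        var fill = new FillMode(
            FillDirection.Clockwise,
            startAngle: 45F,
            endAngle: 315F,
            centre: new Vector2(0.5F, 0.5F),   // normalised centre of the effect
            smoothing: 0.02F);
        entity.FillMode = fill;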
     


    Matchstick21 likes this.
  47. piotr_unity854

    piotr_unity854

    Joined:
    May 27, 2019
    Posts:
    2
    Hey, so this project looks pretty impressive and I'd love to use it in our upcoming game. One question, though: will it integrate nicely with the LWRP? Also, what's the latest word on mobile support, both Android and iOS? Thanks.
     
  48. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks for the feedback!

    Because RMGUI is basically standalone with engine hooks, the answer is yes- it works fine alongside the SRP- but you can't use SRP shaders with RMGUI.

    I've done most of the work needed for RMGUI to run on ARM processors, but I haven't done any actual mobile porting yet. All I can really say for now is that the really awkward groundwork is in place, so once the other graphics APIs are in (OpenGL ES, specifically), it should hopefully be fairly straightforward to add mobile support. It's on my todo list- mobile support is planned.
     
    Klausology likes this.
  49. piotr_unity854

    piotr_unity854

    Joined:
    May 27, 2019
    Posts:
    2
    Thanks for your answer!

    Being as far from a shader expert as it's possible to get, does that mean that I can run an RMGUI UI overlaid on a LWRP (now URP? sheesh) camera view without problems? Can I render RMGUI in-world as well, mixing with LWRP graphics? (If what you mean is that I can't customize RMGUI via custom SRP shaders, then that's perfectly fine.)

    So.... When can you put up a beta version? Christmas gift, perhaps? ;)

    BTW, are you aware of any other Unity UI systems that take a code centric approach to defining the UI?

    Thanks.
     
  50. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    That's the idea- RMGUI can internally draw to any render target, so it's just a matter of letting the URP do its thing, and then we can blit over the outputs. Like you said- you won't be able to use SRP shaders to render RMGUI- I just wanted to be clear that the rendering pipelines are totally separate.

    Haha, well, I still can't say anything about a release just yet, but a big thanks to you (and everyone else here!) for the interest. I'm starting to wrap things up, but as always, unexpected jobs/issues appear which can majorly knock the timeline off, so I don't want to give any deadlines and then end up missing them.

    The only existing asset I'm aware of is Noesis GUI, which is a way to use WPF in Unity (along with XAML).