RMGUI; High-Performance, Code-Based GUI

Discussion in 'Assets and Asset Store' started by LacunaCorp, Jan 13, 2019.

  1. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    RMGUI;
    A New Approach To UI Development
    High-Performance, Code-Based Retained-Mode UI Library





    Featuring everything from Unity-esque horizontal and vertical layouts to WPF-style grids, RMGUI looks very much like CSS mixed with IMGUI. The key difference is that every call you make returns an object you can modify on the fly- i.e. retained-mode GUI with the syntax of IMGUI. Styling and animation are built into the core of the library, meaning you can define CSS-like behaviours to style anything, or provide multiple states as keyframes and play them as animations.
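
    As a rough sketch of that idea (this uses the DrawImage/LayoutDef calls that appear later in this thread; the Color property and the timing here are assumptions, not the final API):

    Code (CSharp):
    // Inside a Control subclass: the call both draws the sprite and returns a retained handle.
    var highlight = DrawImage(AtlasMain.Background_CornerHighlights, LayoutDef.Anchor(0.1F, 0.1F, 0.1F, 0.75F));

    // Later, in response to game state, just mutate the object- no re-issuing of draw calls.
    highlight.Color = new Color32(255, 255, 255, 128);   // assumed property name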

    Really getting into the thick of it, there is support for unlimited full-alpha nested masks and GPU-based sprite fills (you can build your own fill modes with start angles, end angles, centrepoint coordinates, and smoothing values, and even animate between them, or use one of the many built-in presets), and you can define interpolation settings for any animation in the library (every mode shown here is implemented).

    Moving from screenspace to worldspace takes literally one line of code- just supply a rect to draw it on in the control constructor, and you can seamlessly jump between the two coordinate spaces. No more weird scaling issues! The library is backed up with a lot of 3D features, including full depth/z-pos control for entities, and geometry shader support. You can even generate procedural lines, circles, arcs, and rings (and animate them) on the GPU, giving you perfectly antialiased shapes at any resolution, without any source artwork.

    Any user-facing property, such as the text of a label or the value of a slider, can also be data bound with one-way listeners, one-way setters, or two-way reactive bindings. No more polling elements for their values or managing delegates- just provide a callback and tell RMGUI where you want the value to be sent if the user touches it.
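
    As an illustrative sketch only (none of these binding method names are confirmed in the thread):

    Code (CSharp):
    // Hypothetical one-way listener: RMGUI pushes the value to your callback when the user moves the slider.
    volumeSlider.OnValueChanged(value => settings.Volume = value);

    // Hypothetical two-way binding: the control both reads from and writes to the bound source.
    volumeSlider.BindTwoWay(() => settings.Volume, value => settings.Volume = value);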

    You can easily mix logic and views, or use a full MVVM setup. This is totally extensible and you can plug in any sort of abstraction layer you like. It can be as simple or as complex as required.​



    This has turned into a bit of a devlog, so if you want to catch up you can find recent progress at the end of the thread.

    Thanks for all of the questions and feedback so far!​
     


    Last edited: Jan 16, 2020
    Zoey_O, Xerioz, Matchstick21 and 4 others like this.
  2. Prodigga

    Prodigga

    Joined:
    Apr 13, 2011
    Posts:
    1,123
    Very excited to follow this asset's development :)
     
    LacunaCorp likes this.
  3. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    There is a quite likely fix for OpenGL. However, it seems that support for logical operations was dropped in OpenGL ES 2.0, despite being there in earlier versions. So I will have to do a separate fallback system for ES, and DX systems prior to DX11.1.



    I did a nested test with DX12. Shown here is the one-drawcall masking system- the first drawcall is from the underlying camera which is rendering the black background- with 2 depth levels and scrolls (just using some assets from a game I'm working on). You can see that both the main bodies and the white offset icons are masked (I left some light transparency to show that they're cut off on the left).

    Anyhow, this week I'm building a native rendering plugin to implement the OpenGL fix. This will be used on all targets, not just OpenGL, because it means that I can also operate directly on the unmanaged mesh memory without having to interop via Unity, which will lower overhead across all APIs.
     
    Prodigga likes this.
  4. Prodigga

    Prodigga

    Joined:
    Apr 13, 2011
    Posts:
    1,123
    Oof sounds amazing. It's crazy to me that you are able to write a rendering plugin that works alongside Unity and adds additional features? I wonder if you will start running into silly OpenGL quirks on some of the crappier mobile devices out there, which may be the reason some of these features are dropped?
     
  5. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Well, unfortunately the answer to that turned out to be a big yes. Some of it comes down to fairly weird software constraints imo (the logical operations I need were supported in OpenGL ES 1.1, but were arbitrarily dropped in 2.0 because they were so rarely used); the rest comes down to what the hardware is physically capable of.

    I was fairly sure that I could force the original method to work on those platforms, but it turns out there are some critical hardware limitations- even with a lot of low level hacking- and there's just no reliable way to bind the data without full integer RT support. I'm worried that I've maybe misadvertised some of the workflow here. This is a very specific workflow, and it's not feasible for me to layer up separate fallbacks for weird platforms (i.e. for old platforms, I can't just stick another mask on top- the masking system is heavily coupled to the core for performance reasons).

    I can revisit this in future, but for now I think I have to revoke the idea of mobile support, or else there's a major risk of watering down the main library. Apologies for this, because you were one of the people who mentioned mobile specifically, but from my research I just genuinely can't see a way to make it work without forking the entire thing and building a separate solution for mobile platforms, which I very well may do after release. I want to maintain that RMGUI is a next-generation system, and as such will only target "current"- and, in time, future- generation hardware and platforms.

    So, for now, I have to enforce some hardware specs here;

    DX11.1+ (tested)
    DX12 (tested)
    OpenGL 3.0+ (in progress)
    Vulkan (in progress)


    I will definitely revisit mobile support, but I can't make any promises right now as that research turned up a hell of a lot of unforeseen issues. Just to be clear, the approach I was trying will still work perfectly for desktop OpenGL versions.
     
    Last edited: Mar 27, 2019
    Novack likes this.
  6. Prodigga

    Prodigga

    Joined:
    Apr 13, 2011
    Posts:
    1,123
    Ouch, that is a big shame! If you were able to target Metal, then you'd at least be covering a lot of modern devices, from around 5 years ago onwards, with OpenGL3/Vulkan and Metal. At least with that you could gauge whether or not it'd be worth your time to do full-blown mobile support with OpenGL2. Still excited to get my hands on this and play around with it! :)
     
  7. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    I've read conflicting things on Metal. I'm going to dig through the API, but it's hard to say whether it will be supported or not, so I thought it best not to mention anything just yet. With Apple's recent changes it's definitely something I want to support, so hopefully I can force that behaviour in.
     
  8. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    It took some doing, but I have a fully native rendering backend in for RMGUI now. RMGUI is mostly based on unmanaged memory anyhow (i.e. I allocate blocks of unmanaged memory and operate on them via "unsafe" pointers), but before, I still had to interop the vertex and index buffers to Unity whenever they were dirty. Now, I can operate directly on native memory (using my own mesh, material, shader and texture classes) tied to the graphics API, meaning that there is zero interop overhead (it's made a noticeable difference). More importantly, I can get this working on OpenGL now (I have DirectX wrapped up; OpenGL is next, but everything is in place- I just have to swap the calls out from d3d to ogl).



    Please note that this is a first test of the native system, and I need to do some further integration work with Unity. The draw calls shown are not accurate, they're actually doubled because of native profiling- it's actually 1 drawcall from the camera (black background), 1 for the geometry, and 1 extra when rebuilding the mask metadata. So where it says 3, it's actually 2, and where it says 5, it's actually 3. The screen recorder also added about 0.5ms, effectively halving the framerates I'm getting at a minimum (I'm averaging around 3000FPS).
     
    Prodigga likes this.
  9. Prodigga

    Prodigga

    Joined:
    Apr 13, 2011
    Posts:
    1,123
    Excellent, I was worried you gave up! :)
     
  10. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Oh, trust me, the joys of trying to play nicely with Unity brought me quite close! 90% of it was built in the first day or so, the rest of the ~2 weeks was spent looking at graphics debuggers and swearing loudly.
     
    JustTheCoolDude likes this.
  11. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    I've been working on image fills, along with some usability features. The end result is the ability to draw a sprite, position it, and smoothly fade it in from left to right over 5 seconds, with just one line of code.

    Code (CSharp):
    DrawImage(AtlasMain.Background_CornerHighlights, LayoutDef.Anchor(0.1F, 0.1F, 0.1F, 0.75F)).LerpFill(FillMode.HorizontalLeft, 0, 1, 5);

    This still fits into the single-drawcall system- sprites can be filled with different fill modes in the same control, in the same drawcall. This is just a quick first iteration; I'm going to add smoothing options so you can fill with a gradient edge, plus radial fills, offset fills (filling from a certain point along the sprite in both directions), and image-based fills (filling based on the alpha of a secondary sprite).
     
    aklgupta and Prodigga like this.
  12. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147

    First test of the animation backend. I need to work a few things out with it but it's fully functional, and you can animate everything from scale and colour to sprite fill values and padding.

    The test in the video is drawn with this skin (I'm going to reformat skin declarations so they're a bit cleaner, this is a WIP)

    upload_2019-2-9_11-23-49.png

    The animated entities you see are drawn with this;

    upload_2019-2-9_11-24-32.png

    That's all the code you need to create the animation shown in the video. Oh, and again I should add that the screen recorder I'm using, because of the high framerates involved, is roughly halving what I'm actually getting. This test is running at a solid 4000FPS+ on my machine, often at 4500FPS+, during a full rebuild every frame.

    It's very flexible (might need to view this one fullscreen, it looks jittery in the thumbnail);
     
    Last edited: Feb 9, 2019
  13. keenanwoodall

    keenanwoodall

    Joined:
    May 30, 2014
    Posts:
    597
    Looks cool! I saw you have your own versions of Begin/EndHorizontal(). If you haven't already, you should add "scope" versions of layout controls. In Unity they let you do stuff like:
    Code (CSharp):
    using (new GUILayout.HorizontalScope())
    {
        GUILayout.Button("I'm on the left!");
        GUILayout.Button("I'm in the middle!");
        GUILayout.Button("I'm on the right!");
    }
    GUILayout.Button("The layout has ended, so I'm on the bottom!");
     
    LacunaCorp likes this.
  14. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks for the feedback- this can definitely help neaten things up a bit and make code more maintainable. I just added it in through the base class for all layouts and swapped it out in the demo code;

    upload_2019-2-10_11-42-53.png

    It also exposes all of the usual overloads, so while you can't access it once it's created due to the nature of usage, you can supply it with skins and layout definitions when it's constructed.
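
    For anyone following along, usage mirrors the GUILayout pattern above; something like this (the RMGUI scope type and DrawButton names are assumptions for this sketch):

    Code (CSharp):
    // Hypothetical scope usage; skins and layout definitions are passed at construction.
    using (new HorizontalScope(menuSkin, LayoutDef.Anchor(0.1F, 0.1F, 0.1F, 0.1F)))
    {
        DrawButton("Left");
        DrawButton("Middle");
        DrawButton("Right");
    }
    DrawButton("Below the horizontal group");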
     
    Last edited: Feb 10, 2019
    keenanwoodall likes this.
  15. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    upload_2019-2-11_0-55-23.png upload_2019-2-11_0-56-25.png

    First iteration of radial layouts done; simply specify a radius, start angle, and spacing angle (which you can manually adjust on the fly) and all the children will automatically be snapped to the correct position by their pivot. I'll tie this to the animation system so you can easily do an effect where a bunch of elements in a circle smoothly zoom in and out on mouse hover. You could also animate the start angle to have a set of elements rotate around a centre point.



    Edit: like that... needs some smoothing, but you get the idea.
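
    A sketch of the call described above (BeginRadial/EndRadial follow the library's existing Begin/End pattern, but the exact names and parameters are assumptions):

    Code (CSharp):
    // Children snap to a circle by their pivots; tweak the angles on the fly to animate.
    var radial = BeginRadial(radius: 200F, startAngle: 0F, spacingAngle: 30F);
    for (int i = 0; i < 12; i++)
        DrawImage(AtlasMain.Background_CornerHighlights, LayoutDef.Pivot(0.5F, 0.5F));   // assumed helper
    EndRadial();

    // Rotate the whole set around its centre point by animating the start angle.
    radial.StartAngle += 45F * Time.deltaTime;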
     
    Last edited: Feb 11, 2019
    Rtyper likes this.
  16. Prodigga

    Prodigga

    Joined:
    Apr 13, 2011
    Posts:
    1,123
    So where do I pay for early access haha
     
    LacunaCorp likes this.
  17. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    A little more progress to show;

    I still need to port the text rendering over to native, so I don't have any labels right now, but I implemented a couple of controls to show how UI feedback works. Once the text is back up, I'll be extending these controls so you can pass in some text and have them drawn alongside a label, as with EditorGUILayout. Here we draw a slider with a minimum value of 15 and a maximum value of 75, plus a slide toggle (think of a checkbox with a moving thumb, rather than the thumb toggling visibility).
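
    In code this is roughly (the method names and overloads here are assumed for illustration):

    Code (CSharp):
    // Slider clamped to [15, 75]; the slide toggle starts in its off state.
    var slider = DrawSlider(15F, 75F, LayoutDef.Anchor(0.1F, 0.4F, 0.1F, 0.45F));
    var toggle = DrawSlideToggle(false, LayoutDef.Anchor(0.1F, 0.6F, 0.1F, 0.25F));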

    upload_2019-2-12_12-35-42.png

    The graphics I used weren't designed to be shown this small, so they're a bit pixellated; I'll sort this out for future demos. However, this does tie into another feature- automated slice margin scaling.

    Let's draw them again at half their height;

    upload_2019-2-12_12-36-17.png

    upload_2019-2-12_12-36-43.png

    As you can see, the 9-sliced sprite scale is automatically adjusted to ensure that all of the borders fit into the rect- you don't have to calculate anything manually. The thumb is also resized so that it fits perfectly inside the rect. This means that you can design a slider and a thumb together at, say, 64px, split them into two separate graphics, and they will always match up when used by one of these controls. Again, these aren't the best source graphics, so they do look pixellated, but that's down to the art used and the way it's downsampled, not the system itself.

     


  18. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147

    Got those graphics sorted, and did some more contextual interaction work. While interacting with a child entity- the Sliders, in the video- the hover state is maintained for any parents in the hierarchy.

    I've spent most of the day reworking some core aspects of the backend for additional performance gain, especially around piping data to the GPU.
     
  19. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Been working on a lot of different things today, but just to follow up from yesterday;

    Sliders are now totally finished. You can supply a fill image as part of their style wrapper, which can be handled either by the GPU (the sprite is drawn at full width, and the GPU then does a pixel fill on it), or as a sliced sprite (i.e. a 9-sliced fill sprite is stretched along the fill). This automatically adjusts to end at the pivot of the Slider's thumb. They're also automatically smoothed together, so there is never any desync between the thumb and the fill. As shown in the video below, if the user clicks a spot on the slider, it smoothly sets itself to match the current mouse position. Also, the mouse delta is only taken into account if the user is within the bounds of the slider on its target axis. In other words, for a horizontal slider (you can also make vertical sliders- just call `DrawVerticalSlider` instead), mouse input is only accepted if the mouse's x position falls within the Slider's rect's x bounds.

     
    Last edited: Feb 13, 2019
    Prodigga likes this.
  20. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Not a huge amount to report this week- I've been doing mostly architectural work- but I wanted to keep the thread up to date, so;

    I've done a full rebuild of the multithreading backend. It was using System::Threading::ThreadPool, and there were a few native issues arising from it not executing immediately in some cases. There's now a custom ThreadPool just for rebuilding: at the beginning of the frame, any dirty Controls are rebuilt on threads pulled from the pool, and they are forced to converge before the end of the frame, when the Camera is manually rendered by the system. This has fixed all of those issues, and it's generally smoother- before, I was allowing an immediate render to go ahead even if the rebuild hadn't finished in that frame. In other words, rendering now waits for all of the threads to exit within the same frame, so rather than rendering multiple frames while still rebuilding in the background, it waits the ~0.2ms (in this demo case) before rendering, which gives more predictable results. Again, this happens at the beginning of the frame, so in a normal game- where you're talking about up to 16ms per frame for 60FPS- there will be effectively zero overhead for rebuilding or waiting on the main thread.

    Also shown explicitly here is the internal Camera system. The "Main Camera" is passed to RMGUI::Initialise, which binds it as the target and then sets up false projection matrices. As you can see in the video, that allows RMGUI to use your Camera to render both the gameworld in perspective, and the GUI in orthographic. This prevents Unity from doing any duplicate culling work, etc. (which saves about 0.15ms in my test cases, compared to using 2 Cameras).
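
    For reference, the setup side of that is a one-liner; assuming a standard MonoBehaviour entry point (Initialise is the method named above, but the exact signature is an assumption):

    Code (CSharp):
    using UnityEngine;

    public class RMGUIBootstrap : MonoBehaviour
    {
        void Awake()
        {
            // Bind the gameplay camera; RMGUI renders the GUI pass through it orthographically.
            RMGUI.Initialise(Camera.main);
        }
    }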

    You'll probably notice that the text in the video is messed up- I've only just ported it to native. I wanted to wrap the architectural refits up first, but I'll now be moving on to the final text engine, which could take some time. Once that's done I'll be able to implement control labels, and get all of the EditorGUI-style inputs replicated.
     
  21. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Been out of office for a week or so but I'm back now.

    I was working on porting the text rendering engine to the native plugin, and I came across a few legacy issues which I've since resolved. Below is a comparison between the same string, in the same font, rendered at the same size with RMGUI and Microsoft Word. You can also see the automatic kerning generation at work here, with RMGUI packing the letters together to fit as closely as possible.

    upload_2019-3-5_13-0-27.png

    I've still got quite a few things to do on this front, but after that it's onto a new text input system, and linking labels to controls. I can then finish the control library, so you get the full range of controls the builtin Editor IMGUI offers, but, of course, for game UI.
     


    Last edited: Mar 5, 2019
    Prodigga likes this.
  22. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Got something a bit more fun to show today.

    First up, I've reworked the rendering side to integrate with other meshes. It was initially built as an overlay-only renderer. It now natively works with the postprocessing stack;

    upload_2019-3-9_10-8-17.png

    But, more importantly, it let me begin work on worldspace UI. And it's as simple to use as ever. The Control above is created with;

    upload_2019-3-9_10-9-40.png

    Let's get this working in worldspace. I want to draw it where that cube face is now- starting at the world origin, no rotation, 1 unit wide, and 1 unit tall. We change the constructor to take a plane, with a default position and rotation, and a width and height of 1 unit;

    upload_2019-3-9_10-11-28.png

    And that's everything. The panel will now be drawn in worldspace with the provided plane (which can be updated at runtime, whenever you want). You can set the PixelsPerUnit property of a Control to define how many pixels will be put into 1 Unity unit; this defaults to 1000. In other words, if we drew a panel with a width of 1.92F, and a height of 1.08F, we'd have a 1920px*1080px worldspace panel. And the panel we're working with is 1000px*1000px.
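
    As a sketch of that setup (WorldspacePlane comes up later in this thread; its constructor arguments and the HUDPanel control here are assumptions):

    Code (CSharp):
    // A 1.92 x 1.08 unit plane at the world origin, no rotation.
    var panel = new HUDPanel(new WorldspacePlane(Vector3.zero, Quaternion.identity, 1.92F, 1.08F));

    // 1000px per Unity unit is the default, so this panel lays out as 1920px * 1080px.
    panel.PixelsPerUnit = 1000;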

    upload_2019-3-9_10-13-55.png

    I need to sort out mipmapping on the native side, so the textures are highly aliased here, but the core system works. You can now go from screenspace to worldspace in literally seconds.

    Edit: Bit of a silly example, but just to show how I've converted the single-drawcall masking to work in worldspace, with extreme depth spacing to show the physical structure.

     
    Last edited: Mar 9, 2019
  23. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Just doing a little more work on the 3D front. Shouldn't be too long before I can start putting together some fancy animations and start showcasing what's possible with the library.

    First up, I've refined autosizing a little more. The only set value here is the size of the radial sprites, while the container panel is anchored to 3 edges. In other words, it doesn't have a set height value. Instead, the height is automatically calculated from its visible children. Here you can see how the container resizes itself when one of the children expands.

    I've also done the first iteration of Entity rotation. Layouts can project the rotation of children early so that they can react to child rotation while placing them. Here you can see the same reactive setup, firstly with a (0, 0) pivot, and secondly with a (0.5, 0.5) pivot.


     
    Last edited: Mar 12, 2019
    JustTheCoolDude likes this.
  24. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    I've done quite a lot since the last post, mostly on the backend. These additions have made it really easy to build precise controls, as shown below. This is a slider with very precise fill and thumb layouts. The container sprite is sliced, as is the fill (which is half the height of the container), but the thumb is a normal sprite, although it needs to be positioned so that its centre-left point sits at the thumb position.



    The entire thing is skinned with the following setup. I've been piecing a few of these together as demos, hence the comments.

    upload_2019-3-19_12-15-3.png

    These labels are also very versatile; you can provide a format string, and tell the label to be formatted with either the slider value, normalised value, or normalised value as a percentage, and you can also set the number of decimal places to format it to, as shown here;

    upload_2019-3-19_12-18-8.png

    upload_2019-3-19_12-18-36.png
     
  25. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    I have a procedural mesh backend in place now, which will be extended to things like splines, so you can dynamically draw 9-sliced line links between elements, and graphs (eventually). I've done a quick deploy test with a radial set (you can set the resolution to anything from a hexagon to a 256-vertex high-resolution circle), so you can draw high-definition shapes without having to store and scale huge textures.

    In line with some work I'm doing on this, I implemented radial fills. You can set the direction (clockwise/counterclockwise), start angle, and smoothing values. I'll also be doing 90/180 fills like UGUI has.


     
  26. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Starting to get somewhere now.

    You can build some pretty complex layouts, mixing radials, stacks, controls, and rotation hierarchies. I need to do optional non-pixel perfect movement (you'll notice the animations are very snappy in the videos because of forced pixel-perfect rounding), so this definitely isn't representative of the final product, but you can get an idea of what's possible here.

    Also, the circle and ring are fully procedural. They have no sprite assets; they're procedural meshes created with calls to DrawCircle and DrawRing, where a radius (or inner and outer radius, in the case of the ring) is provided, and their colour is set.
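
    Roughly (DrawCircle/DrawRing are as described; the parameter and Color property names are assumptions):

    Code (CSharp):
    // No sprite assets involved- both shapes are procedural meshes generated on the GPU.
    var circle = DrawCircle(120F);                      // radius
    circle.Color = new Color32(255, 255, 255, 255);

    var ring = DrawRing(140F, 160F);                    // inner radius, outer radius
    ring.Color = new Color32(90, 200, 255, 255);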

     
    Prodigga and SugoiDev like this.
  27. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The last test run highlighted a few rendering quality issues with very thin sprites- the slider surrounds in the video above, for example- so I decided to implement a native supersampling solution to eliminate these issues, which you can globally disable, or set at 2x, 4x, or 8x, through the main RMGUI class. This does not affect anything else in the scene; it's totally dedicated supersampling for the UI, making it significantly cheaper than just dumping full screen SS on to solve those issues.

    With supersampling off; avg ~2600FPS;

    upload_2019-3-27_16-27-50.png

    With supersampling at 8x (with a ~12k backbuffer texture); avg ~2500FPS

    upload_2019-3-27_16-29-37.png

    And, just to give a better idea of the final production quality, with some extreme filmic postprocessing;

    upload_2019-3-27_16-46-50.png
     
    Last edited: Mar 27, 2019
  28. Egi

    Egi

    Joined:
    Apr 16, 2018
    Posts:
    4
    looks good. can't wait to get my hands on it. all the UI solutions I've seen so far perform awful.
     
    LacunaCorp likes this.
  29. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    RMGUI_Promo_4.png
    First standalone build; 1920*1080 screenshot. I've been making quite a lot of improvements to the text rendering engine this morning. There's still some work to be done, but small glyphs are now significantly clearer.
     
    Last edited: Mar 28, 2019
  30. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The core is now mostly wrapped up; everything from here is more or less down to usability, and additional layout features. I do still have a few new modules planned, but I'll be concentrating on refinement and usability for at least the next couple of weeks.

    Just to kick things off, I've begun work on an RTF-like system for all text handling. I might also implement RTF blocks directly, but for now it's a new op-based system. The idea behind this is to let you draw complex text sequences with a single string, through a single text Entity.

    The first use case I thought of was dynamically changing the font size within the same string. The idea is that you can override the Entity's fontsize at any point, as many times as necessary. The format is (this will be extended to other ops);

    //'opcode'"value"//

    A double forward slash, followed by an opcode, followed by a value (if necessary), ended with a final double slash.

    With the opcode for size being an 's', to set the fontsize to 10, we simply provide;

    "//s10//"

    So, with an Entity with fontsize 10, we can draw an increasing size string with;

    "Size 10//s20//Size 20//s30//Size 30"

    Which gives the result;

    unknown.png
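
    A hypothetical call producing the result above (DrawLabel is an assumed name; the op string is exactly as described):

    Code (CSharp):
    // Entity fontsize starts at 10; each //s...// op overrides it from that point on.
    DrawLabel("Size 10//s20//Size 20//s30//Size 30", LayoutDef.Anchor(0.1F, 0.1F, 0.1F, 0.1F));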

    This will initially be extended to colour; I might follow it up with things like dynamic alignment against the layout type (i.e. dynamic vertical alignment for horizontal text). Please do leave any thoughts/suggestions on what you would like to see, and/or what features could be added to make developing UI easier in general; I'd like to implement as much as possible on this side and hit the ground running at launch, with as much covered as possible.
     
    JustTheCoolDude likes this.
  31. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Lots of stuff done today. I've been solidifying the worldspace features, and rolling some fixes I've made in the screenspace core out to worldspace. Shown below is the exact same Control drawn twice; the only difference is the commented-out line of code shown here (if no WorldspacePlane is set, the Control is drawn in screenspace);

    In worldspace;



    And going back to screenspace by removing just one line of code;

    upload_2019-4-2_0-41-43.png

     
    Last edited: Apr 2, 2019
  32. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Been making some further changes to the rendering pipeline (also ticked a lot of minor things off the TODO list today). As ever, the video quality isn't the best (storage requirements), but you can see just how clear everything is, especially compared to the surrounding editor GUI. I've also shown the new interaction system much more clearly;
    entities have simple interaction quads calculated, and then a very fast Möller-Trumbore implementation raycasts the target primitives (which are prepartitioned). For complex geometries, such as the ring/circle, the resolution is stepped up and the final geometry is tested. When atlases are built, their transparency data is bitpacked into an alpha atlas, which is interpolated for testing, so raycasting can also handle precached alpha values (optional per Entity), giving you full raycast transparency support with effectively zero overhead. I might also set up the atlas to accept custom interaction data, so you could specify the pixels to be used for raycasting (in case some transparent pixels should be selectable, while others should not).

     
    JustTheCoolDude likes this.
  33. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Frontpage of /r/gamedev last week, thanks to everyone for the support!

    I've been doing a lot of optimisation work this week around partial rebuilds (to make sure the absolute minimum amount of work is done when Entities become dirty), so there's not a lot of flashy stuff to show, but I have done some more procedural work.

    You can draw complex segmented ring layouts now. They can be as simple as providing a segment count, arc length, and spacing length (or none, to automatically calculate the spacing);

    upload_2019-4-7_14-21-55.png

    Or you can supply an array of segment lengths, followed by an array of spacing lengths, for any sort of sequence you need;

    upload_2019-4-7_14-13-9.png
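
    A hedged sketch of the two overloads described (DrawSegmentedRing and its parameter names are assumptions):

    Code (CSharp):
    // Uniform layout: segment count plus arc and spacing lengths in degrees.
    DrawSegmentedRing(innerRadius: 140F, outerRadius: 160F, segments: 8, arcLength: 30F, spacing: 15F);

    // Explicit sequence: per-segment arc lengths followed by per-gap spacing lengths.
    DrawSegmentedRing(140F, 160F,
        new[] { 60F, 20F, 60F, 20F },
        new[] { 10F, 10F, 10F, 100F });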
     
    Last edited: Apr 7, 2019
  34. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    This week has seen a major refit of the rendering backend, around shader and material handling.

    It's now incredibly easy to add your own shaders. Shaders are written in HLSL as opposed to Cg/ShaderLab, but I've included a few decorators- you can use all of Unity's blendmodes and blendops, for example, with #blend and #blendop tags (as shown below).

    So, quick demo. I've moved all of my underlying shader code to RMGUI.hlslinc. This provides vertex and fragment structs for you to use in your own shaders, and the builtin main vertex and fragment shader methods to use. The main shader now just makes calls to these. This is the actual main shader I'm now using;

    upload_2019-4-11_21-11-52.png

    So if you put this into an HLSL file, you'll get the exact output I do. You don't have to, though; you could write your own shaders from scratch, but then you miss out on all the fancy bits like masking and fills, unless you copy those features over. This does make it really easy to do cool effects, though- maybe something like a burning fade-in filter. I've kept it really simple, and made a test shader to change the colour of whatever we draw to be fully red;

    upload_2019-4-11_21-14-5.png

    Now, this isn't the builtin shader compiler (I parse these for custom tags, and then compact it down and send it off to fxc.exe for HLSL compilation), so you do need to remember to hit recompile;

    upload_2019-4-11_21-15-53.png

    But what's really cool about this is that everything is autogenerated into constant lookups, meaning, no strings! The shaders are built out into a RMGUIShader factory, and any textures you use are built out into a TextureSlot factory, like so;

    upload_2019-4-11_21-21-19.png

    This makes working with materials very easy. Let's take the existing demo, and set the centre circle's material to use this test shader. Note that everything is still one mesh, but the system intelligently packages everything to use the minimum amount of context switches on the fly, drawing the mesh with multiple fragments to allow dynamic materials per entity.

    upload_2019-4-11_21-23-2.png

    Those factories really do improve ease-of-use. As you can see, to get this working, all we have to do is call CreateMaterial (in the base Control class) and hand it our shader, then we can set any Entity's material to use this.

    Code (CSharp):
    Entity.Material = CreateMaterial(RMGUIShader.YourShaderName);
    If you add your own textures, you can set them with;

    Code (CSharp):
    material.SetTexture(TextureSlot.YourTextureName, textureReference);
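
    Putting those together, a per-entity shader swap ends up looking something like this (RMGUIShader.Red and TextureSlot.NoiseMask are placeholders for whatever the factories generate from your project; centreCircle/noiseTexture are illustrative):

    Code (CSharp):
    // Build a material from an autogenerated shader constant- no strings anywhere.
    var material = CreateMaterial(RMGUIShader.Red);              // from the base Control class
    material.SetTexture(TextureSlot.NoiseMask, noiseTexture);    // placeholder slot and texture
    centreCircle.Material = material;                            // only this entity switches shader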
     
    Last edited: Apr 11, 2019
    JustTheCoolDude likes this.
  35. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    So, super quick test- I've replaced the existing transition system with a very robust animation framework, which now represents all entity states as keyframes. The old transitions system now works on top of this system, and allows you to easily animate any Entity.


    The EntityAnimation is a simple keyframe wrapper which lets you provide skins, along with timestamps (and optionally additional settings such as varying interpolation modes per transition), to form animation sequences. This makes it very easy to compound clips together into animation objects, but you can also provide a skin directly if you only want to animate to a new skin.

    This is all handled through Entity::PlayAnimation. You just pass a clip or skin to this method, and the backend takes care of the rest. You can also call Entity::LoopAnimation to indefinitely loop an animation (until it's interrupted by a new animation), or pass in a count to loop the animation a set number of times.
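
    In practice that reads roughly like this (the EntityAnimation/keyframe construction shown is an assumption; PlayAnimation and LoopAnimation are as described):

    Code (CSharp):
    // Hypothetical clip: two skins with timestamps in seconds.
    var pulse = new EntityAnimation(
        new EntityKeyframe(idleSkin,  0F),
        new EntityKeyframe(hoverSkin, 0.25F));

    icon.PlayAnimation(pulse);       // play once
    icon.LoopAnimation(pulse);       // loop until interrupted by a new animation
    icon.LoopAnimation(pulse, 3);    // or loop a set number of times
    icon.PlayAnimation(hoverSkin);   // or just animate straight to a skin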

    upload_2019-4-15_0-22-58.png


    This is just using a Color32.Lerp, so the results aren't great, but I'm also working on a custom colourspace system for each Entity, which allows you to also lerp in HSV, CIELAB, and CIELUV (it's fully functional, just got a couple of edge cases to address).
     
    Last edited: Apr 15, 2019
    Prodigga likes this.
  36. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The animation framework is pretty much finished now, and I'm really pleased with it. You can keyframe basically every entity property now, and there are derived skin classes for procedural entities.

    upload_2019-4-17_14-2-54.png

    The animation itself is very straightforward. We specify a base keyframe at time 0 and a final keyframe at time 2 (seconds), and set its InterpolationMode to CubicOut to give it a nice slowing effect as it progresses. The ColorUtils.Copy call is provided by the library- it's just a way to take a base colour and set only some of its properties; in this case, I use it to set the alpha to 0. This is just so we can use a style colour rather than having to type it out multiple times, and we only have to change it in one place if we want to update it.
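
    In code form, the described setup might look like this (the keyframe construction and the Clone/Style names are assumptions; InterpolationMode.CubicOut and ColorUtils.Copy are as described above):

    Code (CSharp):
    // Base keyframe at t = 0, faded keyframe at t = 2s with a cubic ease-out.
    var fadedSkin = ringSkin.Clone();                               // assumed helper
    fadedSkin.Color = ColorUtils.Copy(Style.Accent, alpha: 0);      // reuse the style colour, alpha forced to 0

    var fadeOut = new EntityAnimation(
        new EntityKeyframe(ringSkin,  0F),
        new EntityKeyframe(fadedSkin, 2F) { InterpolationMode = InterpolationMode.CubicOut });

    ring.PlayAnimation(fadeOut);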



    We can easily extend this to segmented rings;

     
    mons00n and Prodigga like this.
  37. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Honestly a bit in awe at this! Everything looks incredible, and the whole transition/animation system is a lifesaver. All the examples here are using mouse control for interaction I notice. Have you got plans for keyboard/gamepad as well? I've ended up writing my own set of control components for UGUI to get around the kinda broken keyboard focus system it uses, but I'd love to switch to this someday! Any idea when you might be releasing it?
     
    LacunaCorp likes this.
  38. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks, really glad to hear you like it!

    This is going to be one of the later features, but yeah- it'll be in at release. I just want to get all the core elements refined first to make sure I don't end up having to change the mapping down the line, but what I'm currently thinking is after the UI is built, calculate a sort of internal flowmap which defines what happens if you move left, right, etc. I'd also like to provide some sort of system to create control groups, which you could navigate separately. For example, you might have a bunch of buttons which control a series of tabs/panels- you could group all the buttons into a control group, set that as the active group, and then have the controller triggers (for example) cycle through them.

    I'm also thinking that there should be no controller-specific bindings. Instead, you would just call (something like) RMGUI.Navigate(Direction.Right) to select the closest entity to the right, if available, else try to move to the next line down, etc. That way, you can interface with any control scheme by just calling a few methods. This way it would also work with a keyboard alone, if you wanted to navigate without a mouse for whatever reason.
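
    So the per-frame glue on the game side could be as small as this (Direction and Navigate are only speculative names at this point, as above):

    Code (CSharp):
    using UnityEngine;

    public class UINavigationInput : MonoBehaviour
    {
        void Update()
        {
            // Map any control scheme onto navigation calls; RMGUI would pick the closest
            // entity in that direction within the active control group.
            if (Input.GetKeyDown(KeyCode.RightArrow)) RMGUI.Navigate(Direction.Right);
            if (Input.GetKeyDown(KeyCode.LeftArrow))  RMGUI.Navigate(Direction.Left);
            if (Input.GetKeyDown(KeyCode.UpArrow))    RMGUI.Navigate(Direction.Up);
            if (Input.GetKeyDown(KeyCode.DownArrow))  RMGUI.Navigate(Direction.Down);
        }
    }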

    As for release, I'm hoping to have the rest of the planned features wrapped up in a couple of months, but I have a habit of chasing rabbit holes so I really don't have a specific date yet. I do a lot of testing in-house, but I'll probably do an earlier testing version with a small group to iron out any bugs I've missed, then look at an asset store release from there.
     
  39. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    All sounds good! I can relate to the whole "rabbit hole" situation :p - you're probably right not tying yourself down to a specific date! And if you're ever in need of extra testers, I'd be happy to help out :)
     
    LacunaCorp likes this.
  40. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Just a quick one.

    I've had a lot of questions over the past couple of days, especially through reddit, and I'm really pleased to hear what has been overwhelmingly positive feedback. So a big thanks to everyone who is following along!

    I didn't link back here, so just for the people following this thread only, I grabbed some 1080p footage of the new animation backend.


    I also just wrapped up some (kinda weird) clamping features you can enable for layouts. I'm using sliced sprites with highly irregular borders, so when they're really squished up you get some overlap, but that's inevitable. You wouldn't get this with regular slice borders, but I thought it could be pretty handy either way.

     
  41. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    The fill system was in need of a bit of a rework. It was functional, but I wasn't happy with the level of customisation it allowed for.

    I liked Unity's radial fills as they don't just offer 360 degree fills; you can also do 90 degree fills from a corner, and 180 degree fills from an edge. However, they're all preconfigured, so you can only select from a few presets.

    I've tackled this by building the entire fill system on one method, which can accept a centrepoint, and a start and end angle. Presets are built on top of this so you can quickly use a fill method from the library, but you can also create very precise fills with a simple call.

    upload_2019-4-25_19-28-20.png

    Here, we will fill a sprite radially. Fill::Radial returns a RadialFill object we assign to Fillmode, passing in a centre, and a start and an end angle in degrees. The centre is in normalised Entity space, i.e. (0.1F, 0.75F) gives us a centre at 10% of the width along the entity, and 75% of its height.

    We can even animate between these fills. As you can see, we assign a completely new fill in the second keyframe (at time 4 seconds). It's as easy as that to create a very precise fill animation.
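
    Roughly, in code (Fill.Radial, RadialFill, and Fillmode are as described; the keyframe syntax is an assumption):

    Code (CSharp):
    // Centre at 10% of the width and 75% of the height, sweeping 0 to 270 degrees.
    sprite.Fillmode = Fill.Radial(new Vector2(0.1F, 0.75F), 0F, 270F);

    // Keyframe two different fills and animate between them over 4 seconds.
    sprite.PlayAnimation(new EntityAnimation(
        new EntityKeyframe(Fill.Radial(new Vector2(0.1F, 0.75F), 0F, 90F),  0F),
        new EntityKeyframe(Fill.Radial(new Vector2(0.5F, 0.5F),  0F, 360F), 4F)));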

     
  42. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    I'm putting together a fully-procedural automotive instrument cluster as a demo. I wanted to do some complex animation to stress test the system.

     
    Last edited: Apr 29, 2019
  43. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
  44. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147


    Not much exciting stuff to show lately- it's been mostly backend work- but I do have the text input core running on the new system now, with a total overhaul of small font rendering. I was doing a switchout from MSDF to bitmap earlier but the results weren't great, so this is basically the second iteration of that system, with a different precomputation technique for the bitmaps (using GDI+ to render properly hinted glyphs from 20px down to 5px for each character).
     
  45. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Just another quick one, in light of some stuff I've done this evening.

    So I'd already planned this for text, but didn't want to implement it until the final system was definitely in place. Obviously, if text overflows its rect, it needs to be clipped in some way. There are 2 options; anything which overflows could either be completely skipped, or the text could be masked in some way to smoothly fade it out at each boundary.

    I didn't want to use masks as it would take up user mask data space, potentially for every StringEntity drawn. So the following is my solution;

    upload_2019-5-7_21-7-22.png

    I've had the shader simply draw the vertex colour here. The actual string is "Login"; the 'g' overflows the highlighted boundary at the bottom.

    upload_2019-5-7_21-8-31.png

    Fading is handled by injecting an additional vertex pair at 2 pixels above the overflow point, and clamping the lower edge to the boundary. The lower edge is then given an alpha value of 0, which causes it to fade out at the boundary, masking it without the use of any actual masks.

    This can handle overflows on all 4 sides, although I doubt anyone will ever actually need that functionality, but just in case. It detects adjacent clips, and then injects an additional corner vertex to keep the colours correct, allowing for fades along multiple contiguous edges.

    I did a little promo writeup here just to go over the cliffnotes, and also remind everyone why I'm an engineer and not a designer.


     
    Last edited: May 7, 2019
  46. Matchstick21

    Matchstick21

    Joined:
    Sep 3, 2015
    Posts:
    6
    As someone who is also a programmer and definitely not a designer, I'm really looking forward to this.

    How are multiple panels swapped at runtime? I.e. if I want to switch from a main menu to a player HUD, would a MonoBehaviour call a panel directly (somehow) and enable/disable it? Or would RMGUI listen for an event?
     
  47. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Thanks, glad to hear you're interested!

    Because controls are all instances, you have full control over that sort of management.

    So your MainMenu would come in the form of, say, MainMenu : Control, and the HUD in the form of HUD : Control. In your main menu scene, you'd just instance it with mainMenu = new MainMenu(), and then you can make your GUI calls in the "awake" of the control (just for this example; you can actually issue calls at any time within the child class with your own methods).

    When you load the game scene, you could either set mainMenu.Enabled = false (which would keep everything loaded in memory, but stop issuing drawcalls for the control- if you enable it again, there's nothing to rebuild; it just starts drawing again), or if you want to totally unload it, you can call mainMenu.Clear(). This will completely destroy anything the control owns, and then you can either reuse the cleared control, or just set any references to null and have the GC clear it up.
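
    In sketch form (the class, property, and method names follow the description above; the "awake"-style override name is an assumption):

    Code (CSharp):
    public class MainMenu : Control
    {
        // Issue the GUI calls when the control spins up ("awake"-style hook; exact name assumed).
        protected override void OnAwake()
        {
            DrawImage(AtlasMain.Background_CornerHighlights, LayoutDef.Anchor(0.1F, 0.1F, 0.1F, 0.1F));
        }
    }

    public class MenuFlow
    {
        MainMenu mainMenu;

        public void ShowMenu() => mainMenu = new MainMenu();

        public void EnterGame()
        {
            mainMenu.Enabled = false;   // keep it in memory, stop issuing drawcalls
            // ...or tear it down entirely and let the GC reclaim it:
            // mainMenu.Clear();
            // mainMenu = null;
        }
    }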
     
  48. Matchstick21

    Matchstick21

    Joined:
    Sep 3, 2015
    Posts:
    6
    Ah, that makes complete sense- I was just stuck thinking of it like a Unity prefab.

    Looking forward to it, I'll gladly grab a copy whenever it releases.
     
  49. LacunaCorp

    LacunaCorp

    Joined:
    Feb 15, 2015
    Posts:
    147
    Ran into an interesting bug today (now fixed) and thought it would make a good stress test case. I'm drawing just under 5k buttons here at around 1500 FPS (doing further optimisation based on this data). The individual buttons are dirtied and rebuilt as part of the retained system, i.e. the only work actually done here is a quick analysis to check what needs to be rebuilt when marked dirty (in this case, when moving between buttons, the old button is deselected and assigned its idle state back, and the new button is selected and switches to its hover state), those targets are then processed, and the native buffers are updated.



    The bug was a purposeful test- the idle states are single sprites (4 vertices, 6 indices), and the hover state is sliced (16 vertices, 54 indices). In other words, we're switching from a non-sliced sprite to a sliced sprite, so there is overlap in the buffer. I thought it gave a nice test effect though so grabbed a video before fixing.
     
  50. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Looks great !

    I was wondering if your engine could one day take XML/CSS files as input to declare/describe a UI instead of using pure code? It would allow non-coders to work on UI in a team.

    Thanks