
Enemies: New tech demo showcasing Unity's growing humanoid toolsets

Discussion in 'General Discussion' started by IllTemperedTunas, Mar 22, 2022.

  1. IllTemperedTunas

    IllTemperedTunas

    Joined:
    Aug 31, 2012
    Posts:
    782
    No one's talking about this!?



    I wasn't impressed when I heard Weta was picked up by Unity, but I gotta say after watching this video, I'm absolutely blown the &#(k away.

    The hair is likely a non-realtime bake, but the skin shaders, the rigging, and the animation are all so damned spectacular. I mean, if I'm being cynical, this is kinda what you'd expect to see from Unity after bringing the Weta guys on: a super indulgent clip with no backstory on the tools and no talk about getting this level of quality in an actual game. LA Noire comes to mind... it had all that facial capture and all the voice lines, and all that production value just didn't really add up to a better game. I hope Unity is thinking about this from a developer perspective: "How can we develop ergonomic tools that we can build on over time to produce next-level humanoid entities in our games?"

    All that said, absolutely amazing demo. Even knowing it's likely all smoke and mirrors, it's still incredible and a hell of a benchmark.

    The question is, will we get prebuilt, automated rigs that can automatically extract animated emotions from an MP3 audio file? Or can we animate emotion by hand using simple sliders? Will we finally start getting automatic mouth animations based on parsed dialogue files? Even just giving characters realistic idle blending, with eyes darting around a room, blinks, and subtle head motions based on points of interest nearby, would be a pretty big step in the right direction. Or was this just flexing top-tier animators and riggers with decades of experience in a medium that doesn't really translate to the real-time Unity engine? At the very least this demo shows the potential for Unity to be used as a platform to create INCREDIBLE cinematics with a few bells and whistles.
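    Even a bare-bones version of that idle behaviour is scriptable today. Something like this hypothetical sketch (made-up field names, nothing official, just to illustrate the idea):

    Code (CSharp):
    // Hypothetical sketch: glance between nearby points of interest,
    // blink on a timer, and ease the head toward the current target.
    using UnityEngine;

    public class IdleGazeSketch : MonoBehaviour
    {
        public Transform head;                      // head bone
        public Transform[] pointsOfInterest;        // things worth glancing at
        public SkinnedMeshRenderer face;            // mesh with a blink blendshape
        public int blinkShapeIndex = 0;             // index of the blink blendshape
        public float headTurnSpeed = 2f;

        Vector3 gazeTarget;
        float nextGlance, nextBlink, blinkWeight;

        void Start() => gazeTarget = head.position + head.forward;

        void LateUpdate()   // after the Animator, so the idle layers on top of it
        {
            // Pick a new point of interest every few seconds (the "eyes darting" part).
            if (Time.time >= nextGlance && pointsOfInterest.Length > 0)
            {
                gazeTarget = pointsOfInterest[Random.Range(0, pointsOfInterest.Length)].position;
                nextGlance = Time.time + Random.Range(1.5f, 4f);
            }

            // Ease the head toward the target for a subtle, non-robotic motion.
            Quaternion toTarget = Quaternion.LookRotation(gazeTarget - head.position);
            head.rotation = Quaternion.Slerp(head.rotation, toTarget, Time.deltaTime * headTurnSpeed);

            // Simple blink: spike the blendshape weight, then let it decay.
            if (Time.time >= nextBlink)
            {
                blinkWeight = 100f;
                nextBlink = Time.time + Random.Range(2f, 6f);
            }
            blinkWeight = Mathf.MoveTowards(blinkWeight, 0f, Time.deltaTime * 600f);
            face.SetBlendShapeWeight(blinkShapeIndex, blinkWeight);
        }
    }

    It wouldn't come close to the demo, but it's the kind of thing I'd love Unity to ship in a polished, built-in form.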

    Maybe this is something we look back on as another "Blacksmith": largely glitz and glamour. But maybe we look back on this as that short glimpse of how fantastic cinematic scenes can look in real time within a game engine, and how Unity built on this hand-crafted segment and used it as a benchmark for powerful, automated tools for real-time humanoid generation.
     
    Last edited: Mar 22, 2022
  2. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,156
    Because we all know better.

    They show off demos like this every couple of years, and they only ever work with a single point release (and now likely only with heavily modified versions of various packages as well), showcasing technology that will make it into the engine five years down the line if we're lucky.
     
  3. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,698
    Well, at least part of the tech is promised for release in Q2 this year, according to the corresponding article:
    https://unity.com/demos/enemies

    It's nice to see that the tech builds on the last demo, though; they are not starting from scratch. That makes it likely that more and more of it flows into the engine itself. It just takes time.

    In the end, let's be real: visuals that push up the ceiling of what's possible will ALWAYS require hand-tuned optimization and won't work out of the box in any engine. It would be great if it were different, but that's how computer tech works. That's true even in entirely different industries.
     
    Last edited: Mar 22, 2022
    Shizola likes this.
  4. Schubkraft

    Schubkraft

    Unity Technologies

    Joined:
    Dec 3, 2012
    Posts:
    1,073
    https://blog.unity.com/news/introdu...on-in-high-fidelity-digital-humans-from-unity

    “When can I have it?”

    As with previous projects, the Demo team will be sharing the technology developed for Enemies with the community to try out in their own Unity projects.

    In a month or two, we’ll release a Digital Human 2.0 package that contains all of the updates and enhancements we’ve made since the version we shared for The Heretic.

    We will also release a package containing the strand-based Hair system on GitHub, which allows us to collect feedback and make updates before it becomes an officially supported feature. Keep an eye on Unity’s blog and social media to make sure you’re alerted when these packages are available.

    Most of the improvements in Unity that originated from the production of Enemies, or were directly adopted in it, are already in Unity 2021.2 or will be shipping in 2022.1 or 2022.2.
     
  5. IllTemperedTunas

    IllTemperedTunas

    Joined:
    Aug 31, 2012
    Posts:
    782
    While I appreciate the transparency, these aren't the sort of assets that are going to get people excited about the fruits we can expect from these tech demos. The big difference between this and the Unreal stuff we got a year ago is that all of those bells and whistles were so damned applicable to a great many people's pipelines; the raw "it just works" factor and technical payoff were blatantly obvious.

    If you've done system-taxing, overnight bakes for large mesh animations or hair simulations, this isn't that exciting. Yes, an outside observer who doesn't know what a bake is might be blown away by the "hair physics" and "facial animations", but once you understand that it's essentially a flip book from an external 3D package, and isn't feasible for mass content in most games because of the memory requirements and sheer manpower needed to create it, the allure fades pretty quickly. While this pipeline can generate fantastic cinematics, when you get down to it you're probably better off making these stunning short clips in an environment built from the ground up for it, like Maya, and exporting the result as your game's cinematic or ad.

    Contrast this with Unreal already generating stunning, real-time vistas at such a quality level that Hollywood is adopting the engine into their pipeline for movies and shows that are already out, and this demo feels a little uninspired.

    When reading the behind-the-scenes setup, I was really hoping to learn about some custom physics system you had created to get the hair working in real time, or some revolutionary toolkit that makes the animations generated for this humanoid capturable and applicable to other humanoid rigs. That somehow some of these systems were modular, real-time, blanket solutions that could broadly improve the humanoid assets of almost any team looking to use them.

    I don't know... I don't really get it. You guys at Unity clearly have talented people and can clearly create amazing things, so I don't know why you consistently put out these demos without saying, "See that cool S***? We're going to polish the hell out of it, make it ergonomic, release fantastic tutorials, make it as painless as possible to use, and everyone who uses Unity two years from now is going to have better real-time animated characters, with emotion and facial animation, than on any other platform on the planet." We get it, sometimes things don't go as planned, sometimes things don't come together and you have egg on your face. But lately, I'm not sure I know what it is you guys are doing. What are you excited about? What cool sh*t drives you?
     
    Last edited: Mar 22, 2022
  6. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,698
    Btw, someone named Mark Shoennagel commented on the video that this was made without source modification.
    Is that true?

    @IllTemperedTunas You have a somewhat one-man-studio view on this. I'd say that slightly larger studios, with at least two or three dedicated artists, can well be excited about such improvements even if they aren't absolutely plug and play.
    As far as Unreal is concerned, do you have an actual practical view on it, or have you just heard their promises that what they show is actually doable through easily usable tools, without months of man-power spent on optimization?

    I'd say striving to push Unity's boundaries is what drives the demo devs. Maybe those resources could be used somewhere else, who knows, but it's a justifiable drive if you ask me xP
     
    Last edited: Mar 22, 2022
  7. IllTemperedTunas

    IllTemperedTunas

    Joined:
    Aug 31, 2012
    Posts:
    782
    That's the thing, these aren't necessarily big additions. Bulk mesh import and animations aren't really a Unity addition; they depend on the quality of the assets you are bringing into Unity. If you have a big fancy camera, a professional actress, professional lighting, recording tools, and composite setups, yeah, of course you can get a good performance in Unity, but you could get a good performance in darn near anything else with a heck of a lot less overhead.

    I made my post as someone who's been the guy crunching the high-end simulations for these bakes. Do I have a practical view on Unreal? Yes, I've worked professionally in both Unreal and Unity as a tech/effects artist on a variety of projects.

    There's a ton of videos on their incredible advancements in particles, terrain, rendering tech, real-time scripting, animation, and on and on...

    Here's a video on their character tech. It doesn't look as good as this demo, but it's a true game-ready system that generates unique characters on the fly without baking animations into a streamed file:


    You can find more here:
    https://www.youtube.com/c/UnrealEngine/videos?view=0&sort=p&flow=grid

    Their recent Matrix demo was pretty mind-blowing.
     
    Last edited: Mar 22, 2022
    DungDajHjep likes this.
  8. Schubkraft

    Schubkraft

    Unity Technologies

    Joined:
    Dec 3, 2012
    Posts:
    1,073
    If Mark says so then it is.
     
    karl_jones and DragonCoder like this.
  9. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
    You can get all this done right now in Unity. I just built an entire procedural animation system that sits on top of a conversational AI and intent engine. It reacts to the dialogue, SSML tracks, audio volume and spectral analysis, and phoneme-to-viseme timing to sync voice to face and lip motion; it blends emotions from the face rig into the visemes and has parameters for procedural idling and gesturing where any kind of character can be dialed in. No canned animation loops, and it can be calmed down to sleepiness levels or amped up into a full-blown traffic-rage rant. I would be happy with a hair shader a few steps beyond what I have now, but I am happy with the results and, more importantly, the people paying me are stoked.
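    A heavily simplified sketch of the viseme-timing part, just the core idea (not my production code; the phoneme table and blendshape indices below are made up and depend entirely on your rig):

    Code (CSharp):
    // Minimal sketch of phoneme-to-viseme timing: a timed phoneme track drives
    // viseme blendshapes, with an emotion blendshape blended on top.
    using System.Collections.Generic;
    using UnityEngine;

    public class VisemeDriverSketch : MonoBehaviour
    {
        [System.Serializable]
        public struct PhonemeEvent { public float time; public string phoneme; }

        public SkinnedMeshRenderer face;
        public List<PhonemeEvent> track = new List<PhonemeEvent>();  // from TTS/SSML or audio analysis
        public int emotionShapeIndex = 10;                           // e.g. a "smile" blendshape
        [Range(0f, 100f)] public float emotionWeight = 30f;

        // Hypothetical phoneme -> viseme blendshape index table.
        static readonly Dictionary<string, int> visemeIndex = new Dictionary<string, int>
        {
            { "AA", 0 }, { "EE", 1 }, { "OO", 2 }, { "FV", 3 }, { "MBP", 4 }
        };

        float clock;
        int current = -1;

        void Update()
        {
            clock += Time.deltaTime;

            // Find the phoneme event we are currently inside.
            for (int i = 0; i < track.Count; i++)
            {
                if (track[i].time <= clock) current = i;
                else break;
            }

            // Zero all viseme shapes, then raise the active one.
            foreach (int idx in visemeIndex.Values)
                face.SetBlendShapeWeight(idx, 0f);

            if (current >= 0 && visemeIndex.TryGetValue(track[current].phoneme, out int shape))
                face.SetBlendShapeWeight(shape, 100f);

            // Blend the emotion layer on top of the viseme, as described above.
            face.SetBlendShapeWeight(emotionShapeIndex, emotionWeight);
        }
    }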

    I am unabashedly using MakeHuman to extract avatars, as I am keeping things as open source as possible. I export the one with the face rig, then change the body armature to a gaming rig, reuse the weights of the first rigged avatar, and just add the tags in C4D; a new avatar can be put together in under an hour. They can also be blend-morphed, so you can choose two avatars and pull a slider to get a hybrid. I would like to see the soft-tissue tools in the pipeline soon. Proper use would add mass to those larger creatures and muscled warriors that just don't seem to have any weight in boss-fight scenes, etc.
     
    julienkay likes this.
  10. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Of course a scanned face is going to beat a synthetic face. If you want an apples-to-apples comparison with Unreal, you should be using this instead of MetaHumans:

    51737478995_0f7258ab4a_h.jpg
     
    DragonCoder likes this.
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    The light diffusion is wrong, especially on darker skin; this is true for all CG I have reviewed, and hair is even worse.
    @SebLagarde did some work (I think) on dark skin in the game Remember Me, but even then they used very dark scenes and boosted the specular as an artistic stylization, as a way to mask things that don't work.
     
    Last edited: Mar 23, 2022
    BIGTIMEMASTER likes this.
  12. unitedone3D

    unitedone3D

    Joined:
    Jul 29, 2017
    Posts:
    160
    Hi there! Just my 2 cents. Sorry for the length. TL;DR: an absolutely spectacular demo. OK, it's a demo, but for a demo it's one of the most (or the most) impressive short demo/CGI films. This looks totally like an offline render; if you showed it to a movie crowd in a theater, they would think it's a movie with a $150 million budget à la Avatar...
    but it's not. And that's good (for us).

    Congratulations!

    Incredible work, Unity and Enemies team. This made me change my mind on HDRP; I'll switch now and accept whatever problems may come. It's one of those times, halfway through the quest, where you are at the point of no return: you either continue on half a tank (hoping) or you go back to the start (and sink). Thankfully it's not that dire; we can always go back to a previous, working version of the project. But sometimes there is enough reason to upgrade even if you are midway through production and not supposed to (the choice of tools is supposed to be final, so you don't keep switching and facing new bugs and tons of problems; hence, stick to an LTS version and don't budge until the project/game is done). Even an LTS becomes old at some point, though, and this new technology is worth it, akin to UE5's Nanite and Lumen, kind of. So thank you, Unity Technologies, for listening.

    Will this scene be available to test on our own computers? I read that the demo runs on a GeForce RTX 3090 and uses HDRP GPU ray tracing (reflections and, especially, the sun shadows, the light rays in the windows, and the shading itself, which definitely has a ray-traced look, like an offline render from RenderMan or Arnold path tracers). I'm asking because I want to reach this graphical quality and visual fidelity, but we don't know what the Project Settings are, exactly, and that is very important; Project Settings can totally change the look...

    - Is it path tracing? I understand it is HDRP ray tracing, but is it the path-tracing kind, and is it specifically RTX ray tracing? Because HDRP real-time ray tracing only supports (to my knowledge) Nvidia's RTX ray-tracing technology, not AMD Radeon's equivalent. Still, it's not so bad: I checked on Steam, and the larger portion of players hold Nvidia GPUs, not AMD Radeon XTs. At least 20% of Steam users have Nvidia RTX cards (2000 series and up, to the RTX 3090), while AMD Radeon 5600-5700/6000 XTs are under 5%. 20% of 150 million Steam users = 30 million Nvidia RTX holders, while AMD Radeon XT is about 5 million. In my case I have an AMD Radeon, so it's problematic to use HDRP real-time ray tracing if it requires an Nvidia GeForce RTX card with that hardware ray-tracing technology on the GPU itself. It's not ideal: you wish you could serve both GPU brands and not cut yourself off from the 5 million people who happen to have an AMD XT GPU; that's a big chunk lost by being forced onto one hardware vendor (GeForce RTX, since only their cards provide this hardware ray-tracing tech).

    - The YouTube comments said the demo runs at 4K resolution, on something like a 12-core CPU and an RTX 3090 GPU with (I think) around 25 GB of VRAM. That's an extremely expensive card; less than 0.3% of Steam's Nvidia users have it, only those rich enough to get it. And with COVID having created the silicon chip drought, cards are more expensive now, often unaffordable, and even old GPUs sell at higher prices (not really worth it unless you're playing older games, not the next-gen 3D games coming in late 2022-2023) because of the unavailability of GPUs, silicon, diodes, transistors, boards, and electronic parts generally (rising costs and COVID throwing a wrench into the cogwheels of hardware manufacturing);

    - This demo needs next-gen hardware, simple as that. As such it will exclude a considerable number of people still on old GPUs, although the graphics can be downgraded. But these graphics are the reason one would want next-gen hardware in the first place, unless they don't care about that (and I know a ton of people couldn't care less, because they don't play next-gen games or aren't interested in them; they prefer artistic/cartoon games, small indie games that are more about stylization than about CGI/3D photorealism like this demo). I guess different strokes for different folks.

    - But let's not kid ourselves: the large market out there is interested in next-gen (as I said before), and there are over 30 million Nvidia RTX users who would like to make more use of their GPUs. RTX is that answer.

    - Often the problem with new GPUs and new games is that the new, next-gen games don't make use of the next-gen hardware, don't push the envelope or push the GPU to its limit, or don't make use of new technologies (like RTX).

    - Hence stagnation and the same old same old, no visual progression. Now, progression is very subjective; once you've reached photographic realism there is nothing after it (diminishing returns; after CGI, well, it's Reality™).

    - So then devs are faced with a dilemma: what's next, more realism or stylized realism? Most choose the latter and think photographic realism is 'boring/dull' because we see it every day with our two eyes, in reality.

    - Games are about Escaping Reality...

    - But how far you escape is the dev's choice. Some don't like far-fetched escapism; it just looks like a joke, because it has no basis in reality and isn't credible. Reality = credible, because it is real; you believe it, you live in it.

    - I think this demo is proof (in the CG pudding) that we can make dreamy games that look like movies, and even cartoons à la Pixar's Toy Story. With this, the (CGI) sky's the limit. If you want to go back down to Popeye cartoons à la Cuphead (1930s Sunday cartoons), why not?

    - It's just more tools to express and realize your vision.

    - I think the thread derailed a bit with the political talk about skin SSS. I understand the point of "oh look, of course, white CGI again", but let's not make this a deep political thing (even if there is a political dimension, let's not shoehorn it in). There are CGI renderings of African American people that are incredible, and I'm surprised people made this a pigmentation thing. The skin technology in the demo is incredible and some of the most accurate so far, as they said with the 'dispersion model'. I think Epic's MetaHumans are impressive, but they look more CGI (due to the deep shading), while this demo looks a little less CGI and thus sits 'in the middle', closer to an offline render (thanks to the ray tracing; MetaHumans can use ray tracing too, but the look is a bit more CG/'plastic', as people say). The protagonist in the demo looks eerily 'real' because she looks a bit less CGI and thus closer to a photograph (which has less of that 'plastic' shading look). At some points I was almost fooled; I almost thought this was a real person acting in front of a desk, shot with a real camera. The big reason why is the ray tracing, but also her face: it's a 3D scan, and it shows. A real 3D face scan is much more accurate than MetaHumans' synthetic humans, which are not real face scans. And that brings us to the big thing:

    - The uncanny valley. I read this in the YouTube comments: "Darn, the uncanny valley hit me hard." It's mostly the eyes, which are slightly robotic, lacking soul and depth, so the face reads a bit like a CGI puppet. Eyes are very hard to get right, largely because the pupils don't react to changing light conditions; if they stay the same it looks like a dead CG stare. Micro-details are very hard, and that's why I applaud the team, because they captured maybe 98% of them: the mouth and lips moving, the slightly too jelly-like face (rather than reading as musculature under the skin, where the skin 'slides' over the muscles and creates that skin tension and micro-details like wrinkles, which they did emulate). It's just that the human eye is very well trained to detect the micro-off things that push a CGI face into the uncanny valley;

    - I did read some comments saying: "Wow, we are out of the uncanny valley." The eyes, emotions, and mouth read as canny rather than uncanny, which is good.

    - That is where we will have to stylize the ultra-realism, so that it's 'stylized realism' instead of 1:1 accurate photographic realism.

    - Anyway, sorry for going on so long. I am very impressed (and I'm not alone, though I know tons of other people are not impressed; some YouTubers were like: "uh, ok, same old CG, nothing impressive, realism does not impress me, show me cartoons").

    Just my 2 cents.

    PS: The skin shader is incredible, but Unity, you need to make this scene available so we can study the settings, or tell us exactly what they are, so that we can reach this look in HDRP. For me the most incredible parts are the global illumination (SSGI) and the ray-traced visual fidelity, especially the deep shadows and the detail in the shadows. The message is poignant too: "Power is only given to those brave enough to lower themselves to pick it up."
     
    FernandoMK and OCASM like this.
  13. unitedone3D

    unitedone3D

    Joined:
    Jul 29, 2017
    Posts:
    160
    PPPPS: For anyone switching to HDRP: hold it. I guess I misspoke and was a bit too hopeful; I won't switch now. My Built-in Render Pipeline project is better off staying as it is, without this tech. I knew there was a catch, something that slowed everything down. Sigh; I lulled myself. I guess the lesson is don't hope too much, too quickly.

    The demo's shading, the 3D look of objects and surfaces, is very dependent on the new APV technology (Adaptive Probe Volumes), not so much on SSGI or the ray tracing (in fact there is nothing RTX-specific at all, except that RTX cards support ray tracing; it is not catered to RTX, it's just that only RTX cards offer hardware ray tracing);

    meaning 99% of the look is due to APV 'probed GI'...

    and it is not path tracing either...

    APV is not real time; it is precomputed global illumination (kind of like reflection probes). Here I was thinking this demo's visuals were real time; they are not. You must pre-bake this, so it's not real time. On top of that, they said it is experimental and not a replacement for lightmap baking. I axed that and skipped it in my game; probing is slow, not real time. Simply put, this APV look is the holy grail of GI/CGI, and so far it's impossible to get in real time. I have looked at SSGI, screen-space reflections, and ray-traced AO; they don't give that look on their own, only APV does. The Unity BMW 2019 demo used real-time ray tracing, but it did not look as good as this demo, and that is due to APV.

    Maybe later, if APV becomes real time, I'll switch; until then it's too much work for too little gain. The fact that we have to pre-bake is the crux of the problem: lightmapping is slow, and APV is too. If you have a small game with almost nothing in it and time on your hands, APV is great; if you have a giant game and can't spend ages waiting to pre-bake and probe the GI space, it's not.

    PS: Hopefully some real-time APV equivalent arrives in the coming years; no more baking (no more baking soda either). Changing pipelines halfway through a project just to do more baking? No. Just my 2 cents.

    Last PS: one more (last) thing. The Unity demo is impressive (because of APV) and also because it is rendered at 4K (via DLSS). 4K makes a big difference to the 'feel' of the image; it is much more filmic than 2K. (2K is filmic too; most films are delivered in 2K, but many are shot natively on 4K digital cameras, so they are 4K source material downscaled to 2K for Blu-ray. Sometimes you find 4K Blu-rays; they're more expensive but keep the full resolution of the film, and there is a very visible difference: there is simply more detail in the image. All the micro-details appear, and with them you drift toward the uncanny valley the higher you push the resolution.) Native 4K rendering is very expensive, hence DLSS upscaling as the solution. In my case, the Built-in Render Pipeline has no DLSS, so I will have to find a way to do supersampling/upscaling to 4K from a lower resolution to emulate the 4K DLSS look. It's worth it, because it sells the look of this demo; at 2K the demo loses quite a bit of visual oomph in detail (especially the little details on her clothes, which vanish). In any case, I don't wish to minimize the greatness of this achievement; it's just not what I thought (I thought it was real time). APV = precomputed, not truly real-time.
     
    Last edited: Mar 24, 2022
    OCASM likes this.
  14. EternalAmbiguity

    EternalAmbiguity

    Joined:
    Dec 27, 2014
    Posts:
    3,144
    What actual features are here? The only one I'm seeing explicitly mentioned is the "real-time" strand-based hair system. How was the person made, and how was this hair system integrated?

    "Give me MetaHuman" is a tall ask, but for someone who will never have the resources to individually sculpt, texture, rig, and who-knows-what-else-I'm-completely-unaware-of scores of individual characters, something like a character creator is the only realistic option.
     
  15. Martin_H

    Martin_H

    Joined:
    Jul 11, 2015
    Posts:
    4,436
    Damn, finally a new impressive graphics demo by Unity, and the first thing I can't help but think about is how ugly their new logo still is.
     
    Ryiah, AcidArrow and neoshaman like this.
  16. kdgalla

    kdgalla

    Joined:
    Mar 15, 2013
    Posts:
    4,639
    Unity made some pretty cool custom shaders and rendering features for The Heretic and The Blacksmith too. I always have the same question: how would I even author the assets that could make use of these features?
     
  17. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Technically they should be useful even with lower-fidelity assets; it's not like hair and skin stop being themselves in "lower def". Strand-based hair is different from card-based hair, though; they seem to hint at importing strand-based hair from hair editing tools like Blender's. There are some gotchas in the details that they haven't released enough information about (for example, guide strands vs. interpolated strands). Skin gets less and less difficult as the workflow becomes automated, either through capture or plain generation; small details should be less of a problem in the future, and the artistry moves to the lower-frequency features.

    For example, here is a highly stylized character with realistic shaders and hair in the vein of the Unity demo, done in current Unity by Sakura Rabbit on Twitter:
    https://mobile.twitter.com/sakura_rabbiter


    I don't think authoring should be such a big problem once the workflow is done. You don't actually need a hyper-realistic mesh for hyper-realistic rendering (see Pixar too). It's just up to you what target you want to hit.

    Also, it's a meme:
    https://ifunny.co/picture/who-would-win-unreal-meta-humans-a-character-f-sakura-jsQZ1bFN8
     
  18. PutridEx

    PutridEx

    Joined:
    Feb 3, 2021
    Posts:
    1,136
    Personally, I was more interested in the room/environment and lighting, small as they are, than in the character.
    Different strokes for different folks :p
     
    koirat, BrandyStarbrite and Martin_H like this.
  19. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    For something more concrete as a workflow, you can get inspired by this; you don't need all of it, but it's good inspiration.
     
    KRGraphics likes this.
  20. BillO

    BillO

    Joined:
    Sep 26, 2010
    Posts:
    61
    In the meantime, other companies are providing similar solutions. Soul Machines is an example: https://www.soulmachines.com/. Unity may have a better mousetrap, but it's months or years away!
     
  21. valarnur

    valarnur

    Joined:
    Apr 7, 2019
    Posts:
    440
    At SIGGRAPH (http://www.siggraph2022.unity.com/), Enemies and other topics will be discussed.

    Weta will also show Loki, a new framework for simulating fluid, rigid, and deformable objects.

    Ziva will introduce Lion, a glimpse of the future of Unity Art Tools.
     
    DragonCoder likes this.
  22. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    The issue is, would they actually show a product that solves the character workflow? As far as I know, both Enemies and The Heretic are basically using Unity as a viewer.

    The hard part of believable character performance is:
    1 - the acquisition of the base and expression model scans,
    2 - turning those into a game-optimized mesh and blendshapes with a decent rig,
    3 - capturing an actor's performance at reasonable quality, in a form editable to match art direction.

    All of these steps are done outside Unity and therefore don't show off the engine. Worse, they are slow processes that involve both skill and talent, which is costly.

    By contrast, Unreal solves 1 with phone scanning, 2 with Mesh to MetaHuman, and 3 with Live Link. The whole process can be done in half a day at good-enough quality, longer to achieve very good quality. That speed has allowed new experimentation that pushes the edge of character believability quite hard while staying accessible to a wide market.

    I hope at SIGGRAPH they will present something more than marginally better skin and hair shaders. If nobody can use those in production because they can't feed in a model of matching quality, or because it's simply more cost-effective to go to the competing engine, that will not bode well for their market penetration. The released Heretic demo is basically a pile of uncommented, custom, advanced unsafe code, which is concerning since artists would be the ones most interested in learning from it, and the bulk of the work was basically outsourced to external companies like ir for scanning and snapper for rigging.

    I hope their next demo is more down to earth about the reality of Unity as a product marketed toward people who want to make things, not be impressed by a weird flex.
     
    pcg, angrypenguin and PanthenEye like this.
  23. Andy-Touch

    Andy-Touch

    A Moon Shaped Bool Unity Legend

    Joined:
    May 5, 2014
    Posts:
    1,485
    :( We were close!
     
    Last edited: Aug 8, 2022
    clownhunter, bluescrn, Ryiah and 6 others like this.
  24. valarnur

    valarnur

    Joined:
    Apr 7, 2019
    Posts:
    440
    I wish for real-time GI with no lightmaps and no baking, something Lumen-like.
     
  25. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,698
    That would indeed benefit more game types than humanoid characters do, but it's a whole different topic, of course.
     
  26. valarnur

    valarnur

    Joined:
    Apr 7, 2019
    Posts:
    440
    They could make the Enemies demo with real-time GI and no baking, at the same visual quality.
     
  27. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I have no doubt they can make a competent GI solution, though. GI ideas are booming right now, and they are kind of compatible with each other: I hear Unity has competent ray tracing, they showcased adaptive probe GI with Enemies, and recent combinations of ReSTIR + RTXGI/DDGI show there is a generalization waiting to happen. Competent GI is just around the corner everywhere; it would be damning if Unity missed that boat, since even Godot is at the current state of the art. The issue is more about scaling it down to low-end machines.
     
    m0nsky likes this.
  28. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,790
    They've already been missing the boat for years.
     
    neoshaman likes this.
  29. gjaccieczo

    gjaccieczo

    Joined:
    Jun 30, 2021
    Posts:
    306
    The frog on your pfp has a very befitting facial expression :D.
     
    Ryiah and Andy-Touch like this.
  30. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    I'm honestly interested in that skin tension solver and how they were able to pull the tension from a single normal map. I don't use 3D scans, but I can probably do the tension maps by hand, sculpting them in ZBrush and baking them in Substance Painter.

    I've been using HDRP for quite a long time, and I've been able to get some seriously great quality from my characters.
     
  31. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Tension maps are calculated by looking at the delta of edge lengths against the base mesh: shorter is compression, longer is stretch.

    You can bake them with a DCC (I know Blender does), then just blend them with the same amount as the corresponding blendshape (Unity uses an optimized GPU delta blendshape model; it doesn't blend vertex colors automatically).
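    If you want to try that edge-length-delta idea in Unity directly, here's an untested CPU-side sketch (the Digital Human package apparently does the equivalent on the GPU) that writes red for compression and green for stretch into vertex colors:

    Code (CSharp):
    // Rough sketch: compare each edge's current length to its rest length and
    // write the averaged result per vertex into vertex colors (R = compression, G = stretch).
    using UnityEngine;

    public static class TensionBakeSketch
    {
        public static void WriteTensionColors(Mesh restMesh, Vector3[] deformedVertices, Mesh target)
        {
            Vector3[] rest = restMesh.vertices;
            int[] tris = restMesh.triangles;

            float[] delta = new float[rest.Length];
            int[] edgeCount = new int[rest.Length];

            // Accumulate a per-vertex edge-length ratio from every triangle edge.
            void AddEdge(int a, int b)
            {
                float restLen = Vector3.Distance(rest[a], rest[b]);
                float curLen = Vector3.Distance(deformedVertices[a], deformedVertices[b]);
                float d = (curLen - restLen) / Mathf.Max(restLen, 1e-6f); // >0 stretch, <0 compression
                delta[a] += d; delta[b] += d;
                edgeCount[a]++; edgeCount[b]++;
            }

            for (int t = 0; t < tris.Length; t += 3)
            {
                AddEdge(tris[t], tris[t + 1]);
                AddEdge(tris[t + 1], tris[t + 2]);
                AddEdge(tris[t + 2], tris[t]);
            }

            var colors = new Color[rest.Length];
            for (int i = 0; i < rest.Length; i++)
            {
                float d = edgeCount[i] > 0 ? delta[i] / edgeCount[i] : 0f;
                colors[i] = new Color(Mathf.Clamp01(-d), Mathf.Clamp01(d), 0f, 1f);
            }
            target.colors = colors;
        }
    }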

    Alternatively (untested), you can probably read the screen-space derivatives (fwidth) of pixel position or tangent in the pixel shader and compare them to the resting values of the base pose, probably rendered into a texture for reference.

    Drawing them by hand is fine too, and gives more control, but by baking you at least get a reference.

    It's worth mentioning that this generally uses two maps, one for compression and one for stretch, but the stretch map is usually only marginally different and its effect can be simulated in the shader. Not sure if that's what Enemies does.

    Also worth pointing to the skin texture workflow from Square Enix, shown in a GDC talk on YouTube, which gives much better quality with lower-precision maps. Enemies has 4K textures; with the Square Enix workflow you could probably get away with a 512 or 1024 texture for the primary and secondary features, plus four small 256 skin micro-detail maps selected by a mask baked into vertex colors or a texture.

    I'm looking for some more optimization myself, by baking skin colors into a single grayscale ramp that lerps two colors, based on an HSL decomposition analysis of skin color. Basically there is almost no blue in the pure hue extraction and full red everywhere, so variation comes from the green hue component. The variations seem correlated with lightness and saturation, which means you would just need a detail variation map to break up the smoothness, and you could bake the ramp into vertex colors.
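    The ramp idea boils down to something like this tiny, hypothetical helper (colorA/colorB would come out of the HSL analysis; names are made up):

    Code (CSharp):
    // Reconstruct a skin albedo from one grayscale ramp value plus a detail value.
    using UnityEngine;

    public static class SkinRampSketch
    {
        // ramp and detail are 0..1; the two endpoint colors come from the HSL analysis.
        public static Color Reconstruct(Color colorA, Color colorB, float ramp, float detail)
        {
            Color baseSkin = Color.Lerp(colorA, colorB, ramp);
            // Break up the smoothness with a subtle multiplicative variation.
            return baseSkin * Mathf.Lerp(0.95f, 1.05f, detail);
        }
    }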
     
    KRGraphics likes this.
  32. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
    You can find an implementation in a separate branch of the Digital Human package on GitHub. It uses compute shaders to compare the distances between vertices against their initial values, and thus calculates stretch or compression. It uses vertex colors to pass the information to the shader.
     
  33. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    I remember seeing a lot about this during my learning back in the day. Could use a weight map to modulate the values.

    I will do some research on this since I do have to build my own face rigs in Modo to use in Unity. If I can figure out how to create a tension solver in Amplify, I should be good. Also, ZBrush would be extensively used for this process.

    I could bake out a texture that has ALL of the wrinkles in it (just by turning on the layers with the wrinkles on it), and blend them in at a shader level with animation.

    I've started doing this for my own characters, but since my head textures alone are at 2k and the details show up very well, I decided to leave it alone.

    My character shader takes an approach similar to Marmoset Toolbag 4, with some inspiration from Arnold. Since I don't use scans, what I do is paint the main base texture (I call it the Dermis, the middle layer of skin, and it's the texture you actually paint) in Substance Painter with as much detail as possible, including cavity maps. When I finish that map, I use its information to create a custom channel for the Hypodermis (the deep layer of your skin), lower the saturation a bit, and overlay a colour like orange to simulate blood flow and fatty tissue. This is where I add things like veins and freckles to give the impression that they're part of the skin and not just painted on top.

    In my shader, the Epidermis (the outermost layer of skin) is basically calculated from the Dermis texture, slightly desaturated toward grayscale, and blended on top additively. Same with the Hypodermis. If you're familiar with skin shaders in offline renderers, the three skin blend weights must sum to 1.
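    In simplified terms, that weight constraint looks something like this (a bare C# sketch of the idea, not my actual Amplify graph):

    Code (CSharp):
    // Three skin layers combined with blend weights normalized to sum to 1,
    // as offline skin shaders expect.
    using UnityEngine;

    public static class ThreeLayerSkinSketch
    {
        public static Color Blend(Color epidermis, Color dermis, Color hypodermis,
                                  float wEpi, float wDerm, float wHypo)
        {
            float sum = Mathf.Max(wEpi + wDerm + wHypo, 1e-6f); // keep the weights summing to 1
            return (epidermis * (wEpi / sum)) + (dermis * (wDerm / sum)) + (hypodermis * (wHypo / sum));
        }
    }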

    The Lite version of my skin shader does the above; my advanced version will take this even further. I'm currently working on it.
     
  34. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    I can easily create the vertex colour maps for this and the normal maps, but I'm using Amplify and would need to reverse-engineer the Shader Graph version to achieve this.
     
  35. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
    Can't you access the vertex color in Amplify? The Digital Human package just writes into red and green, and you can use these values to drive the strength of, e.g., a tension normal map.
     
  36. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    I don't know how to, but there is a vertex colour input in the master node
     
  37. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
    Hm, in Shader Graph you have a Vertex Color node, which provides the color of the current vertex in the mesh. I guess an input on the master node is something different.
     
  38. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    I found a vertex colour node in Amplify...now to figure out how to use it
     
  39. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
  40. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
  41. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    This was very interesting. And I'm wondering if I could use the vertex painting tools in Modo and have them readable in Unity.
     
  42. valarnur

    valarnur

    Joined:
    Apr 7, 2019
    Posts:
    440
    A Netflix show was made completely in real time in Unreal. I wonder when movie producers will start using Unity for the same. The Heretic and Enemies proved it can be done.
    But Unity is still not completely real-time when it uses mixed lights.

     
    KRGraphics and Daydreamer66 like this.
  43. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    Nope
     
  44. valarnur

    valarnur

    Joined:
    Apr 7, 2019
    Posts:
    440
    I believe Weta will make a high-realism movie using Unity and the new VFX tools. That's probably the end goal of the new toolsets they're exploring now.
     
  45. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    And we'll be grey before we can use it
     
  46. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I need to go back and study this 3-layer model again; I heard of it a long time ago but never tried it. Maybe I'll find some new ideas to optimize further.

    I'm trying to figure out how to push quality, optimization, and ease of workflow as far as I can for low-end, low-skill, zero-cost face authoring and rendering. I broke the face into 13 zones to study animation optimization and realized that most parts of the face animate trivially and might only need simple bones.

    The really challenging part is the mouth area, between the nose and the chin, bounded by the nasolabial folds. I have seen some Asian studios cut out just the face to apply blendshapes and avoid the whole head; inspired by that, I cut the mouth area out as a separate mesh and apply blendshapes only to it. I also need to test applying masked blendshapes to get partial blending; if that works, we may need fewer blendshapes overall and can just operate on per-region intensity.

    Also, the mouth is structurally symmetrical, so I was wondering if I could store only half and then mirror it. I haven't tested it yet. Expression capture is also kind of hard, and I haven't mastered blendshape creation in this area, so I'll probably do a sloppy proof of concept first, using the MakeHuman base to start before I have good data. But that means figuring out how to get good data, even just as reference.
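    The mirroring part would be something like this untested sketch (brute-force matching across X=0; a real version would use a spatial hash, and the names are made up):

    Code (CSharp):
    // For each vertex on the un-authored (-X) side, find its mirror partner across X=0
    // and copy that partner's blendshape delta with the X component flipped.
    using UnityEngine;

    public static class MirrorBlendshapeSketch
    {
        public static Vector3[] MirrorDeltas(Vector3[] baseVertices, Vector3[] halfDeltas, float tolerance = 1e-4f)
        {
            var full = (Vector3[])halfDeltas.Clone();
            for (int i = 0; i < baseVertices.Length; i++)
            {
                if (baseVertices[i].x >= 0f) continue;   // only fill in the un-authored side

                Vector3 mirroredPos = new Vector3(-baseVertices[i].x, baseVertices[i].y, baseVertices[i].z);
                for (int j = 0; j < baseVertices.Length; j++)
                {
                    if (Vector3.Distance(baseVertices[j], mirroredPos) < tolerance)
                    {
                        Vector3 d = halfDeltas[j];
                        full[i] = new Vector3(-d.x, d.y, d.z);  // flip the delta across the symmetry plane
                        break;
                    }
                }
            }
            return full;
        }
    }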

    While I'm not only targeting likeness cloning, so far the acquisition of scan data is the hard part, followed by performance capture, as it must be possible on a cheap phone or camera, not just an iPhone. I have started downloading scan and face motion capture apps to test; with the advent of VTubers there has been a lot more activity on that latter front. Obviously the quality is lower, but I'm looking for ideas to build on.

    For example, one idea I had is to use a deep-learning style transfer that works on stylized characters (like the Wombo app), then use the clearer features from the flat colors to get a more stable transfer of data with regular tracking software.

    There is also an explosion of face-retopo services that basically take a base mesh and fit it to a scan (or even a photo) by recognizing facial landmarks and deforming the base to match the source (I should hit GitHub and co. for that one). It helps a lot in recovering good data from wobbly phone scans, but I have only found paid options so far. I don't know if they work for expressions, though.

    I was wondering if you have any experience or ideas about those aspects? Or anyone else?
     
    KRGraphics likes this.
  47. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    My most recent 3D scan was of food. As for my skin shader using Amplify, I use a Vector3 and plug that into a Summed Blend node, then I plug my three textures into it. I will be looking at Wrap3 for retopologizing scans. Once I finish building my new HMC, I hope to use it for facial capture, but I will be keying the face by hand.
     
    neoshaman likes this.
  48. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    @neoshaman So, this is the character I've been using as my shader guinea pig, and I've made so many strides with it as far as creating shaders is concerned. His skin is the three-texture blend I mentioned earlier, based on shaders like Marmoset Toolbag 4.

    Before, I was using a single texture for this shader, but I was not able to get the feeling of blood flow with it. Things like tattoos and liver spots look REALLY good when textured properly. Right now the eyes are "okayish", but I think they're missing caustics, which aren't supported in Amplify yet. I prefer to use Amplify because it's closer to Unreal 5.

    All of his skin is hand-painted. For blendshapes, I will be doing them in ZBrush, and that will require a lot of careful attention.
    upload_2022-8-12_22-36-20.png upload_2022-8-12_22-37-34.png
     
    neoshaman likes this.
  49. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    It's looking good! The quality is really high and the details excellent!

    I should try passing that image through my HSL decomposition analysis, but I need to automate the analysis with a shader instead of doing it slowly by hand in a Photoshop equivalent. I should probably share that workflow lol.

    I see that you use the red-cheek model on a black character; so far, when analyzing black skin albedo scans, those parts come back as orange, and the T area of the face is orange. I used the HSL analysis precisely to generate better black skin, because it always felt wrong, and in the past I didn't know why.

    Basically, the green layer I was talking about in the PURE hue decomposition is red + green = orange. But red appears constant, which means it's the green component that drives variation; i.e. cheeks turn orange on dark skin. Red is generally seen in the female makeup layer, and that's also the only place you see blue: in makeup, generally on the lids and eyelashes in dark colors.

    I'm also investigating the skin diffusion profile. I don't have definite proof it's different, but tracing back to early skin-measurement papers, black skin seems to be different.

    I'm also looking for solutions for specular occlusion, but I haven't started and have no leads. For whiter skin I'm looking at local GI, like the cheeks reflecting onto the nose in shadow. I'm developing a texture-to-texture accumulation-based GI; I hope it can be useful for that without breaking the rendering budget.
     
  50. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    The exact colour I tinted my hypodermis map was orange, so I'm glad I nailed it. Also, the IOR in the diffusion profile is lower than the default, because the skin looked weird to me for some reason. I'll know more once I get my actual levels working.

    I spent a lot of time tuning my skin shader workflow for darker skin, and the important part is getting the roughness correct. The character I'm testing needs a little more tweaking. The second screenshot is "Grey Mode" for debugging and lighting checks.
     
    neoshaman likes this.