
Why can't Unity HDRP correctly render AAA looking character models?

Discussion in 'High Definition Render Pipeline' started by cloverme, Sep 9, 2021.

  1. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    UPDATE: Dec 8, 2022: https://forum.unity.com/threads/why...-character-models.1168103/page-5#post-8646000


    Just in general, why can't Unity display an FBX rigged model with decent SSS skin materials using the HDRP/Lit shader? Every major character design tool outputs full 4k PBR materials, but when you pull them into Unity they all look wooden and flat. It doesn't matter how much you adjust specular/metal/alpha, etc. Whereas in Unreal, they look amazing without having to tweak anything or use custom third-party shaders. It doesn't seem to be related to subdivision or normal maps; it seems to come down to HDRP boiling away detail in its light rendering, or to the import workflows.

    Yes, I know there's the Digital Human pack with its one character (Whoopie!), but it's not compatible with any of the popular character design tools at all, so it can't be used.

    I'd really love for Unity to address AAA-looking digital humans (skin, eyes, and hair) without Asset Store stuff, with compatibility for importing characters from the tools that are used to create them. Granted, it's not an apples-to-apples comparison, but has anyone been able to get a decent AAA-looking character into HDRP other than the Heretic team with their special one-off custom rig? If so, what was your workflow?

    Unity HDRP:


    Unreal:
     
    Last edited: Dec 8, 2022
  2. PutridEx

    PutridEx

    Joined:
    Feb 3, 2021
    Posts:
    1,136
    This is part of an HDRP game being worked on by only two people.

    This isn't a showcase where there's nothing but the character.
    This is part of an actual game, with RPG mechanics. I'm saying that to give an idea of what they accomplished when it wasn't a skin/character demo: they gave this some focus and then moved on to the rest of... everything that makes a game. They don't use any assets related to this, and as far as I know they use HDRP as it is. But unless an indie project is made by absolute beginners, custom tools/shaders will be made, just not "Unity demo" levels of custom.







    The game is She Will Punish Them; it's 18+ with nudity, although you can see more of the skin work there.
     
    Last edited: Sep 10, 2021
  3. koirat

    koirat

    Joined:
    Jul 7, 2012
    Posts:
    2,100
    The question is whether the math in Unity's and Unreal's lighting/surface calculations produces the same result, and whether the only difference is light placement and shadow quality in these scenes.
     
  4. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    Yeah, when I get the time I'll do an apples-to-apples from Unity to Unreal using the same model. I think you sort of answered my question in a way. After you posted the Sua example, I did a deeper dive into the Unity Digital Human build; there I found custom shaders (which no longer work in Unity HDRP 10.x) and some lower-level API injections they did for the example, which they hint at in the blog. I believe this is why we don't see more AAA-style character models in Unity for cutscenes and close-up camera work.

    The "Sua" model from Hyeong uses the Unity Digital Human, which is largely non-extensible. Unity has a "making of" for the Digital Human on the blog, here: https://blog.unity.com/technology/making-of-the-heretic-digital-human-tech-package. You'll quickly see it's not a realistic workflow for indie game developers; in fact, some of what's there sort of answers my question: they had to "inject custom rendering work at certain stages during a frame". Hyeong also used ray tracing in Unity, but ray tracing has a lot of drawbacks for typical Unity games. The clothes for the Sua model were done by an asset store developer that's no longer in business.

    Unreal 5 has MetaHuman as a package, which essentially does what the Unity Digital Human does, but Unreal made it artist-friendly, whereas Unity dumped theirs on GitHub with the standard "good luck" and no real tutorials.

    I'd really like to see Unity do more along the lines of supporting artist tools (Maya, ZBrush, etc.) and bringing higher fidelity into game production. Unity has this bad habit of showing gamedevs one-off tech demos (Adam, Book of the Dead, The Heretic, Megacity) that are purpose-built for one demo and pretty much locked to one version of Unity with lots of experimental development. They show these things off like "Look at what you can do in Unity!", which is really "Look at what you can do in Unity with a 20-person team working on something specific for a year or more for a 30-second demo".
     
    stonstad, Ubrano, soleron and 5 others like this.
  5. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    Yeah, I don't think it's about light and shadow, to be honest. Even with SSS, using Ultra-quality shadow maps and High-quality contact shadows doesn't really help. GI helps (although it's not available in HDRP 10.x in Unity 2020.3), but really only with light bounce; there's something more that's missing. At this point I'm attributing the fidelity loss from design tool to game engine to what they allude to in the Digital Human blog post: custom shaders and a lot of tweaking and injection into the SRP.
     
  6. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    You can export MetaHuman characters to Unity with Bridge. Also, it is about lights and shadows: you need to make the same scene in Unity and Unreal, and you need to take the textures and materials into consideration. Remember that when you open Unreal it is preconfigured for high quality, with effects and all that, and Unity is not. Try opening Unity's official HDRP demo, set all quality to high, then import your MetaHuman model with all the textures at 2k at least, make good use of the materials, and show us the results :)
     
  7. Ruchir

    Ruchir

    Joined:
    May 26, 2015
    Posts:
    934
    How? :eek:
     
  8. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
  9. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    I'm still tweaking my own model, but from an artist's perspective you don't want to spend days and days trying to match the quality from the design tool in the engine. How Unreal does this is something Unity should pay attention to. I have no idea how many hours the guy spent doing a 1:1 on the test above. I've been tweaking for about an hour and managed to get closer, but I still have a lot more work to do, which, frankly, is disappointing when Unreal does it so well. At this point, I'm considering doing my cutscenes in Unreal, capturing them to a 4k/60fps video file, and playing them back in Unity.

     
  10. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    And HDRP just doesn't have dithered temporal AA for hair, AFAIK. Or maybe it does, but there was never an example of how to use it ¯\_(ツ)_/¯
     
  11. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    Unity did add an HDRP hair shader in 10.2, but like everything else in the Unity world, there are no in-depth tutorials. It does improve the look of hair over the traditional HDRP/Lit shader, though, by quite a bit. I have used it, not on the model above but on another model, and the hair quality does improve. Same with eyes: Unity released an HDRP eye shader, which is decent, but as always the Asset Store has better ones, locked to specific HDRP versions (9.x).

    The Unity HDRP hair shader does support vertex animation, but here's the kicker: if you use ray tracing on DX12, vertex animation is not supported. Again, like everything in Unity, it's a mishmash of compatibility. I can't use ray tracing anyway because my project uses tessellation materials and terrains, but it's an example of how Unity is approaching HDRP from an artist's perspective, which is not well thought out.
     
  12. Deleted User

    Deleted User

    Guest

  13. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    From what I can see, the lighting is not the same (you can see it in the lips, for example), but the shader does look better in Unreal's image. If you really need that, why don't you use Unreal then? The beta version of Unreal is graphically waaay better than Unity.
    You need to take into consideration that there is no release version of Unreal 5 yet; it's still in beta, so it is in fact something new (newer than Unity's features). But if you are looking for 100% realism, go with Unreal, of course.
     
  14. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    Yeah, I mean, after I finish my current game I'm moving over to Unreal. I like the ease of development in Unity from a language (C#) and extensibility perspective, but what it offers in one area comes with significant drawbacks in others. The philosophy has become "use the Asset Store for functionality". I've got 10k+ hours in Unity, been a presenter at GDC, and worked for 505 Games, Microsoft, and BioWare, so it's not like I just downloaded Unity and went, "Uh, why does this look like cardboard?!?". I get lighting, probes, shadows, shaders, etc., so I think I've boiled this down to: Unity can't do what Unreal can do, which bums me out, but it is what it is.
     
    NotaNaN, florianBrn and hippocoder like this.
  15. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,725
    Wondering if people even have proper cubemaps/probes set up. Those are unfortunately not optional, and HDRP's SSS/transmission needs a modification to the thickness map unless you know for sure the thickness values are what this particular formula in this particular engine expects.

    I don't mean to be funny, but you know a lot of people aren't using assets made for HDRP, nor setting up scenes properly.
     
    Ryiah, JoNax97, KRGraphics and 4 others like this.
  16. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    Yes, I agree; in fact I'm doing the same, finishing my current project, and then maybe for my new ideas I'll go with Unreal. But Unreal has a big downside for me: a simple demo of less than 2 minutes is 100 GB, which for me is completely useless, so I hope the release version is better on that.
     
    cloverme likes this.
  17. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    That is of course another important point. I love Unity, of course, but you can achieve better graphics in Unreal while doing almost nothing, and that's from an artist's perspective, which is our case. Thinking about (for example) setting up probes, dealing with limitations and bugs, lightmaps, light limitations, and learning all that stuff in Unity, while in Unreal you have all that almost immediately... is very attractive for us. It is what it is. But of course Unity has very useful tools for other things too, so...
     
    hippocoder likes this.
  18. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,725
    I sympathise, but I don't think it's a case of Unity's HDRP not being able to do it; as you say, it's how much friction there is. Unity can do ray-traced ground truth as well as the best of them.

    Sadly you're right: Unreal's all fully set up for that glossy AAA scenario. But be warned, it's not likely to be anywhere near as easy to tune. I'd say that's worth trying out anyway.
     
    BrandyStarbrite, Ryiah and impheris like this.
  19. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    A little example: I opened the 2020 HDRP template scene and didn't touch anything. I just put a point light close to a wall, and I get that light leakage (in fact I made a whole post about this some time ago).
    Maybe I have to re-bake, maybe I have to add probes, maybe something else. But with Unreal I don't have to deal with this: I just add a point light in the "new scene template" in Unreal and it works perfectly, without this light leakage. This is why people like us get a little disappointed.
     

    Attached Files:

    Last edited: Sep 15, 2021
    nehvaleem, valarnur and hippocoder like this.
  20. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,725
    I don't blame you, it's Unity's own problem to deal with. The engine is awesome but they're poor at showing that and poor at keeping demos alive, unlike their competition.
     
    AnomalusUndrdog and Ryiah like this.
  21. DEEnvironment

    DEEnvironment

    Joined:
    Dec 30, 2018
    Posts:
    437

    I have to fully agree with you on this point.
    I see post after post weekly about X vs. X, and I feel for them, as they don't seem to grasp the basic ideas and expect one-click solutions. HDRP can and does fully deliver this level of detail if they take the time to set it up.

    HDRP is rapidly evolving, and Unity's progress just over the last year has been amazing. If they take a moment to see what is coming down the path not far ahead of them, they may just praise Unity for the hard work. Unity seems to use a gated process, as much of the workflow comes in small steps, clearing each milestone in its schedule. Not everyone is always happy about this, but it does keep Unity controlled.

    @Unity keep up the good work, and we look forward to the future
     
    cmppc, KRGraphics, bdb400 and 3 others like this.
  22. Ruchir

    Ruchir

    Joined:
    May 26, 2015
    Posts:
    934
    Well you didn't enable shadows :rolleyes:
     
  23. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    Ah yeah, of course, I forgot that, because in Unity it is obvious that lights don't need shadows; that is ok, whatever. But what about my old post from Jun 7, 2021?
    Post: https://forum.unity.com/threads/rea...2019-idk-how-to-call-it.1121854/#post-7243577
    That is the leakage I was talking about; I just forgot the very obvious shadow setup for those rare cases where lights need shadows.
    In that post I have the bug in both 2019 and 2021 (watch the videos).
    Also see the image.
     

    Attached Files:

  24. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    The other thing is that the Digital Human doesn't even have a forum. This is one of those "dump it on GitHub and run" deals that Unity does. You can't easily reach the people at Unity who put it together, nor can you reach anyone at Unity to say, "Look, you've got a community of paying artists/gamedevs who want to put AAA-looking characters in our games, and you've left us with nothing. Because of that, we're moving to Unreal".

    I'd love for someone from Unity to drop in here and go, "You're wrong! Here's a step-by-step tutorial on how to get a custom character from xyz into Unity HDRP 10.x that is 1:1 with Unreal MetaHumans".
     
  25. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    I have mixed feelings about this.

    1. Unity HDRP normal rendering, and how it reacts to light sources, is hit and miss for my liking. It just doesn't produce the results I expect. Especially in a high-res environment, it feels like normals just become noise. I believe this is a tradeoff to achieve runtime rendering.

    2. SSS helps, but given that the SSS implementation is a quick approximation, it is very hard to achieve the typical rendered look. Just like 1), this is also understandable; I am not really complaining.

    3. Hair rendering is hard and often ends up being a crude alpha clip. Also understandable. That soft hair is near impossible to achieve.

    4. Every other showcase of human/organic rendering consists of many 4k textures, custom lighting and shaders, a custom environment, and a super-high poly count for a rigged mesh. Not realistically feasible unless the model is used specifically for cutscenes; for indies, this is an automatic no-no.

    5. The lack of indirect lighting and SSS detail makes every character's head look much bigger even when the proportions are correct, and depth correction is also an issue, as the game camera cannot focus on characters when the camera is controlled by the player.

    Provided all the above, I just ended up with the following. She is a vampire NPC in my game :)
    npc.jpg
    The first image is how an NPC gets rendered in the game: SSS, with a normal detail map. The second image adds two light-layer-based point lights (blue and green) from either side to give more depth and variation to the NPC. These are only turned on when the character is talking to the NPC (Skyrim style).
    npc2.jpg
    This is her in game. No blue and green lighting... but facial and skin details don't play much of a role, as you will most likely keep some distance in FPS view.

    It is not much, but realistically speaking, bumping up character detail further would kill more performance and just isn't feasible. And I don't want one NPC to look great and others not so great; I want to maintain a steady level of detail throughout the game. She is a 100k-triangle rigged character with twelve 4k maps. On top of that, 4-bone skinning and blendshapes with Mecanim already take a huge chunk out of performance, as do all the IK and foot placement. I think I am already using way more than I should be.
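    As a side note on that budget, here is a rough sketch of the texture memory twelve 4k maps imply (my own back-of-the-envelope math, assuming BC7/DXT5-class compression at roughly 1 byte per texel and a ~4/3 overhead for the mip chain; these are estimates, not measured numbers):

```python
# Rough VRAM estimate for a character's texture set.
# Assumes block-compressed (BC7/DXT5-class) textures at ~1 byte per texel
# and a ~4/3 overhead for the full mip chain. Ballpark figures only.
def texture_set_mb(resolution, count, bytes_per_texel=1.0, mip_factor=4 / 3):
    texels = resolution * resolution
    return texels * bytes_per_texel * mip_factor * count / (1024 ** 2)

print(round(texture_set_mb(4096, 12)))  # twelve 4k maps -> ~256 MB
```

    So a single hero character can eat a quarter gigabyte of VRAM in textures alone before geometry, skinning, and shadows are counted.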

    The flatness that comes with PBR-based lighting makes character rendering hard, as the lighting wasn't designed for organic rigged meshes, but with a few tricks it gets better. The test scene is deliberately kept overcast so that I can keep a steady baseline of how things look.

    Now, I am okay with this level for now, and probably for my next project (If I don't move to Unreal Engine). But I don't think I will be happy with this level in 2023.

    Hope this helps!
     
    Last edited: Sep 16, 2021
  26. Deleted User

    Deleted User

    Guest

    I think you have not set up the material, textures, scene lighting, and shadows properly. Try tweaking the ambient lighting or use HDRIs; try the model in an environment and use point lights and area lights to see the SSS effect, because the effect does not look right under a directional light. Add some minor details like skin pores and spots to the texture and use their normal maps; proper normal maps and specular maps are really important in creating realistic human skin. Highlight areas in the specular map where you want the skin to be shiny, to represent sweat or oil, and set up the SSS profile properly, using multiple SSS maps. Also take a look at this: 11-53-14-images.jpg. Use this color scheme and blend it with your skin texture; this will instantly make your character look believable. If you have ZBrush, you can sculpt minor details, bake the result to a normal map, and polypaint the above color scheme. The process is the same for Unreal if you are creating custom characters. MetaHumans has everything set up already and is tightly integrated (maybe Unreal looks better with custom characters as well because of its out-of-the-box realistic lighting; in Unity, on the other hand, you always have to tune settings for shadows, lighting, GI, and exposure). If shaders are giving you trouble, read the documentation, get help from the forums, watch some tutorials, or get another shader from the Asset Store. If you want out-of-the-box results, take a look at CC3.
     
    soleron and cmppc like this.
  27. Deleted User

    Deleted User

    Guest

    Aren't you using any kind of SSS map? It seems that your whole model is performing SSS. Isn't it possible to have the effect stronger in thinner parts of the mesh, like the ears, nose, and fingers, and more subtle in thicker areas, via settings or by defining it manually through maps or vertex color? (It's quite simple to do in Blender with thickness settings and the material editor, and if I remember correctly, Unity has these things in Shader Graph and maybe in the profile settings.)
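    For intuition, the reason thin parts like ears glow more under backlighting is Beer-Lambert-style attenuation: transmitted light falls off exponentially with the distance travelled through the medium, which is exactly what a thickness map encodes. A minimal sketch of that idea (generic physics, not HDRP's actual shader code; the extinction value is made up):

```python
import math

# Beer-Lambert-style transmittance: the fraction of light that survives
# travelling `thickness_mm` through a medium with the given extinction.
def transmittance(thickness_mm, extinction_per_mm=0.9):
    return math.exp(-extinction_per_mm * thickness_mm)

ear = transmittance(2.0)     # thin: a noticeable amount of light gets through
cheek = transmittance(25.0)  # thick: effectively opaque
```

    A per-texel thickness map simply feeds different distances into a falloff like this, which is why a uniform thickness makes the whole model glow the same amount.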
     
  28. Deleted User

    Deleted User

    Guest

    Here is the process for creating a realistic skin texture.

    There is always a lot of manual work involved in creating characters by hand; it does not matter what engine you use. So if you want AAA quality, you have to use AAA methods to create a custom character. You can do this easily with CC3 and MetaHumans, by the way.
     
    Last edited by a moderator: Sep 16, 2021
    soleron and Ruchir like this.
  29. Ruchir

    Ruchir

    Joined:
    May 26, 2015
    Posts:
    934
    Are there any sample AAA characters available for Unity?
     
  30. Deleted User

    Deleted User

    Guest

  31. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    Yes, but oftentimes it is not worth it. The upside of using a thickness map and an SSS map (so, two extra textures) is that you get fine control over how thickness vs. scattering happens. The downside is that two extra textures are needed. Also, it is UV based, and UVs are made with texture detail and distribution in mind, not thickness. That usually means a very small portion of the thickness and SSS maps actually contributes to the effect; think of ear and nose size relative to the whole body. Unless you are prepared to have a separate map for the head, I don't think it is worth the trouble. Usually a good adjustment of thickness and the diffusion profile is enough to get the desired look. Also, with correctly working thickness you have to be extra careful with light placement and strength, since the effect reacts to them, which requires testing in many scene settings. The fact that female characters mostly have their ears covered with hair also makes it not worth it.

    Also, SSS doesn't work very well with contact shadows, creating a line of shadows that looks really bad. I reported the issue, but Unity seems to think it is a fair tradeoff to keep things performant. And I agree to some degree.

    Lastly, if my game spent most of its time looking close up at the head, then yes, I'd think the two extra maps were worth it. But so far, I think they introduce more work, more noise in the rendering, and extreme cases that restrict light placement.

    I do use it for my vampire zombie, as it is mostly naked with no hair, and the SSS on its ears shines. But for a female character with enough hair it probably isn't worth it. Not to mention that it doesn't look that great anyway.
     
    cloverme and Deleted User like this.
  32. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    I think we are discussing why AAA-quality models rendered in Unity do not render the way they do in other content-creation suites. So not exactly how to create the asset, but why, in a default HDRP environment, shaders and textures lack quality and look "downgraded".
     
    cloverme likes this.
  33. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    I get your meaning, but the issue is really about quality fidelity between the character authoring tools and Unity. I have AAA character design models with fine micro-normals for skin, etc. The problem is that hours and hours of tweaking in Unity with shadows, lights, probes, SSS, etc. only get you about 40% of the way in terms of fidelity with HDRP, even if you use custom skin, hair, and eye shaders.

    The workflow pipelines can be pretty complex, especially if you have a model that needs to express facial emotions (blendshapes or mesh morphs) and speak with synchronized audio. Some of these character design tools have exports specifically for Unity HDRP, but they still produce a less-than-ideal Unity model for close-ups in cutscenes. They're great if the camera is 10-15 meters away, but for cutscenes you need the camera close. When that happens, the character details for skin, eyes, and hair are not very good, due to how HDRP renders the models, even with high subdivision and 4k textures.

    I'm not trying to diss your advice, but I've already done all the things you suggest: normals, multilayer texturing, HDRIs, three-light studio configurations, shadow tweaking, etc. This is not to say that I haven't had an "okay" result with these things, but it's still a bit of a boondoggle to get a much higher-quality result with Unity HDRP.

    Update:
    Recently, Unity posted information on a new tool called LookDev Studio, and in it they showed a fairly decent character model in Unity: https://forum.unity.com/threads/lookdev-studio.1148474/ Unfortunately, they don't talk about the workflow for that particular model: who designed it, what software they used, what the import pipeline was, etc. I've asked if they can share more info, since it looks about on par with the Digital Human package, but I'm guessing it isn't the DH.
     
  34. Deleted User

    Deleted User

    Guest

    cloverme likes this.
  35. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    Yes, I've already mentioned this a few times, as I've seen the Digital Human asset. I know of two other experimental research projects that have used it to create examples, but it's been quite difficult to follow. Honestly, 2 people using it out of tens of thousands of Unity projects and developers doesn't make it a production-ready asset.

    - It's mostly abandonware from Unity, since there's little to no official support; it's on GitHub, which means it relies on the community to improve it, not Unity.
    - The asset only works with "4D clip import", which no character creation tool used by artists can export.
    - There's a blog here: https://blog.unity.com/technology/making-of-the-heretic-digital-human-tech-package which shows how much of a one-off it is to create just one character.
    - They had to use a custom pass that operates on the contents of the HDRP normal buffer within the SRP.
    - Some of the links to additional information on the blog now return 404 errors, several years later.

    I don't feel the Digital Human asset is a good solution from Unity for importing AAA-style character models, mostly because it doesn't support any workflow pipeline from the character design tools artists currently use. You can't use mocap tools from the industry's leading providers. Each character you want to use it for has to be custom-tailored, requiring granular per-shader and per-shader-graph tweaks per character.

    A lot of people are under the misconception that the DH package from Unity is a "tool" that will let you take, say, a character from Character Creator 3 (a popular character design tool) and somehow create a DH from an export. That's simply not the case. In its current state, I would say the DH package is largely unusable from an artist/developer perspective.

    Yes, I've also seen that forum link, and the work there is quite good. It's probably as close as most of us can get.

    Here's my own Unity HDRP example, using a combination from CC/DAZ and tweaking the Lit shader.
    Unity HDRP 10.x:


    Source from tool CC3:
     
    Last edited: Sep 17, 2021
    stonstad and Ruchir like this.
  36. Ruchir

    Ruchir

    Joined:
    May 26, 2015
    Posts:
    934
    I really feel like they should own an in-house game studio for creating AAA games (or the highest level Unity is capable of, tbh) and actually try scaling such a game down to cards like a 1060 and normal user hardware.

    Then they would understand the pain of the current Unity workflow and how far from production-ready many of the tools are for games at big scale. I mean, they only recently added a roadmap card for surface shader support (which wasn't a priority for them before), and even Shader Graph isn't a match for Amplify Shader Editor after so many years (I really don't know what they expect devs to do for things like weather or any integrated system). Documentation for most of the new pipelines is pretty scarce; their main pitch was being able to create custom render pipelines, or at least customize the existing ones, but I can be pretty sure I have no idea how to do that (at least Catlike Coding did a series on this).

    They should also allow pull requests from outside the company. I know there are quite a few talented individuals who could help with the development of quite a few packages, but Unity still hasn't figured out the licensing part, for some reason, since 2018. :(

    I really don't think they have any real short-term goals for this year, sadly (given they skipped the roadmap talk this year), and the next roadmap is surely going to be about DOTS. (It's going to take quite a few years before Unity returns to being as well documented and stable as it was when Unity 5 launched.)
     
    hopeful likes this.
  37. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    I do not think that package (Digital Human) is a tool; from what I understand, it is more like a tech demo that shows what you can do with Unity and custom shaders. Also, you need to take into consideration that this package is from 2020.
    Anyway, your example looks very good in my opinion.
     
    Last edited: Sep 17, 2021
    cloverme likes this.
  38. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    @cloverme Do you have the model and textures you show in your 4th comment? Can you please upload those files? I thought I could export my models from MetaHuman because I saw the option, but it's not possible, and I want to try it because I think we can achieve good results with Unity too.
     
  39. Deleted User

    Deleted User

    Guest

    @cloverme I wasn't saying that the DH asset is a tool; I was just saying to take a look at its material, texture, shader, and SSS setup and settings. Anyway, you can also take a look at these:
    https://forum.unity.com/threads/nodes-for-shader-graph.543519/
    (Scroll down a bit for the post by Andrey_Graphics about multiple SSS and specular maps)
    https://docs.unity3d.com/Manual/StandardShaderMaterialParameterDetail.html

    https://docs.unity3d.com/Packages/c...tion@10.2/manual/Mask-Map-and-Detail-Map.html

    https://docs.unity3d.com/Packages/c...efinition@12.0/manual/Layered-Lit-Shader.html
    Maybe use an HDRP Layered Lit material for multiple detail, SSS, and specular layers (if you don't want to use HDRP Layered Lit, use a layered shader graph or just the normal Lit one).
    Hope these help you improve your character's look a bit. And keep the SSS effect subtle, because it doesn't look great to me.
    There are also some tutorials by Unity:
    https://learn.unity.com/tutorial/challenge-configure-hdrp-materials-for-a-character-model

    https://learn.unity.com/tutorial/preparing-to-texture-and-paint-the-character-model#

    https://learn.unity.com/tutorial/enhancing-the-character-model

    https://learn.unity.com/tutorial/configuring-the-character-model-in-unity-editor#
     
    Last edited by a moderator: Sep 18, 2021
  40. Deleted User

    Deleted User

    Guest

    I think Layered Lit + detail maps is the way to go for realistic results, instead of the standard Lit shader or Shader Graph, and it looks easier as well; the Unity tutorials I mentioned above also used it. Unity also used it for photogrammetry, and the steps should be the same for skin.
     

    Attached Files:

    HIBIKI_entertainment likes this.
  41. HIBIKI_entertainment

    HIBIKI_entertainment

    Joined:
    Dec 4, 2018
    Posts:
    596

    Detail maps themselves are certainly powerful layers to use if your project requires high-frequency close-ups.
    Even glass works well with a detail pass.

    But yeah, high-frequency detail maps for skin, scans, or otherwise are a perfect use case if you need extremely high-fidelity models.

    And you're totally right, it really helps keep large-scale models and scans in check too, when they're missing just a little bit of detail or you didn't do a mid- and close-distance scan pass on your assets.

    You don't always need them, but they're really, really helpful for sure.

    And regarding Layered Lit, absolutely. You still need to create a process for making ID maps for the asset, but it's amazing for blending terrain and overlays; it's one of those unloved underdog shaders that really make the difference.

    It's a few more steps than, say, Unreal's virtual texturing blends, but powerful nevertheless.
     
    Deleted User likes this.
  42. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
  43. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Has anyone managed to check the new "FIA" character asset? It has good quality. The head might have too many tris, but it's doable for a realtime cinematic, and in-game it should be replaced with LODs.
    upload_2021-9-20_19-48-58.png
    upload_2021-9-20_19-51-35.png
     
    Last edited: Sep 20, 2021
    PaulMDev, KRGraphics and Deleted User like this.
  44. impheris

    impheris

    Joined:
    Dec 30, 2009
    Posts:
    1,747
    @Reanimate_L I was playing with it some days ago and it looks good to me. I'm very much a noob, of course, but I achieved good results just by tweaking some settings and editing the texture a little bit... but I have problems with the hair (maybe it's my laptop).
     

    Attached Files:

    • 1.jpg
      1.jpg
  45. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    @impheris Yeah, I also tweaked the hair material settings a bit, but I didn't change the texture.
     
  46. HIBIKI_entertainment

    HIBIKI_entertainment

    Joined:
    Dec 4, 2018
    Posts:
    596
    Oooo yes! We did a bunch of render A/Bs with our own stuff over the last couple of weeks.
    Just grabbed a quick screen.
     

    Attached Files:

    Deleted User and Ruchir like this.
  47. Oguzkagansahin

    Oguzkagansahin

    Joined:
    Feb 19, 2021
    Posts:
    4
    Hi, everyone! I have managed to get some shots like the ones below. These are mostly random captures I've taken from time to time to inspect later, see the progress, and build a development journal. I'm using HDRP's official StackLit, Eye, and Hair shaders with small adjustments on top. Settings are mostly at medium. All shots use only a directional light, a spotlight (for transmission), and an HDRI. I like to keep the light count minimal, because I believe this is the worst-case scenario for almost any game, especially open worlds, and I think it has to look good enough under these conditions first.

    image_027_0015_2k.png

    Closer shot of the skin:

    image_028_0015_2k.png

    This is an old one:

    eye00.jpg

    I haven't gotten much into the hair, but I got something like the shot below (an old brow model). From my tests it seems like hair will be a problem, and I'm still trying to figure things out there.

    image_029_0015_2k.png

    And I was testing ripped models for educational purposes. Here is Alcina Dimitrescu from Resident Evil Village. It uses just a base color and a normal texture with the built-in Lit materials.

    image_020_0003_2k.png
     
  48. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,725
    That's more like it.
     
    Oguzkagansahin and cloverme like this.
  49. cloverme

    cloverme

    Joined:
    Apr 6, 2018
    Posts:
    202
    This is fantastic... probably the best character model presentation in HDRP. Looking forward to seeing your dev journal!
     
    Oguzkagansahin and hopeful like this.
  50. Oguzkagansahin

    Oguzkagansahin

    Joined:
    Feb 19, 2021
    Posts:
    4
    Sorry for my late response. Thank you so much, and everyone who left a like. It motivated me a lot. About the workflow, I'll try to explain the things I found important. Hope it helps.

    First of all, I have to say that the model in the first shot is not rigged yet. Almost all of the skin shader, and the creation of the roughness texture, are based on the GDC talk below by Jorge Jimenez et al.:

    https://www.gdcvault.com/play/1018270/Next-Generation-Character >> GDC Talk
    http://www.iryoku.com/stare-into-the-future >> Slides of the same talk

    The Heretic demo and, as far as I can see, most modern digital human presentations use almost the same principles. Let's start with the texturing part, because roughness in particular will play an important role at the shader stage.

    Skin
    Base Color

    - You can tint the custom shadows for the nostrils, mouth, and ears toward a slight red tone to fake an SSS effect.
    - If your rendering software supports multi-channel AO (not possible in Shader Graph, I believe), you can do this for the AO too.
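A minimal sketch of the red-tint idea above (plain Python for illustration, not shader code; the tint value and function names are placeholders, not from the post):

```python
def lerp(a, b, t):
    """Linear interpolation, matching shader lerp(a, b, t)."""
    return a + (b - a) * t

def fake_sss_tint(base_rgb, shadow, tint=(1.0, 0.55, 0.45)):
    """Shift baked-shadow texels (nostrils, mouth, ears) toward red.

    shadow is 0 on lit texels and 1 inside the painted shadow;
    scaling G and B down while leaving R alone fakes the warm
    subsurface look. The tint value is a made-up placeholder."""
    return tuple(lerp(c, c * k, shadow) for c, k in zip(base_rgb, tint))

skin = (0.8, 0.6, 0.5)
print(fake_sss_tint(skin, shadow=0.0))  # lit texel: unchanged
print(fake_sss_tint(skin, shadow=1.0))  # shadowed texel: red-shifted
```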

    Scattering
    - I'm just using a float value to decrease the texture count.

    Bent Normal
    - Used to enhance the visuals a bit and to occlude global illumination. Generated in Marmoset. Since it's sent directly to the engine, it's rendered at 8 bit with dithering. At later stages I'm planning to pack this with the regular normal to reduce the texture count.

    Roughness
    This is where the real trick comes into play. According to the talk above, the pores in the skin occlude light, so they must not reflect any. We can create this kind of effect by masking/multiplying the roughness/smoothness values with a cavity map. The cavity texture should be binary and as flat as possible, either 0 or 1, because we do not want to alter the roughness values; the intention is only to mask them. As below, the base/raw roughness has only low-frequency details; variation and high-frequency detail come from the detail map and the cavity.
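The cavity masking described above boils down to a per-texel multiply; a tiny Python sketch (illustrative only, function name is made up):

```python
def mask_smoothness(base_smoothness, cavity):
    """Mask a smoothness map with a binary cavity map.

    Cavity texels are 0 inside a pore and 1 everywhere else, so the
    multiply kills specular response in the pores without altering
    the smoothness values anywhere else."""
    return [s * c for s, c in zip(base_smoothness, cavity)]

# a 4-texel strip where the second texel sits inside a pore
print(mask_smoothness([0.50, 0.55, 0.48, 0.52], [1, 0, 1, 1]))
# → [0.5, 0.0, 0.48, 0.52]
```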

    Screenshot 2021-11-08 212459.jpg

    Shader
    I have used the "StackLit" shader. First I applied a detail map to the normal and smoothness textures and, as advised in the GDC talk, used "Reoriented Normal Mapping" for blending the base normal and detail normal; Shader Graph has this feature in its "Normal Blend" node. As I understand it, this lets us combine the base and detail normals more correctly, without either losing strength. In the talk they used a sinusoidal noise for the detail map, which I haven't been able to implement yet. Instead I used a height sample from TextureXYZ's multi-channel face, but I think sinusoidal noise matches real-life references better, especially in mid-range shots. After the detail map step, we should lerp the smoothness value toward a constant float driven by a fresnel calculation, because we can't really see the pores at grazing angles, so the skin must reflect light there without the cavity applied. We also feed this smoothness result into the "Specular Occlusion" input after setting "Specular Occlusion Mode" to "Custom", because we want to occlude global illumination at the pores too.
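For reference, the Reoriented Normal Mapping blend mentioned above (the "Reoriented" mode of Shader Graph's Normal Blend node) can be sketched in plain Python; the formula follows Barré-Brisebois & Hill's "Blending in Detail", operating on unpacked [0,1] normal-map texels:

```python
import math

def rnm_blend(base, detail):
    """Reoriented Normal Mapping: rotate the detail normal into the
    frame of the base normal. Inputs are (r, g, b) normal-map texels
    in [0, 1]; output is a unit tangent-space normal."""
    t = (base[0] * 2 - 1, base[1] * 2 - 1, base[2] * 2)              # note: z kept in [0, 2]
    u = (detail[0] * -2 + 1, detail[1] * -2 + 1, detail[2] * 2 - 1)  # detail x/y negated
    d = sum(a * b for a, b in zip(t, u))
    r = tuple(a * d - b * t[2] for a, b in zip(t, u))
    n = math.sqrt(sum(c * c for c in r))
    return tuple(c / n for c in r)

# a flat detail texel leaves the base direction unchanged
flat = (0.5, 0.5, 1.0)
print(rnm_blend(flat, flat))  # → (0.0, 0.0, 1.0)
```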

    A little note: I applied the curvature in Substance Painter to reduce the texture count. Alternatively, you can bring both the curvature and raw smoothness textures into the engine and multiply them in the shader. Remember the part where we fade the curvature-applied smoothness out to a float value with a lerp. If you bring in both the raw smoothness and curvature textures, you should lerp to the raw smoothness instead of a float. The goal here is to remove the effect of the curvature at grazing angles; but when we apply the curvature in 3rd-party software, we no longer have the real smoothness values (without curvature) in the engine, so we use a float value like 0.45-0.55 to approximate the general skin smoothness. Let Lerp(x, y, Alpha) be the lerp function.

    Texture you bring into the engine:
    Raw smoothness + curvature >> Lerp(raw smoothness × curvature, raw smoothness, fresnel)
    Curvature-applied smoothness >> Lerp(curvature-applied smoothness, float, fresnel)

    Skin has a two-layer structure: the outer layer, which is more rough, and a thin oily layer on top of it. To replicate this, I used StackLit's "Dual Specular Lobe" option with "Dual Specular Lobe Parametrization" set to "Direct". This gives two inputs for the two layers, "Smoothness A" and "Smoothness B", and lerps between them. Smoothness A is just our standard detail-mapped, fresnel-faded smoothness texture, which is also fed into the specular occlusion, and Smoothness B is a multiplied version of it. If I understand correctly, in the talk they use a multiplier of 2, while I'm using 1.5.
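A sketch of deriving Smoothness B from Smoothness A as described (the clamp to 1 is my assumption, since a smoothness above 1 isn't meaningful):

```python
def dual_lobe_smoothness_b(smoothness_a, multiplier=1.5):
    """Second (oily-layer) specular lobe: the base smoothness scaled
    by a constant multiplier (1.5 per the post, 2 per the talk) and
    clamped to the valid [0, 1] range."""
    return min(smoothness_a * multiplier, 1.0)

print(dual_lobe_smoothness_b(0.5))  # → 0.75
print(dual_lobe_smoothness_b(0.9))  # → 1.0 (clamped)
```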

    If you're not planning to create custom shaders, then since there is no built-in StackLit material, you can use a Lit material and just go without the dual lobe and fresnel parts. The dual lobe probably won't make a big difference, but without the fresnel fade you can get disturbing specular shine patterns at grazing angles if you're using curvature.

    You can see each effect below:

    A - No Cavity - No Dual Lobe (This is the version without any of the mentioned effects above.)
    Without Cavity - No Dual Lobe.png

    B - No Cavity - With Dual Lobe
    Without  Cavity - Dual Lobe.png

    C - With Cavity - No Detail Map
    With Cavity - No Micro.png

    D - With Cavity - With Detail Map (Every effect applied)
    With Cavity - Micro.png

    Eye
    For creating the normal map, I'm using Vadim Sorici's tutorial here:
    https://marmoset.co/posts/how-to-create-realistic-hair-peach-fuzz-and-eyes/

    Baking the iris's concave structure gives a lot of interesting lighting situations, but to actually see those light effects you have to go to the object's "Mesh Renderer" and turn "Cast Shadows" to "Off", because the geometry casts shadows onto areas of the iris that should actually light up. This is shown in the GDC talk too.

    Unity's built-in Eye material is actually created with Shader Graph. I'm using it as a base and making small adjustments on top, like an eye-redness system to achieve a crying look, and a small eyelash texture for specular occlusion with a bit of parallax applied. One other important thing I understood from the talk: while a high-frequency normal on the sclera creates a drier look, a low-frequency one gives a wetter, more alive look. In the talk they blend a second low-frequency normal on top of a high-frequency one to dynamically change the eye wetness.

    Transitional Mesh
    I just used a "Transparent" Lit shader with the "Distortion" feature, to use its blur instead of writing a custom one. There are two downsides. First, when the eye mesh's main specular highlight moves behind the transitional mesh, the blur creates an annoying flickering effect; this can be solved by adjusting the blur amount based on the mesh's distance to the camera. Second, since I'm not using an alpha map, the border of the transitional mesh becomes prominent because of specular highlights. I solved this by adjusting the mesh to a more cylinder-like shape and giving it more curve to hide the border from the lights.
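The distance-based blur fade suggested for the flicker could look like this (Python sketch; the near/far thresholds and function name are made-up values for illustration):

```python
def faded_blur_amount(max_blur, distance, near=0.3, far=2.0):
    """Scale the transitional mesh's distortion blur down as the
    camera moves away, so the specular highlight passing behind it
    no longer produces visible flicker at a distance."""
    t = (distance - near) / (far - near)
    t = max(0.0, min(1.0, t))          # clamp to [0, 1]
    return max_blur * (1.0 - t)

print(faded_blur_amount(1.0, 0.3))  # → 1.0 (full blur up close)
print(faded_blur_amount(1.0, 5.0))  # → 0.0 (no blur far away)
```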

    Hair
    As I said in my other post, I haven't gotten into the hair that much, but if I remember right I found one little thing: when merging hair cards in Maya, the order you select them in affects the sorting order in the engine if you're using transparency.

    About strand-based hair: the last time I checked the product board at the link below, it was in the planned section.
    https://portal.productboard.com/uni...fects/tabs/18-high-definition-render-pipeline

    And there are a couple of shots in the HDRP documentation in the "Hair Master Stack" section:
    https://docs.unity3d.com/Packages/c...definition@13.1/manual/master-stack-hair.html

    I think it's looking really cool.

    These were the things I found important; I'm still experimenting and trying to understand it all. I hope it helps somehow.

    Edit: Added a couple of new sections and adjusted some of the old ones.
     
    Last edited: Nov 10, 2021