Assets Screen Space Displacement Mapping (No Tessellation) - Progress

Discussion in 'Works In Progress' started by winning11123, Jan 12, 2019.

  1. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    Edit: Now Available Here: http://oddityinteractive.com/SSDM/ssdm.html

    or here directly if website ever goes down: https://gum.co/PpyqT

    Hello everyone, I just wanted to share the progress I have made on my path to the concept of Unlimited Detail in games. I had been messing with point clouds, trying to find a system that would render high detail in VR, and along the way this concept came about. So I implemented it, and the results are really great. I believe someone came up with this concept about 10 years ago; I was unaware of that at the time, but it is a great idea, and I came up with my own implementation.

    I almost dismissed the idea because I thought using the screen as a mesh would cost too much, but the GPU came to the rescue. I love compute shaders so much))).

    So basically it is just a shader and camera that generate screen maps to send to a compute shader for displacement, and the result is drawn with DrawProcedural. Unity just blows my mind with its ease of use and made this such a great task to explore.
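
    For anyone curious how such a setup might look, here is a minimal sketch of the pattern described above - a compute shader displaces a screen-sized vertex buffer, which is handed back to Unity via DrawProcedural in OnRenderObject. This is not the actual product code; class, field, and buffer names like ScreenSpaceDisplacement and _Vertices are purely illustrative.

    Code (CSharp):
    using UnityEngine;

    // Illustrative sketch only - the asset's real pipeline differs in detail.
    public class ScreenSpaceDisplacement : MonoBehaviour
    {
        public ComputeShader displaceCS; // displaces one vertex per screen pixel
        public Material drawMaterial;    // forward shader used by DrawProcedural
        ComputeBuffer vertexBuffer;

        void Start()
        {
            int count = Screen.width * Screen.height;
            vertexBuffer = new ComputeBuffer(count, sizeof(float) * 4); // xyz + height
            drawMaterial.SetBuffer("_Vertices", vertexBuffer);
        }

        void Update()
        {
            // Displace the screen grid using the maps rendered by the mapping camera.
            displaceCS.SetBuffer(0, "_Vertices", vertexBuffer);
            displaceCS.Dispatch(0, Screen.width / 8, Screen.height / 8, 1);
        }

        void OnRenderObject()
        {
            // Hand the displaced geometry back to the normal rendering pipeline.
            drawMaterial.SetPass(0);
            Graphics.DrawProcedural(MeshTopology.Triangles, vertexBuffer.count);
        }

        void OnDestroy()
        {
            if (vertexBuffer != null) vertexBuffer.Release();
        }
    }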

    Anyway, I will be releasing it soon (hopefully after the weekend) and just wanted to create some awareness for anyone who might be interested. I will be launching with a price reduction of 75% off, so I hope people will see its potential.

    Take care and thanks for your interest!

    I hope you can bear with my videos; I will get better at this))



     
    Last edited: Jun 15, 2019
    Flurgle, eaque, bart_the_13th and 4 others like this.
  2. alexanderameye

    alexanderameye

    Joined:
    Nov 27, 2013
    Posts:
    1,219
    Beautiful! Good luck with publishing this asset! Do you have other assets as well?
    At what price are you going to sell it?
     
    winning11123 likes this.
  3. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    Thanks! Appreciate it! I did have some assets (Deep SSS and a few others), but they were written for 5.6 and are no longer compatible with 2018, so I pulled them. When I have time I will put them back after a revamp.

    $25 is going to be the launch price (75% off); I just want to make it accessible to as many people as possible. It will contain full source, so I think it will be interesting to see what the community can build with it.
     
    Mark_01 and alexanderameye like this.
  4. Mark_01

    Mark_01

    Joined:
    Mar 31, 2016
    Posts:
    470
    Looks great! Will this work in VR? Oculus Rift, and the soon-to-be-released Oculus Quest?
     
    winning11123 likes this.
  5. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    Hi, thanks! Yes for VR, and I assume the Quest too, but I have no access to a Quest to know if anything could prevent use with it. Unity does a good job of handling all this for us, so I think it will be fine! Any limitations I find, I will let everyone know).
     
    Mark_01 likes this.
  6. Mark_01

    Mark_01

    Joined:
    Mar 31, 2016
    Posts:
    470
    Thank you!! I am thinking that besides ground, this would be great for buildings and such too.
    Looks very good. And fast.. I am waiting to get a Quest too.. I am guessing it might sell
    far better than the Rift. I imagine Unity will support the Quest fairly quickly too (I hope).

    Am excited to see this in the store.. I am retired so will need to get it when it comes out.
    Thanks so much.
     
    winning11123 likes this.
  7. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    Cool). Yes, you are right, I think the Quest will be very cool and open up many possibilities. Unity always surprises me with its great integration and ease of use with everything; I am sure they are already on it.

    Oh yes, actually SSDM can be used on any mesh, even with animation. There are just some things to be aware of when creating content (displacement maps and the low-poly counterpart), but I will have a list of helpful tips for this.
     
    Mark_01 likes this.
  8. Elecman

    Elecman

    Joined:
    May 5, 2011
    Posts:
    1,342
    Is this similar to what Euclideon does in their point cloud engine? They were initially going for gaming but totally blew it when they wanted to do everything with their engine (NIH syndrome). It was a real shame because it is perfect for things like rendering dense grass fields. So your hybrid approach sounds a lot better. Could it be used for grass?
     
    Last edited: Jan 14, 2019
    winning11123 likes this.
  9. Bodyclock

    Bodyclock

    Joined:
    May 8, 2018
    Posts:
    137
    Very interesting. Curious to see if this could be used with something like Megasplat for terrain. The Megasplat shader does have a system for custom addons.
     
    Ascensi and winning11123 like this.
  10. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    hey guys!

    @Elecman, honestly I cannot speak for Euclideon or their point cloud renderer and search algorithm, but after all my experiments over the years this is the closest thing I can think of to how Euclideon Island was done (not the point cloud renderer). It did blow me away, and I do not know if we will ever know for sure. If they generated meshes from point clouds along with generated height maps, then, as in this approach where all pixels on screen are points, a screen mesh of w*h triangles could theoretically be a similar approach.
    I have not actually tried grass, but I am sure something could be done with some modification for a second pass, distorted in its own final DrawProcedural shader. Plenty of things to experiment with for sure; even video or dynamic textures could produce some interesting effects, I think. Alpha is clipped, so only non-alpha position data is carried across.

    @Bodyclock, it just comes down to the shader being able to export position, color + lighting, normals, and heightmap info to the MRTs, and the low-poly mesh or terrain being on the DisplacementMappingLayer. The rest is handled in the compute shader automatically from there. So if you could mod MegaSplat in such a way, I do not think it would be a problem.
     
  11. Bodyclock

    Bodyclock

    Joined:
    May 8, 2018
    Posts:
    137
    Some thoughts over on the Megasplat thread where I mentioned your product. I think you mention the problem of the thin sections in one of the videos. Looks like it could be integrated with Megasplat reasonably easily. It might be worth starting a conversation with Jason.
     
    winning11123 likes this.
  12. Mark_01

    Mark_01

    Joined:
    Mar 31, 2016
    Posts:
    470
    The thing is, though, with tight integration... there is a problem with MegaSplat and Gaia being in the same project.
    If MegaSplat is in, it seems that the Gaia window can't start.

    Whatever the cause, if this happens with this asset, I will be asking for my money back. I do not use any of that dev's products, but I do use Gaia, so I would not want to see it stop working or have other problems.

    I am referring to this post https://forum.unity.com/threads/gai...d-scene-creation.327342/page-197#post-4072513
     
    winning11123 likes this.
  13. Bodyclock

    Bodyclock

    Joined:
    May 8, 2018
    Posts:
    137
    That post refers to MicroSplat, which is a completely different product. And this would not be a tight integration, merely a customisation of the MegaSplat shader. And this is all speculative at this point, merely an exploration of possibilities. You may be jumping the gun ;)
     
  14. Mark_01

    Mark_01

    Joined:
    Mar 31, 2016
    Posts:
    470
    Maybe so, but it might be the shader that is causing it somehow; I do not know for sure. Just saying that I use
    all of Adam's products, as do many people.
     
  15. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Bodyclock, thanks for asking. As he said, there would be no problem making a modification for his product. As all source is included, it should be very simple for him to make a version that could use this, if he sees a benefit to it. Which is great! However, this is forward based, so it does not go to the g-buffer but to MRTs; I do not think it would be a problem to work with the g-buffer, though (I will have to test).

    An artifact with polygons at >90 degrees happens even with normal maps, and they are still used. Beveling edges can reduce this artifact, and pixel clamping can prevent bleeding. I am not sure how others implemented this in the past, because I have only seen two videos of it, which I only found a few days ago)). But I think this technique could be useful for a vast number of things, especially telepresence - Kinect depth maps and textures. So the hard thing for me is content and finding a good pipeline for it; from the results I am seeing, though, I think it is worth figuring out)). Of course, anything that even resembles an issue or artifact I find, I will be making sure everyone knows about it!

    Edit: oh yes, what he said about SSR could be a potential issue, but in VR multiple cameras would mitigate most cases. The comparison isn't exact, as it is just a matter of needing to show a bit further around the mesh - viewers don't need to see what is offscreen - rather similar to the >90 degree thing I talked about. I am sure there will be some cases it is not suitable for; nothing is perfect).
     
    Last edited: Jan 14, 2019
  16. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,337
    I’ve used MegaSplat/MicroSplat and Gaia in the same project before- not sure what your particular issue was, but there shouldn’t be any issue between them.
     
  17. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    1,793
    The video mentions that the shader is using vertex/fragment shader. Will it work for deferred rendering path?
     
    winning11123 likes this.
  18. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @castor76, I believe it will; however, at the moment the custom shaders used for displacement are forward based - they output to MRTs, are displaced in a compute shader, and are drawn in Unity with DrawProcedural. I am sure it would be alright for the displacement camera to use forward while the rest of the scene uses deferred rendering. I will take note of this and see if they can be combined. Thanks!
     
    Mark_01 likes this.
  19. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    1,793
    Hmm.. interesting. So you use the second camera just to draw the displacement using the compute shader. How does the rendering pipeline work with the result afterwards? Or are you saying any model with displacement is rendered in the second camera, and the rest of the non-displacement objects are rendered in the other camera?
     
    winning11123 likes this.
  20. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @castor76, yes, a second camera and layer are used for the meshes that have displacement applied. The final results are put into dynamic buffers and returned to the Unity pipeline via DrawProcedural (OnRenderObject) to be drawn in the main camera. So the shader I am using at that stage is just a forward shader too, but at that point I could also have the option of a deferred version, I am assuming. So it might be alright.
     
    Mark_01 likes this.
  21. Mark_01

    Mark_01

    Joined:
    Mar 31, 2016
    Posts:
    470
    This was not my issue, so I can't speak for the user who posted it. I had assumed the user who posted that comment had contacted you. If not, I was just wanting to be sure all would be okay.
     
  22. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    A little over my deadline, but hopefully by tomorrow it should be out, so here is another video of the current state. Really excited to play with this myself). Specs: Win10, 1920x1080, GTX 970

     
  23. Elecman

    Elecman

    Joined:
    May 5, 2011
    Posts:
    1,342
    Amazing work!

    Do you think that by combining this tech with terrain height map texture streaming (via Granite or Amplify Texture), it would be possible to achieve insane terrain view distances?
     
    winning11123 likes this.
  24. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Elecman, thanks! For sure; all you need is your mesh and a shader that has the look you want (or use the included one), as long as that shader can draw to the 4 MRTs on the mapping layer of the mapping camera - the rest is taken care of. So your meshes are not drawn by the main camera directly.

    Meshes you do not want displaced can just behave like normal, though I am not sure how one would mix shadows on different layers. Non-displaced meshes can be drawn on the mesh layer too; just set the shader displacement to 0.0. I will make sure I identify what needs to happen when modding your own shaders.
     
    Elecman likes this.
  25. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    If anyone is interested in a playable demo you can grab it here...

    http://oddityinteractive.com/unity/oi/demo/SSDM_Demo.zip

    It is a very basic demo - the hallway is all instancing and the balls are instanced, but the main square with the dragons is not. I could not currently get point lights to work well with instancing, so until I figure that out, only directional lights for instances. The standard meshes receive directional and up to 4 point lights. There is no AA or post effects in this demo.

    I will be uploading package and project tonight so will post when available.

    The first release will not have VR, as that will have to wait for a later release due to some issues I will have to work around; I will discuss them here and in video as I progress (of course, there will be free updates over the life of the product).

    Hope you find it interesting.
     
    Last edited: Jan 18, 2019
    Mark_01 and Bodyclock like this.
  26. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    Just launched, so if you want to try this, I added a link to the first post!

    Thanks for the interest!
     
  27. ChezDoodles

    ChezDoodles

    Joined:
    Sep 13, 2007
    Posts:
    94
    Will it soon arrive at the asset store?
     
  28. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @ChezDoodles, no, I am self-publishing, as this is something I want to learn about. Same service and free upgrades as they happen. Thanks for the interest!
     
  29. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    Just did some tests to see how deep I could go with a single quad and height map, and it seems about 1m. This is an extreme test, as the edges are pure 90 degree angles, so specularity would be incorrect there, but the results still took me by surprise, and are good at around a 10m viewing range at 1920x1080. I think this weekend I might have a shot at building some sci-fi assets to include as a free addon, and try my hand at a much better demo that is more than just functional. Nice to feel inspired to model again, as it has been a while, and I am sure there is more to discover with this method. I am intrigued anyway)).

    Edit: oh and thanks for the first few sales!

     
    Last edited: Jan 19, 2019
  30. gurayg

    gurayg

    Joined:
    Nov 28, 2013
    Posts:
    232
    Hello,
    Congratulations on the release :)
    I had quite a few questions:) I hope you can answer.

    -Do your shaders support Precomputed GI (Enlighten)?
    -Do they have self-shadowing? Can you add Directional lightmap rotation to your demo?
    -Can we use Deferred with Linear color space with your shader?
    -Do they support transparency?
    -Do you have plans to add material layering (two sets of materials painted on a surface)?
    -Any tests made with Post Processing Stack V2?
    -Which platforms are supported?

    Thanks
     
    winning11123 likes this.
  31. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @gurayg, Hi and thanks very much!

    - I will have to look into the first one, but currently I do not believe it will utilize that feature. I do not see it being a problem to add to the shader, but lightmaps and such have not been included in this first release. I will look into adding this, but the displacement mesh happens at runtime and will not contribute to the map.
    - Yes, shadow receiving (can be turned off) and casting (but not from the displaced mesh). Only directional light shadows are received. Yes, I will add rotation of the lighting to the demo and upload it tomorrow; I had not thought about that).
    - Yes, you can set the main camera to deferred; only the displacement camera needs forward base. And yes to linear color (it seemed fine when selected and used from the player settings).
    - Yes, material layering is a shader I want to do next, because I want to blend dynamic textures with the original height maps, such as water ripple effects and FFT ocean type things etc...
    - I did a test with post processing from the Asset Store but not V2 yet; however, I believe it will give the same results. Because the final mesh is generated with DrawProcedural, only screen space effects will work with the mesh, so FXAA, bloom, color correction, depth blur etc... will be OK. I found SSAO did not work, though; I read that is because DrawProcedural meshes do not contribute to the scene depth buffer.
    - Right now the only limit is Shader Model 5.0 (compute shader) support and systems supporting floating point RenderTextures (ARGBFloat) - so I guess mobile would be out with that texture format.
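
    Both of those requirements are easy to probe at runtime. A small sketch (hypothetical class name, not part of the asset) of such a capability check:

    Code (CSharp):
    using UnityEngine;

    // Checks the two stated requirements: compute shaders (SM 5.0)
    // and floating point render textures (ARGBFloat).
    public class SSDMSupportCheck : MonoBehaviour
    {
        void Awake()
        {
            bool supported = SystemInfo.supportsComputeShaders &&
                             SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.ARGBFloat);
            if (!supported)
                Debug.LogWarning("SSDM needs compute shader and ARGBFloat RenderTexture support.");
        }
    }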

    Hope that helped answer most of your questions and thanks for the interest!
     
    eaque and gurayg like this.
  32. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @gurayg, "Can you add Directional lightmap rotation to your demo?" all done and uploaded. Thanks for the suggestion and hoping to have a better demo soon).
     
    eaque likes this.
  33. Ascensi

    Ascensi

    Joined:
    Sep 7, 2013
    Posts:
    579
    @winning now do you suppose your method could be an ideal solution if it worked with vector displacement mapping? I'm thinking this could be how vector displacement finally becomes the successor of height map displacement and evolves the gaming industry.. ideally convert photo-scanned assets to vector displacement and then use a shader like yours, hopefully without increasing the rendering cost.
    I'm also very interested in seeing this work with MegaSplat if it would increase performance.

    Edit: I think this method might be ideal for oceans or rivers as well, if the textures can be scrolled.
    I don't know if it's possible or ideal, but maybe a transition of vector images/small vector video clips (maybe mp4 format) could create waves or the earth opening up, mudslides, avalanches etc. Ah, just dreaming about the possibilities.
     
    Last edited: Jan 23, 2019
    winning11123, Bodyclock and R0man like this.
  34. R0man

    R0man

    Joined:
    Jul 10, 2011
    Posts:
    77
    Damn. This thread might just be the greatest thread in real-time graphics for some time. Are the height-maps converted to position buffers implicitly by your method?
     
    winning11123 likes this.
  35. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Ascensi, I did a test with vector displacement, but with this method the problem is that you only have screen space pixels as your real estate. Performing this at a lower level, per mesh, might have interesting results, but in this method it is pixel-to-pixel for everything rendered on screen, so it works best with some base geometry and displacing that. I think it would be a very cool area to research further. As for other products working with this: as long as it is possible to draw the mesh on its own layer (the displacement layer camera) and the shader outputs the needed data to the 4 render targets, all would be good. I was toying with the idea of doing my own custom megatexture utilizing buffers and memory mapped files (with Unity terrain); however, finding an easy way to describe the usage might not be so easy. It can be done, of course, and I will probably add it in an experimental version down the track.

    @R0man, glad you are interested like I am). Basically, the low-poly meshes used for displacement are on a separate layer with their own camera (forward base). It outputs the mesh's pixel position, color + lighting, normal, and height (plus other data if needed) to 4 RTs. These MRTs are sent to a compute shader, where a full screen's worth of triangles (two triangles per 4 pixels) are pushed to the original low-poly pixel positions in world space and then displaced along their normals. This full screen mesh is sent to DrawProcedural, and from there it is back in the normal rendering pipeline. From my instancing tests, this leaves around 2 million triangles for low-poly meshes on a GTX 970. It has really gotten my creative juices flowing, as I can see a lot of potential and interesting effects to create as I have time, and I hope others will benefit from this too.
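
    To make that displacement step concrete, here is a hedged sketch of what such a compute kernel could look like. It is illustrative only - the texture and buffer names are invented, and the real asset builds two triangles per 4 pixels rather than this bare per-pixel vertex write.

    Code (CSharp):
    #pragma kernel Displace

    // MRTs written by the mapping camera (names are illustrative).
    Texture2D<float4> _PositionMap; // world position of the low-poly surface
    Texture2D<float4> _NormalMap;   // world-space normal
    Texture2D<float4> _HeightMap;   // height in .r
    RWStructuredBuffer<float4> _Vertices;
    float _HeightScale;
    uint _Width;

    [numthreads(8, 8, 1)]
    void Displace(uint3 id : SV_DispatchThreadID)
    {
        float3 pos    = _PositionMap[id.xy].xyz;
        float3 normal = _NormalMap[id.xy].xyz;
        float  h      = _HeightMap[id.xy].r;

        // Push the screen-grid vertex to the low-poly pixel's world position,
        // then displace it along the surface normal by the sampled height.
        float3 displaced = pos + normal * h * _HeightScale;
        _Vertices[id.y * _Width + id.x] = float4(displaced, 1.0);
    }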
     
    Last edited: Jan 23, 2019
    Mark_01 likes this.
  36. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,337
    @winning11123

    Some thoughts:

    I'm surprised you're not doing this with the GBuffer data instead of a custom forward pass. You can write a height value into the extra channel, reconstruct your world position from the depth buffer, and insert your post processing after the GBuffer is resolved, allowing you to work on the data before lighting happens. This also means no extra camera is needed. You could then write the results back into the GBuffer so they can be lit afterwards, which would allow the lighting to affect the displacement. If my understanding of your description is correct, right now lighting happens on the low resolution mesh, correct?

    Another thought is to do your displacement entirely in the pixel shader rather than constructing triangles at all. At 2 triangles per 4 pixels, you are deep into microtriangle territory, which cuts fill rate considerably due to the way GPUs process pixel blocks. If you are creating perfectly aligned 2x2 pixel quads, the GPU will shade 4 pixels for each pixel on the screen and throw the others away, because you are always on an edge. So this has a huge fill rate hit (which still might be faster than turning on tessellation in some cases). In the end, the technique is moving existing pixels around on the screen to appear displaced, so it's really just a distortion effect, right? You could also tap through the height field in this pass to add directional shadows on the small details.
     
  37. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @jbooth, well, I may be wrong, but from my understanding the g-buffer would draw the original low-poly geometry, which is not desired for final composition with itself, other objects, or non-displaced objects; just the position data is wanted, plus a way to gather the rest of the data. Yes, the lighting is done in the same pass with the normal maps, and it does not really benefit from being done elsewhere, as it is just lighting the normals, and as displaced it looks correct.

    Although it sounds like microtriangle territory, the triangles become world space as the vertices are moved to the pixel world positions, so there is actually quite a large space between them when looked at from that perspective. The layout of the triangles is that of a screen "structure," but they are not actually bound to screen pixels, as each is aligned with the render target buffers to extract its true 3D world position - like a point cloud, except with skin. It is also a constant cost, set by the resolution.

    It is displacement mapping, so if that is classed as distortion, then yes) - if there is a vertex for every pixel on the screen in 3D world space and that is moved, you could call it a distortion, but not like a post-process distortion effect. But yes, screen space shadows for self-shadowing would be a cool option for sure!
     
  38. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,337
    Having not looked at your technique I'm spitballing a bit, but bear with me if you're up for it. The unused channel in the gbuffer is cleared to 0, so no displacement would happen unless objects wrote into that channel. Since the gbuffer draws before forward objects, you'd have access to positions (from depth), albedo, specular terms, normals, and height in the extra channel from all opaque objects. You'd then run your post process (before forward rendered objects and lighting are applied) and distort everything to the expected positions, avoiding the separate rendering camera/pass. You're right though - if you're authoring normals correctly, then they would still be correct for the displaced geometry regardless of the technique used.

    Yes, so to some degree the microtriangles may be distorted to be larger or smaller. I'd imagine making them larger just reduces the quality of the effect quickly, too.

    I was thinking you could tap through the height field, essentially a simpler version of POM, once to figure out where to grab the current pixel from (instead of the triangles), and once for the shadow. Depending on where the bottleneck is, this may or may not be faster than the triangle grid - but the same code would give you shadows, and other effects are possible, like AO.. interesting possibilities if the various artifacts are acceptable.
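
    For reference, the "tap through the height field" idea is essentially a simplified parallax occlusion march. A hedged HLSL sketch of that alternative (illustrative names and a fixed step count; not code from the asset):

    Code (CSharp):
    // Marches a tangent-space view ray through a height map and returns the UV
    // to fetch the displaced pixel from. Simplified POM-style sketch.
    float2 HeightFieldTap(float2 uv, float3 viewDirTS, sampler2D heightMap, float scale)
    {
        const int steps = 16;
        float layerDepth = 1.0 / steps;
        float2 deltaUV = viewDirTS.xy * scale / (viewDirTS.z * steps);

        float depth = 0.0;
        float h = tex2Dlod(heightMap, float4(uv, 0, 0)).r;
        // Step along the ray until it passes below the height field.
        for (int i = 0; i < steps && depth < h; i++)
        {
            uv -= deltaUV;
            depth += layerDepth;
            h = tex2Dlod(heightMap, float4(uv, 0, 0)).r;
        }
        return uv;
    }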
     
    winning11123 likes this.
  39. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    That is cool; I am always interested in new approaches and improvements, and it is always great to get another perspective. What you describe is an option that may work, but there may be some things that could interfere. The original colors and depth would be drawn to the g-buffer (if someone was using a single camera for the whole scene), which may work for pixels moving outward from the object, but not inward, once the depth is compared with the original scene - the insertion of the final DrawProcedural mesh has to be done after the initial pass, which is another reason the extra camera is needed. This is because there is no way to exclude a DrawProcedural mesh from any camera (or control where it is drawn) except by calling it after the previous camera has finished drawing, from what I can gather. That, I think, is the primary reason now that I think about it more, aside from depth overdraw. Another factor is that I have no way to control whether Unity could change anything in the future - all g-buffer slots could end up filled, adding complexity to the whole setup. So I agree it seems like the obvious choice, and there is nothing to stop a user modding it to work that way, but for simplicity I think the current method is best, and maybe the only way without going lower level.

    Well, it is all perspective; visually it will always look like pixels right next to each other, but yes, a smaller resolution or lack of detail in the textures (compression) will absolutely reduce quality.

    Yes, very cool ideas for sure, and I am very interested to explore this.
     
  40. Ascensi

    Ascensi

    Joined:
    Sep 7, 2013
    Posts:
    579
    So guys, @jbooth @winning11123, at this time do you foresee MegaSplat compatibility, and if yes, how long might it take to implement? Or would users simply need to integrate with MegaSplat's custom shader option? I'm not a coder, so I dread the thought of not being able to integrate.

    Almost forgot.. would raycast rendering work with this?
     
    Last edited: Jan 23, 2019
  41. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Ascensi, for my part, below are the critical parts of any shader that could use this. I removed all lighting and editable params to save any confusion and left only what would be required. (You could leave out the pixel clamp pass if desired, as terrains in most cases would not create an outline to pin, but it is good for standard mesh-type objects.) Hope it helps the process, if this can add value! Also, I understand there are world and local with the same data, and also unused texture params; at the moment this is by design.

    Edit: I think the only tripping point could be that the heightmap displacements would need to have the same strength with a splatting method. So I think it would just be a matter of finding a balance within the textures themselves, or creating a mask for each heightmap to give individual displacement heights. Just a thought that came to me.

    Code (CSharp):
    //**********************************************************
    //copyright 2019
    //Author: David Gallagher
    //Publisher: Oddity Interactive
    //http://oddityinteractive.com
    //v.1.0.0.1
    //
    //THIS SHADER IS FOR VIEWING, USAGE, MODIFICATION AND PUBLIC DISPLAY BY AUTHORISED LICENSEE OF THIS PRODUCT HOWEVER THEY DESIRE
    //
    //Thanks for the support and being awesome!
    //
    //**********************************************************
    Shader "DX11/UBER_SSDM"
    {
        //******************************************************
        //PROPERTIES
        //******************************************************
        Properties
        {
            //--------------------------------------------------

            _HeightScale("HeightScale", Float) = 0.5
            _HeightAdj("HeightAdj", Float) = 0.0
            //--------------------------------------------------
        }
        //******************************************************
        //SUB SHADER
        //******************************************************
        SubShader
        {
            //--------------------------------------------------
            //Lighting Off

            Tags{ "RenderType" = "Opaque" }
            Tags {"LightMode" = "ForwardBase"}
            LOD 100
            //--------------------------------------------------
            //**************************************************
            //PASS
            //**************************************************
            Pass
            {
                Cull Back
                //**********************************************
                //CGPROGRAM
                //**********************************************
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                // make fog work
                #pragma multi_compile_fwdbase nolightmap nodirlightmap nodynlightmap //novertexlight
                #pragma multi_compile_fog
                #pragma target 5.0
                #include "UnityCG.cginc"

                #include "AutoLight.cginc"
                #include "Lighting.cginc"
                #include "UnityLightingCommon.cginc"
                //**********************************************
                //IN
                //**********************************************
                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                    float3 normal : NORMAL;
                    float4 tangent : TANGENT;
                };
                //**********************************************
                //OUT
                //**********************************************
                struct v2f
                {
                    float4 uv : TEXCOORD0;
                    float4 pos : SV_POSITION;
                    float4 localPos : TEXCOORD2;

                    half3 tspace0 : TEXCOORD3; // tangent.x, bitangent.x, normal.x
                    half3 tspace1 : TEXCOORD4; // tangent.y, bitangent.y, normal.y
                    half3 tspace2 : TEXCOORD5; // tangent.z, bitangent.z, normal.z
                    float3 worldPos : TEXCOORD6;
                    float3 Normal : NORMAL;
                };
                //**********************************************
                //DECLARATIONS
                //**********************************************

                float _HeightScale = 1.0f;
                float _HeightAdj = 0.0f;

                //**********************************************************
                //PIXEL SHADER - FUNCTIONS
                //**********************************************************
                //**********************************************
                //MRT - DATA
                //**********************************************
                struct FragmentOutput
                {
                    //------------------------------------------
                    float4 dest0 : SV_Target0;
                    float4 dest1 : SV_Target1;
                    float4 dest2 : SV_Target2;
                    float4 dest3 : SV_Target3;
                    //------------------------------------------
                };
                //**********************************************
    107.             //VERT
    108.             //**********************************************
    109.             v2f vert(appdata v)//, uint instanceID : SV_InstanceID)
    110.             {
    111.                 //------------------------------------------
    112.                 v2f o;
    113.                 //------------------------------------------
    114.                 //uv
    115.                 o.uv.xy = TRANSFORM_TEX(v.uv, _MainTex);
    116.                 o.uv.zw = TRANSFORM_TEX(v.uv, _Normal);
    117.                 //------------------------------------------
    118.                 float4 Pw = UnityObjectToClipPos(v.vertex);
    119.                 o.pos = Pw;
    120.                 o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    121.                 o.Normal = float4((0.5f * normalize(UnityObjectToWorldNormal(v.normal)) + 0.5f), 1.0f);
    122.                 o.localPos = mul(unity_ObjectToWorld, v.vertex);
    123.                 //------------------------------------------
    124.                 half3 wNormal = UnityObjectToWorldNormal(v.normal);
    125.                 half3 wTangent = UnityObjectToWorldDir(v.tangent.xyz);
    126.                 //------------------------------------------
    127.                 // compute bitangent from cross product of normal and tangent
    128.                 half tangentSign = v.tangent.w * unity_WorldTransformParams.w;
    129.                 half3 wBitangent = cross(wNormal, wTangent) * tangentSign;
    130.                 //------------------------------------------
    131.                 // output the tangent space matrix
    132.                 o.tspace0 = half3(wTangent.x, wBitangent.x, wNormal.x);
    133.                 o.tspace1 = half3(wTangent.y, wBitangent.y, wNormal.y);
    134.                 o.tspace2 = half3(wTangent.z, wBitangent.z, wNormal.z);
    135.                 //------------------------------------------
    136.                 return o;
    137.                 //------------------------------------------
    138.             }
    139.             //**********************************************
    140.             //MRT - OUT
    141.             //**********************************************
    142.             FragmentOutput frag(v2f i) : SV_Target
    143.             {
    144.                 //------------------------------------------
    145.                 FragmentOutput o;
    146.                 //------------------------------------------
    147.                 float4 pos = float4(i.localPos.x, i.localPos.y, i.localPos.z, _HeightAdj);//<--------------------------------------------------(w: param) Height Adust Value (usually half or less than half of variable _HeightScale)
    148.                 float4 color = float4(0.0f,0.0f,0.0f,0.0f);
    149.                 float4 normal = float4(0.0f,0.0f,0.0f,0.0f);
    150.                 float4 HeightData = float4(0.0f,0.0f,0.0f,0.0f);
    151.                 //--------------------------------------
    152.                 color = tex2D(_MainTex, i.uv.xy);
    153.                 UNITY_APPLY_FOG(i.fogCoord, color);
    154.                 [branch]
    155.                 if (color.a < 1.0f)
    156.                 {
    157.                     //----------------------------------
    158.                     clip(-1);
    159.                     //----------------------------------
    160.                 }
    161.                 else
    162.                 {
    163.                     //----------------------------------
    164.                     // sample the normal map, and decode from the Unity encoding
    165.                     float3 tnormal = UnpackNormal(tex2D(_Normal, i.uv.zw));
    166.                     // transform normal from tangent to world space
    167.                     float3 worldNormal;
    168.                     worldNormal.x = dot(i.tspace0, tnormal);
    169.                     worldNormal.y = dot(i.tspace1, tnormal);
    170.                     worldNormal.z = dot(i.tspace2, tnormal);
    171.                     normal = float4(normalize(worldNormal.xyz)*0.5f+0.5f,1.0f);
    172.  
    173.                     HeightData.x = tex2D(_Height, i.uv.zw).x * _HeightScale;//<--------------------------------------------------(_HeightScale - actualy height of height map displacement)
    174.                     HeightData.y = 0.0f;
    175.                     HeightData.z = 0.0f;
    176.                     HeightData.w = 0.0f;//pixel clamping - seperate pass at w
    177.                     //--------------------------------------
    178.                 }
    179.                 //------------------------------------------
    180.                 o.dest0 = pos;
    181.                 o.dest1 = color;
    182.                 o.dest2 = float4(i.Normal, 1.0f);
    183.                 o.dest3 = HeightData;
    184.                 //------------------------------------------
    185.                 return o;
    186.                 //------------------------------------------
    187.             }
    188.             ENDCG
    189.         }
    190.         //**************************************************
    191.         //PASS
    192.         //**************************************************
    193.         // shadow casting support
    194.         UsePass "Legacy Shaders/VertexLit/SHADOWCASTER"
    195.         //**************************************************
    196.         //PASS - PIXEL CLAMP
    197.         //**************************************************
    198.         Pass
    199.         {
    200.             Cull Front
    201.             //**********************************************
    202.             //CGPROGRAM
    203.             //**********************************************
    204.             CGPROGRAM
    205.             #pragma vertex vert
    206.             #pragma fragment frag
    207.             #pragma multi_compile_fwdbase noshadow nolightmap nodirlightmap nodynlightmap //novertexlight
    208.             #pragma multi_compile_fog
    209.             #pragma target 5.0
    210.             #include "UnityCG.cginc"
    211.  
    212.             #include "AutoLight.cginc"          
    213.             #include "Lighting.cginc"
    214.             #include "UnityLightingCommon.cginc"
    215.             //**********************************************
    216.             //IN
    217.             //**********************************************
    218.             struct appdata
    219.             {
    220.                 float4 vertex : POSITION;
    221.                 float2 uv : TEXCOORD0;
    222.                 float3 normal : NORMAL;
    223.                 float4 tangent : TANGENT;
    224.             };
    225.             //**********************************************
    226.             //OUT
    227.             //**********************************************
    228.             struct v2f
    229.             {
    230.                 float4 uv : TEXCOORD0;
    231.                 float4 pos : SV_POSITION;
    232.             };
    233.             //**********************************************
    234.             //DECLARATIONS
    235.             //**********************************************
    236.             sampler2D _MainTex;
    237.             float4 _MainTex_ST;
    238.  
    239.  
    240.             float4 _OutlineColor;
    241.             float _OutlineWidth;
    242.             //**********************************************
    243.             //MRT - DATA
    244.             //**********************************************
    245.             struct FragmentOutput
    246.             {
    247.                 //------------------------------------------
    248.                 float4 dest0 : SV_Target0;
    249.                 float4 dest1 : SV_Target1;
    250.                 float4 dest2 : SV_Target2;
    251.                 float4 dest3 : SV_Target3;
    252.                 //------------------------------------------
    253.             };
    254.             //**********************************************
    255.             //VERT
    256.             //**********************************************
    257.             v2f vert(appdata v)
    258.             {
    259.                 //------------------------------------------
    260.                 v2f o;
    261.                 o.uv.xy = TRANSFORM_TEX(v.uv, _MainTex);
    262.                 o.uv.zw = 1.0f;
    263.  
    264.                 float4 clipPosition = UnityObjectToClipPos(v.vertex);
    265.                 float3 clipNormal = mul((float3x3) UNITY_MATRIX_VP, mul((float3x3) UNITY_MATRIX_M, v.normal));
    266.                 float2 offset = normalize(clipNormal.xy) / _ScreenParams.xy * _OutlineWidth * clipPosition.w * 2;
    267.                 clipPosition.xy += offset;
    268.  
    269.                 o.pos = clipPosition;
    270.  
    271.                 return o;
    272.                 //------------------------------------------
    273.             }
    274.             //**********************************************
    275.             //MRT - OUT
    276.             //**********************************************
    277.             FragmentOutput frag(v2f i) : SV_Target
    278.             {
    279.                 //------------------------------------------
    280.                 FragmentOutput o;
    281.                 //------------------------------------------
    282.                 float4 color = float4(0.0f,0.0f,0.0f,0.0f);
    283.                 //------------------------------------------
    284.                 color = tex2D(_MainTex, i.uv.xy);
    285.                 [branch]
    286.                 if (color.a < 1.0f)
    287.                 {
    288.                     //--------------------------------------
    289.                     clip(-1);
    290.                     //--------------------------------------
    291.                 }
    292.                 //------------------------------------------
    293.                 o.dest3.w = _OutlineColor.x;
    294.                 //------------------------------------------
    295.                 return o;
    296.                 //------------------------------------------
    297.             }
    298.             ENDCG
    299.         }
    300.     }
    301. }
    302.  
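For anyone curious how the four render targets fit together: dest0 carries the position (with _HeightAdj in w), dest2 the 0.5*n+0.5-encoded normal, and dest3.x the pre-scaled height, so a later pass can push each screen pixel out along its normal. Here is a rough CPU-side sketch of that displacement in Python/NumPy; the function name and the exact way the adjust value is applied are my assumptions, not the actual compute shader:

```python
import numpy as np

def displace_pixels(pos, normal, height, height_adj):
    """Displace per-pixel positions along their normals.

    pos:        (N, 3) positions from the dest0 target
    normal:     (N, 3) normals as encoded in dest2 (0.5*n + 0.5)
    height:     (N,)   pre-scaled heights from dest3.x
    height_adj: scalar _HeightAdj stored in dest0.w (assumed a simple offset)
    """
    n = normal * 2.0 - 1.0                          # undo the 0.5*n+0.5 encoding
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # renormalize after decode
    return pos + n * (height - height_adj)[:, None]

# one pixel: flat-up normal encoded as (0.5, 1.0, 0.5), height 0.3, adjust 0.1
p = displace_pixels(np.array([[0.0, 0.0, 0.0]]),
                    np.array([[0.5, 1.0, 0.5]]),
                    np.array([0.3]), 0.1)
# the point moves 0.2 units along +Y
```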
     
    Last edited: Jan 23, 2019
    Bodyclock and Ascensi like this.
  42. Ascensi

    Ascensi

    Joined:
    Sep 7, 2013
    Posts:
    579
    @winning11123 I tried to PM you about this in more detail. After I shared your asset page and its prospects for Height & Vector Displacement, Christoph Schindelar reached out to me and would like to offer his textures for free for marketing and testing. He probably has the most advanced 3D photo-scanned textures on the market: http://www.rd-textures.com
     
    eaque, mfleurent and winning11123 like this.
  43. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Ascensi, wow, that is so cool! I would love to know more, and it would be fantastic to have better resources to experiment with for sure! Thanks so much for thinking of me; it is taking some time to put the new demo and shader together from scratch. If you like, you can reach me directly at davidgallagher(at)oddityinteractive(dot)com.
     
    eaque likes this.
  44. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Ascensi, just wanted to thank you again! Wow, I am so impressed by the quality of his work and his generosity. If you want a free SSDM license, just let me know! I would love to post a pic, but I will wait until it is in a great scene. If anyone needs help using RD-Textures, let me know; I already have a compatible pipeline and will provide more info as I use them further. Really a game changer! Thanks again!
     
    eaque, Ascensi and mfleurent like this.
  45. Ascensi

    Ascensi

    Joined:
    Sep 7, 2013
    Posts:
    579
    The only thing I could request from you at this point is an easy MegaSplat integration, as I'm not a coder ;) I already bought your asset on the 23rd, seeing its potential and wanting to support your work here.
     
    eaque and winning11123 like this.
  46. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Ascensi, thanks for the support! Sent you an email before I read this :p Hope to make this happen in the future!
     
    eaque likes this.
  47. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    Hi again, just a video of some new features that will be in the next release. The 75%-off sale will run until next Friday, so I hope it gives you an opportunity as its potential grows. Also, I just can't wait to show it off with RD-Textures, so stay tuned for that! Thanks for the interest I have received; it really has blown me away).

     
    eaque, Ascensi and Bodyclock like this.
  48. Ascensi

    Ascensi

    Joined:
    Sep 7, 2013
    Posts:
    579
    @winning11123 do you think you might be able to tap screen-space point clouds as well?
    Isn't a point cloud more or less just pixels in 3D space? If it could be streamed in somehow, that would be awesome! Or is that what you did here?

    I have to add that I was thinking about a way to convert Voxeland mesh chunk data to a point cloud, and I saw that the same YouTube channel also has an example converting triangle meshes to point clouds. But then I started to wonder how meshes painted with triplanar mapping could be extracted for the point cloud data; maybe all the vertex painting would have to be done in UV mode. Lastly, would it even be possible to use SSDM with point clouds, or is it one and the same? Thank you, you have a lot of incredible work and it makes me think of more possibilities; my mind is exploding now..
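At its simplest, drawing a point cloud really is just projecting 3D points onto screen pixels with a depth test, which is why the "pixels in 3D space" intuition holds. A minimal Python/NumPy sketch of that idea (pinhole camera, nearest-point-wins; all names are hypothetical, not anything from SSDM):

```python
import numpy as np

def project_points(points, colors, fov_y_deg, width, height):
    """Splat camera-space points into a WxH image, nearest point wins."""
    img = np.zeros((height, width, 3))
    depth = np.full((height, width), np.inf)
    # focal length in pixels from the vertical field of view
    f = (height / 2) / np.tan(np.radians(fov_y_deg) / 2)
    for (x, y, z), c in zip(points, colors):
        if z <= 0:                       # behind the camera
            continue
        u = int(width / 2 + f * x / z)   # perspective divide
        v = int(height / 2 - f * y / z)
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z              # simple z-buffer test
            img[v, u] = c
    return img
```

A real GPU version would do the same per-point work in a compute shader with atomic depth writes, but the arithmetic is identical.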
     
    Last edited: Jan 31, 2019
    winning11123 likes this.
  49. winning11123

    winning11123

    Joined:
    Apr 29, 2014
    Posts:
    72
    @Ascensi, yes, it all seems to be one and the same))))) but that is for the future. There are still some hard parts to work out, and time and resources will be needed to devote to it. But a lot of the foundation is already there.

    I have tried so many different ways to draw point clouds that I think I could rival Edison))))) I try to see them as steps rather than failures lol. There are so many different ways it can be done, because at the end of the day all anyone is trying to do is fill (w * h) pixels on the screen. Mega textures with a fast enough tile-swapping system (so we don't stall on IO) is one way. Memory-mapped files also showed a lot of promise; however, even with fast random access, the best speed would require no abstraction, I think. You could also generate height maps from point cloud data, but that would require the low-poly mesh for SSDM to extrude. It really comes down to how much time someone is willing to spend on content creation to realize their true vision.
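The tile-swapping system mentioned above can be sketched as a small LRU cache: hot tiles stay resident, and a miss falls through to the (slow) IO path. A minimal Python sketch, assuming a caller-supplied load_tile function; the names are hypothetical:

```python
from collections import OrderedDict

class TileCache:
    """Tiny LRU cache for mega-texture tiles: keep hot tiles resident,
    evict the least recently used tile when the budget is exceeded."""
    def __init__(self, capacity, load_tile):
        self.capacity = capacity
        self.load_tile = load_tile       # e.g. reads a tile from disk
        self.tiles = OrderedDict()

    def get(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)  # mark as most recently used
            return self.tiles[key]
        tile = self.load_tile(key)       # cache miss -> hit the IO path
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)   # evict the LRU tile
        return tile
```

In practice the loads would be issued asynchronously (a worker thread or async IO) so rendering never blocks on disk, which is exactly the "so we don't stall on IO" requirement.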

    Thanks so much for the compliment. For me, it is for some reason an obsession that started after I saw Euclideon's island demo, but I am happy with how it is all taking shape. Actually, you could even save all standard meshes (verts) as RGBFloat textures and procedurally draw from a mega texture for unlimited detail in combination with SSDM if you wanted, but tile swapping isn't the fastest on sparse textures (maybe it could be threaded????). I think Carmack alluded to this in one of his interviews while discussing mega textures.
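Storing mesh vertices in a float texture, as described above, is essentially just packing an (N, 3) position array into a W×H RGB image that a shader can fetch from by texel index. A minimal NumPy sketch of that round trip (function names are hypothetical):

```python
import numpy as np

def verts_to_texture(verts, width):
    """Pack (N, 3) vertex positions into a width x height RGB float 'texture'."""
    height = -(-len(verts) // width)                 # ceil division for row count
    tex = np.zeros((height, width, 3), np.float32)
    tex.reshape(-1, 3)[:len(verts)] = verts          # fill texels row by row
    return tex

def texture_to_verts(tex, count):
    """Read the first `count` texels back as vertex positions."""
    return tex.reshape(-1, 3)[:count]
```

On the GPU side, a vertex or compute shader would fetch texel i at (i % width, i / width), which is the inverse of the packing above.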
     
    Last edited: Feb 1, 2019
    Bodyclock and Ascensi like this.
  50. Ascensi

    Ascensi

    Joined:
    Sep 7, 2013
    Posts:
    579
    @winning11123 I had asked a few pages back whether raycast rendering would work with this: if lighting hit the geometry and the coloring & lighting info were then sent to screen space.. would this be possible? NVIDIA should sponsor you :)
     
    winning11123 likes this.