Discussion in 'Graphics Experimental Previews' started by Cyril-Jover, Jul 7, 2017.
Welcome to the De-Lighting Tool thread
My first obvious question: what is the De-Lighting tool?
My first not so obvious answer is:
The De-Lighting tool aims to remove the lighting that remains baked into 2D textures exported from photogrammetry software (Reality Capture, Photoscan, etc.).
Here is the blog post about the tool:
You're welcome !
Looks like a useful tool. Do you know if something similar exists for Photoshop?
That looks absolutely delightful.
I'll see myself out...
You can do this manually in Photoshop. There's a GDC talk by the EA Battlefront devs on photogrammetry over on YouTube; they briefly cover this.
Gave it a go, pretty nice results.
Short gif of just manually rotating a light with the assets - https://gfycat.com/MagnificentAdventurousGypsymoth
This tool is gonna make a lot of very happy artists, thanks guys.
Do you have a link to the specific GDC talk? Your result looks great! Which pipeline did you use to get the scan and all the maps out of it?
Here's the talk; the link jumps to the point where he mentions the shadow removal process.
I used a 5D1 for the photos (all on a tripod at the lowest ISO with a remote shutter to reduce camera blur, though according to the talk this was apparently not necessary; I'll have to do more tests to confirm), and Agisoft PhotoScan for processing them. I pretty much followed the process described in the video, although I've only gotten as far as processing them in Agisoft. There's still more that could be done (such as cleaning up holes in ZBrush before generating the low-res mesh, and manual UVs before generating the maps to plug into the delighter), but these are just some examples from my first outdoor attempt at the process.
This picture kind of shows how you take pictures of the object; each blue square is a photo. It was about 86 photos for that big rock (it was about 6 feet, I think), and that was probably the right amount for the detail captured.
This one is of a sequoia; only 30 photos, and it was nowhere near enough. I could kick myself, because I'm not sure when I'll return to that area, and sequoias only grow in a very small handful of places.
The GDC talk mentions they captured 300-500 photos per asset, which seems a tad excessive? The camera they used probably has double the resolution of mine. As a comparison: running PhotoScan at its highest settings on 150 (12 MP) photos uses all 32 GB of my RAM, not to mention needing to leave the computer on overnight (and still waking up to unfinished processing).
Anyway, the software will undoubtedly change as it's a fast-growing area, but as a final note, it's very possible to get started with just a phone camera.
Hey! Cool results. I think in your case you should use the mask map. It is described in the docs on the GitHub project.
Basically, the tool uses the object itself as a light probe. When there is some deposited material (like on your ground), it can perturb the measurement. In your case, the ground loses a bit of its color.
If you create a mask (it should be the same resolution as the other maps), just quickly paint the ground parts in red (it doesn't have to be accurate). Everything else should be black. You will see that the color is good again.
The result should be good on the rock but might be a bit worse on the ground. Invert the red channel of the mask, and this time the ground will be better. You can save each result and mix them in Photoshop (using the red channel of the mask).
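For anyone who would rather script the mixing step than do it in Photoshop, here's a minimal NumPy sketch of the blend described above. The function name and the float [0, 1] image layout are my own assumptions, not part of the tool:

```python
import numpy as np

def blend_by_mask_red(result_a, result_b, mask_rgb):
    """Mix two de-lighting results using the red channel of the mask.

    Where the mask is red (1.0), take result_b (the inverted-mask pass);
    elsewhere take result_a. All inputs are float arrays in [0, 1],
    shaped (H, W, 3).
    """
    red = mask_rgb[..., 0:1]  # (H, W, 1), broadcasts over the RGB channels
    return result_a * (1.0 - red) + result_b * red

# Tiny synthetic example: a 1x2 image, second pixel painted red in the mask.
a = np.array([[[0.2, 0.2, 0.2], [0.8, 0.8, 0.8]]])
b = np.array([[[0.6, 0.6, 0.6], [0.4, 0.4, 0.4]]])
mask = np.array([[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
mixed = blend_by_mask_red(a, b, mask)
```

This weighted blend is the same thing a Photoshop layer mask driven by the red channel does.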
Sorry, the answer is long, but it's pretty fast to do... It should be more assisted or automatic; maybe in the next version.
Can't wait to see your next result ! Cheers !
Looks cool! Not really a comment on the tool itself, but ideally, when processing photos before the reconstruction step, you should batch-process the original RAW photos to lift exposure in the shadows and lower it in the highlights. That way you preserve the most visual detail.
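As an illustration of that kind of batch adjustment, here's a crude Python sketch. A real workflow would operate on the RAW files in a dedicated developer; this linear remap is just a stand-in for a shadow-lift / highlight-recovery curve, and the function name and defaults are my own:

```python
import numpy as np

def flatten_exposure(img, shadow_lift=0.15, highlight_cut=0.15):
    """Compress the tonal range: lift shadows, pull down highlights.

    `img` is a float image in [0, 1]. The remap sends [0, 1] linearly
    onto [shadow_lift, 1 - highlight_cut], so pure black and pure white
    are pulled toward the midtones, preserving detail at both ends.
    """
    return shadow_lift + img * (1.0 - shadow_lift - highlight_cut)
```

Applied to a whole folder of images before reconstruction, this keeps shadow and highlight detail from being crushed into the texture bake.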
First of all, thanks for the tool!
I have tested it with some of my photogrammetry props. It worked fine on this tree, although I lost some of the color at the base of the tree.
But in a captured scene where you have objects with a lot of different materials, the results are not very good. I have a lot of nature props and 16th-century architecture objects captured, so I can do a lot of tests if you need them.
Hi Grihan! Did you try using the Mask Map? It seems to be a problem very similar to Thelebaron's.
No, I didn't use it, but... how many masks do you think I'll need? Three? One for the ground with leaves, one for the white stones, and another for the brown stone?
@thelebaron ... thanks for the link and the explanation. You need good horsepower for photogrammetry, sadly. But otherwise it's cheaper than a laser scanner, I guess. If I had the money for a laser scanner, I would go for that tech instead of photogrammetry.
I think you just need one, and a very simple one.
Everything should be black except the ground, which should be red.
The red channel of the mask is used to separate very different materials. It is explained in the doc in the GitHub project, but I will publish a video tutorial very soon that will show the corner cases and how to quickly fix them.
In the future it should be automatic, but in this first version it isn't. Mastering the red channel of the mask is really quick and easy, and it can dramatically improve the result. If you still have the problem, I'll be happy to test your data (and thelebaron's) and try to fix what went wrong.
@Cyril-Jover What did you guys use to bake the position map? I used Quixel with the position gradient, but I was wondering if this was the right usage or not.
We use Knald and Substance Designer; we haven't tested Quixel yet. To bake this map you should use the "Normalized" option.
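For clarity on what a "Normalized" position bake produces, here's a small sketch, under the assumption (typical for position maps) that "Normalized" means remapping positions into the unit cube of the mesh's bounding box:

```python
import numpy as np

def normalized_positions(points):
    """Remap object-space positions into the [0, 1] unit cube.

    Per axis: p' = (p - bbox_min) / (bbox_max - bbox_min).
    `points` is an (N, 3) array of vertex or texel positions; the result
    is what each texel of a normalized position map would store as RGB.
    """
    bbox_min = points.min(axis=0)
    bbox_max = points.max(axis=0)
    return (points - bbox_min) / (bbox_max - bbox_min)
```

Without this normalization, raw object-space coordinates would depend on the mesh's scale and pivot, which is why the bakers expose the option.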
Great tool and video presentation! I have a few questions if you don't mind.
I tried the example rock data set. However, I couldn't find the actual geometry. Would it be possible to include the rock geo (both high res and low res) as OBJ or FBX files?
If I have a captured HDR environment map, can I use it with your tool?
Could you please describe how your tool works? How is the environment map estimated? Does the tool assume that the material is Lambertian? Does the original lit texture equal the unlit texture rendered under the estimated environment map? (If so, I would love to see this kind of render comparison in your next video.) The reason I ask is that your answers may give me a better technical understanding of how to get the best results from the tool.
How well does the tool work with multiple materials on the same object? Should each material be processed separately with masks for best results?
Thanks in advance!
This tool looks great. However, it doesn't play quite so well with skin data from human scans. Still testing with it.
@Cyril-Jover : Knald doesn't support position baking? There is no option for it, at least not in 1.2.1.
Spoiler: Official apology for the worst pun spoken today.
I'm sorry. Please forgive me. I couldn't contain myself.
Yes, sorry! I forgot we have a beta version of Knald. Position baking should be available in the next version. About your problems with the human face scan, could you tell me more about what's happening? Maybe you can share data that I can test. This is still experimental; I've already learned a lot from user feedback, and I've started to change a few things.
Ohhh, beta! I'd love to try it. Yes, I'd be more than happy to share data. If you want to PM me, I can send some over and then come back here to share it for others to test with, if we can make it work.
I'm having trouble accessing the Tool.
Window > Experimental > 'blank'
Where do I place the "DeLightingTool-master" folder within Unity's Editor folder? This is confusing me.
Hi everyone, here's how the red channel of the mask should be used. I'm preparing a tech paper and tutorials to explain everything in depth. I hope these few pictures will help you.
This is a test with virtual data, which allows comparing de-lighting results with a "ground truth".
Hi Oleg !
So many good questions !
1- You're right! We will provide the 3D mesh too, but only the retopologized version (100 KB), because of the size of the HD mesh (a 2 GB .ply file).
2- At first we wanted to add this feature, but since we use an array of generated EnvMaps, we can't do this anymore. The EnvMap in the debug view is the one at the root of the "EnvMap tree" and is used to remove lighting on the non-occluded parts. The EnvMap tree could be compared to a light field (sorry for the buzzword). The deeper you go in the tree, the more you move from global to local de-lighting. Local de-lighting better removes global illumination and occlusion.
3- It's true: the more you know about what happens behind the tool, the better you can use it. I'm writing a white paper that explains the de-lighting techniques in depth, and I will publish it very soon.
Materials are considered Lambertian because photogrammetry software doesn't work very well on reflective objects. Moreover, when this software reconstructs the texture, it projects it from many different view angles, which makes a material look rougher than it is. But, obviously, roughness (smoothness) could be a good input parameter for a better reconstruction.
I've got comparisons of the de-lighted model lit by the extracted irradiance map; I'll publish them at the same time as the white paper. But here's a preview:
On the left, the original photogrammetry. On the right, the "de-lighted / re-lit" version. Because the environment lighting is separated by the tool, I can rotate it.
The re-lighting is basic real-time IBL (it just uses the normal to look up the EnvMap texture), so it's a bit cheap. But you can see that it's pretty close.
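As a sketch of that kind of cheap IBL lookup, here's how a normal can be turned into lat-long (equirectangular) EnvMap UVs. This is a generic mapping, not the tool's actual code; the axis conventions are my assumption:

```python
import math

def latlong_uv_from_normal(n):
    """Convert a unit normal (x, y, z) to lat-long (equirectangular) UVs.

    u wraps around the horizontal angle (atan2 of x over z);
    v goes from the bottom pole (-Y) to the top pole (+Y).
    Sampling an irradiance EnvMap at this UV gives the diffuse re-lit
    color for that surface direction.
    """
    x, y, z = n
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)
    v = 0.5 + math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, v
```

Rotating the lighting, as in the preview, is then just adding an offset to `u` before sampling.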
4- The tool uses a chosen material as the "lighting measurement reference". So lots of materials doesn't mean lots of masks. You just have to select (roughly) one material as the lighting reference and mask everything else. The best approach is to start with an empty mask and then iterate on it if the results are not good enough.
When the project is unzipped and you've got a folder named "DeLightingTool-master", you just have to open Unity, select "File/Open Project" and choose the folder. It should work; if not, don't hesitate to tell me.
Thank you! I'll be posting results of some work with the tool soon
Really, thanks for your tool!
I did a quick test. I used the Knald demo to generate the maps.
What's wrong with the noisy area in the red circle?
Actually, I tried xNormal first, but the normal map is different.
Sorry, I'm a beginner at normal map baking.
Is there anything to take care of when generating the normal and bent normal maps?
I think there is no alpha channel in your BaseColor. It should be like this:
Did you use a mask? You have the face and the shirt in your base color. I think you should colorize the shirt parts in red in the mask. There's info about the red mask in this thread and in the documentation that comes with the tool.
Even if it's not much, I think the watermarks can disturb the light measurement.
The noise is still there, but the color improved a lot!
Great tool! Need to test this
I have one problem.
I followed the tutorial, but in my Unity the De-Lighting Tool is not working.
Window > Experimental > De-Lighting Tool appears and the image loads,
but Compute is not working.
Please help me.
Thanks for the wonderful tool, I'm amazed by your results.
I gave this a try. I couldn't get results as good as yours; I need to adjust my red mask and introduce the green mask as well.
I wanted to ask a few questions. I would appreciate it if you could answer.
- Although I've taken a quick look at the GitHub docs, I couldn't see information about how you're baking your maps. Comparing your normal and bent normal textures to mine, I see a lot of differences. Is there anything I might be missing? I'm using Substance Designer for baking.
- In your example data, the position textures are EXR files. Is it better to use 16-bit FP? What about the other textures; can we use 16-bit normal/albedo textures for better results?
- I see that there is a way to use the command line. How does that work? I'm guessing that we need to build the project, but I'm not sure because those are editor scripts. Is it possible to call the functions at runtime?
- The "Switch Y/Z axes" option is not documented thoroughly at the moment. Can you tell me when we need it?
- I'm getting a harsh dark/light transition at the top left, and it is worse without the position texture. How can I avoid that? The position texture is an EXR.
Hi Gurayg! Thanks a lot for using this tool and giving feedback. The first thing I see is that your normal and bent normal seem to be baked in tangent space instead of object space (or world space). If you're baking your maps in Designer, Map Type should be set to World Space:
Another question: do you have an alpha channel in your lit texture? Here's a setup I suggest:
Lit Texture - RGB: is the color to de-light
Lit Texture - Alpha: is "what pixel to de-light"
Mask - Red channel: is "not material reference".
Basically, the mask should be red on all the badly reconstructed parts, or on non-reference materials.
The reference material is the one used to measure the lighting. Once the lighting is measured, it is applied to all materials (not only the reference material).
I hope this was helpful. I feel that the concept of the reference material isn't really clear, and I'm sorry about that. I'm trying to find a way to make all of this easier to author and to understand.
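To make the suggested layout concrete, here's a small NumPy sketch that packs the inputs as described. The function name and the boolean-mask inputs are illustrative assumptions, not the tool's API:

```python
import numpy as np

def pack_inputs(base_rgb, delight_region, non_reference):
    """Pack the suggested input layout for the de-lighter.

    base_rgb:       (H, W, 3) color to de-light, float in [0, 1]
    delight_region: (H, W) bool, True where pixels should be de-lit
    non_reference:  (H, W) bool, True on badly reconstructed parts or
                    on materials other than the chosen lighting reference

    Returns (lit_rgba, mask_rgb): the lit texture with the de-light
    region stored in its alpha channel, and a mask whose red channel
    flags non-reference material (everything else stays black).
    """
    alpha = delight_region.astype(np.float32)[..., None]
    lit_rgba = np.concatenate([base_rgb.astype(np.float32), alpha], axis=-1)
    mask_rgb = np.zeros(base_rgb.shape, dtype=np.float32)
    mask_rgb[..., 0] = non_reference.astype(np.float32)
    return lit_rgba, mask_rgb
```

The key point from the post survives in the code: everything that is not the single reference material goes red in the mask, and the lighting measured on the reference is then applied everywhere.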
I'm having some issues installing the tool in Unity... I open the Unity package and it adds the maps to Unity, but in the Window menu only "Look Dev" appears, not the De-Lighting Tool... I've tried Unity 2017 and 5.6.2 :/
Am I doing something wrong??
Hi Carlos, post #35 can help you.
Tell me if not!
Question: why should such a tool be part of Unity?
Shouldn't it be part of the software suite that creates the meshes?
In an ideal world yes. But why not just be thankful there is a tool being worked on that is cross-platform and free?
Because there are more important areas in Unity which need a long-deserved update.
People ask for specific feature updates which get pushed back again and again (terrain and nested prefabs, for example). And then, out of the blue, a new feature is worked on which doesn't really have a place in Unity.
Very good questions.
I'm a technical artist, and I was involved in a project that required creating assets using a photogrammetry pipeline. I got stuck on the de-lighting process because there is no really ready-to-use solution, and the commonly used techniques can't achieve good quality without a lot of effort. I started developing the tool to help the graphics team, and because we started to get good results, we decided to share it with everyone who has the same problem. Nested prefabs and terrain are developed by editor or engine programmers, not by tech artists, so it wasn't this feature instead of the others.
Because the scripting code is really easy, tech artists can create tools by themselves. This project was published as a data-processing tool example, to show people that Unity is great for creating your own pipeline tools and not only for creating games.
I hope I've answered your question; don't hesitate to tell me if that's not the case.
Can you correct the manual for this tool?
Write that the base map isn't just RGB: it's a map with an alpha channel (RGBA)!
Second, in case you don't know yet: the bent normal map with the default settings (Uniform, etc.) is EXACTLY the same as the object-space normal map from xNormal, it just takes about 10x longer to calculate.
So it would be better if you documented your own settings for bent normal maps, if you're not using the default ones.