Welcome to the De-Lighting Tool thread
My first obvious question: what is the De-Lighting tool?
My first not-so-obvious answer is:
The De-Lighting Tool aims to remove the lighting information baked into 2D textures exported from photogrammetry software (Reality Capture, PhotoScan, …).
Here is the blog post about the tool:
Thanks!
You’re welcome!
Looks like a useful tool. Do you know if something similar exists for Photoshop?
That looks absolutely delightful.
I’ll see myself out…
You can do this manually in Photoshop. There’s a GDC talk by the EA Battlefront devs on photogrammetry over on YouTube; they briefly cover this.
Gave it a go, pretty nice results.
Short gif of just manually rotating a light with the assets - https://gfycat.com/MagnificentAdventurousGypsymoth
This tool is gonna make a lot of artists very happy, thanks guys.
Do you maybe have a link to the specific GDC talk? Your result looks great! Which pipeline do you use to get the scan and all the maps out of it?
Here’s the talk, with the time he mentions the shadow-removal process in the link:
https://www.youtube.com/watch?v=U_WaqCBp9zo
I used a 5D1 for the photos (all on a tripod at the lowest ISO, with a remote shutter to reduce camera blur, though this is apparently not necessary according to the talk; I’ll have to do more tests to see), and Agisoft PhotoScan for processing them. I pretty much followed the same process described in the video, although I’ve only gotten as far as processing in Agisoft. There’s still more that could be done (such as cleaning up holes in ZBrush before generating the low-res, and manual UVs before generating the maps to plug into the de-lighter), but these are just some examples from my first outdoor attempt at the process.
http://chattypics.com/files/photoscan_20170708_130837_mhp1lrjvdf.png
This picture kind of shows how you take pictures of the object; each blue square is a photo. It was about 86 photos for that big rock (it was about 6 feet, I think), and that was probably the right amount for the detail captured.
http://chattypics.com/files/photoscan_20170708_132030_rr9vtrxs05.png
This one is of a sequoia: only 30 photos, and it was nowhere near enough. I could kick myself, because I’m not sure when I’ll return to that area, and sequoias only grow in a very small handful of places.
The GDC talk mentions they captured 300-500 photos per asset, which seems a tad excessive? The camera they used probably has double the resolution of mine. As a comparison: running PhotoScan at its highest settings on 150 (12 MP) photos will use all 32 GB of my RAM, not to mention needing to leave the computer on overnight (and still waking up to unfinished processing).
Anyway, the software will undoubtedly change, as it’s a fast-growing area, but as a final note, it’s very possible to get started with just a phone camera.
Hey! Cool results. I think in your case you should use the mask map. It is described in the doc in the GitHub project.
Basically, the tool uses the object itself as a light probe. When there is some deposited material (like on your ground), it can perturb the measurement. In your case, the ground loses a bit of its color.
Create a mask (it should be the same resolution as the other maps) and just quickly (it doesn’t have to be accurate) paint the ground parts in red. Everything else should be black. You will see that the color is good again.
The result should be good on the rock but might be a bit less good on the ground. Invert the red channel of the mask, and this time the ground will be better. You can save each result and mix them in Photoshop (using the red channel of the mask).
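If you want to script that last mixing step instead of doing it in Photoshop, here is a minimal sketch, assuming NumPy and Pillow and hypothetical file names (rock_pass.png, ground_pass.png, mask.png):

```python
# Mix two de-lit passes using the red channel of the mask.
# Assumes all three images share the same resolution.
import numpy as np
from PIL import Image

rock_pass   = np.asarray(Image.open("rock_pass.png").convert("RGB"), dtype=np.float32)
ground_pass = np.asarray(Image.open("ground_pass.png").convert("RGB"), dtype=np.float32)
mask        = np.asarray(Image.open("mask.png").convert("RGB"), dtype=np.float32)

# Red channel is 255 where the ground was painted, 0 elsewhere.
weight = mask[..., 0:1] / 255.0

# Take the ground-friendly pass where the mask is red, the rock pass elsewhere.
mixed = ground_pass * weight + rock_pass * (1.0 - weight)
Image.fromarray(mixed.astype(np.uint8)).save("mixed_result.png")
```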
Sorry, the answer is long, but it’s pretty quick to do… It should be more assisted or automatic. Maybe in the next version.
Can’t wait to see your next result! Cheers!
Looks cool! Not really a comment on the tool itself, but ideally, when processing photos before reconstruction, you should batch-process the original RAW photos to lift exposure in the shadows and lower exposure in the highlights. That way you preserve the most visual detail.
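For what it’s worth, a rough sketch of that kind of batch pass, assuming the rawpy and Pillow packages and a hypothetical raw/ folder of .CR2 files (the exact curve is only an illustration):

```python
# Batch-develop RAW photos with a gentle tone curve before reconstruction:
# a power < 1 lifts the shadows and flattens highlight contrast, and blending
# back toward the original keeps the effect subtle.
import glob
import numpy as np
import rawpy
from PIL import Image

for path in glob.glob("raw/*.CR2"):          # hypothetical input folder
    with rawpy.imread(path) as raw:
        # Disable auto-brightening so exposure stays consistent across shots.
        rgb = raw.postprocess(no_auto_bright=True)
    x = rgb.astype(np.float32) / 255.0
    curved = 0.7 * np.power(x, 0.8) + 0.3 * x
    out = np.clip(curved * 255.0, 0.0, 255.0).astype(np.uint8)
    Image.fromarray(out).save(path.rsplit(".", 1)[0] + "_dev.png")
```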
Hi there!
First of all, thanks for the tool!
I have tested it with some of my photogrammetry props. It worked fine on this tree, although I lost some of the color at the base of the tree.
But in a captured scene where you have objects with a lot of different materials, the results are not very good. I have a lot of captured nature props and 16th-century architecture objects, so I can do a lot of tests if you need them.
Hi Grihan! Did you try using the Mask Map? It seems to be a problem very close to Thelebaron’s.
No, I didn’t use it, but… how many masks do you think I’ll need? Three? One for the ground with leaves, one for the white stones, and another for the brown stone?
@thelebaron … thanks for the link and explanation. You need good horsepower for photogrammetry, sadly. But otherwise, it’s cheaper than a laser scanner, I guess. If I had the money for a laser scanner, I would go for that tech instead of photogrammetry.
I think you just need one very simple one.
Everything should be black except the ground, which should be red.
The red channel of the mask is used to separate very different materials. It is explained in the doc in the GitHub project, but I will publish a video tutorial very soon. This will show what the corner cases are and how to quickly fix them.
In the future it should be automatic, but in this first version it isn’t. Mastering the red channel of the mask is really quick and easy, and it can dramatically improve the result. If you still have the problem, I’ll be happy to test your data (and thelebaron’s) and try to fix what went wrong.
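If you prefer to build the mask procedurally rather than paint it by hand, here is a minimal sketch, assuming NumPy and Pillow and a hypothetical rough_selection.png painted white over the ground:

```python
# Build the red/black mask from a rough grayscale selection:
# red where the ground was painted, black everywhere else,
# resized to match the resolution of the other maps.
import numpy as np
from PIL import Image

albedo = Image.open("albedo.png")                        # hypothetical base map
rough  = Image.open("rough_selection.png").convert("L")  # white = ground
rough  = rough.resize(albedo.size)                       # match resolutions

sel = np.asarray(rough) > 127                 # binarize the rough paint
mask = np.zeros((*sel.shape, 3), dtype=np.uint8)
mask[..., 0] = sel.astype(np.uint8) * 255     # red channel only
Image.fromarray(mask).save("mask.png")
```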
Cheers