Using a separate set of pictures for texturing

How can I use a separate set of pictures for texturing?

Usually, two image sets taken from exactly the same camera positions (as described in Projected Light Patterns) are used for reconstruction and texturing. This is only feasible with a camera rig, a motorized turntable, or structured light pattern approaches.

There is another way to circumvent this requirement while still using two datasets, one for reconstruction and one for texturing. This approach only requires the use of CCTAGs (or other supported markers). The markers and the object of interest must not be moved between the two captures, but the images of the two datasets do not need to be taken from the same positions.
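As an aside not covered on this page: for the markers to actually be detected and matched, a CCTag describer type generally has to be enabled on the feature-related nodes of both reconstructions (FeatureExtraction, FeatureMatching, StructureFromMotion), which requires an AliceVision build with CCTag support. Below is a hedged sketch using Meshroom's Python API; the attribute name `describerTypes` and the values `dspsift`/`cctag3` follow recent Meshroom releases and may differ in your version.

```python
# Hedged sketch: enable the CCTag describer next to the default one on a
# FeatureExtraction node. The same describerTypes value should be set on
# FeatureMatching and StructureFromMotion so the markers survive matching and SfM.
import meshroom.core
from meshroom.core.graph import Graph

meshroom.core.initNodes()   # register Meshroom's node types (call name assumed from recent releases)

g = Graph('cctagDescriberExample')
fe = g.addNewNode('FeatureExtraction')
fe.describerTypes.value = ['dspsift', 'cctag3']   # 'cctag3' assumes CCTag support is compiled in
```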

Workflow

Duplicate the default pipeline (right-click CameraInit -> "Duplicate node from here"), then duplicate the pipeline once more starting at PrepareDenseScene.

Insert an SfMAlignment node to align the SfM from reconstruction1 with reconstruction2 (the original), using the "From Markers" method.

Now you can set the mesh from reconstruction1 as the Input Mesh of the Texturing node of reconstruction2 (the original), similar to https://github.com/alicevision/Meshroom/wiki/Texturing-after-external-re-topology

Instead of an externally optimized (retopologized) mesh, we provide the better, already aligned mesh from reconstruction1 as input.
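The same wiring can be sketched headlessly with Meshroom's Python API. This is only an illustrative, untested sketch: it creates a few key nodes and the two connections that matter here (SfMAlignment with the "From Markers" method, and the aligned mesh into Texturing's Input Mesh) and omits the rest of the two default pipelines. Node, attribute, and value names (`addNewNode`, `addEdge`, `from_markers`, `outputMesh`, `inputMesh`, `initNodes`) are taken from recent Meshroom releases and may differ in your version; the output path is a placeholder.

```python
# Illustrative sketch only: key nodes and connections for the
# "separate texturing dataset" workflow. Most of the two default
# pipelines (FeatureExtraction, ImageMatching, DepthMap, Meshing, ...)
# is omitted; in the GUI you get them by duplicating the pipeline.
import meshroom.core
from meshroom.core.graph import Graph

meshroom.core.initNodes()   # register Meshroom's node types (call name assumed from recent releases)

g = Graph('separateTexturingDataset')

# reconstruction1: the painted/sprinkled dataset, used for geometry
sfm1 = g.addNewNode('StructureFromMotion')

# reconstruction2: the original, clean dataset, used for texturing
sfm2 = g.addNewNode('StructureFromMotion')

# Align reconstruction1 onto reconstruction2 via the shared markers
# ("From Markers" in the GUI; 'from_markers' is the assumed internal value).
align = g.addNewNode('SfMAlignment', method='from_markers')
g.addEdge(sfm1.output, align.input)        # SfM to be transformed
g.addEdge(sfm2.output, align.reference)    # SfM defining the reference frame

# Dense branch duplicated from PrepareDenseScene onwards, now rooted on the
# aligned SfM; DepthMap, DepthMapFilter and Meshing would sit in between.
prep1 = g.addNewNode('PrepareDenseScene')
g.addEdge(align.output, prep1.input)
meshFilter1 = g.addNewNode('MeshFiltering')

# reconstruction2's Texturing keeps its default 'input' and 'imagesFolder'
# connections; only its Input Mesh is rewired to the aligned mesh.
tex2 = g.addNewNode('Texturing')
g.addEdge(meshFilter1.outputMesh, tex2.inputMesh)

g.save('/path/to/separate_texturing.mg')   # placeholder path
```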

[image]

Original image example

[image]

The object was painted and sprinkled with paint to improve the reconstruction; this provides Meshroom with a number of features it can use during the photogrammetry process:

[image]

Result:

[image]

Images provided by @MaxDidIt

See also: https://github.com/alicevision/Meshroom/issues/2168