How is a set of multiple 360° pictures (such as Matterport's) converted to a 3D model with real-world dimensions? Can it be done at all, and if so, what is the math behind it? Is the 3D model typically a point cloud created with photogrammetric techniques?

How does this differ from the 2D-picture-to-3D-model conversion done with outdoor drones? Drones take a set of 2D pictures at different locations with ~80% overlap, create a point cloud (a "discrete" 3D model) with photogrammetric techniques, then obtain the "continuous" 3D model by meshing the point cloud. They also need RTK-GPS for cm-accurate positioning, or alternatively a set of Ground Control Points with known coordinates.

What does the process look like instead when the starting point is a set of 360° pictures?
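To make clear why I suspect the camera model is the main difference: my understanding is that each pixel of a 360° (equirectangular) image corresponds to a ray on the unit sphere, rather than to a ray through a pinhole as in an ordinary drone photo. Here is a minimal sketch of that pixel-to-ray mapping as I understand it (this is my own assumption of the usual equirectangular convention, not anything taken from Matterport):

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction.

    Assumed convention: longitude spans [-pi, pi) left to right,
    latitude spans [pi/2, -pi/2] top to bottom.
    """
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    # Spherical -> Cartesian: y is up, z points at the image center.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# The center pixel of a 1024x512 panorama should look straight ahead (+z).
print(equirect_pixel_to_ray(512, 256, 1024, 512))
```

So my guess is that triangulation from matched features would work on these ray directions instead of pinhole projections, but I don't know if that is actually how Matterport-style reconstruction is done.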