This course presents the mathematical models underpinning several 3D vision algorithms, with a particular emphasis on synchronization and multi-model fitting, two streams of research that have recently been combined to gain a better understanding of 3D scenes. The first part of the course describes the basic geometric tools of photogrammetric computer vision and how they can be combined to implement a modern 3D reconstruction pipeline. The second part introduces the concept of “synchronization”, a general framework for solving problems that involve multiple entities (e.g., images or 3D point clouds) organized as a “graph”, where the task is to seek global consistency. Instances of synchronization include pose-graph optimization, multi-view matching and 3D point cloud registration. The third part is devoted to “multi-model fitting”: the problem of robustly fitting a single parametric model is introduced first, before moving to the general case of multiple models, with a focus on methods that address the problem from a clustering perspective. Examples include fitting geometric primitives (e.g., lines or circles) to points in the plane and fitting geometric models (e.g., fundamental matrices or homographies) to correspondences in two images. Finally, the connections between synchronization and multi-model fitting are explained, with particular emphasis on the motion segmentation problem, where the task is to detect moving objects in a dynamic 3D scene.
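As a rough illustration of the consistency requirement behind synchronization (the notation below is assumed here rather than drawn from the course material), the problem can be sketched as follows: given a graph with edge set $E$ carrying noisy relative measurements $Z_{ij}$ in a group $\Sigma$, the goal is to recover absolute states $X_1,\dots,X_n \in \Sigma$ that best explain them,
$$
\min_{X_1,\dots,X_n \in \Sigma} \; \sum_{(i,j)\in E} d\!\left(Z_{ij},\, X_i X_j^{-1}\right)^2,
$$
where $d$ is a metric on $\Sigma$; for instance, taking $\Sigma = SO(3)$ gives rotation synchronization as used in pose-graph optimization, while the symmetric group gives multi-view matching.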