The automatic generation of building information models from laser scanner data is an emerging research line in the field of reverse engineering. Until now, such models have mostly been created by hand, a complex and tedious task, so automating the process is an interesting challenge. This thesis presents work focused on the automatic reconstruction of inhabited interiors. First, 3D point clouds acquired from strategic positions are processed in order to identify and position the structural components of the scene. A boundary representation (B-Rep) model is then created. These models contain the location and relationships of the structural elements of inhabited scenarios, such as walls, ceilings, floors, columns, doors and windows. The scene enclosed by the computed B-Rep model also contains a set of basic pieces of furniture. These "non-permanent" elements, which can be relocated or removed from the scene, are also identified and positioned. Several authors have developed algorithms to localize objects in indoor environments, but these processes (mainly based on computer vision) are complex, computationally expensive, and their results are inaccurate. In this dissertation, a more flexible and novel solution to this problem is proposed, combining laser scanning and radio-frequency identification (RFID) technologies. The general strategy consists of carrying out a selective and sequential segmentation of the point cloud by means of different algorithms, depending on the information provided by the RFID tags. These tags, attached to the pieces of furniture, store geometric information about the objects, making the identification and positioning of basic elements in the scene faster and easier. The method has been tested in real scenes, yielding promising results.
An in-depth assessment has been performed, analyzing how reliably these elements can be detected and how accurately they are modeled. We conclude that the proposed approach yields accurate 3D models that can be used for further purposes related to scene understanding.
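The core idea of the RFID-guided strategy above (tag-stored object dimensions narrowing the segmentation to a small region of the cloud) can be sketched as follows. This is a minimal illustration, not the algorithm actually used in the thesis: the function name `segment_tagged_object`, the axis-aligned search box, and the synthetic data are all assumptions introduced here for clarity.

```python
import numpy as np

def segment_tagged_object(cloud, tag_pos, dims, margin=0.05):
    """Hypothetical selective segmentation step: keep only the points
    lying inside an axis-aligned box centred on the RFID tag position,
    sized by the tag-stored object dimensions (w, d, h) plus a margin.
    Restricting the search this way is what makes per-object fitting
    cheap compared to segmenting the whole cloud."""
    half = np.asarray(dims, dtype=float) / 2.0 + margin
    lo = np.asarray(tag_pos, dtype=float) - half
    hi = np.asarray(tag_pos, dtype=float) + half
    mask = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return cloud[mask]

# Synthetic scene: a 1 m cubic "object" at the origin plus room-scale clutter.
rng = np.random.default_rng(0)
obj = rng.uniform(-0.5, 0.5, size=(1000, 3))    # points on the tagged object
noise = rng.uniform(-5.0, 5.0, size=(1000, 3))  # the rest of the room
cloud = np.vstack([obj, noise])

segment = segment_tagged_object(cloud, tag_pos=(0.0, 0.0, 0.0),
                                dims=(1.0, 1.0, 1.0))
```

In this toy setup the box recovers all 1000 object points and only the handful of clutter points that happen to fall inside it; in the thesis, the tag data additionally selects which fitting algorithm is applied to the cropped region.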