Semantic Data Capture


Real-time data capture with simultaneous classification - gather smart 3D point clouds using only your mobile device

A prototype iOS app for iPad Pro and iPhone Pro is available as a TestFlight beta.


Semantic Data Capture - Bastian Plaß, CC BY SA 4.0

Using augmented reality, machine learning and LiDAR, 3D point cloud data can be captured and classified simultaneously. The iOS application was prototyped as part of the bim4cAIre project at the Institute for Spatial Information and Measurement Technology i3mainz and is intended to support intelligent room analysis in the future. Currently, about a dozen classes for structural and functional interior classification are offered: floor, wall, ceiling, door and window for structural elements, and table, chair, monitor, bed, toilet, sink and laptop for functional furnishings. After the capturing phase, the application offers a viewing component and export to common 3D data formats such as .ply and .obj.



The demo video shows the application's functionality. After a brief initialisation phase, objects are captured as a 3D mesh using the LiDAR sensor. The mesh nodes themselves are simultaneously classified and colorised accordingly. The application distinguishes between classes such as floor, wall, ceiling, door, window, table, chair and unknown. After scanning, the classification result can be refined by reclassifying mesh nodes of the class unknown using YOLO.
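The per-node classification can be pictured as follows: ARKit's scene reconstruction delivers one class label per mesh face, and a label for each node (vertex) can then be derived by majority vote over its adjacent faces. The Python sketch below illustrates that idea with hypothetical data; it is not the app's actual implementation.

```python
from collections import Counter

def vertex_labels(faces, face_labels, n_vertices):
    """Derive per-vertex labels from per-face labels by majority vote.

    faces:       list of (i, j, k) vertex-index triples
    face_labels: one class name per face (e.g. from a per-face
                 mesh classification as ARKit provides it)
    """
    votes = [Counter() for _ in range(n_vertices)]
    for (i, j, k), label in zip(faces, face_labels):
        for v in (i, j, k):
            votes[v][label] += 1
    # Vertices touched by no face stay "unknown".
    return [c.most_common(1)[0][0] if c else "unknown" for c in votes]
```

A vertex shared by two "floor" faces and one "wall" face is thus labelled "floor", while isolated vertices fall back to "unknown" and remain candidates for the later YOLO-based refinement.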


Semantic Data Capture - Bastian Plaß, CC BY SA 4.0

Capture and simultaneous classification of interior objects with the iPad Pro application. The colorisation of the 3D mesh is currently realised without face filling. The mesh geometry is dynamically adapted to the object topology, so that complex object shapes are approximated by finer faces than largely planar surfaces.

Semantic Data Capture - Bastian Plaß, CC BY SA 4.0

Refinement of the classification result by a subsequent, image-based classification of specific objects using YOLO. The scanning process is stopped so that no further data is acquired. The 3D mesh generated up to that point is then overlaid on the camera feed. Objects classified as unknown can be detected, localised with a bounding box and thus reprojected into the 3D mesh, where further colorisation completes the classification refinement.
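The reprojection step can be sketched with a simple pinhole camera model: project each mesh vertex into the image, and reassign any vertex currently labelled unknown whose projection falls inside the YOLO bounding box. The Python sketch below is illustrative only; the intrinsics, box format and function name are assumptions, not the app's API.

```python
def relabel_unknown(vertices, labels, box, fx, fy, cx, cy, new_label):
    """Reassign 'unknown' vertices whose pinhole projection falls
    inside a 2D detection box.

    vertices: list of (x, y, z) in camera coordinates (z > 0 in front)
    labels:   current class label per vertex (a modified copy is returned)
    box:      (u_min, v_min, u_max, v_max) in pixel coordinates
    fx, fy, cx, cy: pinhole intrinsics (assumed values for the sketch)
    """
    u_min, v_min, u_max, v_max = box
    out = list(labels)
    for idx, (x, y, z) in enumerate(vertices):
        if out[idx] != "unknown" or z <= 0:
            continue  # only refine unknown nodes in front of the camera
        u = fx * x / z + cx   # perspective projection to pixel coords
        v = fy * y / z + cy
        if u_min <= u <= u_max and v_min <= v <= v_max:
            out[idx] = new_label
    return out
```

In practice an occlusion check would also be needed, so that unknown vertices hidden behind the detected object are not relabelled; the sketch omits that for brevity.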


Interested in the technology or partnering with us?

Email us at bastian.plass@hs-mainz.de

The privacy policy is located here.