Generation of up-to-date 3D city model
The method was tested in Assen, the Netherlands. The ALS data were acquired in 2010, while the images were acquired in 2018.
All 952 unchanged buildings were completely verified. 172 new buildings were detected, with a minimum size of 2 m × 2 m × 2 m. Of the 175 new buildings in the ground truth, 163 were correctly detected; 12 buildings were missed and 9 were falsely detected. The F1 scores of new and removed building detection are both above 0.9, which shows the method's potential for business-level map production.
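For concreteness, the reported new-building counts can be turned into precision, recall and F1 directly (the counts below are taken from the text above):

```python
# New-building detection counts reported in the text.
tp = 163  # correctly detected new buildings
fn = 12   # missed buildings
fp = 9    # falsely detected buildings

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.948 0.931 0.939
```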
Zhou, K., Lindenbergh, R., Gorte, B., & Zlatanova, S. LiDAR-guided dense matching of airborne VHR images for detecting changes in and updating of 3D buildings in LiDAR data. ISPRS Journal of Photogrammetry and Remote Sensing, under review. (link)
Zhou, K., Gorte, B., Lindenbergh, R., & Widyaningrum, E. (2018). 3D Building change detection between current VHR images and past LiDAR data. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42, 2. (link)
3D Building Change detection and Update on LiDAR data using VHR images
Contributions of the proposed method
To our knowledge, we are the first to propose the LEAD-Matching method, which detects accurate changes by integrating accurate plane information from LiDAR data with detailed building edges from a stereo image pair, addressing quality problems in both data sources.
The proposed method achieves a high building verification rate while detecting changes as small as 2 m × 2 m × 2 m, an unprecedented minimum size for updating large-scale 3D maps.
The result is shown in the video on the left.
Here is how it works...
Airborne LiDAR scanning (ALS)
Airborne LiDAR data has centimeter-level vertical accuracy, which makes it well suited to 3D building reconstruction. However, acquiring LiDAR data is expensive, so the update rate at the state or national level is low. For example, the open nationwide point cloud of the Netherlands, AHN, covers the whole country, but a complete update takes around 10 years. In addition, a LiDAR point cloud is sparse and irregularly spaced (see figure left). If sparse LiDAR is interpolated to a DSM (digital surface model) with a higher, uniform spatial sampling, heights in edge regions are inaccurate (see figure right) due to mixed returns: near the edges of overhanging roofs, for example, points on the ground, walls and roofs get mixed in the top view.
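As a minimal illustration of this edge problem (toy heights, not AHN data): interpolating between a ground return and a roof return that are adjacent in the top view invents a height that exists on neither surface.

```python
import numpy as np

# Two adjacent returns in the top view: one on the ground, one on an
# overhanging roof. Heights in meters are illustrative.
ground_z, roof_z = 0.0, 6.0

# A DSM pixel interpolated halfway between them gets a fictitious
# 3 m height, on neither the ground nor the roof.
edge_z = np.interp(0.5, [0.0, 1.0], [ground_z, roof_z])
print(edge_z)  # 3.0
```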
Airborne VHR images
Airborne VHR images are cheaper to acquire and are therefore updated more frequently. For example, most municipalities in the Netherlands fly imagery campaigns annually. VHR images with a spatial resolution finer than 10 cm provide not only detailed spectral information but also dense geometric information on the earth's surface.
Point clouds can be reconstructed from stereo images using dense image matching, which searches for corresponding pixels in different images based on color and texture (see figure left). The positions of corresponding pixels are constrained to lie along a line defined by the camera positions and rotations, and can be further reduced to a single value, the disparity (see figure middle). Disparity is the third dimension of the 2D stereo images and can be transformed to height to extract a point cloud.
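The disparity-to-height step follows the standard rectified-stereo relation Z = f·B/d; the focal length and baseline below are illustrative values, not the actual flight parameters.

```python
def disparity_to_depth(d, f=10000.0, B=0.6):
    """Depth Z (m) from disparity d (pixels), for a rectified stereo
    pair with focal length f (pixels) and baseline B (m): Z = f * B / d."""
    return f * B / d

print(disparity_to_depth(30.0))  # 200.0
```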
However, the quality of image-derived point clouds is strongly affected by shadow and low texture (see figure right). As a city does not change dramatically, a large portion of a city model remains unchanged and only the changed areas need to be updated.
LEAD-Matching—Building Change Detection
LEAD-Matching (LiDAR-guided edge-aware dense matching) is proposed to detect accurate changes by combining the geometric strengths of LiDAR data and images, addressing the quality problems of both datasets. LiDAR data is sparse and irregularly spaced and suffers from mixed returns near building edges, but it has high vertical accuracy. Image data yields denser 3D information and detailed building edges, but the extracted 3D information is strongly affected by shadow and low texture.
The method starts by densifying the LiDAR data using accurate planes extracted from the sparse LiDAR points, instead of interpolation. As buildings largely consist of planar surfaces, an accurate DSM height can be calculated as long as the correct plane is assigned to each DSM pixel. The LiDAR point cloud is first segmented and then triangulated in 2D, with plane information attached to each vertex. Each DSM pixel receives up to three different planes from the three vertices of the 2D triangle in which it lies. For example (see figure left), a DSM roof pixel near a building edge may fall in a triangle whose vertices lie on the ground, a wall and a roof; in most cases the correct plane is among them. This yields three candidate DSMs.
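The per-pixel plane assignment can be sketched as follows. This is a toy example with hypothetical plane coefficients (each plane stored as z = a·x + b·y + c), not the paper's segmentation pipeline:

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy data: 2D (top-view) positions of four LiDAR points and the plane
# z = a*x + b*y + c assigned to each point by segmentation.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
planes = np.array([
    [0.0, 0.0, 0.0],   # ground plane: z = 0
    [0.0, 0.0, 0.0],   # ground plane
    [0.0, 0.0, 6.0],   # flat roof plane: z = 6
    [0.0, 0.0, 6.0],   # flat roof plane
])

tri = Delaunay(xy)  # 2D triangulation of the LiDAR points

def candidate_heights(px, py):
    """Up to three candidate DSM heights for pixel (px, py), one per
    plane attached to the vertices of its enclosing 2D triangle."""
    s = tri.find_simplex(np.array([[px, py]]))[0]
    if s == -1:  # pixel outside the triangulation
        return []
    return [a * px + b * py + c for a, b, c in planes[tri.simplices[s]]]

# A pixel near the building edge gets both ground and roof candidates.
heights = candidate_heights(2.0, 3.0)
print(heights)
```

Each candidate height then populates one of the three candidate DSMs at that pixel.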
Subsequently, the candidate DSMs are transformed into disparity images that guide the dense matching by limiting the disparity search space (DSS), which addresses the shadow and low-texture problems in dense matching. The edge-aware dense matching then uses detailed building edges from the images to select the optimal height from the three candidates, addressing the building edge problems in the LiDAR data (see figure right).
Finally, if the optimal height obtained by LEAD-Matching points to corresponding pixels of different colors, a likely building change is found.
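A minimal sketch of this final check (hypothetical RGB values and threshold): the colors of the corresponding pixels are compared, and a large difference flags a candidate change.

```python
import numpy as np

def is_likely_change(color_left, color_right, threshold=30.0):
    """Flag a likely change when the RGB colors of corresponding pixels
    differ by more than a threshold (Euclidean distance, toy value)."""
    diff = np.asarray(color_left, float) - np.asarray(color_right, float)
    return float(np.linalg.norm(diff)) > threshold

print(is_likely_change([120, 120, 120], [122, 118, 121]))  # similar colors: False
print(is_likely_change([120, 120, 120], [30, 200, 40]))    # different colors: True
```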