Airborne LiDAR scanning (ALS)

Generation of textured, accurate and dense point clouds

The method is tested in Amersfoort, the Netherlands. The ALS data and the images were acquired in 2010.


In the video, the fused point cloud is textured and denser than the LiDAR point cloud, while being less noisy and better reconstructed in shadow and low-texture areas than the photogrammetric point cloud (from Pix4D).


The planimetric accuracy of building extraction, measured as the RMS (Root Mean Square) distance, improves from 0.480 m to 0.220 m when using the fused point cloud compared to using LiDAR alone.
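For readers who want to reproduce this kind of evaluation, the RMS distance over matched boundary points can be computed as below. This is a minimal sketch that assumes the extracted and reference boundary points are already matched one-to-one; the paper's evaluation protocol may differ.

```python
import numpy as np

def rms_distance(extracted, reference):
    """RMS of planimetric distances between matched boundary points.

    extracted, reference: (N, 2) arrays of (x, y) coordinates in metres,
    assumed already matched one-to-one (a simplification).
    """
    d = np.linalg.norm(np.asarray(extracted) - np.asarray(reference), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy example: boundary points offset by a constant 0.3 m shift.
ref = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
ext = ref + np.array([0.3, 0.0])
print(rms_distance(ext, ref))  # 0.3 (a uniform offset gives RMS equal to the offset)
```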


Zhou, K., Smal, I., Gorte, B., & Lindenbergh, R. E-LEAD-Matching: Integrating airborne LiDAR data and VHR multi-view images for improving the planimetric accuracy of building extraction. (Update soon.)


Integration of ALS and multi-view images for improving the planimetric accuracy of building extraction

Contributions of the proposed method

  • To our knowledge, we are the first to propose the E-LEAD-Matching method to improve the planimetric accuracy of building extraction by integrating detailed building boundaries of high planimetric accuracy from multi-view images and plane information of high vertical accuracy from LiDAR data.

  • Create a denser point cloud (or DSM) with color and accurate building boundaries compared to LiDAR data. The integrated point cloud improves the buildings extracted from LiDAR alone to meet the requirements of large-scale mapping.

Results are shown in the video on the left.

Here is how it works...

Airborne LiDAR scanning (ALS)

Airborne LiDAR point clouds with high vertical accuracy have been studied intensively as a source for extracting buildings. However, the planimetric accuracy of the extracted buildings is limited by the sparse and irregular point spacing of LiDAR data. Many studies apply regularization techniques to improve the overall building outlines, but the planimetric accuracy still depends on the actual point spacing. In general, building areas are often estimated too small (see figure).
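The effect of point spacing on the extracted boundary can be illustrated with a small simulation. Taking the last LiDAR point still falling on the roof as the estimated edge (an assumed, simplified extraction rule, not the paper's method), the building is systematically underestimated by about half the point spacing on average:

```python
import numpy as np

rng = np.random.default_rng(0)
spacing = 1.0        # assumed LiDAR point spacing along a scan line (m)
true_edge = 5.0      # true position of the building edge (m)

errors = []
for _ in range(10000):
    offset = rng.uniform(0, spacing)           # random scan-line offset
    points = np.arange(offset, 20.0, spacing)  # point positions along the line
    roof = points[points <= true_edge]         # points that hit the roof
    errors.append(true_edge - roof[-1])        # edge underestimation

print(round(float(np.mean(errors)), 2))  # ~ spacing / 2, i.e. ~0.5 m here
```

This is why regularized outlines still inherit the point-spacing limit: the information about where exactly the edge lies between two samples is simply not in the LiDAR data.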


Airborne multi-view images 

Airborne VHR multi-view images, with their smaller ground sampling distance (GSD) and detailed building boundaries, are an alternative input for building extraction. In this case, the relief displacement of buildings must be addressed, which is often done using height information from image point clouds reconstructed from the multi-view images. However, the quality of these point clouds degrades in shadow and low-texture areas, which directly affects the planimetric accuracy of building extraction (see figure).
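The reason roof edges need a height correction is the textbook radial relief displacement relation for a (near-)vertical image, d = r·h/H. This is standard photogrammetry, not something specific to the paper's pipeline:

```python
def relief_displacement(r, h, H):
    """Radial relief displacement d = r * h / H for a vertical image.

    r: radial distance of the roof point from the nadir point (image units),
    h: object height above ground (m), H: flying height above ground (m).
    The displacement is radial, away from the nadir point, so roof edges
    appear shifted outward relative to the building footprint.
    """
    return r * h / H

# A 20 m high roof edge, 500 m flying height, 0.05 m from nadir in the image:
print(relief_displacement(0.05, 20.0, 500.0))  # 0.002 (image units)
```

An erroneous height estimate (e.g. from a noisy image point cloud in a shadow area) therefore translates directly into a planimetric error when the displacement is corrected.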


The DSM from image point clouds shows low planimetric accuracy compared to the ground truth (red polygons).

E-LEAD-Matching: Integration of LiDAR with multi-view images

E-LEAD-Matching extends LEAD-Matching (more on this link) to integrate LiDAR data with multi-view images (see figure left). The idea is to densify the sparse LiDAR points in a top view, in the form of a DSM, using the accurate plane information from the LiDAR data, while building boundaries with high planimetric accuracy from the multi-view images are integrated to determine the actual boundaries.
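The densification step can be sketched as fitting a plane to the sparse LiDAR roof points and evaluating it on a dense DSM grid. This is a minimal illustration of the idea, with an assumed least-squares plane fit; the actual plane extraction in the method may differ:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through LiDAR roof points."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def densify_on_grid(coeffs, xmin, xmax, ymin, ymax, gsd):
    """Evaluate the fitted plane on a dense DSM grid with cell size gsd."""
    a, b, c = coeffs
    xs = np.arange(xmin, xmax, gsd)
    ys = np.arange(ymin, ymax, gsd)
    gx, gy = np.meshgrid(xs, ys)
    return gx, gy, a * gx + b * gy + c

# Sparse LiDAR points (1 m spacing) on a flat roof at z = 12 m,
# densified to a 0.1 m DSM grid:
roof = [(x, y, 12.0) for x in range(0, 10) for y in range(0, 10)]
coeffs = fit_plane(roof)
_, _, dsm = densify_on_grid(coeffs, 0, 10, 0, 10, 0.1)
print(dsm.shape, round(float(dsm.mean()), 2))  # (100, 100) 12.0
```

The dense grid carries the LiDAR plane's vertical accuracy everywhere inside the roof; where the roof actually ends is then decided by the image boundaries, not by the sparse LiDAR samples.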

When LEAD-Matching is applied to integrate LiDAR data with a single stereo pair, the integration suffers from the typical facade and occlusion problems. In addition, the building boundaries may not all be clearly visible in that one stereo pair (see figure middle). E-LEAD-Matching therefore applies LEAD-Matching to multiple stereo pairs selected from the multi-view images to address these problems. With multiple stereo pairs, facade and occlusion effects are reduced using information from the other stereo pairs, and the detailed building boundaries in the multi-view images further improve the planimetric accuracy (see figure right).
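Why multiple pairs help can be illustrated with a simple robust fusion of per-pair height estimates for one DSM cell. The median rule below is an illustrative assumption, not necessarily the fusion used in E-LEAD-Matching:

```python
import numpy as np

def fuse_heights(per_pair_heights):
    """Fuse per-stereo-pair height estimates for one DSM cell.

    Pairs contaminated by facades or matching errors yield outlier heights,
    and fully occluded pairs yield NaN; a median over the remaining pairs
    suppresses the outliers (illustrative fusion rule only).
    """
    h = np.asarray(per_pair_heights, dtype=float)
    h = h[~np.isnan(h)]          # drop pairs where the cell is occluded
    return float(np.median(h)) if h.size else float("nan")

# Four stereo pairs: two agree on the roof height, one sees the facade
# (height near ground level), and one is occluded (NaN).
print(fuse_heights([12.1, 11.9, 3.0, float("nan")]))  # 11.9
```

With a single stereo pair there is nothing to vote against the facade-contaminated estimate; with several pairs, the consistent roof heights dominate.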

With multi-view (4 or 6) images, the building boundaries match the ground truth better, as the blue pixels are largely reduced compared to buildings extracted from LiDAR alone. The dense 3D information from the images also improves the quality of building boundaries in areas with very few LiDAR points (see green boxes in the figure).



Kaixuan Zhou