
6.4 Aerial-Ground Fusion

Aerial-Ground Fusion combines drone imagery with ground-level XGRIDS scan data to produce a complete model covering both perspectives. The ground scanner captures interior detail and ground-level geometry that the drone cannot see. The drone covers building exteriors, rooftops, and large outdoor areas that ground scanning cannot efficiently reach.

What Aerial-Ground Fusion Is and When It Applies

Aerial-Ground Fusion is not the same workflow as L2 Pro Drone Mode covered in section 6.3. In Drone Mode, the L2 Pro is physically mounted on the drone and collects LiDAR data from the air. In Aerial-Ground Fusion, the ground scanner stays in your hands and a separate mapping drone collects oblique photographic imagery from the air. The two datasets are aligned in software after capture.

The output format also differs depending on which software processes the data. LCC Studio produces a 3D Gaussian Splat from the combined dataset. LixelStudio produces a merged point cloud. The choice between them depends on your deliverable, not on the field collection method. Both pipelines require the same field data collected the same way.

LCC Studio Pipeline

Output: 3D Gaussian Splat (3DGS). Photorealistic rendered model viewable in LCC Viewer, suitable for walkthroughs, client delivery, and web publishing.

Ground devices: PortalCam, K1, L2 Pro.

Aerial data: 100 to 10,000 drone images. JPG/JPEG only. Minimum resolution 1024×768, consistent across all images.

Fusion method (PortalCam): Drone takeoff/landing image folders organized by fusion point, uploaded separately in LCC Studio alongside the PortalCam scan segment.

Fusion method (K1/L2 Pro): Single aerial image set uploaded alongside the ground scan. Panoramic video option is grayed out for K1 and L2 Pro; proceed directly to adding aerial data.

Processing time: 24 to 48 hours or more for large sites. Hardware minimum: 16-core CPU, 96 to 128 GB RAM, RTX 4090.

LixelStudio Pipeline

Output: Georeferenced merged point cloud in E57, LAS/LAZ, or RCP format. Suitable for BIM, survey deliverables, measurement, and coordination with other point cloud datasets.

Ground devices: K1, L2 Pro.

Aerial data: Drone imagery organized in a Drone/ subfolder alongside the Lixel/ project subfolder. XGRIDS Pose tool (extract_preview_poses) must be run on the drone data before LixelStudio import.

Fusion method: Both the Drone/ and Lixel/ subfolders are imported into LixelStudio as a single project. LixelStudio merges the point clouds using the RTK tracks from both datasets.

Processing time: Varies by data volume. Typically longer than a standard ground-only scan due to the aerial data processing overhead.

The field collection procedure is largely the same regardless of which software pipeline you use. The difference is what you do with the drone images after collection: organized takeoff/landing folders for LCC Studio, or the Drone/Lixel folder structure with the Pose tool for LixelStudio. Plan your folder organization before the flight. Reorganizing a large drone image set after the fact is tedious and error-prone.

Project Types That Benefit from Aerial-Ground Fusion

  • Campus facilities and multi-building complexes where exterior context and interior detail are both required
  • Construction sites needing periodic full-site documentation including ground-level progress and aerial site overview
  • Heritage documentation of buildings with significant exterior architectural detail that cannot be fully captured from the ground
  • Large outdoor areas with connected interior spaces, such as industrial yards leading into warehouses
  • Any project where a roof, upper facade, or terrain above the scanner's reach is part of the deliverable

Hard Requirements Before You Start

Aerial-Ground Fusion has non-negotiable prerequisites that apply to both the ground and aerial data collection. A project that does not meet these requirements will fail during processing. There is no workaround in the software for missing RTK data or incorrect coordinate system configuration.

  • Both drone and ground scanner must have RTK enabled: RTK must be active and achieving Fixed status during collection for both devices. Float or Single Point RTK is not sufficient for Aerial-Ground Fusion alignment.
  • Coordinate system must be WGS84 on both devices: Set WGS84 in LixelGO or LCC Scan before scanning, and set WGS84 on the DJI remote controller before the flight. A mismatch in coordinate systems between the ground and aerial datasets will offset the fusion result by meters and cannot be corrected after the fact.
  • Aerial data must include both the main grid flight and the takeoff/landing image sequences: The takeoff/landing sequences are what allow LCC Studio to stitch the aerial view to the ground scan. A grid flight without takeoff/landing sequences cannot be fused with ground data reliably.
  • Drones with multifocal cameras must use the same focal length for all images: If the drone camera system can switch between focal lengths, all images in a single processing project must have been captured at the same focal length. Mixed focal lengths within one project will cause reconstruction errors. A quick way to verify this is sketched after this list.
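The focal-length requirement is easy to verify before creating a processing project. The sketch below is a minimal pre-check of our own, not an XGRIDS tool: it assumes the drone images are ordinary JPEGs with EXIF data and uses the Pillow library to read the focal length tag from every image under a folder.

# Minimal pre-check (not an XGRIDS tool): confirm every drone JPEG under a
# folder reports the same EXIF focal length before creating a processing project.
from pathlib import Path
from PIL import Image

def focal_lengths(folder):
    """Map image name -> EXIF focal length in millimeters, where present."""
    results = {}
    for path in Path(folder).rglob("*"):
        if path.suffix.lower() not in (".jpg", ".jpeg"):
            continue
        focal = Image.open(path).getexif().get(0x920A)   # 0x920A = FocalLength
        if focal is not None:
            if isinstance(focal, tuple):                  # older Pillow: (numerator, denominator)
                focal = focal[0] / focal[1]
            results[path.name] = float(focal)
    return results

lengths = focal_lengths("Drone")                          # assumed folder name
unique = sorted(set(lengths.values()))
if len(unique) > 1:
    print("WARNING: mixed focal lengths found:", unique)
elif unique:
    print(f"OK: {len(lengths)} images, all at {unique[0]} mm")
else:
    print("No EXIF focal length found; check the images manually.")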

Drone Recommendations

The best aerial platform for Aerial-Ground Fusion is a DJI Matrice 300, Matrice 350, or Matrice 400 (coming soon) equipped with a DJI Zenmuse P1 camera payload. The Zenmuse P1 is a full-frame mapping camera, not a drone itself. It mounts to the Matrice platform and provides significantly higher image quality and geometric accuracy than lighter consumer-grade alternatives.

If your operation already runs a Matrice 300 or 350 for L2 Pro Drone Mode, you can swap the L2 Pro mounting bracket for the Zenmuse P1 payload and use the same drone for the aerial photography phase of a fusion project. No additional drone platform is required.

  • Recommended: DJI Matrice 300 RTK, Matrice 350 RTK, or Matrice 400 (coming soon) with Zenmuse P1 payload. Superior image quality and the same platform used for L2 Pro Drone Mode.
  • Acceptable: DJI M3E, DJI M4E. Capable mapping drones for aerial photography only. Cannot carry the L2 Pro and produce lower image quality than the Zenmuse P1. Adequate for lower-fidelity fusion projects.
  • Not recommended: DJI P4R. No longer recommended for mapping workflows.

Site Planning and Fusion Point Selection

Before arriving on site, identify 4 to 5 fusion point locations using satellite imagery or aerial photos. These are the physical locations where the drone will take off and land, and where the ground scanner will perform its fusion-specific collection routine. These points are what LCC Studio uses to align the two datasets in three-dimensional space.

What Makes a Good Fusion Point

A fusion point location must be accessible to both the drone (for takeoff and landing) and the scanner (for ground-level scanning). Beyond that, it needs to meet all four of these criteria:

  • Open area: The drone needs a clear vertical path for takeoff and landing. Overhead obstructions like trees, power lines, or building overhangs will prevent the required vertical image sequence from being captured correctly.
  • Rich surface features: The fusion algorithm matches geometry visible in both the drone images and the ground scan. A blank pavement square with no features provides almost nothing to match against. A paved area with planters, signage, bollards, or building corners is much more useful.
  • Stable and unchanging between visits: If the ground scan happens on Tuesday and the drone flight happens on Thursday, the fusion point area must look the same in both datasets. Parked vehicles, staging materials, or any moveable objects that might not be there both days will degrade the fusion result.
  • Accessible to the scanner from multiple angles: The scanner needs to circle the fusion point area to provide 360-degree ground-level coverage that the drone can match against from overhead.

Distribute fusion points evenly across the site. Four to five points should span the full extent of the project area. Two points clustered at one end of a large site will not provide the geometric constraint needed for the far end of the dataset to align correctly. Sketch a site plan and place fusion points so that no part of the project boundary is more than roughly 50 to 100 meters from the nearest fusion point.
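If you digitize the site plan, the spacing rule can be sanity-checked numerically. The snippet below is a planning aid with made-up coordinates, not an XGRIDS feature: it assumes rough local coordinates in meters for sample points along the project boundary and for the planned fusion points, and reports the worst-case distance to the nearest fusion point.

# Planning aid (hypothetical coordinates, meters): check that no sampled point
# on the project boundary is more than ~50-100 m from the nearest fusion point.
# Sample the boundary densely for a tighter check.
import math

boundary = [(0, 0), (120, 0), (120, 80), (60, 140), (0, 80)]   # site outline samples
fusion_points = [(20, 15), (100, 15), (100, 70), (40, 110)]    # planned takeoff/landing spots

worst = max(min(math.dist(b, fp) for fp in fusion_points) for b in boundary)
print(f"Worst-case distance from boundary sample to nearest fusion point: {worst:.0f} m")
if worst > 100:
    print("Add a fusion point or redistribute the existing ones.")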

Ground Collection: PortalCam Method

The PortalCam method uses LCC Scan's built-in Aerial-Ground Map Fusion control point feature to mark each takeoff and landing location during the ground scan. This creates the link between the ground scan segment and the specific drone takeoff/landing image folders during LCC Studio processing.

LCC Scan App version 1.2.0_p1 or higher and firmware version V3.2.3-20251104.144651 or higher are required for the Aerial-Ground Map Fusion control point button to appear in the interface. If your app or firmware is below these versions, the feature will not be available.

Ground Scanning Sequence (PortalCam)
1. Set the coordinate system to WGS84 before starting

In LCC Scan, confirm the coordinate system is set to WGS84 before beginning the scan. This cannot be changed after the scan has started. The drone must also be set to WGS84 on the DJI controller. Both must match.

2. Scan the site following standard PortalCam technique

Use normal PortalCam scanning procedure: multi-height trajectories, bidirectional passes on corridors, and loop closures throughout. Scan the full project area as you normally would. The fusion-specific steps only occur at each designated fusion point location, not throughout the entire scan.

3. When approaching each fusion point, place the PortalCam on the tripod

At each fusion point location, stop walking the scan route, mount the PortalCam on its tripod, and set it on a stable surface within the fusion zone. The scan keeps recording; the device only needs to be stationary for the control point marking step.

4. Tap the Aerial-Ground Map Fusion control point button and name the point

In LCC Scan, tap the control point button and select Aerial-Ground Map Fusion as the type. Assign a name that matches exactly what you will use when labeling the corresponding takeoff/landing image folder for this location. Capitalization counts. A point named FP_01 and a folder named fp_01 will not be recognized as the same point.

5. Walk one full circle around the control point area

With the device still scanning, pick it up from the tripod and walk one complete circle around the fusion point location. This gives the algorithm 360-degree ground-level coverage of the area, which significantly improves fusion stability. Keep the scan going and proceed to the rest of your route when the circle is complete.

6. Repeat at each fusion point and complete the full site scan

Visit each planned fusion point during the scan and repeat steps 3 through 5. Complete the full site coverage before stopping. Finish the scan with a loop closure at or near the starting point to stabilize the overall SLAM trajectory.

Ground Collection: K1 and L2 Pro Method

For K1 and L2 Pro, the ground scan procedure is the same as a standard RTK-enabled scan with one specific addition at each fusion point: the scanner must complete two loop closures within 2 meters of the takeoff and landing location. This gives the SLAM algorithm dense, well-constrained ground-level geometry precisely at the area the drone will be photographing from directly above.

Ground Scanning Sequence (K1 / L2 Pro)
1. Verify RTK is Fixed before starting the scan

In LixelGO, confirm satellite status is Fixed (not Float, not Single Point) and that you have at least 10 valid satellites. Set the coordinate system to WGS84. The drone must also be set to WGS84. Do not begin scanning until RTK is Fixed.

2. Scan the site following standard K1/L2 Pro technique

Scan the full project area normally. Maintain RTK Fixed status through as much of the scan as possible, particularly in and around the fusion point areas. Even if RTK is lost under cover or inside structures elsewhere on the site, the fusion point areas themselves must be covered with Fixed RTK.

3. At each fusion point, complete two loop closures within 2 meters of the location

When your scanning path brings you to each fusion point location, make two distinct loops around the point, each staying within 2 meters of the takeoff/landing spot. Walk the loops at different heights if the area allows. These loops create the dense, geometrically constrained ground data at the exact location the drone will capture from above.

If you arrive at the fusion point from one direction and leave in the same direction without completing the loops, the scan data at that point will not be sufficiently constrained for reliable fusion alignment.
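If you can export the scan trajectory afterward (for example as a CSV of local x/y positions in meters), a short script can flag fusion points the scanner never approached closely enough. The file name, column names, and coordinates below are assumptions, and counting nearby samples is only a rough proxy for proper loop closures, but it catches the case where a fusion point was walked past once and never circled.

# Rough QC aid (assumed CSV trajectory export with "x" and "y" columns, meters):
# count trajectory samples within 2 m of each planned fusion point.
import csv
import math

fusion_points = {"FP_01": (20.0, 15.0), "FP_02": (100.0, 15.0)}   # hypothetical names/coords

def samples_near(traj_csv, center, radius=2.0):
    count = 0
    with open(traj_csv, newline="") as f:
        for row in csv.DictReader(f):
            if math.dist((float(row["x"]), float(row["y"])), center) <= radius:
                count += 1
    return count

for name, xy in fusion_points.items():
    n = samples_near("trajectory.csv", xy)
    flag = "  <-- thin coverage, revisit" if n < 50 else ""   # 50 is an arbitrary threshold
    print(f"{name}: {n} samples within 2 m{flag}")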

4. Continue through all fusion points and complete the full scan route

Visit all planned fusion point locations during the scan. Complete the full site coverage and close the overall scan loop before stopping. Stop the scan in LixelGO and wait for the solid green LED before powering off.

Drone Flight and Aerial Data Collection

The drone flight has two distinct phases that must both be completed: the main grid mission over the site, and the takeoff/landing image sequences at each fusion point location. The grid mission provides the broad aerial coverage. The takeoff/landing sequences provide the visual bridge between the aerial and ground perspectives. Without both, the fusion will fail or produce degraded results.

Main Grid Mission

1. Set the drone coordinate system to WGS84

On the DJI remote controller, confirm the coordinate system is WGS84 before setting up the flight mission. This must match the ground scanner's coordinate system setting.

2. Plan a smart grid flight pattern over the survey area

Use the DJI controller's smart grid function. Frame the survey area on the controller map to define the flight boundary. A tic-tac-toe pattern (two perpendicular sets of parallel flight lines) provides the most complete coverage for Aerial-Ground Fusion.

3. Set capture mode, GSD, gimbal angle, altitude, and overlap

Use oblique capture mode with the gimbal angle set to 45 degrees. Set GSD (ground sampling distance) to 1 centimeter or below. Set flight height above the tallest building in the survey area to ensure clearance. In advanced settings, set both side overlap and forward overlap to 85 percent.

For large areas with complex structures, plan 1 to 2 additional altitude passes above the main altitude. The height difference between adjacent passes must not exceed twice the lower altitude.
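The altitude that hits the 1-centimeter GSD target depends on the camera. The worked sketch below plugs in nominal full-frame Zenmuse P1 figures (35 mm lens, roughly 4.4 µm pixel pitch, 35.9 mm sensor width) purely as an illustration; substitute your payload's actual specifications and let the DJI controller compute the real mission. It also checks the rule that the height difference between adjacent altitude passes must not exceed twice the lower altitude.

# Worked example of the grid flight numbers using assumed Zenmuse P1 values.
# The DJI controller does this for you; this is only a planning cross-check.
pixel_pitch = 4.4e-6     # meters per pixel on the sensor (assumed)
focal_length = 0.035     # meters (35 mm lens, assumed)
sensor_width = 0.0359    # meters (assumed)
target_gsd = 0.01        # meters per pixel (1 cm, per this guide)
side_overlap = 0.85

# Nadir approximation: GSD = pixel_pitch * altitude / focal_length
altitude = target_gsd * focal_length / pixel_pitch            # roughly 80 m
footprint = sensor_width * altitude / focal_length            # ground width of one image
line_spacing = footprint * (1 - side_overlap)                 # spacing between flight lines

print(f"Altitude for 1 cm GSD: ~{altitude:.0f} m above ground")
print(f"Flight line spacing at 85% side overlap: ~{line_spacing:.0f} m")

# Additional altitude passes: difference must not exceed twice the lower altitude.
passes = [altitude, 140.0]                                    # example second pass
for lower, upper in zip(passes, passes[1:]):
    assert upper - lower <= 2 * lower, "altitude pass spacing violates the 2x rule"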

4. Execute the grid mission and monitor continuously

Tap Start. The drone will fly the planned route automatically. Monitor the drone throughout the flight. If conditions change (wind, unexpected obstacles, airspace alerts), pause the mission and resume when it is safe to continue. Do not leave the drone unmonitored during the mission.

Takeoff and Landing Image Sequences

After the main grid mission, the drone must visit each fusion point location and capture a continuous image sequence from ground level to flight height. This is the most critical and most frequently skipped step in Aerial-Ground Fusion. A grid flight without these sequences will not fuse reliably with the ground scan.

Plan at least 3 to 4 takeoff/landing sequences distributed throughout the scanning path. For larger sites, plan at least one sequence every 50 to 100 meters of the ground scan route. Each sequence must provide continuous coverage from approximately 1.5 meters above the ground up toward the grid flight altitude, with adjacent image overlap of 85 percent or greater.

1. Fly to the first fusion point and descend to approximately 1.5 meters above ground

After the grid mission is complete, navigate the drone to the first fusion point. Descend to approximately 1.5 meters above the ground and position the drone so the camera is pointing at fixed, stable features that were also visible to the ground scanner at that location.

2. Capture 30 to 40 photos while ascending steadily from ground level to flight height

Begin capturing images at approximately 1-second intervals while slowly and steadily ascending from the 1.5-meter starting position up to the full flight altitude used during the grid mission. The sequence must be continuous with no large position jumps. The lowest images must overlap spatially with the ground scanner's trajectory. The highest images must overlap with the grid flight imagery.

Shoot from at least 3 different horizontal directions at each fusion point to ensure sufficient angular coverage for the fusion algorithm.
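The 30-to-40-photo figure follows from the overlap requirement. A back-of-envelope check, reusing the assumed Zenmuse P1 numbers from the grid example and assuming the camera is looking at features roughly 20 meters away:

# Back-of-envelope check of the takeoff/landing sequence (all values assumed).
sensor_height = 0.024     # meters, full-frame short side (assumed)
focal_length = 0.035      # meters (assumed)
feature_distance = 20.0   # meters to the facade/features in view (assumed)
overlap = 0.85
start_height, top_height = 1.5, 80.0    # meters; 80 m ~ the 1 cm GSD grid altitude

frame_height = feature_distance * sensor_height / focal_length   # vertical scene coverage
climb_per_photo = frame_height * (1 - overlap)
photos = (top_height - start_height) / climb_per_photo + 1

print(f"One frame covers ~{frame_height:.1f} m vertically at {feature_distance:.0f} m range")
print(f"Climb about {climb_per_photo:.1f} m between photos; ~{photos:.0f} photos total")
# Roughly 39 photos; at 1-second intervals that implies a slow, steady climb of ~2 m/s.

If this arithmetic comes out well below 30 photos for your site, the climb is too fast for the 85 percent overlap requirement.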

3. Keep adjacent image overlap at or above 85 percent throughout the sequence

The value of the takeoff/landing sequence is the visual continuity between ground level and flight altitude. If there are large gaps between images (the drone moved too fast, or too few images were captured), the transition between the ground and aerial perspectives will be broken and the fusion alignment will fail at that point.

4. Repeat at every fusion point before landing

Visit each planned fusion point location in sequence and capture the full takeoff/landing image sequence at each one. Keep the images from each fusion point in separate folders. Do not mix images from different fusion point locations into a single folder.

Folder Organization Before Processing

How you organize the drone images after the flight determines how you will import them in LCC Studio or LixelStudio. Organize the folders before you sit down to process. Doing it during an active processing session introduces errors.

PortalCam Method (LCC Studio)

LCC Studio expects the drone data organized into a main aerial folder and separate subfolders for each fusion point's takeoff/landing sequence. The system will automatically identify the number of fusion points from the PortalCam scan segment and prompt you to upload the corresponding image folder for each one.

ProjectName/ # parent folder
├─ PortalCam scan segment folder
└─ Drone/ # all drone image data
   ├─ DJI_2025XXXXX_001/ # main grid flight images
   ├─ DJI_2025XXXXX_002/ # grid flight continued (if multiple batteries)
   ├─ DJI_Link_1/ # takeoff/landing images: fusion point 1
   ├─ DJI_Link_2/ # takeoff/landing images: fusion point 2
   └─ DJI_Link_3/ # takeoff/landing images: fusion point 3

When importing in LCC Studio, first import all grid flight images. Then import the takeoff/landing folders in sequence, one per fusion point. LCC Studio will match each takeoff/landing folder to the corresponding fusion point marked in the PortalCam scan data by name.

For PortalCam projects, fusion points must have been marked in LCC Scan during collection. If you completed the ground scan without marking Aerial-Ground Map Fusion control points, uploading aerial images directly without the corresponding fusion point data will reduce fusion quality or cause reconstruction failure. There is no way to add fusion point markers after the scan is complete.
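Because the folder-to-fusion-point match is by exact name, a quick pre-import check can save a failed upload. The sketch below is a helper of our own, not an LCC Studio feature: it assumes the Drone/ layout shown above, verifies that every image is a JPG/JPEG of at least 1024×768, counts the total against the 100-to-10,000 range, and prints the folder names so you can compare the takeoff/landing folders character-for-character with the control point names recorded in LCC Scan.

# Pre-import sanity check for the LCC Studio layout (hypothetical helper).
from pathlib import Path
from PIL import Image

drone_dir = Path("ProjectName/Drone")     # adjust to your project
MIN_W, MIN_H = 1024, 768

total, problems = 0, []
for img in drone_dir.rglob("*"):
    if not img.is_file():
        continue
    if img.suffix.lower() not in (".jpg", ".jpeg"):
        problems.append(f"{img.name}: not JPG/JPEG")
        continue
    w, h = Image.open(img).size            # lazy open; does not load full image data
    if w < MIN_W or h < MIN_H:
        problems.append(f"{img.name}: {w}x{h} below 1024x768")
    total += 1

print(f"{total} images (LCC Studio accepts 100 to 10,000)")
for problem in problems:
    print("PROBLEM:", problem)

# Top-level folders under Drone/ - the takeoff/landing folder names must match
# the Aerial-Ground Map Fusion control point names exactly, including case.
print("Folders:", sorted(p.name for p in drone_dir.iterdir() if p.is_dir()))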

K1 and L2 Pro Method (LixelStudio)

LixelStudio expects a parent folder with two subfolders: Drone for the aerial imagery, and Lixel for the ground scan project folder. Before importing into LixelStudio, the XGRIDS Pose tool (extract_preview_poses) must be run on the drone data to prepare the camera pose files that LixelStudio requires.

ProjectName/ # parent folder
├─ Drone/ # all drone image data goes here
│   └─ (all drone images, organized by flight folder)
└─ Lixel/ # K1 or L2 Pro project folder goes here
    └─ (project folder from the device)

1. Place the K1 or L2 Pro project folder in the Lixel subfolder

Copy the project folder from the device into the Lixel/ subfolder of your project directory. Do not alter the internal structure of the project folder.

2. Place all drone images in the Drone subfolder

Copy all images from the drone's SD card into the Drone/ subfolder, preserving the DJI-generated folder structure. This includes the grid flight folders and the takeoff/landing sequence folders.
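If you stage the data with a script instead of by hand, the layout is simple to build. A minimal sketch, where the SD card mount point and the device project folder path are placeholders you would replace:

# Minimal staging sketch for the LixelStudio layout (paths are placeholders).
import shutil
from pathlib import Path

project = Path("ProjectName")
sd_card = Path("/media/DJI_SD")                        # drone SD card mount (assumed)
device_project = Path("/media/L2PRO/site_scan_0412")   # K1/L2 Pro project folder (assumed)

# Drone/ keeps the DJI flight folders exactly as they sit on the card.
(project / "Drone").mkdir(parents=True, exist_ok=True)
for flight in sd_card.iterdir():
    if flight.is_dir():
        shutil.copytree(flight, project / "Drone" / flight.name, dirs_exist_ok=True)

# Lixel/ receives the untouched project folder from the device.
(project / "Lixel").mkdir(exist_ok=True)
shutil.copytree(device_project, project / "Lixel" / device_project.name, dirs_exist_ok=True)

print("Staged:", sorted(p.name for p in project.iterdir()))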

3. Run the XGRIDS Pose tool on the drone data

The XGRIDS Pose tool (extract_preview_poses application, provided by XGRIDS) processes the drone images and generates the camera pose data that LixelStudio needs to align the aerial imagery with the LiDAR point cloud. Run this tool before opening LixelStudio. You can use the included visualization tool to verify that all camera poses have been correctly extracted before proceeding.

4. Import the parent folder into LixelStudio and configure processing

Open LixelStudio and navigate to the Aerial-Ground Map Fusion project type. Point it to the parent folder containing both the Drone/ and Lixel/ subfolders. LixelStudio will read the project structure and configure the merge parameters. Verify the coordinate system and run processing.

Processing: LCC Studio vs LixelStudio in Detail

Both pipelines take the same field data and produce different outputs. The decision between them is a deliverable decision, not a quality decision. A Gaussian Splat from LCC Studio is not better or worse than a point cloud from LixelStudio. They serve different purposes.

Output format
  LCC Studio: 3D Gaussian Splat (3DGS). Photorealistic rendered model.
  LixelStudio: Merged point cloud. E57, LAS/LAZ, RCP, or PLY.

Compatible ground devices
  LCC Studio: PortalCam, K1, L2 Pro.
  LixelStudio: K1, L2 Pro only.

Aerial image count
  LCC Studio: 100 to 10,000 images. JPG/JPEG only. Min 1024×768.
  LixelStudio: No fixed limit documented; determined by Pose tool output.

Pre-processing requirement
  LCC Studio: No external tool required. Folder organization only.
  LixelStudio: XGRIDS Pose tool must be run on drone data before import.

PortalCam panoramic video
  LCC Studio: Supported for L2 devices with panoramic cameras. Grayed out for K1 and L2 Pro.
  LixelStudio: Not applicable. LixelStudio handles LiDAR point clouds, not Gaussian Splat video input.

Suitable deliverables
  LCC Studio: Walkthroughs, client presentations, web publishing, virtual tours, marketing.
  LixelStudio: BIM coordination, survey deliverables, measurement, CAD integration, dimension-verified documentation.

Hardware requirements
  LCC Studio: 16-core CPU, 96 to 128 GB RAM, RTX 4090 recommended for full-site aerial-ground projects.
  LixelStudio: Standard LixelStudio hardware requirements. Less GPU-intensive than LCC Studio.

Processing time
  LCC Studio: 24 to 48 hours or more for large sites.
  LixelStudio: Variable. Typically faster than LCC Studio for equivalent coverage.

You can run both pipelines on the same field data. If you collect the ground scan with a K1 or L2 Pro and organize the drone data with both the LCC Studio folder structure and the LixelStudio Drone/Lixel structure, you can process in both applications and deliver either a Gaussian Splat or a point cloud (or both) from a single field session. This requires slightly more organized file management but no additional field time.
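One way to keep both layouts without duplicating tens of gigabytes of imagery is to hold a single canonical copy of the drone flight folders and link it into each pipeline's expected structure. The folder names below are hypothetical, and on Windows creating symlinks may require Developer Mode or administrator rights, in which case copying is the fallback.

# Sketch: expose one canonical drone image set to both pipeline layouts
# via symbolic links (folder names are hypothetical).
from pathlib import Path

canonical = Path("FieldData/DroneImages")   # the single real copy of the flight folders
for target in (Path("LCC_Project/Drone"), Path("LixelProject/Drone")):
    target.parent.mkdir(parents=True, exist_ok=True)
    if not target.exists():
        target.symlink_to(canonical.resolve(), target_is_directory=True)
        print(f"Linked {target} -> {canonical}")

Either way, the ground scan itself is collected once; only the drone image bookkeeping differs between the two pipelines.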
