Synthesize a novel frame of LiDAR point clouds from an arbitrary LiDAR sensor pose, given source point clouds and their sensor poses. For dynamic scenes, the task also covers synthesis across both space and time (novel spatio-temporal views).
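The geometric core of the task can be sketched in a few lines: transform the source points into a target sensor frame and rasterize them into a range image with a spherical projection. The beam count, resolution, and vertical field of view below are illustrative assumptions (HDL-64-like), not parameters of any particular method.

```python
import numpy as np

def render_range_image(points_world, T_world_from_sensor,
                       n_beams=64, n_cols=1024,
                       fov_up_deg=2.0, fov_down_deg=-24.8):
    """Project a world-frame point cloud (N, 3) into a range image seen
    from the given 4x4 target sensor pose. Beam count and vertical
    field of view are illustrative, not tied to any one method."""
    # World -> sensor: invert the pose and apply it to the points.
    T = np.linalg.inv(T_world_from_sensor)
    pts = points_world @ T[:3, :3].T + T[:3, 3]

    r = np.linalg.norm(pts, axis=1)
    keep = r > 1e-6
    pts, r = pts[keep], r[keep]

    # Spherical projection: azimuth -> column, elevation -> row.
    azimuth = np.arctan2(pts[:, 1], pts[:, 0])          # in [-pi, pi]
    elevation = np.arcsin(pts[:, 2] / r)
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)

    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    row = ((fov_up - elevation) / (fov_up - fov_down) * n_beams).astype(int)
    in_fov = (row >= 0) & (row < n_beams)
    row, col, r = row[in_fov], col[in_fov], r[in_fov]

    # Z-buffer: draw far points first so nearer returns overwrite them.
    image = np.zeros((n_beams, n_cols))                 # 0 = no return
    order = np.argsort(-r)
    image[row[order], col[order]] = r[order]
    return image
```

Learning-based methods refine this purely geometric re-projection, which by itself leaves holes and ignores occlusion disambiguation, ray-drop, and intensity.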
These leaderboards are used to track progress in Novel LiDAR View Synthesis.
No benchmarks available.
Use these libraries to find Novel LiDAR View Synthesis models and implementations.
No datasets available.
No subtasks available.
Extensive experiments on the scene-level KITTI-360 dataset and the object-level NeRF-MVL dataset show that LiDAR-NeRF significantly surpasses model-based algorithms; a structural regularization method is also introduced to preserve local structural details.
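The summary above does not spell out the regularizer's form. One plausible instance, matching local range-image gradients between prediction and ground truth so that edges and local geometry are preserved, might look like the following sketch (an assumption for illustration, not LiDAR-NeRF's published loss):

```python
import torch

def structural_reg(pred_range, gt_range):
    """Illustrative structural regularizer on range images (H, W):
    penalize differences in local horizontal/vertical gradients.
    Assumed form, not necessarily LiDAR-NeRF's exact loss."""
    def grads(img):
        dx = img[:, 1:] - img[:, :-1]   # horizontal neighbor differences
        dy = img[1:, :] - img[:-1, :]   # vertical neighbor differences
        return dx, dy

    pdx, pdy = grads(pred_range)
    gdx, gdy = grads(gt_range)
    return (pdx - gdx).abs().mean() + (pdy - gdy).abs().mean()
```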
LiDAR4D, a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis, employs a 4D hybrid representation that combines multi-planar and grid features for effective coarse-to-fine reconstruction, and introduces geometric constraints derived from point clouds to improve temporal consistency.
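A multi-planar 4D representation of this general kind can be sketched as six learned 2D feature planes over coordinate pairs, queried by bilinear interpolation. The plane resolution, feature width, and concatenation fusion below are assumptions for illustration (in the style of K-Planes/HexPlane), not LiDAR4D's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPlaneField4D(nn.Module):
    """Minimal multi-planar 4D feature field: six learned 2D feature
    planes over the pairs (xy, xz, yz, xt, yt, zt), queried by bilinear
    interpolation and fused by concatenation. Resolution and fusion
    choices are illustrative, not LiDAR4D's exact design."""
    PAIRS = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]

    def __init__(self, feat_dim=16, res=128):
        super().__init__()
        self.planes = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, feat_dim, res, res))
             for _ in self.PAIRS])

    def forward(self, xyzt):
        """xyzt: (N, 4) coordinates normalized to [-1, 1]."""
        feats = []
        for plane, (i, j) in zip(self.planes, self.PAIRS):
            # grid_sample expects a (1, N, 1, 2) sampling grid.
            grid = xyzt[:, [i, j]].view(1, -1, 1, 2)
            f = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1)
            feats.append(f.squeeze(0).squeeze(-1).T)            # (N, C)
        return torch.cat(feats, dim=-1)                         # (N, 6*C)
```

The concatenated feature would then feed a small MLP that predicts range (and, in LiDAR-specific methods, intensity and ray-drop probability).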
This work proposes parent-child neural radiance field (PC-NeRF), a 3D scene reconstruction and novel view synthesis framework that implements hierarchical spatial partitioning and multi-level scene representation at the scene, segment, and point levels.
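As a toy illustration of hierarchical spatial partitioning, the sketch below splits a scene-level bounding box (the "parent") into segment-level child boxes on a regular grid and assigns points to segments. The regular-grid split and all names here are assumptions for illustration, not PC-NeRF's actual partitioning scheme.

```python
import numpy as np

def assign_to_segments(points, scene_min, scene_max, grid=(4, 4, 1)):
    """Split the scene-level AABB into a regular grid of segment-level
    child AABBs and return a segment id per point (N,). The regular-grid
    split is an illustrative assumption, not PC-NeRF's method."""
    scene_min = np.asarray(scene_min, dtype=float)
    scene_max = np.asarray(scene_max, dtype=float)
    cell = (scene_max - scene_min) / np.asarray(grid, dtype=float)

    idx = np.floor((points - scene_min) / cell).astype(int)
    idx = np.clip(idx, 0, np.asarray(grid) - 1)   # clamp boundary points
    # Flatten (ix, iy, iz) into a single segment id.
    return idx[:, 0] * grid[1] * grid[2] + idx[:, 1] * grid[2] + idx[:, 2]
```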
Adding a benchmark result helps the community track progress.