3260 papers • 126 benchmarks • 313 datasets
Indoor localization, the task of estimating a device's position inside a building where GPS is unreliable, is a fundamental problem in location-based applications.
This work proposes deep neural networks to significantly reduce the manual effort of localization system design while still achieving satisfactory results, and shows that stacked autoencoders can efficiently reduce the feature space to achieve robust and precise classification.
The proposed scalable DNN architecture for multi-building and multi-floor indoor localization based on Wi-Fi fingerprinting achieves near state-of-the-art performance with a single DNN, enabling implementations with lower complexity and energy consumption on mobile devices.
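One way a single network can cover both decisions is a shared trunk with a concatenated output layer, one slice per classification head. This is an assumed toy architecture, not the paper's: the layer sizes, head counts (3 buildings, 5 floors), and random untrained weights are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N_AP, HIDDEN, N_BLDG, N_FLOOR = 520, 128, 3, 5

W1 = rng.normal(0, 0.05, (N_AP, HIDDEN))            # shared trunk weights
W2 = rng.normal(0, 0.05, (HIDDEN, N_BLDG + N_FLOOR))  # joint output layer

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x):
    """One forward pass: a single network emits building and floor jointly."""
    h = np.maximum(0, x @ W1)                # shared ReLU trunk
    logits = h @ W2
    p_bldg = softmax(logits[:, :N_BLDG])     # building head
    p_floor = softmax(logits[:, N_BLDG:])    # floor head
    return p_bldg.argmax(1), p_floor.argmax(1)

x = rng.normal(0, 1, (4, N_AP))              # four mock fingerprints
b, f = predict(x)
```

Because the trunk is shared, only one model must be stored and evaluated on the device, which is where the complexity and energy savings come from.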
We seek to predict the 6 degree-of-freedom (6DoF) pose of a query photograph with respect to a large indoor 3D map. The contributions of this work are three-fold. First, we develop a new large-scale visual localization method targeted for indoor spaces. The method proceeds along three steps: (i) efficient retrieval of candidate poses that scales to large-scale environments, (ii) pose estimation using dense matching rather than sparse local features to deal with weakly textured indoor scenes, and (iii) pose verification by virtual view synthesis that is robust to significant changes in viewpoint, scene layout, and occlusion. Second, we release a new dataset with reference 6DoF poses for large-scale indoor localization. Query photographs are captured by mobile phones at a different time than the reference 3D map, thus presenting a realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new challenging data. Code and data are publicly available.
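A sketch of step (i) only, under simplifying assumptions: candidate poses are retrieved by nearest-neighbour search over global image descriptors, with each database image carrying a known 6DoF pose. The descriptor dimension, database size, and cosine-similarity scoring are illustrative; dense matching and view-synthesis verification (steps ii and iii) would refine these candidates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mock database: one global descriptor and one 6DoF pose per reference image.
db_desc = rng.normal(size=(1000, 256))
db_desc /= np.linalg.norm(db_desc, axis=1, keepdims=True)
db_poses = rng.normal(size=(1000, 6))

def retrieve_candidates(query, k=10):
    """Return the k database poses whose descriptors best match the query."""
    q = query / np.linalg.norm(query)
    sims = db_desc @ q                    # cosine similarity to every image
    top = np.argsort(-sims)[:k]           # best-first candidate indices
    return db_poses[top], sims[top]

poses, scores = retrieve_candidates(rng.normal(size=256))
```

Exhaustive dot-product scoring scales linearly with database size; large-scale deployments would typically swap in an approximate nearest-neighbour index.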
A localization method based on image retrieval that efficiently achieves high location accuracy as well as orientation estimation, and uses lightweight data to represent the scene.
A memory- and computationally efficient monocular camera-based localization system is proposed that allows a robot to estimate its pose given an architectural floor plan: edges extracted from camera images are matched to the floor plan by a particle filter.
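A toy particle-filter sketch of the matching idea, under loud simplifications: a binary occupancy grid stands in for the floor plan, and the "measurement" is the distance to the nearest wall along the heading, a crude stand-in for matching image edges. All sizes, the noise scale, and the single-measurement model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# 50x50 floor plan: walls (1) on the border, free space (0) inside.
plan = np.zeros((50, 50))
plan[0, :] = plan[-1, :] = plan[:, 0] = plan[:, -1] = 1

def range_to_wall(x, y, theta, max_r=60):
    """Cast a ray from (x, y) along theta; return distance to the first wall."""
    for r in range(1, max_r):
        px = int(round(x + r * np.cos(theta)))
        py = int(round(y + r * np.sin(theta)))
        if px < 0 or py < 0 or px >= 50 or py >= 50 or plan[py, px]:
            return r
    return max_r

true_pose = (25.0, 25.0, 0.0)
z = range_to_wall(*true_pose)                 # "observed" range measurement

# Uniformly scattered particles (x, y, theta=0), weighted by measurement fit.
P = np.column_stack([rng.uniform(2, 48, 200),
                     rng.uniform(2, 48, 200),
                     np.zeros(200)])
w = np.array([np.exp(-0.5 * ((range_to_wall(*p) - z) / 2.0) ** 2) for p in P])
w /= w.sum()
P = P[rng.choice(len(P), size=len(P), p=w)]   # resample by weight
```

After resampling, the surviving particles cluster where the simulated range agrees with the observation; a real system iterates predict/weight/resample as the robot moves.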
A fully convolutional network (FCN), termed WiSPPN, is proposed to estimate single-person pose from the collected data and annotations, answering the natural question: can WiFi devices work like cameras for vision applications?
This work proposes an approach where a single convolutional neural network plays a dual role: it is simultaneously a dense feature descriptor and a feature detector. The model can be trained using pixel correspondences extracted from readily available large-scale SfM reconstructions, without any further annotations.
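The dual role can be sketched with NumPy on a mock feature map, under assumed shapes (32x32 spatial, 64 channels): descriptors are the per-pixel channel vectors, and detections are pixels whose response is a local maximum over a 3x3 neighbourhood. This mirrors the detect-and-describe idea only in spirit; the real method's scoring and training are more involved.

```python
import numpy as np

rng = np.random.default_rng(4)
F = rng.random((32, 32, 64))                 # mock dense CNN feature map

score = F.max(axis=2)                        # per-pixel detection response
# 3x3 local-maximum test: compare each pixel against its padded neighbourhood.
pad = np.pad(score, 1, constant_values=-np.inf)
neigh = np.stack([pad[i:i + 32, j:j + 32] for i in range(3) for j in range(3)])
keypoints = np.argwhere(score == neigh.max(axis=0))   # detector output

descs = F[keypoints[:, 0], keypoints[:, 1]]  # descriptor = channel vector
descs /= np.linalg.norm(descs, axis=1, keepdims=True)
```

The same tensor F thus yields both outputs: where to look (keypoints) and what each location looks like (unit-normalized descriptors).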
This paper proposes a novel deep learning framework for the joint activity recognition and indoor localization task using WiFi channel state information (CSI) fingerprints, and develops a system running the standard IEEE 802.11n WiFi protocol that collects more than 1,400 CSI fingerprints.