1
Semantic Cameras for 360-Degree Environment Perception in Automated Urban Driving
2
Unifying Panoptic Segmentation for Autonomous Driving
3
MMPTRACK: Large-scale Densely Annotated Multi-camera Multiple People Tracking Benchmark
4
KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding in 2D and 3D
5
DeepLab2: A TensorFlow Library for Deep Labeling
6
VSPW: A Large-scale Dataset for Video Scene Parsing in the Wild
7
Capturing Omni-Range Context for Omnidirectional Segmentation
8
Offboard 3D Object Detection from Point Cloud Sequences
9
Panoramic Panoptic Segmentation: Towards Complete Surrounding Understanding via Unsupervised Contrastive Learning
10
STEP: Segmenting and Tracking Every Pixel
11
Auto4D: Learning to Label 4D Objects from Sequential Point Clouds
12
GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving
13
ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation
14
MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking
15
PASS: Panoramic Annular Semantic Segmentation
16
HOTA: A Higher Order Metric for Evaluating Multi-object Tracking
17
Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D
18
World-Consistent Video-to-Video Synthesis
19
Video Panoptic Segmentation
20
A2D2: Audi Autonomous Driving Dataset
21
Pixel Consensus Voting for Panoptic Segmentation
22
Predicting Semantic Map Representations From Images Using Pyramid Occupancy Networks
23
Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation
24
Scalability in Perception for Autonomous Driving: Waymo Open Dataset
25
PolyTransform: Deep Polygon Transformer for Instance Segmentation
26
Autolabeling 3D Objects With Differentiable Rendering of SDF Shape Priors
27
Single-Shot Panoptic Segmentation
28
Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation
29
SSAP: Single-Shot Instance Segmentation With Affinity Pyramid
30
Orientation-Aware Semantic Segmentation on Icosahedron Spheres
31
Argoverse: 3D Tracking and Forecasting With Rich Maps
32
Video Instance Segmentation
33
WoodScape: A Multi-Task, Multi-Camera Fisheye Dataset for Autonomous Driving
34
Seamless Scene Segmentation
35
SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences
36
nuScenes: A Multimodal Dataset for Autonomous Driving
37
CityFlow: A City-Scale Benchmark for Multi-Target Multi-Camera Vehicle Tracking and Re-Identification
38
An End-To-End Network for Panoptic Segmentation
39
DeeperLab: Single-Shot Image Parser
40
MOTS: Multi-Object Tracking and Segmentation
41
UPSNet: A Unified Panoptic Segmentation Network
42
Panoptic Feature Pyramid Networks
43
Attention-Guided Unified Network for Panoptic Segmentation
44
Distortion-Aware Convolutional Filters for Dense Prediction in Panoramic Images
45
Understanding 3D Semantic Structure around the Vehicle with Monocular Cameras
46
BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning
47
Features for Multi-target Multi-camera Tracking and Re-identification
48
The ApolloScape Open Dataset for Autonomous Driving and Its Application
49
WILDTRACK: A Multi-camera HD Dataset for Dense Unscripted Pedestrian Detection
51
Im2Pano3D: Extrapolating 360° Structure and Semantics Beyond the Field of View
52
The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes
53
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
54
Deep Occlusion Reasoning for Multi-camera Multi-target Detection
55
Making 360° Video Watchable in 2D: Learning Videography for Click Free Viewing
56
Pixelwise View Selection for Unstructured Multi-View Stereo
57
Performance Measures and a Data Set for Multi-target, Multi-camera Tracking
58
Multi-view People Tracking via Hierarchical Trajectory Composition
59
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
60
The Cityscapes Dataset for Semantic Urban Scene Understanding
61
Deep Residual Learning for Image Recognition
62
GMMCP tracker: Globally optimal Generalized Maximum Multi Clique problem for multiple object tracking
63
Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs
64
Adam: A Method for Stochastic Optimization
65
Fully convolutional networks for semantic segmentation
66
ImageNet Large Scale Visual Recognition Challenge
67
Simultaneous Detection and Segmentation
68
Microsoft COCO: Common Objects in Context
69
Hypergraphs for Joint Multi-view Reconstruction and Multi-object Tracking
70
Online Object Tracking: A Benchmark
71
GMCP-Tracker: Global Multi-object Tracking Using Generalized Minimum Clique Graphs
72
Streaming Hierarchical Video Segmentation
73
Are we ready for autonomous driving? The KITTI vision benchmark suite
74
Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation
75
Inter-camera Association of Multi-target Tracks by On-Line Learned Appearance Affinity Models
76
What, Where and How Many? Combining Object Detectors and CRFs
77
PETS2009: Dataset and challenge
78
Semantic object classes in video: A high-definition ground truth database
79
Homography based multiple camera detection and tracking of people in a dense crowd
80
Multicamera People Tracking with a Probabilistic Occupancy Map
81
The Graph SLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures
82
Efficient Graph-Based Image Segmentation
83
Multiscale conditional random fields for image labeling
84
Image Parsing: Unifying Segmentation, Detection, and Recognition
85
Normalized cuts and image segmentation
86
Variational Amodal Object Completion
87
Twelfth IEEE international workshop on performance evaluation of tracking and surveillance
88
Ieee Transactions on Pattern Analysis and Machine Intelligence 1 Multiple Object Tracking Using K-shortest Paths Optimization
89
Edinburgh Research Explorer The PASCAL Visual Object Classes (VOC) Challenge