3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Abnormal Event Detection in Video.
Use these libraries to find Abnormal Event Detection in Video models and implementations.
This work introduces a novel anomaly detection model based on a conditional generative adversarial network that jointly learns generation of the high-dimensional image space and inference of the latent space, and demonstrates the model's efficacy and superiority over previous state-of-the-art approaches.
The experimental results show that the MIL method significantly improves anomaly detection performance compared to state-of-the-art approaches; results for several recent deep learning baselines on anomalous activity recognition are also provided.
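The core of such multiple-instance-learning (MIL) approaches is a ranking objective: the highest-scoring segment of an anomalous (positive) video bag should score higher than the highest-scoring segment of a normal (negative) bag. A minimal sketch of that hinge ranking loss, assuming per-segment anomaly scores are already computed by some model (the function name and margin value are illustrative, not from any specific implementation):

```python
import numpy as np

def mil_ranking_loss(pos_bag_scores, neg_bag_scores, margin=1.0):
    # Hinge ranking loss between the top-scoring segment of an anomalous
    # (positive) bag and the top-scoring segment of a normal (negative) bag.
    return max(0.0, margin - float(np.max(pos_bag_scores))
                           + float(np.max(neg_bag_scores)))
```

The max over each bag is what makes this weakly supervised: only video-level labels are needed, and the loss pushes apart whichever segments the model currently considers most anomalous in each bag.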
This work presents Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection, and introduces an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which serves as a theoretical interpretation of the method.
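In Deep SAD's objective, unlabeled and labeled-normal points are pulled toward a fixed center in latent space while labeled anomalies are pushed away via an inverse-distance term. A minimal numpy sketch of that loss (variable names and the `eta`/`eps` defaults are illustrative assumptions, not the reference implementation):

```python
import numpy as np

def deep_sad_loss(z, y, c, eta=1.0, eps=1e-6):
    # z: latent codes, shape (n, d); c: hypersphere center, shape (d,)
    # y: 0 for unlabeled, +1 for labeled normal, -1 for labeled anomalies
    d = np.sum((z - c) ** 2, axis=1)
    unlabeled = d[y == 0].sum()                  # pull unlabeled data toward c
    normal = d[y == 1].sum()                     # pull known normals toward c
    anomaly = (1.0 / (d[y == -1] + eps)).sum()   # push known anomalies away
    return (unlabeled + eta * (normal + anomaly)) / len(z)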
This work proposes a spatiotemporal architecture for anomaly detection in videos, including crowded scenes, comprising two main components: one for spatial feature representation and one for learning the temporal evolution of the spatial features.
This work proposes two methods built upon autoencoders for their ability to work with little to no supervision, and builds a fully convolutional feed-forward autoencoder that learns both the local features and the classifiers in an end-to-end learning framework.
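At test time, autoencoder-based detectors of this kind score each frame by reconstruction error: frames the model reconstructs poorly are flagged as abnormal, often reported as a normalized "regularity score". A minimal sketch, assuming a pre-trained reconstruction function is available (the function names and min-max normalization are illustrative):

```python
import numpy as np

def regularity_scores(frames, reconstruct):
    # Per-frame reconstruction error of an (assumed pre-trained) autoencoder.
    errors = np.array([np.sum((f - reconstruct(f)) ** 2) for f in frames])
    # Min-max normalize over the clip; low regularity flags a likely
    # abnormal event.
    e_min, e_max = errors.min(), errors.max()
    return 1.0 - (errors - e_min) / (e_max - e_min + 1e-8)
```

Thresholding the regularity curve (or its local minima) then yields the detected abnormal intervals.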
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting both seen and unseen anomalies; the model's robustness w.r.t. anomaly contamination in the unlabeled data is justified both theoretically and empirically.
Security surveillance is critical to social harmony and people's peaceful life, and has a great impact on strengthening social stability and safeguarding lives. Detecting anomalies in video surveillance in a timely, effective and efficient manner remains challenging. This paper proposes a new approach, called S²-VAE, for anomaly detection in video data. The S²-VAE consists of two proposed neural networks: a Stacked Fully Connected Variational AutoEncoder (SF-VAE) and a Skip Convolutional VAE (SC-VAE). The SF-VAE is a shallow generative network that obtains a Gaussian-mixture-like model to fit the distribution of the actual data. The SC-VAE, the key component of S²-VAE, is a deep generative network that takes advantage of CNNs, VAEs and skip connections. Both SF-VAE and SC-VAE are efficient and effective generative networks, and they achieve better performance for detecting both local and global abnormal events. The proposed S²-VAE is evaluated on four public datasets; the experimental results show that it outperforms state-of-the-art algorithms.
The code is publicly available at https://github.com/tianwangbuaa/.
This article reviews state-of-the-art deep learning methods for video anomaly detection and categorizes them by model type and detection criterion.
This work presents a lightweight feature extractor that processes an image in less than a millisecond on a modern GPU, and proposes a training loss that hinders the student from imitating the teacher feature extractor beyond the normal images, which drastically reduces the computational cost of the student–teacher model.
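In student–teacher schemes like this, the student is trained to regress the teacher's features on normal data only, so at test time the anomaly score is simply the per-location regression error between the two feature maps, which is large exactly where the student has never learned to imitate the teacher. A minimal sketch, assuming both networks output dense feature maps of shape (H, W, C) (names are illustrative):

```python
import numpy as np

def anomaly_map(teacher_feats, student_feats):
    # Per-location squared regression error between teacher and student
    # feature maps of shape (H, W, C); large values mark image regions
    # the student fails to imitate, i.e. likely anomalies.
    return np.sum((teacher_feats - student_feats) ** 2, axis=-1)
```

The resulting (H, W) map can be upsampled to image resolution for anomaly localization, or reduced with a max to obtain a single image-level score.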
It is proved that, under some mild conditions, the proposed PuriGANs are guaranteed to converge to the distribution of desired instances, and the usefulness of PuriGAN in downstream applications is demonstrated by applying it to semi-supervised anomaly detection on contaminated datasets and to PU-learning.