3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in event-data-classification-10
This work predicts learners' learning styles from their learning traces, adopting the Felder-Silverman learning style model (FSLSM), one of the most commonly used models in technology-enhanced learning.
This work shows that a shallow convolutional SNN outperforms spatio-temporal feature extractors such as C3D, ConvLSTM, and cascaded Conv-LSTM, and presents a new deep spiking architecture to tackle real-world classification and activity recognition tasks.
This work proposes Neuromorphic Data Augmentation (NDA), a family of geometric augmentations designed specifically for event-based datasets, with the goal of stabilizing SNN training and reducing the generalization gap between training and test performance.
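As a rough illustration of what a geometric, sparsity-preserving augmentation for event data can look like, the sketch below applies a random horizontal flip and an integer translation to a stack of event frames. The function name and the specific transforms are illustrative assumptions, not the actual NDA policy.

```python
import numpy as np

def augment_event_frames(frames, rng):
    """Apply simple geometric augmentations to a stack of event frames.

    `frames` has shape (T, H, W): T time bins of accumulated event counts.
    The flip and integer roll used here are stand-ins for the geometric
    family NDA describes; both keep the event data sparse, since they only
    move counts around rather than interpolating new values.
    """
    out = frames.copy()
    if rng.random() < 0.5:                  # random horizontal flip
        out = out[:, :, ::-1]
    dy = rng.integers(-2, 3)                # random integer translation,
    dx = rng.integers(-2, 3)                # implemented as a circular roll
    out = np.roll(out, shift=(dy, dx), axis=(1, 2))
    return out
```

Because both transforms are permutations of pixel positions, the total event count is unchanged, which is easy to verify on a toy input.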
A novel synergistic learning approach is developed that simultaneously trains synaptic weights and spike thresholds in SNNs; the results indicate that biologically plausible synergies between synaptic and intrinsic non-synaptic mechanisms may provide a promising approach for developing highly efficient SNN learning methods.
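To make the weight/threshold synergy concrete, here is a minimal leaky integrate-and-fire step in which the firing threshold `theta` is a trainable parameter alongside the weight `w`, with a rectangular surrogate gradient for the non-differentiable spike. This is a simplified sketch under assumed dynamics (hard reset, fixed leak), not the paper's exact neuron model or update rule.

```python
def lif_step(v, x, w, theta, tau=2.0):
    """One step of a leaky integrate-and-fire neuron with a trainable
    threshold `theta` (hypothetical minimal form)."""
    v = v + (x * w - v) / tau          # leaky integration of weighted input
    spike = float(v >= theta)          # fire when membrane crosses threshold
    v = v * (1.0 - spike)              # hard reset after a spike
    return v, spike

def surrogate_grad(v, theta, width=1.0):
    """Rectangular surrogate for d(spike)/d(v - theta): nonzero only in a
    window of `width` around the threshold."""
    return float(abs(v - theta) < width / 2) / width
```

Since the spike depends on `v - theta`, the same surrogate value drives both updates, with opposite signs: raising `w` and lowering `theta` both make firing more likely, which is the sense in which the two mechanisms can be trained synergistically.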
Neuromorphic event cameras can efficiently sense the latent geometric structures and motion cues of a scene by generating asynchronous, sparse event signals. Due to the irregular layout of these signals, leveraging their plentiful spatio-temporal information for recognition tasks remains a significant challenge. Existing methods tend to treat events as dense image-like or point-series representations; however, they either severely destroy the sparsity of the event data or fail to encode robust spatial cues. To fully exploit the inherent sparsity while reconciling the spatio-temporal information, we introduce a compact event representation, the 2D-1T event cloud sequence (2D-1T ECS). We couple this representation with a novel lightweight spatio-temporal learning framework (ECSNet) that accommodates both object classification and action recognition tasks. The core of our framework is a hierarchical spatial relation module; equipped with a specially designed surface-event-based sampling unit and a local event normalization unit to enhance inter-event relation encoding, this module learns robust geometric features from the 2D event clouds. We also propose a motion attention module for efficiently capturing long-term temporal context evolving with the 1T cloud sequence. Empirically, our framework achieves performance on par with or better than the state of the art. Importantly, our approach cooperates well with the sparsity of event data without any sophisticated operations, leading to low computational cost and fast inference.
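The core of a "2D cloud per time step" layout can be sketched as follows: slice an event stream into temporal bins and keep each bin's events as a 2D point set. The function below is a simplified reading of the 2D-1T idea with assumed names and uniform time bins, not ECSNet's actual sampling pipeline.

```python
import numpy as np

def to_event_cloud_sequence(events, num_slices):
    """Split an event stream into a sequence of 2D point clouds.

    `events` is an (N, 3) array of (x, y, t) rows sorted by t. Each temporal
    slice keeps only the spatial coordinates, yielding a list of 2D clouds
    (the "2D" part) ordered along time (the "1T" part).
    """
    t = events[:, 2]
    edges = np.linspace(t[0], t[-1], num_slices + 1)
    # digitize assigns each event to one of num_slices temporal bins
    bins = np.clip(np.digitize(t, edges[1:-1]), 0, num_slices - 1)
    return [events[bins == i, :2] for i in range(num_slices)]
```

Note that no event is discarded or rasterized here: the representation stays exactly as sparse as the input stream, which is the property the abstract emphasizes.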
This work proposes online training through time (OTTT) for SNNs, derived from BPTT to enable forward-in-time learning by tracking presynaptic activities and leveraging instantaneous losses and gradients; it theoretically analyzes and proves that the gradients of OTTT provide a descent direction for optimization similar to that of gradients based on spike representations, under both feedforward and recurrent conditions.
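The forward-in-time idea can be illustrated with a single online update: a decaying presynaptic trace replaces the stored spike history that BPTT would unroll, so each step's weight update needs only the instantaneous error. The decay constant, names, and outer-product form below are illustrative assumptions, not the paper's exact derivation.

```python
import numpy as np

def ottt_step(trace, pre_spikes, post_err, w, lam=0.9, lr=0.01):
    """One online weight update in the spirit of OTTT (simplified sketch).

    `trace` is a running low-pass filter of presynaptic spikes; combining it
    with the instantaneous postsynaptic error gives a gradient estimate
    without backpropagating through earlier time steps.
    """
    trace = lam * trace + pre_spikes       # update presynaptic trace online
    grad = np.outer(post_err, trace)       # instantaneous gradient estimate
    w = w - lr * grad                      # forward-in-time weight update
    return trace, w
```

Memory use is constant in the sequence length, since only `trace` and `w` are carried between steps, in contrast to BPTT's full unrolled history.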
A novel absorbing graph convolutional network (AGCN) is proposed for event stream data representation; through the introduced absorbing nodes, it effectively captures the importance of each node and thus remains fully aware of individual node representations when summarizing them all.