This work proposes a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes, and provides metrics that enable comparison of counterfactual-based methods to other local explanation methods.
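A minimal sketch of the kind of DPP-style diversity term such a framework can use (the RBF kernel, the length scale, and the function name are illustrative assumptions, not the paper's exact formulation): a candidate set of counterfactuals is scored by the determinant of a similarity kernel, which rewards mutually dissimilar candidates.

```python
import numpy as np

def dpp_diversity(counterfactuals, length_scale=1.0):
    """Score a set of counterfactuals by the determinant of an RBF similarity kernel.

    A higher determinant means a more mutually dissimilar (diverse) set.
    The RBF kernel and length_scale are illustrative choices.
    """
    X = np.asarray(counterfactuals, dtype=float)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * length_scale ** 2))
    return np.linalg.det(K)

# Two near-duplicate candidates score lower than two clearly distinct ones.
print(dpp_diversity([[0.0, 0.0], [0.1, 0.0]]))   # close to 0
print(dpp_diversity([[0.0, 0.0], [3.0, 3.0]]))   # close to 1
```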
Determinantal Point Processes for Machine Learning provides a comprehensible introduction to DPPs, focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and shows how they can be applied to real-world applications.
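For readers new to DPPs, a minimal sketch of the core definition used throughout that literature: a finite L-ensemble assigns each subset S a probability proportional to the determinant of the corresponding principal submatrix of a positive semidefinite kernel L. The toy kernel below is an illustrative assumption.

```python
import numpy as np
from itertools import combinations

def l_ensemble_probability(L, subset):
    """P(S) = det(L_S) / det(L + I) for an L-ensemble DPP on a finite ground set."""
    L = np.asarray(L, dtype=float)
    idx = list(subset)
    L_S = L[np.ix_(idx, idx)]
    normalizer = np.linalg.det(L + np.eye(L.shape[0]))
    return (np.linalg.det(L_S) if idx else 1.0) / normalizer

# Toy 3-item kernel: items 0 and 1 are similar, so the pair {0, 1} is downweighted.
L = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
total = sum(l_ensemble_probability(L, s)
            for r in range(4) for s in combinations(range(3), r))
print(round(total, 6))  # probabilities over all subsets sum to 1
```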
This work introduces KronDPP, a DPP model whose kernel matrix decomposes as a Kronecker (tensor) product of multiple smaller kernel matrices, enabling efficient learning and fast exact sampling.
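A sketch of the structural idea, assuming two factors for concreteness: the full kernel is the Kronecker product of small kernels, so its eigen-structure (the quantity exact DPP sampling relies on) factorizes into the eigen-structures of the small factors.

```python
import numpy as np

# Two small PSD factor kernels (illustrative random construction).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); L1 = A @ A.T
B = rng.standard_normal((5, 5)); L2 = B @ B.T

# Full Kronecker-structured kernel over a ground set of size 4 * 5 = 20.
L = np.kron(L1, L2)

# The eigenvalues of L are the pairwise products of the factors' eigenvalues,
# so spectral quantities can be computed from the small factors alone.
w1 = np.linalg.eigvalsh(L1)
w2 = np.linalg.eigvalsh(L2)
eigs_from_factors = np.sort(np.outer(w1, w2).ravel())
eigs_direct = np.sort(np.linalg.eigvalsh(L))
print(np.allclose(eigs_from_factors, eigs_direct))  # True
```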
A new type of dependent thinning for point processes in continuous space is proposed, which leverages the advantages of determinantal point processes defined on finite spaces and, as such, is particularly amenable to statistical, numerical, and simulation techniques.
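A toy sketch of the thinning idea, not the paper's exact construction: realize a small homogeneous Poisson process on the unit square, then retain a subset drawn from a finite DPP whose repulsive kernel is defined on the realized points. Brute-force subset sampling is used here only because the example is tiny; the kernel choice and intensity are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Step 1: a small homogeneous Poisson realization on [0, 1]^2.
n = rng.poisson(lam=6)
pts = rng.random((n, 2))

# Step 2: a finite DPP (L-ensemble) on the realized points with an RBF kernel,
# so nearby points are unlikely to be retained together (dependent thinning).
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
L = np.exp(-d2 / (2 * 0.1 ** 2))

# Step 3: brute-force sampling of the retained subset, P(S) proportional to det(L_S).
subsets = [s for r in range(n + 1) for s in combinations(range(n), r)]
weights = np.array([np.linalg.det(L[np.ix_(s, s)]) if s else 1.0 for s in subsets])
kept = subsets[rng.choice(len(subsets), p=weights / weights.sum())]
print("kept points:", pts[list(kept)])
```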
It is found that Lanczos is generally superior to Chebyshev for kernel learning, and that a surrogate approach can be highly efficient and accurate with popular kernels.
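A bare-bones sketch of the Lanczos-based ingredient behind such results (no reorthogonalization, illustrative probe and step counts): stochastic Lanczos quadrature estimates log det(K) = tr(log K) from matrix-vector products alone, which is what makes it attractive for large kernel matrices.

```python
import numpy as np

def lanczos_logdet(K, num_probes=30, steps=20, seed=0):
    """Estimate log det(K) of a PSD matrix via stochastic Lanczos quadrature."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe, ||z||^2 = n
        q, q_prev, beta = z / np.linalg.norm(z), np.zeros(n), 0.0
        alphas, betas = [], []
        for j in range(steps):
            w = K @ q - beta * q_prev
            alpha = q @ w
            w -= alpha * q
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            if j < steps - 1:
                betas.append(beta)
                if beta < 1e-12:
                    break
                q_prev, q = q, w / beta
        # Gauss quadrature from the small tridiagonal Lanczos matrix T.
        off = betas[:len(alphas) - 1]
        T = np.diag(alphas) + np.diag(off, 1) + np.diag(off, -1)
        theta, U = np.linalg.eigh(T)
        theta = np.maximum(theta, 1e-12)           # guard against round-off
        estimate += n * np.sum((U[0, :] ** 2) * np.log(theta))
    return estimate / num_probes

# Sanity check against the exact log-determinant of a small RBF kernel matrix.
X = np.linspace(0, 1, 200)[:, None]
K = np.exp(-(X - X.T) ** 2 / 0.1) + 1e-3 * np.eye(200)
print(lanczos_logdet(K), np.linalg.slogdet(K)[1])  # the two values should roughly agree
```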
A simple mixture model is proposed that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form and is suitable for novel applications, such as learning sequence embeddings and imputing missing data.
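A minimal sketch of the closed-form convenience such a mixture affords (the log-normal components and parameter values are illustrative, not learned): both sampling and the first moment of a mixture over inter-event times are available without numerical integration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-component log-normal mixture over inter-event times tau.
weights = np.array([0.5, 0.3, 0.2])
means   = np.array([-1.0, 0.0, 1.0])   # means of log(tau) per component
stds    = np.array([0.3, 0.5, 0.4])    # stds of log(tau) per component

# Closed-form sampling: pick a component, then draw from its log-normal.
comp = rng.choice(3, size=10_000, p=weights)
taus = np.exp(rng.normal(means[comp], stds[comp]))

# Closed-form first moment: E[tau] = sum_k w_k * exp(mu_k + sigma_k^2 / 2).
analytic_mean = np.sum(weights * np.exp(means + stds ** 2 / 2))
print(taus.mean(), analytic_mean)   # empirical vs closed-form mean
```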
This work offers a geometric interpretation of behavioural diversity in games and introduces a novel diversity metric based on determinantal point processes (DPPs), which it uses to develop diverse fictitious play and a diverse policy-space response oracle for solving normal-form and open-ended games.
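A sketch of one way such a DPP-based population diversity score can be computed (building the kernel from payoff rows is an assumption for illustration): take the Gram matrix of the policies' payoff vectors as the L-ensemble kernel and use the DPP's expected cardinality as the diversity measure.

```python
import numpy as np

def dpp_population_diversity(payoffs):
    """Diversity of a policy population via DPP expected cardinality.

    `payoffs` has one row per policy (its payoff vector against a fixed set of
    opponents). The L-ensemble kernel is the Gram matrix of those rows; the
    expected size of a DPP draw, tr(L (L + I)^{-1}) = sum_i lam_i / (1 + lam_i),
    grows with the number of effectively distinct behaviours.
    """
    M = np.asarray(payoffs, dtype=float)
    L = M @ M.T
    lams = np.linalg.eigvalsh(L)
    return float(np.sum(lams / (1.0 + lams)))

# Two near-identical policies contribute less diversity than two distinct ones.
print(dpp_population_diversity([[1.0, 0.0], [1.0, 0.01]]))  # about 0.67
print(dpp_population_diversity([[1.0, 0.0], [0.0, 1.0]]))   # 1.0
```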
This paper models the background with a recurrent neural network whose units are aligned with time-series indexes, while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture long-range dynamics.
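A minimal PyTorch sketch of that two-stream layout (layer sizes, fusion by concatenation, and the softplus output head are illustrative assumptions, not the paper's exact architecture): one RNN consumes the synchronously sampled background series, the other consumes asynchronous event features, and their final states are combined into a positive intensity-like score.

```python
import torch
import torch.nn as nn

class TwoStreamPointProcessRNN(nn.Module):
    """Background RNN over a regular time series plus an event RNN over async events."""

    def __init__(self, series_dim, event_dim, hidden=32):
        super().__init__()
        self.background_rnn = nn.RNN(series_dim, hidden, batch_first=True)
        self.event_rnn = nn.RNN(event_dim, hidden, batch_first=True)
        # Fuse both streams and emit a positive intensity-like score.
        self.head = nn.Sequential(nn.Linear(2 * hidden, 1), nn.Softplus())

    def forward(self, series, events):
        _, h_bg = self.background_rnn(series)   # (1, batch, hidden)
        _, h_ev = self.event_rnn(events)        # (1, batch, hidden)
        fused = torch.cat([h_bg[-1], h_ev[-1]], dim=-1)
        return self.head(fused)                 # (batch, 1)

# Toy shapes: 4 sequences, 50 regular time steps, 7 asynchronous events.
model = TwoStreamPointProcessRNN(series_dim=3, event_dim=5)
out = model(torch.randn(4, 50, 3), torch.randn(4, 7, 5))
print(out.shape)  # torch.Size([4, 1])
```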