3260 papers • 126 benchmarks • 313 datasets
A branch of predictive analytics that aims to predict a future state of a running business process.
This paper investigates Long Short-Term Memory (LSTM) neural networks as an approach to building consistently accurate models for a wide range of predictive process monitoring tasks, showing that LSTMs outperform existing techniques at predicting the next event of a running case and its timestamp.
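LSTM-based next-event prediction is trained on (prefix, next-event) pairs extracted from an event log. The sketch below shows only this standard data-preparation step, not the paper's model; the activity names and the `prefix_pairs` helper are hypothetical.

```python
# Illustrative sketch: building (prefix, next-activity) training pairs
# from an event log, the usual input format for LSTM-based
# next-event prediction. Activity names are made up for illustration.

def prefix_pairs(trace):
    """Yield (prefix, next_activity) pairs from one case."""
    for i in range(1, len(trace)):
        yield trace[:i], trace[i]

log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject"],
]

pairs = [p for trace in log for p in prefix_pairs(trace)]
# Each prefix would then be one-hot encoded and fed to an LSTM that
# outputs a distribution over the next activity (and a timestamp estimate).
```

In practice each activity is one-hot or embedding-encoded and prefixes are padded to a fixed length before training the recurrent model.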
A framework for prescriptive process monitoring is proposed, which extends predictive process monitoring approaches with the concepts of alarms, interventions, compensations, and mitigation effects, and incorporates a parameterized cost model to assess the cost-benefit tradeoffs of applying prescriptive process monitoring in a given setting.
This work proposes the Heterogeneous Object Event Graph encoding (HOEG), which integrates events and objects into a graph structure with diverse node types, thus creating a more nuanced and informative representation.
This paper defines a notion of temporal stability for binary classification tasks in predictive process monitoring and evaluates existing methods with respect to both temporal stability and accuracy, finding that methods based on XGBoost and LSTM neural networks exhibit the highest temporal stability.
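The intuition behind temporal stability is that predictions for the same case should not fluctuate wildly as new events arrive. The sketch below illustrates that idea with a simple successive-difference score; it is an illustration of the concept, not the paper's exact definition.

```python
# Illustrative (not the paper's exact formula): score temporal stability
# as 1 minus the mean absolute change between outcome predictions made
# on successive prefixes of the same case. 1.0 = perfectly stable.

def temporal_stability(prefix_scores):
    """prefix_scores: predicted outcome probabilities after each event
    of one running case."""
    if len(prefix_scores) < 2:
        return 1.0
    diffs = [abs(a - b) for a, b in zip(prefix_scores, prefix_scores[1:])]
    return 1.0 - sum(diffs) / len(diffs)

stable = temporal_stability([0.70, 0.72, 0.71, 0.75])  # small revisions
jumpy = temporal_stability([0.20, 0.90, 0.10, 0.80])   # volatile revisions
```

A method can be accurate on average yet unstable, which is why the paper evaluates both dimensions together.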
A framework for prescriptive process monitoring is proposed, which extends predictive monitoring with the ability to generate alarms that trigger interventions to prevent an undesired outcome or mitigate its effect and incorporates a parameterized cost model to assess the cost–benefit trade-off of generating alarms.
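An alarm in this setting is worth raising only when its expected cost is lower than doing nothing. The sketch below captures that trade-off in a toy parameterized cost model; the parameter names and the linear cost structure are assumptions for illustration, not the framework's actual notation.

```python
# Hedged sketch of a parameterized alarm cost model: compare the
# expected cost of a case with and without triggering an intervention.
# Parameter names (c_intervention, c_outcome, mitigation_eff) are
# illustrative, not taken from the paper.

def expected_cost(p_undesired, c_intervention, c_outcome, mitigation_eff, alarm):
    """Expected cost given the predicted probability of an undesired
    outcome and whether an alarm (intervention) is raised."""
    if alarm:
        # Pay the intervention cost; the outcome cost is scaled down by
        # how effectively the intervention mitigates the bad outcome.
        return c_intervention + p_undesired * c_outcome * (1 - mitigation_eff)
    return p_undesired * c_outcome

def should_alarm(p_undesired, c_intervention, c_outcome, mitigation_eff):
    """Raise an alarm only when it lowers the expected cost."""
    with_alarm = expected_cost(p_undesired, c_intervention, c_outcome, mitigation_eff, True)
    without = expected_cost(p_undesired, c_intervention, c_outcome, mitigation_eff, False)
    return with_alarm < without
```

Under this toy model, alarming pays off once the predicted risk exceeds roughly `c_intervention / (c_outcome * mitigation_eff)`, which is the kind of threshold such a cost model lets one tune per setting.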
A novel adversarial training framework based on an adaptation of Generative Adversarial Networks to the realm of sequential temporal data is proposed, which systematically outperforms all baselines both in terms of accuracy and earliness of the prediction, despite using a simple network architecture and a naive feature encoding.
This paper draws on evaluation measures used in the field of explainable AI and proposes functionally-grounded evaluation metrics for assessing explainable methods in predictive process analytics and applies the proposed metrics to evaluate the performance of LIME and SHAP in interpreting process predictive models built on XGBoost.
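One family of functionally-grounded metrics measures local fidelity: how closely a surrogate explanation reproduces the black-box model's output near the explained instance. The sketch below illustrates this with toy stand-ins; the quadratic model and its linear surrogate are invented for illustration and are not XGBoost or LIME.

```python
# Hedged sketch of a local-fidelity metric: mean absolute gap between a
# black-box model and a linear explanation on small perturbations of an
# instance. Lower is better. Both functions are toy stand-ins.
import random

def black_box(x):   # stand-in for a trained process predictor
    return 0.3 * x[0] + 0.6 * x[1] * x[1]

def explanation(x): # stand-in linear surrogate, tangent at x[1] = 1
    return 0.3 * x[0] + 1.2 * x[1] - 0.6

def local_fidelity(x, radius=0.1, n=200, seed=0):
    """Average |model - surrogate| over perturbations within `radius`."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n):
        z = [xi + rng.uniform(-radius, radius) for xi in x]
        gaps.append(abs(black_box(z) - explanation(z)))
    return sum(gaps) / n

fid = local_fidelity([1.0, 1.0])
```

The same harness can compare two explainers on the same model: whichever yields the smaller average gap is locally more faithful, which is one of the functionally-grounded criteria the paper argues for.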
This paper is the first to apply Bayesian neural networks and their uncertainty estimates to predictive process monitoring, finding that they contribute to more accurate predictions while remaining fast to compute.